
Author Topic: Diego Destroys Western Philosophy: The Thread  (Read 560 times)

Reign in Treet

  • God-King
  • Tyler Perry
  • ******
  • Posts: 960
Re: Diego Destroys Western Philosophy: The Thread
« Reply #20 on: May 13, 2018, 10:15:58 am »
Alright, I finally have some time.

It is extremely incorrect to say that morality and the human mind must "work together". It is equally incorrect to say that, because only the human mind is capable of realizing what morality is, morality should not wish to change the human mind. In the study of science, we don't care much about what "works together" with the mind. If science yields a position and the mind refuses to accept it, the mind is wrong. The mind must conform to findings in science and mathematics. Reality constructs itself in a certain way and we are forced to accept it, whether we "like it" or not. The "lesser minded" on science aren't granted relevance in the scientific debate. The same should be true of morality.

Morality, like anything else, can be approached with a scientific eye. As we understand more about psychology, neuroscience, and human evolution, you will see that certain principles, such as collective well-being, are deeply evolved into us. Collectivism can be seen everywhere in nature, and we can study its benefits to animal societies. Therefore, moral questions do not rely purely on the human mind for their creation. They, like science, rely only on observation and interpretation of the evidence.

There are several champions of these subjects that I'm rather fond of. Sam Harris, a proponent of maximizing well-being, makes compelling arguments that morality falls entirely into the domain of science, and while I partially disagree, I do agree that as we learn more about neuroscience and psychology, the mechanisms of morality will become less metaphysical and more scientific, thus expanding the glove you think should fit perfectly. Once again, humanity will have to reshape its mind to fit the evidence.

That being said, the evidence yielded by the fields mentioned above does not list self-interest as humanity's primary function. Rather, that is mostly cultural indoctrination stemming from individualistic societies, like the one you and I live in. I would also add that the large body of historical evidence, across all generations, showing how subservient human beings can be warrants the same skepticism toward your statement.

You say that a rational egoist will not do things to harm others through some mechanism that sounds remarkably like a derivation of the golden rule. However, that is not the logical position at all. Rather, a calculation of how likely a negative result is to occur would ensue. Take a person who has a self-interest in murder. That person may not commit said murder in the middle of NYC, since he would obviously get caught, and spending the rest of his life in jail would be very contrary to most people's self-interest. However, given a hypothetical circumstance where said person could commit the murder with a guarantee of never being caught, that person ought to do it. It fulfills his self-interest. The other person may not like it, but that person's suffering need not be taken into account by the first. Your position relies on assumptions of equal power distribution and equal probability of recurrence when the cost of a "wrong" act is calculated.

Let's examine the world in terms of systems. Systems work because all of their individual mechanisms perform the functions they are supposed to perform. If there is some guiding principle that says the responsibility of each component is to help ensure the grand system works as well as possible, then the system will maintain itself just fine. However, let's say each component works for its own sake, and the functioning of the system as a whole is just a consequence of each component pursuing its own self-interest. It's not difficult to see that when parts of the system realize they can pursue their best interests in opposition to the system's overall interest, those components will, and the system becomes less stable. The continuation of the system is no longer guaranteed by the rules that have been established.

You say freedom is a good thing and that calling it irrational is rather silly, but you offer no facts or numbers to support that. Defense of freedom is very valuable at times, very detrimental at others. It is up to a cold, dispassionate calculation to discern which. Blind defense of freedom is irrational, and I think I can cite plenty of examples to prove it. You, on the other hand, don't cite examples for your defenses. When freedom detracts from or impedes the better functioning of the larger system, it becomes a vice.

Utilitarianism works so long as people follow the rules that get established and follow the cold calculations even when those don't serve their own interests. If they don't, and don't constantly act in the image of the "greater good", they are not utilitarian at all. So the "corrupted leaders" you describe assuredly use a "greater good" mask for something else. Rather, I would think they fall more in line with the rules of your philosophy. Given that much power, why shouldn't they pursue things out of self-interest? What consequence could they incur that would make it not in their own interest? In my theory, such a person would be forced to take the weight of the suffering he would cause into account, and that would assuredly make "self-interest" obsolete.

For the record, the fields of science, medical research, and mathematics are prime examples of how "humanitarian" people can be, so that's not a good example. Most scientists know they will never win a Nobel. Most don't get jobs at prestigious universities. Yet they still spend ten years of their lives getting a PhD and work in academic research for much less pay than they could get in industry programming computers or designing automobile components. I would say "for the betterment of the world" is a philosophy championed by many in these fields.

ChillinDylan Godsend

  • God-King
  • Wes Anderson
  • **********
  • Posts: 7478
Re: Diego Destroys Western Philosophy: The Thread
« Reply #21 on: May 13, 2018, 10:19:13 am »
Quote from: Reign in Treet on May 13, 2018, 10:15:58 am

You had your monthly beer this morning, didn't you?

Tut

  • God-King
  • Paul Thomas Anderson
  • **********
  • Posts: 6690
  • It's all over now, baby blue...
  • Location: Nice try, NSA
Re: Diego Destroys Western Philosophy: The Thread
« Reply #22 on: May 13, 2018, 04:05:57 pm »
I will debunk this insanity piece by piece.

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
Alright, I finally have some time.

It is extremely incorrect to say that morality and the human mind must "work together". It is equally incorrect to say that, because only the human mind is capable of realizing what morality is, morality should not wish to change the human mind. In the study of science, we don't care much about what "works together" with the mind. If science yields a position and the mind refuses to accept it, the mind is wrong. The mind must conform to findings in science and mathematics. Reality constructs itself in a certain way and we are forced to accept it, whether we "like it" or not. The "lesser minded" on science aren't granted relevance in the scientific debate. The same should be true of morality.

Morality, like anything else, can be approached with a scientific eye. As we understand more about psychology, neuroscience, and human evolution, you will see that certain principles, such as collective well-being, are deeply evolved into us. Collectivism can be seen everywhere in nature, and we can study its benefits to animal societies. Therefore, moral questions do not rely purely on the human mind for their creation. They, like science, rely only on observation and interpretation of the evidence.

Except unlike science and math, there is no morality found in nature. The human mind creates it. Science and math do not have to conform to the mind because in both fields, there is an external truth that we are capable of reasoning out. When such an external truth does not exist, the only thing we can rely on is the human mind.

Any argument against this would effectively have to claim that morality is as objective as math, which is demonstrably untrue.

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
There are several champions of these subjects that I'm rather fond of. Sam Harris, a proponent of maximizing well-being, makes compelling arguments that morality falls entirely into the domain of science, and while I partially disagree, I do agree that as we learn more about neuroscience and psychology, the mechanisms of morality will become less metaphysical and more scientific, thus expanding the glove you think should fit perfectly. Once again, humanity will have to reshape its mind to fit the evidence.

That being said, the evidence yielded by the fields mentioned above does not list self-interest as humanity's primary function. Rather, that is mostly cultural indoctrination stemming from individualistic societies, like the one you and I live in. I would also add that the large body of historical evidence, across all generations, showing how subservient human beings can be warrants the same skepticism toward your statement.

I didn't think I'd have to argue about this one, because it's extremely self-evident. I'm not sure what you'll accept as "proof," but I think explanatory power is fairly important when assessing the truth of a generalized statement such as "people act in their own self-interest." Going off of that, I know of very few instances in which people voluntarily act in ways that are diametrically opposed to their interests. Nearly all individual actions can be explained by self-interest, even if it may not always be rational. Individuals join groups not because they believe the group will be improved, but because they believe the group will defend them. Parents lay down their lives for their children due to their selfish desire for their progeny to live. I continue to feed my cat because I selfishly enjoy cuddling with him. I could go on.

I would argue, in fact, that it takes a tremendous amount of brainwashing to force individuals to act against their own interests. I would cite collectivist institutions such as organized religion and the military as examples of this. Meanwhile, if we look at the individuals in our society who have experienced the least collectivist indoctrination-- small children-- we find that they are some of the most egotistical, self-interested people on the planet. After a couple of decades of schooling, this eventually changes. It takes a village to indoctrinate a child.

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
You say that a rational egoist will not do things to harm others through some mechanism that sounds remarkably like a derivation of the golden rule. However, that is not the logical position at all. Rather, a calculation of how likely a negative result is to occur would ensue. Take a person who has a self-interest in murder. That person may not commit said murder in the middle of NYC, since he would obviously get caught, and spending the rest of his life in jail would be very contrary to most people's self-interest. However, given a hypothetical circumstance where said person could commit the murder with a guarantee of never being caught, that person ought to do it. It fulfills his self-interest. The other person may not like it, but that person's suffering need not be taken into account by the first. Your position relies on assumptions of equal power distribution and equal probability of recurrence when the cost of a "wrong" act is calculated.

I've already addressed this. A rational person understands that if he obtains power and infringes on the rights of another-- silences them, harms them, kills them-- then another person in power will be able to do the same to him. I don't accept the premise that committing murder, even with a "guarantee" of not being caught, is the rational thing to do. The golden rule is guided by caring for others. This mentality is guided by caring for oneself. While the outcome may be the same, the rationale is different.

I also did not assume an equal probability of recurrence. I simply implied the possibility of recurrence. Utilitarian probability calculus is not necessary here.

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
Let's examine the world in terms of systems. Systems work because all of their individual mechanisms perform the functions they are supposed to perform. If there is some guiding principle that says the responsibility of each component is to help ensure the grand system works as well as possible, then the system will maintain itself just fine. However, let's say each component works for its own sake, and the functioning of the system as a whole is just a consequence of each component pursuing its own self-interest. It's not difficult to see that when parts of the system realize they can pursue their best interests in opposition to the system's overall interest, those components will, and the system becomes less stable. The continuation of the system is no longer guaranteed by the rules that have been established.

This paragraph makes a number of assumptions, none of which I'm comfortable with. Most importantly, it assumes that the system is worth preserving and perpetuating. You seem to place value on the system based simply on the virtue that it is a system. You respect order. I don't-- at least, not inherently. In fact, looking throughout history, I see very few systems that I would consider worthy of preserving. Order is not inherently a virtue. This is an enormous fallacy on your part.

I won't even get into the fact that humans aren't components in a grander system. We're the individuals who created the system in the first place; we're not gears and cogs that were assembled for the purpose of something larger than ourselves.

From what I know of you, you're interested in engineering, statistics, and mathematics in general. I recall you've also been involved in organized religion. You think of things in patterns and rules as most humans do, but you also seem to believe that order is inherently superior to disorder. It's a very Hobbesian worldview-- but even Hobbes thought that revolution was sometimes necessary. I don't think I've ever encountered someone with this much of an obligate bias towards order and authority. You are the archetype of the liquid person.

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
You say freedom is a good thing and that calling it irrational is rather silly, but you offer no facts or numbers to support that. Defense of freedom is very valuable at times, very detrimental at others. It is up to a cold, dispassionate calculation to discern which. Blind defense of freedom is irrational, and I think I can cite plenty of examples to prove it. You, on the other hand, don't cite examples for your defenses. When freedom detracts from or impedes the better functioning of the larger system, it becomes a vice.

I will give you the benefit of the doubt and assume this was tongue-in-cheek. You've offered no numbers or facts either in your comments. While you say you can cite examples, you don't cite any. You also say that if freedom detracts from a system, it becomes a vice. This is at best not an inherent truth, and at worst an outright unjustifiable opinion.

I think your biggest stumble in this debate is the assumption that I share your belief in society/humanity moving towards something. I don't. At least, not at the expense of individual freedom. Liberty is the end-in-itself; the things it produces (innovation, capitalism, choice, competition, scientific advancement) are merely by-products of something that is already morally justified. There is absolutely nothing to bolster your statement that a system takes precedence over freedom aside from your own personal opinion about what we should be striving for. Again, I note how inherently subjective utilitarianism is. Because your goal is subjective, you'll have a hell of a time convincing everyone else who lives in your "perfect system."

Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
Utilitarianism works so long as people follow the rules that get established and follow the cold calculations even when those don't serve their own interests. If they don't, and don't constantly act in the image of the "greater good", they are not utilitarian at all. So the "corrupted leaders" you describe assuredly use a "greater good" mask for something else. Rather, I would think they fall more in line with the rules of your philosophy. Given that much power, why shouldn't they pursue things out of self-interest? What consequence could they incur that would make it not in their own interest? In my theory, such a person would be forced to take the weight of the suffering he would cause into account, and that would assuredly make "self-interest" obsolete.

I'm glad you raised this point. Here's how I think we elect corrupt leaders.

1) A collectivist system indoctrinates our youth. It tells them to respect authority. It tells them to follow rules that infringe on their natural human behavior. Most importantly, it tells them that orderly systems are more important than individuals.
 
2) These children grow up believing in something greater than themselves. They believe humanity is striving towards a goal. Despite the indoctrination process, no two individuals have the exact same goal in mind. Some believe in an Islamic caliphate. Some become racists, having grown up thinking only in terms of collectives. Others, like yourself, see technological advancement as the ultimate goal. In any case, they put this goal on a pedestal, as their parents, teachers, and priests have told them that "it's important to believe in something greater than yourself."

3) This new generation becomes politically active. Like every generation, it is completely divided. Gradually, they coalesce around various leaders. Some of these leaders are sponge-people who genuinely believe in a "greater good" for their people-- Mao and Hitler, for example. Others are self-interested, but not rational-- Kim Jong Un. In the first case, the goal takes precedence, and carnage ensues. In the second, the chosen leader and his cronies set about ransacking the country, irrationally believing that their actions have no consequences.

4) The people, despite everything, avoid revolution. It is antithetical to everything that's been drilled into their brains. They trust the system. They're glad that the trains run on time. They respect order and authority, even when all evidence points to the undeniable fact that what is happening is against their own interests. When made aware of this, they meekly concede the fact, but maintain that this does not matter, as their own interests aren't important against the will of the collective.

If the people truly acted in their own interests, they would not follow these leaders in the first place. It is ideologies like yours that cause them to blindly support dictators, tyrants, and thieves.

Tut

  • God-King
  • Paul Thomas Anderson
  • **********
  • Posts: 6690
  • It's all over now, baby blue...
  • Location: Nice try, NSA
Re: Diego Destroys Western Philosophy: The Thread
« Reply #23 on: June 02, 2018, 02:54:32 pm »
Quote from: Reign in Treet on May 13, 2018, 10:15:58 am
There are several champions of these subjects that I'm rather fond of. Sam Harris, a proponent of maximizing well-being, makes compelling arguments that morality falls entirely into the domain of science, and while I partially disagree, I do agree that as we learn more about neuroscience and psychology, the mechanisms of morality will become less metaphysical and more scientific, thus expanding the glove you think should fit perfectly. Once again, humanity will have to reshape its mind to fit the evidence.

Treet has abandoned this thread after leaving this insane comment. However, I'd like to hear some more people weigh in on this. Kale, if you want to jump in here, I'd welcome it. Same goes for Neville.

Kale Pasta

  • God-King
  • David Lynch
  • **********
  • Posts: 4714
  • And the path was a circle, round and round
Re: Diego Destroys Western Philosophy: The Thread
« Reply #24 on: June 02, 2018, 05:45:03 pm »
Quote from: Tut on June 02, 2018, 02:54:32 pm
Treet has abandoned this thread after leaving this insane comment. However, I'd like to hear some more people weigh in on this. Kale, if you want to jump in here, I'd welcome it. Same goes for Neville.
Not gonna read through these walls of text at the moment but, for the record, I try to maintain a logical, utilitarian outlook on things in life. My personal favorite example is a car ride (since this situation happens to me all the time at school): say there are four people in a car, three going one place and one wanting to get dropped off somewhere else, in the other direction. The walk would take ten minutes, but driving only takes two. Driving that person saves them eight minutes, but costs the other three people twelve minutes between them (the two-minute detour there and back for each of them), so I argue against this type of thing all the time in my life. Hopefully I explained that adequately.
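Kale's arithmetic can be sketched in a few lines. The minute values are the ones given in the post; reading the twelve minutes as a two-minute detour each way for each of the three remaining riders is my own interpretation, since that is the only way the total adds up:

```python
# Numbers from the car-ride example; the round-trip reading of the
# 12 minutes (2 minutes each way, 3 riders) is an assumption.
walk_time = 10        # minutes the dropped-off friend would walk
detour_one_way = 2    # minutes the drive adds in the wrong direction
other_riders = 3

time_saved = walk_time - detour_one_way          # 8 minutes, one person
time_lost = 2 * detour_one_way * other_riders    # 12 minutes, three people

net_minutes = time_saved - time_lost
print(net_minutes)  # -4: the group loses more time than the rider saves
```

On a flat person-minutes tally, the detour comes out four minutes in the red, which is exactly why a utilitarian accounting argues against it.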

Robert Neville

  • God-King
  • Zack Snyder
  • ******
  • Posts: 1868
Re: Diego Destroys Western Philosophy: The Thread
« Reply #25 on: June 02, 2018, 08:24:41 pm »
Well, I did promise to reply to this, didn't I? Wonder if addressing the arguments from the beginning is still feasible. Anyway:

Quote from: Diego
Utilitarianism always reminds me of Rousseau's "Dictatorship of the Majority." If the best course of action is the one that maximizes well-being for the largest number of people, what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal? I also dislike that it tries to consider all viewpoints equally, and that people use it as a justification for veganism.

Quote from: Reign in Treet
I don't think that, since utilitarianism was first launched, it has held ideals such as "all viewpoints are equal" or "all pains and pleasures are treated equally". My brand of utilitarianism would be an axis which "minimizes human suffering". To truly understand utilitarian philosophy, you first have to make a set of assumptions upon which the whole idea relies, but that is true of any worldview, be it theistic or secular. My main assumption is that one will be rigorous, honest, and logical when weighing the benefits and suffering of actions.

I would say most utilitarianism doesn't forsake the unforgivable nature of human cruelty. To think of it like "10 people raped 1 person. Ten people got pleasure. 1 suffered. The pleasure outweighs the suffering. The action was right." is a very narrow and incorrect view of the subject. (I'm not saying that's necessarily your view; it's just a common misconception.) The "sophistication of pleasures" is an idea typically present in utilitarian philosophy, so I adamantly disagree with "all viewpoints are equal" when not even all pleasures are equal. When deciding a course of action, the suffering of other humans must be taken into account first, and that suffering must be given much more weight than the "simple, life-pleasures" that others may gain.

I had similar views on the subject, as you do, until I read into it more. Once I truly understood the power and benefit it has as a worldview, I "slowly" began adopting it and started to peel away the "higher ethics" layers I had built over myself the past couple of years. So please, let us discuss.
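Treet's weighting idea, that the suffering of others counts for far more than the "simple, life-pleasures" others may gain, can be illustrated with a toy calculation. Every number here, including the weight of 10, is mine purely for illustration; the post gives no figures:

```python
# Toy illustration (invented numbers): suffering is weighted far more
# heavily than pleasure when outcomes are aggregated.
SUFFERING_WEIGHT = 10  # assumed multiplier for negative outcomes

def weighted_utility(outcomes):
    """Sum signed well-being values; negative entries mean suffering."""
    return sum(v * SUFFERING_WEIGHT if v < 0 else v for v in outcomes)

# Ten people each gain a small pleasure (+3); one person suffers (-20).
outcomes = [3] * 10 + [-20]

naive = sum(outcomes)                  # 10: a flat sum calls this "good"
weighted = weighted_utility(outcomes)  # -170: the weighted calculus forbids it
print(naive, weighted)
```

The point of the sketch is only that the verdict flips once suffering is up-weighted, which is the distinction Treet draws against the "pleasure outweighs the suffering" caricature.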

Well, I am not sure how many people here have heard of it, but what Diego initially said ("what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal?") has been most famously rendered in fiction as Ursula K. Le Guin's The Ones Who Walk Away From Omelas, and has in fact been advocated for in (sort of) real life in spite of that. LessWrong, once just a forum like ours that for a few years hovered somewhere up high between genius and madness and produced an enormous volume of texts that may or may not be worth trawling through, also strongly supports utilitarianism and structures its philosophy (the one it wants to base all sentient AIs on) around it. Given that its founder believes (or believed) that torturing one person for 50 years is worth it if it will prevent all people who'll ever live from getting dust specks in their eyes, it may be a good thing their project seems to have stalled.

So, we have established that some people do believe in this interpretation of utilitarianism. However, it's not very relevant here, as Treet (thankfully) doesn't. For him, "the unforgivable nature of human cruelty" must not be forsaken. Though he sees his "brand of utilitarianism" as "an axis which minimizes human suffering", and LW's Yudkowsky also sought the same (since pleasure was not in the equation above), the reference to the "sophistication of pleasures" suggests he would recognise the "sophistication" of suffering as well, and reject such ideas.

However, for me the larger problem with this interpretation of utilitarianism is not just the problem of the "sophistication of pleasures (and suffering)", or even whether you can create an objective scale of pleasure and suffering, which is an objection Diego raises. Personally, I am inherently cautious of any set of ideas that intends to pin all of the potential events and actions in the world down to a single axis. To me, the desire for this kind of simplification first and foremost suggests an unwillingness or inability to struggle with the complexity of the world, so a single principle must be stuck to instead. It's again the thinking that drives people to literalist religions, etc., except that instead of offloading all their anxieties onto a religious screed, they dump them onto a single axis that will always tell them what is right. (A variant of this is "ethical altruism", which actually happens to be more of a libertarian thing, but still appears rooted in the same thinking.) Now:

I suppose I just shy away from any philosophy that claims to work towards a "greater good." This goes for utilitarianism, Marxism, transhumanism, etc. As soon as a philosophy sets up a goal for itself to work towards, and places that goal above individuals, it becomes very easy for its adherents to justify their actions in the name of that goal. If you as an individual see the "greater good" as above yourself, what right do you have to question the methods used to achieve that greater good? This is the sort of groupthink that causes ethnic cleansing and ISIS. I'm of the opinion that the ends never justify the means-- and utilitarianism is all about a specific end (minimizing suffering/maximizing pleasure). Though I will give it credit in that its "means" are less dangerous than the other two schools of thought I just mentioned.

I think an important point here is that humans are inherently predisposed to work towards a goal. This seems particularly true once a person truly realises their mortality and that both their lifespan, and that of any other human around them, is comparatively short. I believe it's at that point that many people decide to place their goal above other people, and that goal does not have to be the "greater good" - we know all too many stories of bog-standard exploitation and other horrors whose perpetrators were not motivated by such ideas, and at most used them to justify the aftermath. Slavery is an obvious example. Sure, it was a somewhat popular idea amongst Southern slaveowners that they were "doing good", as it was "better" for a Negro to serve a white man, etc., but that was a post facto justification. I don't think I have actually heard of people explicitly trying to acquire as many slaves as possible for the "greater good" of "helping" them. The same goes for the all-too-many examples of modern slavery. The common thread is that people believed their goals to be more important than other people, whether those goals were selfish or genuinely thought of as "the greater good".

Treet's next post overlaps with what I just wrote in quite a lot of ways. Here's the more interesting part:

As far as not having "the right to question," in no philosophy do I hear that as a legitimate point. Though I'm not keen on "transhumanism", I am a humanist/utilitarian and our worldview centers on challenging and being skeptical of everything and driving the world to be its best. Now, you can counter with "How do you define 'best' for everyone?" and I'll acknowledge that as a reasonable point. I can answer this and your hierarchy of pleasures point simultaneously. These questions all go back to a logical foundation of the philosophy. I would define pushing the world to its best as what drives forward knowledge and advancements in medicine, science, mathematics, technology, and even morality, as they help the human race address issues such as poverty, sickness, violence, and world hunger, aka the alleviation of human suffering.

Utilitarianism is hard to define, mostly because it should be a dynamic philosophy, constantly shifting with the needs of the people and of the times. Please don't try to look at it all in one lumped-parameter model.

The bolded part is great (though like Diego, I also have concerns about "advancements in morality", even if they are a little different.) However, something crucial is missing: advancement over what timeframe, and for how long? If there's actually a true bedrock to my worldview, something I'll continue to contemplate and which will continue to colour my thoughts on just about any issue even as my other views may change, it's this: how do we ensure advancements we make (or we think we make) are sustainable, and are not lost all-too-soon through sheer overreach?

As in, we well know that our current civilisation is unsustainable. Whether it's measures like the carbon footprint, the "1.4 Earths" thing, or the "Earth Overshoot Day" on which humanity blows through its year's worth of sustainable resources (a day which apparently fell in December in 1987, but is now marked in August), it all illustrates the same point: a lot of the advancements we consider integral to the current civilisation are bound to be sooner or later stripped away simply as we run out of resources to support many of the things we are used to. To me, contemplating future advancements without first ensuring the current ones are not lost in time is deluded.

I have a similar position on a lot of the futuristic ideas, like the push for automation; when I see headlines like "robots will replace job X for Y millions of people", I always think: "and for what number of years? How long are these robots going to be maintained? How many robot brains, joints, etc. can be built before their production eats into the supply of REEs and the like that was supposed to go to solar panels, thus delaying our supposed climate transition even further?" (Not to mention that the mere presence of millions of robots will inherently demand more power and thus make it even more difficult to go carbon neutral/negative.) I want to eventually lay out this line of thought on Quora, but I always feel I need more proof.

Then, Diego takes a particular issue with the "dynamic philosophy" part:

Oh, I absolutely understand that the doctrines of utilitarianism are not codified or set in stone. As with egoism (my personal favorite school of thought), there are as many takes on it as there are people who adhere to it. Still, I think those differences are amplified in utilitarianism, because it all comes down to your definition of being "rigorous, honest, and logical" in determining the greater good. Because utilitarianism is inherently concerned with the well-being of others, it invites its adherents to make normative statements about how others should live their lives. I've always found this to conflict heavily with my own worldview.

The "normative statements" part happens to be my problem with the mainly libertarian-supported "effective altruism". It's the concept mainly centered around Zuckerberg's friend Dustin Moskovitz (that one guy in The Social Network who was basically irrelevant to the plot), and seems to have other libertarian-ish backers, like Dominic Cummings (the strategist who ran the Vote Leave campaign because he thought the EU was too protectionist, and didn't foresee that most Leave voters wanted more of it, not less.) The movement's 80,000 Hours project attempts to assess the impact of all career paths so that people can choose the most helpful one and then "earn to give". (I think the main reason it's libertarian-endorsed is that their argument not only pushes everyone further towards the conventional career "rat race" over "inefficient" political activism, etc., but also implicitly suggests such "earning to give" is superior to taxation and government programs: the aforementioned Cummings actually tweeted at the EU to cancel their program for aid in Africa and give all that money to Moskovitz, because his think tank would know better what to do with it.)

The key problem there (besides the libertarianism-related arguments) is that these suggestions are only valid so long as they remain narrow, fringe ideas, and the world is comparatively static around them. If a large number of people begin to follow "80,000 Hours" career guidelines to the letter (especially if they actually switch from their current, non-ideal careers to the suggested ones en masse), it'll lead to an oversupply in those fields, meaning that a) average wages will go down and your ability to "earn to give" will plummet correspondingly, and b) you have an ever-greater chance of staying unemployed or underemployed in that pathway, especially if you were midway through training for it when it stopped being "ideal", meaning that both effective altruist goals (the good of the "earning to give" and of the job itself) are thrown out.
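To see why, it may help to put rough numbers on the oversupply effect. A minimal sketch in Python, where the wage elasticity, salaries, living costs, and headcounts are all hypothetical placeholders rather than anything drawn from 80,000 Hours' actual research:

```python
# Toy model of the "earning to give" oversupply problem.
# All numbers are hypothetical illustrations, not real labour-market data.

def average_wage(base_wage, baseline_workers, extra_workers, elasticity=0.5):
    """Wage falls as the field's headcount grows past its baseline.

    elasticity: assumed fractional sensitivity of wages to labour supply.
    """
    growth = (baseline_workers + extra_workers) / baseline_workers
    return base_wage * growth ** (-elasticity)

def donated_per_person(base_wage, baseline_workers, extra_workers,
                       living_cost=40_000):
    """What each newcomer can 'earn to give' after living costs."""
    wage = average_wage(base_wage, baseline_workers, extra_workers)
    return max(0, wage - living_cost)

# A few newcomers barely move wages, so per-person giving is near maximal...
print(donated_per_person(100_000, 50_000, 100))
# ...but a mass influx doubles supply, depresses the wage, and shrinks
# what each person has left over to give.
print(donated_per_person(100_000, 50_000, 50_000))
```

Under these made-up parameters, doubling the field's headcount cuts the wage by roughly 30%, which comes almost entirely out of the donatable surplus - the "static world" assumption is doing a lot of quiet work in the advice.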

In short, their current model seems to assume too much on imperfect information, and if it stops being comparatively fringe, it must be constantly adjusted (screwing over many people with sunk costs in once-advised paths), or it must simply acknowledge that for a lot of professions there's a minimum number of people necessary for maintaining the society as it is, and a maximum number beyond which the extra presence is counterproductive. So far, and according to their Twitter, they seem more concerned with (inefficiently) advocating voting reform in the US than with any such modifications.

Now, back to the original argument(s).

With my ISIS reference, I was acknowledging that a Muslim utilitarian and a Christian utilitarian would have fundamentally different approaches to the philosophy. Both would see their religions as a way for society to achieve a greater good, because only through their respective religions could people attain salvation. Therefore, others would have to convert in order for pleasure to be maximized. But we don't have to go off on a religion tangent here-- generally speaking, because utilitarianism is so concerned with the lives of others, it implicitly gives its adherents a free pass to evangelize their own lifestyles.

I might recall your claim that Paul Ryan cannot be an Objectivist and a (believing) Christian, and say it applies here as well. These two monotheistic religions are centered on the idea that God/Allah always knows best, and the most you can personally do is follow their Commandments/surahs to the letter. I mean, utilitarian logic states that if there's a way to launch all people into infinite pleasure (which is what heaven is supposed to be, or is at least the closest thing to one) you must definitely do it, and you definitely shouldn't condemn anyone to infinite suffering (hell). The people's good actions and transgressions are not relevant; saints and sinners feel the same pleasure and the same pain, and the idea that sins must always be punished, even when eternal hellfire is necessary, is clearly deontological. Hence, if God/Allah is clearly not a utilitarian (or else he would have just sent everyone to heaven/never cast anyone out of it), how can you follow him, placing his deontological will above yours, yet claim to be one?

An example: If I were to become a utilitarian, I might have to support a ban on marijuana. I think its overall impact on people is decidedly negative, and in order to minimize that negative for everyone, I would be morally obligated to prevent them from consuming it in order to maximize pleasure. It could even be argued that drug use in general (alcohol included) is antithetical to your goals of advancing medicine, science, mathematics, etc, as it wastes labor and destroys lives. But I would never support banning these substances. Just as I don't recognize the rights of others to ban things for me, I don't think I have any right to ban things for them.


What this analysis misses is that the decision to place this ban and enforce it sooner or later morphs into an implicit decision to divert resources away from "advancing medicine, science, mathematics" in order to maintain its enforcement (both through paying officers to bust pot dealers, and through keeping people in jail for pot offences instead of letting them contribute to society through their previously held careers), which is a waste of labour in itself. If the waste resulting from enforcement of the marijuana ban is greater than the benefits said ban brings, it is entirely logical, especially for the sensible utilitarians, to get rid of it.
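This cost-benefit comparison can be put as a back-of-the-envelope calculation. Every figure below is an invented placeholder, purely to illustrate the shape of the utilitarian argument rather than to reflect any real statistics:

```python
# Back-of-the-envelope version of the argument: a ban is only worth it, in
# utilitarian terms, if the harm it prevents exceeds the cost of enforcing it.
# Every figure is a made-up placeholder, not a real statistic.

def ban_is_worth_it(harm_prevented_per_user, users_deterred,
                    enforcement_budget, prisoners, lost_output_per_prisoner):
    benefit = harm_prevented_per_user * users_deterred
    cost = enforcement_budget + prisoners * lost_output_per_prisoner
    return benefit > cost

# Hypothetical scenario: modest deterrence, expensive enforcement.
print(ban_is_worth_it(
    harm_prevented_per_user=500,      # assumed net harm avoided per deterred user
    users_deterred=100_000,
    enforcement_budget=200_000_000,   # policing and courts
    prisoners=10_000,
    lost_output_per_prisoner=30_000,  # forgone wages/taxes while jailed
))  # under these numbers, the ban costs more than it saves
```

The point of the sketch is only that the sign of the answer flips with the inputs - which is exactly why a utilitarian can consistently support banning hard drugs while opposing a pot ban.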

In fact, I believe that no matter what you might think about rights and such, most people who voted to legalise marijuana in every successful referendum on the subject did so because they thought much like I do. Otherwise, they would have supported legalising hard drugs and the like as well, as that is also an argument about freedom. However, they don't, because to them, the enforcement of bans on hard drugs brings more benefits than costs, and the enforcement of the ban on pot does not. Freedom barely enters into it.


Robert Neville

  • God-King
  • Zack Snyder
  • ******
  • Posts: 1868
Re: Diego Destroys Western Philosophy: The Thread
« Reply #26 on: June 02, 2018, 08:43:57 pm »
Part 2: CreateAForum stated my original post exceeded the 20,000 character limit.


Then, the argument about (im)mutability of philosophies, which soon morphs into this:

The only definitions of morality we can conceive of stem from one thing-- the human mind. Every moral system we can develop is limited by it. In fact, if it were not for our advanced minds, we would be incapable of understanding the concept of morality in the first place. It stands to reason, then, that the human mind and morality must work together rather than against one another. Any philosophy that makes statements about how humans should behave is inherently illogical, as it has nothing to compare human instinct to. This is one of many reasons why I find normative philosophies to be brainwashing cults. Unless your normative statement is "humans should act according to their nature," you're building your philosophy on a foundation of sand; an external ideal for human behavior that does not exist in nature.

On one hand, this is logical. On the other hand, I think you (and Treet too, for that matter) overestimate the degree to which a moral system can "work against the human mind" in the first place. If a system is completely antithetical to the human mind, that mind simply wouldn't adopt it in the first place; indeed, a hypothetical philosophy that is antithetical to all human minds couldn't even be invented by any human mind. Hence, any philosophy that exists, and is adopted by people, must work with at least some part of the human mind (or rather, work with at least some neurological arrangements of the human mind). "Brainwashing" is itself a loaded propaganda term; moreover, brainwashing must work with at least some elements of human instinct to work; if it cannot latch onto any instinct, it'll fail, and never be absorbed. The statement "humans should act according to their nature" becomes a "set of all sets" kind of dilemma, because adopting some form of a normative philosophy is itself part of that very human nature for a very large number of humans, as we have seen time, and time, and time again.


The human mind is responsible for human nature, so let's discuss human nature itself. Humans are many things-- empathetic, cruel, humorous, vindictive, neurotic-- but our defining trait is our pure selfishness. Unless our minds have been bleached out with some collectivist chemical (religion, military hierarchies, veganism), we are inherently self-interested. It is this trait, I would argue, that has given us the advancements you described as the ultimate goal of your philosophy. Technology, literature, engineering, medicine, and so on-- they all stem primarily from our selfish desire to succeed, improve our own lives, and assert our supremacy. And in our messy race to the top, we've created the most advanced civilization the world has ever seen. I would expect nothing less from a structure as wonderful as the human mind. Ironically, being concerned for the well-being of others (a central tenet of utilitarianism) comes into direct conflict with this rational selfishness. But then that's a discussion for another time.


The bolded part happens to be pure ideological interpretation of the data, rather than an accepted conclusion. If a philosophy needs to "fit human nature", it should be human nature as scientifically observed, not a singular conception of it. The observational evidence is clear: people are often subconsciously primed to sacrifice themselves, and it may be a genetic trait. (Which also does not imply it would be "bred out", or the like, since a lot of traits are "riders" that crop up only alongside other traits when particular genes combine, rather than being linked to a specific gene that can be cleanly eliminated while leaving everything else unaffected.) Again: if something is a "fundamental" part of human nature, a mere man-made ideology cannot make it go away.

The goal in constructing an ethical system, therefore, is to create one that fits the human mind like a glove. There is no empirical justification for doing otherwise (forcing a square peg into a round hole; changing human nature in order to impose some imagined external morality). The ideal system, as I see it, would conform fully to the mold of human nature, infringing on self-determination as little as possible. If individuals infringe on one another's self-determination, then it is certainly morally correct to punish them accordingly. But if their actions do not directly affect others (damaging their own bodies with substance abuse), there is no moral grounds for changing their behavior through coercion.

Like I said, human nature can only change if it "wants" to be changed - i.e. if it happens, it's only because a person chooses to strengthen one part of it that's already present over another.

Most importantly, someone who is both logical and selfish (the two great traits of the human mind) will infringe on the rights of others as little as possible. They will understand that any action they take could be reversed and turned on them-- if they can censor speech they disagree with, those they disagree with can censor them. If they can take property from others, others can take property from them. Rational egoists know that actions set precedents, and will avoid harming others-- not out of some respect for the bogus "golden rule," but out of their own innate self-interest.

This idea assumes that every person has an equal chance of both taking an action towards someone, and being affected by an identical action towards themselves. (Later on, you say in response to Treet that you don't imply an "equal" possibility, but I think that's just you not fully understanding the implications.) In practice, it doesn't work that way. Rational egoists who have nothing to say for themselves have no logical reason not to censor the speech they disagree with, because all they need is for the speech they do agree with to be broadcast ever louder. Those who have little property now, and little chance of obtaining more in the future normally, have little to lose by taking it from others - even if someone takes it away down the line, they'll at worst be back where they started, and no-one can take away their subjectively good memories of the experiences they already had with the things they took.

The man in your scenarios has every right to harm himself regardless of the environment. His wife and children are not owed his labor. They are individuals as well, not objects for him to look after. This is why I prefer egoism-- it is immutable from situation to situation.

And it is also entirely in the wife's and children's self-interest to make sure they are owed his labour, and that the man's right to harm himself is denied to him. In fact, it would actually be altruistic of them to allow the man to harm himself at their own expense, and it would be both rational and egoistic to deny that opportunity to him, especially if they have no intent to ever harm themselves in this manner. Once the society has significantly more "wives" and "children" than "self-harming men" (i.e. practically always outside of times of great upheaval like civil wars), the right to self-harm gets naturally curbed in full accordance with human nature.

So, do you see it now? Your philosophy is the one that naturally destroys itself.

Now, you have already decided to address Treet's comments point-by-point, so you saved me the trouble of quoting it. Instead...

It is extremely incorrect to say that morality and the human mind must "work together". It is equally incorrect to say that because only the human mind is capable of realizing what morality is that morality should not wish to change the human mind.  In the study of science, we don't care much about what "works together" with the mind. If science yields a position and the mind refuses to accept it, the mind is wrong. The mind must conform to findings in science and mathematics. Reality constructs itself in a way and we are forced to accept it, whether we "like it" or not. The "lesser minded" on science aren't granted relevance in the scientific debate. The same should be true with morality.

Morality, like anything else, can be approached with a scientific eye. As we understand more about psychology, neuroscience, and human evolution, you will see that certain principles, such as collective well-being, are very evolved into us. Collectivism can be seen everywhere in nature, and we can study the benefits of it on animal civilization. Therefore, all facets of moral questions don't purely rely on the human mind for creation. They, like science, only rely on observation and interpretation of the evidence.

Except unlike science and math, there is no morality found in nature. The human mind creates it. Science and math do not have to conform to the mind because in both fields, there is an external truth that we are capable of reasoning out. When such an external truth does not exist, the only thing we can rely on is the human mind.

Any argument against this would effectively have to claim that morality is as objective as math, which is demonstrably untrue.

It is true that I do not think morality is objective, or can ever be objective, but the reasons are a little different. Mathematics, or any other hard science, has a range of objective constants and laws in it, and much of the work in developing any hard science lies in figuring them out, be they e=mc² or pi=3.14159... Like you said, morality comes from the human mind, and there are few to no constants in the human mind. People are born different, and I don't think there's any philosophy that can actually be accepted by all human minds without exception. Any one that tries will simply be rejected by those people whose minds it can never fit: at most, they might be coerced into pretending to follow it, while their mind twists it into something somewhat acceptable to them.

There are several champions of these subjects that I'm rather fond of. Sam Harris, a proponent of maximizing well-being, makes compelling arguments that morality entirely falls into the domain of science, and while I partially disagree, I do agree that as we know more about neuroscience and psychology, more about the mechanisms of morality will become less metaphysical and more scientific, thus expanding the glove you think should fit perfectly. Once again, humanity will have to reshape its mind to fit the evidence.

That being said, the evidence yielded from the fields mentioned above does not list humanity's primary function as self-interested. Rather, that is mostly a cultural indoctrination stemming from individualistic societies, like the one you and I live in. I also would like to add that the large amount of historical evidence that can be examined across all generations showing how subservient human beings can be yields the same skepticism in regards to your statement.

What I said to you also applies to Treet: I think he rather overestimates the degree to which the mind can be reshaped in the first place as well. He also overestimates the power of "cultural indoctrination", because again, culture can only change minds if it already fits at least some parts of said mind. Some "individualism" is always present (bar some mentally abnormal edge case or two), and same goes for "collectivism."

Robert Neville

  • God-King
  • Zack Snyder
  • ******
  • Posts: 1868
Re: Diego Destroys Western Philosophy: The Thread
« Reply #27 on: June 02, 2018, 08:44:14 pm »
Part 3!


I didn't think I'd have to argue about this one, because it's extremely self-evident. I'm not sure what you'll accept as "proof," but I think explanatory power is fairly important when assessing the truth of a generalized statement such as "people act in their own self-interest." Going off of that, I know of very few instances in which people voluntarily act in ways that are diametrically opposed to their interests. Nearly all individual actions can be explained by self-interest, even if it may not always be rational. Individuals join groups not because they believe the group will be improved, but because they believe the group will defend them. Parents lay down their lives for their children due to their selfish desire for their progeny to live. I continue to feed my cat because I selfishly enjoy cuddling with him. I could go on.

The thing is, though, that this is again an interpretation, rather than something backed up by objective data. "Individuals join groups because they believe the group will defend them" is an interpretation that conflicts with "people believe the group will be improved". Neither can debunk the other, at least not in your argument, because both are based on the same phenomenon (people joining groups) without providing any additional evidence in their favour.

I would argue, in fact, that it takes a tremendous amount of brainwashing to force individuals to act against their own interests. I would cite collectivist institutions such as organized religion and the military as an example of this. Meanwhile, if we look at the individuals in our society who have experienced the least collectivist indoctrination-- small children-- we find that they are some of the most egotistical, self-interested people on the planet. After a couple decades of schooling, this eventually changes. It takes a village to indoctrinate a child.

Again, neither organised religion nor the military would have survived if they went so counter to humans' innate interests. The fact that they have persisted for thousands of years, while Objectivism is less than a century old, speaks for itself; any concept of "rational egoism" or an ideology adjacent to it is far more tenuous still, and only seems to exist when the social conditions are sufficiently favorable to it (i.e. the society has become rich enough to support a layer of such people without collapsing).

Citing small children as an example is particularly funny, though, because it confirms that in the nature/nurture debate, you fall on the nurture side far beyond the established science. Simply put, it's an accepted fact that brains grow and develop a lot throughout childhood and the teenage years, and only stop doing so in a person's twenties. Occam's razor suggests that the fact that small children are particularly egoistic has far more (i.e. practically everything) to do with their brains not yet having developed a theory of mind.

You say that a rational egoist will not do things to harm others through some mechanism that sounds remarkably like a golden rule derivation. However, that is not the logical position at all. Rather, a calculation of how likely a negative result is to occur would ensue. Take a person who has a self-interest in murder. That person may not commit said murder in the middle of NYC due to the fact that he would obviously get caught, and spending the rest of his life in jail would be very contrary to most people's self-interest. However, given a hypothetical circumstance where said person could commit the murder with a guarantee of never being caught, that person ought to do it. It fulfills his self-interest. The other person may not like it too well, but that person's suffering need not be taken into account by the first. Your position relies on assumptions of equal power distribution and equal probability of recurrence when the cost of a "wrong" act is calculated.

I've already addressed this. A rational person understands that if he obtains power and infringes on the rights of another-- silences them, harms them, kills them-- then another person in power will be able to do the same to him. I don't accept the premise that committing murder, even with a "guarantee" of not being caught, is the rational thing to do. The golden rule is guided by caring for others. This mentality is guided by caring for oneself. While the outcome may be the same, the rationale is different.

I also did not assume an equal probability of recurrence. I simply implied the possibility of recurrence. Utilitarian probability calculus is not necessary here.

It should be noted that what you consider "rationality" is in fact extreme loss aversion. You believe that a mere possibility of recurrence is enough to make it rational not to infringe on others' freedom; apparently, it doesn't matter how small it is. By the same logic, rational people would never gamble, or invest in stocks (another, though less random, way of gambling, really), but they do, and every so often, they win (not too often with gambling, far more often with stocks). Equally, choosing to "infringe on the rights of others" then becomes a gamble, and rational people/most normal people will take that gamble if they see that the odds are good enough.
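The gap between that kind of loss aversion and ordinary expected-value reasoning is easy to make concrete. In the sketch below, the payoffs, the odds, and the aversion multiplier are all invented for illustration:

```python
# Contrasting risk-neutral expected value with extreme loss aversion,
# using hypothetical payoffs for an act of "infringement".

def risk_neutral_takes_it(gain, loss, p_caught):
    """A risk-neutral egoist acts when expected gain beats expected loss."""
    return (1 - p_caught) * gain > p_caught * loss

def loss_averse_takes_it(gain, loss, p_caught, aversion=1000.0):
    """Weighting losses so heavily that any nonzero p_caught vetoes the act."""
    return (1 - p_caught) * gain > p_caught * loss * aversion

gain, loss, p = 100.0, 500.0, 0.05   # assumed payoffs and odds of recurrence
print(risk_neutral_takes_it(gain, loss, p))   # good odds: the gamble is taken
print(loss_averse_takes_it(gain, loss, p))    # "mere possibility" vetoes it
```

The "possibility of recurrence" argument only forbids the act if the loss weight is effectively infinite; any finite weighting leaves some odds at which the gamble is rational.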

This is also basic game theory: it's fine to say that "never" infringing on others' freedom (in the way you define it, that is) is always the right thing to do, and that keeping to that principle will protect you from recurrence. In practice, you can keep to that principle, not exercise your power over someone else, and then someone with yet more power can still infringe on your freedom and take what you had. You may then think to yourself that someone with yet more power will then do the same to him, but that doesn't help you at all after the fact. Hence, the rational thing is again to exercise power when you think you can get away with it without recurrence, and not worry about others if they are already too powerful for your behaviour to affect them in any way. This is also the way human societies have shaped themselves into what they are now, and is what I think Treet was getting at with his "systems" argument.


Let's examine the world in terms of systems. Systems work because all of their individual mechanisms perform the functions they are supposed to perform. If there is some guiding principle that says the responsibility of each component is to help ensure the grand system works as best as possible, then the system will maintain itself just fine. However, let's say each component works for the sake of itself and the functioning of the system as a whole is just a consequence of each component pursuing its own self-interest. It's not too difficult to derive that when parts of the system acknowledge that they can pursue their best interests in opposition to the system's overall interest, those components will, and the system is not as stable. The continuation of the system is no longer necessary based upon the rules that have been established.

This paragraph makes a number of assumptions, none of which I'm comfortable with. Most importantly, it assumes that the system is worth preserving and perpetuating. You seem to place value on the system based simply on the virtue that it is a system. You respect order. I don't-- at least, not inherently. In fact, looking throughout history, I see very few systems that I would consider worthy of preserving. Order is not inherently a virtue. This is an enormous fallacy on your part.

Game theory again: if people believe the alternative to the current order is a worse order (which is what "chaos" ultimately becomes), then they'll keep preserving it; if not, they'll rebel, and if enough of them think that, they'll succeed (all in line with Hobbesian thinking). All evidence available to us suggests that it is very much human nature for people to arrange themselves into systems, even if they are as primitive as the tribes humanity started out as. The order of one system may not be a virtue in itself - however, it is not being compared to "true freedom", but rather to the alternative systems that'll spring up once that system is gone. A civil war may weaken the state, and even completely destroy it, but then the power will immediately flow to warlords squabbling over the wreckage; each one of their domains is again a system, with its own "order", one that is far more limited in territory than the state was, but is far less constrained in terms of what the warlord can do to the people he commands. (And that's without remembering that given enough time, a new state, or several of them, will again be established, even if by former warlords.)

You say freedom is a good thing and that calling it irrational is rather silly, but you offer no facts or numbers to support that. Defense of freedom is very valuable at times, very detrimental at others. It is up to a cold, dispassionate calculation to discern which. Blind defense of freedom is irrational, and I think I can cite plenty of examples to prove that. You, on the other hand, don't cite examples for your defenses. When freedom detracts from or impedes the better functioning of the larger system, it becomes a vice.

I will give you the benefit of the doubt and assume this was tongue-in-cheek. You've offered no numbers or facts either in your comments. While you say you can cite examples, you don't cite any. You also say that if freedom detracts from a system, it becomes a vice. This is at best not an inherent truth, and at worst an outright unjustifiable opinion.

I think your biggest stumble in this debate is the assumption that I share your belief in society/humanity moving towards something. I don't. At least, not at the expense of individual freedom. Liberty is the end-in-itself; the things it produces (innovation, capitalism, choice, competition, scientific advancement) are merely by-products of something that is already morally justified. There is absolutely nothing to bolster your statement that a system takes precedence over freedom aside from your own personal opinion about what we should be striving for. Again, I note how inherently subjective utilitarianism is. Because your goal is subjective, you'll have a hell of a time convincing everyone else who lives in your "perfect system."

Again, your worldview conflicts with the available anthropological evidence. It's generally accepted by anthropologists that the people in hunter-gatherer tribes had far more liberty than the people in the settled civilisations, and it's only relatively recently that our civilisation advanced to the point where that's no longer the case. Yet, settled agrarian civilisation formed out of hunter-gatherer tribes every single time the conditions allowed for it, and then proceeded to defeat and/or absorb the remaining hunter-gatherers, in the process reducing their liberty. This is literally human nature as observed by millennia of anthropological record.

The same anthropological record also shows that for most people, "Liberty as the end-in-itself" is not enough. Time and time again, they opt to set goals grander than themselves (or, more often, join others' goals that are grander than themselves) at the expense of their liberty. If this ran so counter to human nature, it wouldn't be happening so consistently.

I do agree, though, that Treet's utilitarian idea of a system is subjective, and can never be fully objective for all people, because some proportion of people will always be born with minds going against that. That is why I aim for sustainable, rather than "perfect", so that the order can accept a few fluctuations.

Utilitarianism works so long as people follow the rules that get established and follow the cold calculations even when they don't serve their own interests. If they don't, and don't constantly act in the image of the "greater good", they are not utilitarian at all. So the "corrupted leaders" you describe assuredly use a "greater good" mask for something else. Rather, I would think they fall more in line with the rules of your philosophy. Given that much power, why shouldn't they pursue things out of self-interest? What consequence to themselves could they incur that would make it not in their own interest? In my theory, that person would be forced to take the weight of the suffering he would cause into account, and that would assuredly make "self-interest" obsolete.

I'm glad you raised this point. Here's how I think we elect corrupt leaders.

1) A collectivist system indoctrinates our youth. It tells them to respect authority. It tells them to follow rules that infringe on their natural human behavior. Most importantly, it tells them that orderly systems are more important than individuals.
 
2) These children grow up believing in something greater than themselves. They believe humanity is striving towards a goal. Despite the indoctrination process, no two individuals have the exact same goal in mind. Some believe in an Islamic caliphate. Some become racists, having grown up thinking only in terms of collectives. Others, like yourself, see technological advancement as the ultimate goal. In any case, they put this goal on a pedestal, as their parents, teachers, and priests have told them that "it's important to believe in something greater than yourself."

3) This new generation becomes politically active. Like every generation, it is completely divided. Gradually, they coalesce around various leaders. Some of these leaders are sponge-people who genuinely believe in a "greater good" for their people-- Mao and Hitler, for example. Others are self-interested, but not rational-- Kim Jong Un. In the first case, the goal takes precedence, and carnage ensues. In the second, the chosen leader and his cronies set about ransacking the country, irrationally believing that their actions have no consequences.

4) The people, despite everything, avoid revolution. It is antithetical to everything that's been drilled into their brains. They trust the system. They're glad that the trains run on time. They respect order and authority, even when all evidence points to the undeniable fact that what is happening is against their own interests. When made aware of this, they meekly concede the fact, but maintain that this does not matter, as their own interests aren't important against the will of the collective.

If the people truly acted in their own interests, they would not follow these leaders in the first place. It is ideologies like yours that cause them to blindly support dictators, tyrants, and thieves.

Once again, the final post is mainly ideology not rooted in fact, which I have already addressed elsewhere, such as the unscientific prioritisation of nurture over nature and the ignorance of game theory. I'll also say that the way you have brought Kim Jong-Un into this is wildly off-mark: for one, there is enough information available to make describing him as "not rational" questionable at best. The reason his inclusion is weird, though, is that he is a third-generation inheritor of power, while your example compares him with those who directly seized it through their personal efforts. In fact, Kim Jong-Un is a system succeeding at staying in place and smoothly producing the successor it was always supposed to; Hitler is a failure of a system (though not a complete one, since it still used existing institutions, even if it was to end them), and Mao is the result of the old system completely collapsing, in part through the rebellion you so champion elsewhere.

Tut

  • God-King
  • Paul Thomas Anderson
  • **********
  • Posts: 6690
  • It's all over now, baby blue...
  • Location: Nice try, NSA
Re: Diego Destroys Western Philosophy: The Thread
« Reply #28 on: June 02, 2018, 09:10:36 pm »
I'm about to go out, but I wanted to respond quickly to two things here.

Well, I am not sure how many people here have heard of it, but what Diego initially said ("what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal?") has been most famously rendered in fiction as Ursula K. Le Guin's The Ones Who Walk Away From Omelas, and has in fact been advocated for in (sort of) real life in spite of that. LessWrong, once just a forum like ours that for a few years hovered somewhere up high between genius and madness and produced an enormous volume of texts that may or may not be worth trawling through, also strongly supports utilitarianism and structures their philosophy (that they want to base all sentient AIs on) from it. Given that its founder believes (or believed) that torturing one person for 50 years is worth it if it will prevent all people who'll ever live from getting dust specks in their eyes, it may be a good thing their project seems to have stalled.

So, we have established that some people do believe in this interpretation of utilitarianism. However, it's not very relevant here, as Treet (thankfully) doesn't. For him, "the unforgivable nature of human cruelty" must not be forsaken. Though he sees his "brand of utilitarianism" as an axis which "minimizes human suffering", and LW's Yudkowsky also sought the same (since pleasure was not in the equation above), the reference to "sophistication of pleasures" suggests he would recognise the "sophistication" of suffering as well, and reject such ideas.

However, for me the larger problem with this interpretation of utilitarianism is not just the problem of "sophistication of pleasures (and suffering)", or even whether you can create an objective scale of pleasure and suffering, which is an objection Diego raises. Personally, I am inherently cautious of any set of ideas that intends to pin down all of the potential events and actions in the world to a single axis. To me, the desire for this kind of simplification first and foremost suggests an unwillingness or inability to struggle with the complexity of the world, and so a single principle must be stuck to instead. It's again the thinking that drives people to literalist religions, etc., except that instead of offloading all their anxieties onto a religious screed, they dump them off onto a single axis that'll always tell them what is right. (A variant of this is "effective altruism", which actually happens to be more of a libertarian thing, but still appears rooted in the same thinking.)

The subjective nature of utilitarianism is absolutely relevant to this discussion, even if Treet doesn't adhere to its worst possible tenets. Utilitarianism, as a philosophy, depends more heavily on the individual adherent's preconceived notions than most other philosophies (Marxism, for example) do. I think that's an indisputable fact. Because one's conception of the "greater good" is inherently subjective, the applicability of utilitarian philosophy to various situations must be called into question. My example of Muslim/Christian utilitarians was mostly hypothetical, but again, if you boil the philosophy down to "create the most pleasure and the least pain for the most people," there's nothing inherently contradictory in it with Christian doctrine-- which would explain why Treet finds the ideology so appealing.

I have the same problems as you do with quantifying these things, and it does smack of laziness in the face of a complex world.

I suppose I just shy away from any philosophy that claims to work towards a "greater good." This goes for utilitarianism, Marxism, transhumanism, etc. As soon as a philosophy sets up a goal for itself to work towards, and places that goal above individuals, it becomes very easy for its adherents to justify their actions in the name of that goal. If you as an individual see the "greater good" as above yourself, what right do you have to question the methods used to achieve that greater good? This is the sort of groupthink that causes ethnic cleansing and ISIS. I'm of the opinion that the ends never justify the means-- and utilitarianism is all about a specific end (minimizing suffering/maximizing pleasure). Though I will give it credit in that its "means" are less dangerous than the other two schools of thought I just mentioned.

I think an important point here is that humans are inherently predisposed to work towards a goal. This seems particularly true once a person truly realises their mortality and that both their lifespan, and that of any other human around them, is comparatively short. I believe that it's at that point that many people decide to place their goal above other people, and that goal does not have to be the "greater good" - we know all too many stories of bog-standard exploitation and other horrors whose perpetrators were not motivated by such ideas, and at most used them to justify the aftermath. Slavery is an obvious example. Sure, it was a somewhat popular idea amongst Southern slaveowners that they were "doing good", as it was "better" for a Negro to serve the white man, etc., but that was a post facto justification. I don't think I have actually heard of people explicitly trying to acquire as many slaves as possible for the "greater good" of "helping" them. The same goes for the all-too-many examples of modern slavery. The common thread is that people believed their goals to be more important than other people, whether those goals were selfish or genuinely thought of as "the greater good".

Treet's next post overlaps with what I just wrote in quite a lot of ways. Here's the more interesting part:

As far as not having "the right to question," in no philosophy have I heard that stated as a legitimate point. Though I'm not keen on "transhumanism", I am a humanist/utilitarian, and our worldview centers on challenging and being skeptical of everything and driving the world to be its best. Now, you can counter with "How do you define 'best' for everyone?" and I'll acknowledge that as a reasonable point. I can answer this and your hierarchy-of-pleasures point simultaneously. These questions all go back to the logical foundation of the philosophy. I would define pushing the world to its best as what drives forward knowledge and advancements in medicine, science, mathematics, technology, and even morality, as they help the human race address issues such as poverty, sickness, violence, and world hunger - i.e., the alleviation of human suffering.

Utilitarianism is hard to define, mostly because it should be a dynamic philosophy, constantly shifting with the needs of the people and of the times. Please don't try to look at it all in one lumped-parameter model.

The bolded part is great (though like Diego, I also have concerns about "advancements in morality", even if they are a little different.) However, something crucial is missing: advancement over what timeframe, and for how long? If there's actually a true bedrock to my worldview, something I'll continue to contemplate and which will continue to colour my thoughts on just about any issue even as my other views may change, it's this: how do we ensure advancements we make (or we think we make) are sustainable, and are not lost all-too-soon through sheer overreach?

As in, we know full well that our current civilisation is unsustainable. Whether it's measures like carbon footprint, the "1.4 Earths" thing, or the "Earth Overshoot Day" when humanity blows through its sustainable resources (which was apparently in December in 1987, but is now marked in August), it all illustrates the same point: a lot of the advancements we consider integral to the current civilisation are bound to be sooner or later stripped away simply as we run out of the resources to support many of the things we are used to. To me, contemplating future advancements without first ensuring the current ones are not lost in time is deluded.

I have a similar position on a lot of the futuristic ideas, like the push for automation; when I see headlines like "robots will replace job X for Y millions of people", I always think "and for what number of years? How long are these robots going to be maintained? How many robot brains, joints, etc. can be built before their production eats into the supply of REEs and the like that was supposed to go to solar panels, thus delaying our supposed climate transition even further?" (Not to mention that the mere presence of millions of robots will inherently demand more power and thus make it even more difficult to go carbon neutral/negative.) I want to eventually lay out this line of thought on Quora, but I always feel I need more proof.

I'm going to lump these two together-- yes, humans work towards goals. But there is an immense difference between working towards an internal goal (one you select for yourself) and an external one (a goal you set for humanity). Because everyone values different things, I don't expect their goals to align with mine any more than our different personal goals do. Therefore, I see no value in allowing others to set goals for me, and vice versa, setting goals for others.

Robert Neville

  • God-King
  • Zack Snyder
  • ******
  • Posts: 1868
Re: Diego Destroys Western Philosophy: The Thread
« Reply #29 on: June 03, 2018, 09:55:07 am »
I'm about to go out, but I wanted to respond quickly to two things here.

Well, I am not sure how many people here have heard of it, but what Diego initially said ("what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal?") has been most famously rendered in fiction as Ursula K. Le Guin's The Ones Who Walk Away From Omelas, and has in fact been advocated for in (sort of) real life in spite of that. LessWrong, once just a forum like ours that for a few years hovered somewhere up high between genius and madness and produced an enormous volume of texts that may or may not be worth trawling through, also strongly supports utilitarianism and structures their philosophy (that they want to base all sentient AIs on) from it. Given that its founder believes (or believed) that torturing one person for 50 years is worth it if it will prevent all people who'll ever live from getting dust specks in their eyes, it may be a good thing their project seems to have stalled.

So, we have established that some people do believe in this interpretation of utilitarianism. However, it's not very relevant here, as Treet (thankfully) doesn't. For him, "the unforgivable nature of human cruelty" must not be forsaken. Though he sees his "brand of utilitarianism" as an axis which "minimizes human suffering", and LW's Yudkowsky also sought the same (since pleasure was not in the equation above), the reference to "sophistication of pleasures" suggests he would recognise the "sophistication" of suffering as well, and reject such ideas.

However, for me the larger problem with this interpretation of utilitarianism is not just the problem of "sophistication of pleasures (and suffering)", or even whether you can create an objective scale of pleasure and suffering, which is an objection Diego raises. Personally, I am inherently cautious of any set of ideas that intends to pin down all of the potential events and actions in the world to a single axis. To me, the desire for this kind of simplification first and foremost suggests an unwillingness or inability to struggle with the complexity of the world, and so a single principle must be stuck to instead. It's again the thinking that drives people to literalist religions, etc., except that instead of offloading all their anxieties onto a religious screed, they dump them off onto a single axis that'll always tell them what is right. (A variant of this is "effective altruism", which actually happens to be more of a libertarian thing, but still appears rooted in the same thinking.)

The subjective nature of utilitarianism is absolutely relevant to this discussion, even if Treet doesn't adhere to its worst possible tenets. Utilitarianism, as a philosophy, depends more heavily on the individual adherent's preconceived notions than most other philosophies (Marxism, for example) do. I think that's an indisputable fact. Because one's conception of the "greater good" is inherently subjective, the applicability of utilitarian philosophy to various situations must be called into question. My example of Muslim/Christian utilitarians was mostly hypothetical, but again, if you boil the philosophy down to "create the most pleasure and the least pain for the most people," there's nothing inherently contradictory in it with Christian doctrine-- which would explain why Treet finds the ideology so appealing.

I have the same problems as you do with quantifying these things, and it does smack of laziness in the face of a complex world.

Well, I would again say that it should be very difficult (impossible, really) for the concept of "creating the most pleasure and the least pain for the most people" to exist within a Christian worldview and survive a reckoning with God being, indirectly, the greatest source of pain, through his insistence on letting demons and the Devil eternally torture people in hell as punishment for their crimes. The idea that punishment must involve suffering is purely deontological; utilitarianism is generally aligned with rehabilitative justice, like those famed Norwegian leafy island prisons. I suppose that some fringe Christian branches may reconcile this, like Annihilationism (if sinners just die and do not go anywhere, that's still no extra suffering caused, right?), but I am not sure if Treet ever had much experience with those.

And yes, like I said before, I prefer sustainability as a goal rather than some ideal of perfection calculated around someone's innate biases, precisely because it is much more objective, and it also leaves much more room for error. It's generally a good idea not to do things we cannot realistically reverse (which is again why I consider the environmental and resource degradation as the top issue), and some interpretations of utilitarianism push really hard for irreversible things. Just check out this plan for the "abolition of suffering" by wiping out predators, cats included.

I'm going to lump these two together-- yes, humans work towards goals. But there is an immense difference between working towards an internal goal (one you select for yourself) and an external one (a goal you set for humanity). Because everyone values different things, I don't expect their goals to align with mine any more than our different personal goals do. Therefore, I see no value in allowing others to set goals for me, and vice versa, setting goals for others.

Well, I would still say the main dividing line is still the degree to which someone is willing to sacrifice others for the sake of their goal, regardless of whether it's an internal or external one.

Tut

  • God-King
  • Paul Thomas Anderson
  • **********
  • Posts: 6690
  • It's all over now, baby blue...
  • Location: Nice try, NSA
Re: Diego Destroys Western Philosophy: The Thread
« Reply #30 on: June 03, 2018, 08:29:36 pm »
On one hand, this is logical. On the other hand, I think you (and Treet too, for that matter) overestimate the degree to which a moral system can "work against the human mind" in the first place. If a system is completely antithetical to the human mind, that mind simply wouldn't adopt it in the first place; indeed, a hypothetical philosophy that is antithetical to all human minds couldn't even be invented by any human mind. Hence, any philosophy that exists, and is adopted by people, must work with at least some part of the human mind (or rather, works with at least some neurological arrangements of the human mind). "Brainwashing" is itself a loaded propaganda term; moreover, it must latch onto at least some elements of human instinct to work; if it cannot latch onto any instinct, it'll fail and never be absorbed. The statement "humans should act according to their nature" becomes a "set of all sets" kind of dilemma, because adopting some form of a normative philosophy is itself part of that very human nature for a very large number of humans, as we have seen time, and time, and time again.

True. But as I'm sure you are aware, there are degrees of difference here. A good way of examining this is to determine the level of coercion necessary to implement a given system. Now, like Treet's quantification of suffering, this isn't fully scientific, but I believe it's safe to say that (for example) capitalism takes less force to implement than communism. This doesn't inherently make it better (though from my perspective it does), but it's true. Capitalism, mercantilism, and various other systems have occurred fairly organically throughout human history, especially in comparison to "ideal societies" like communes and fascist dictatorships. It's no coincidence that the philosophers identified as the "Progenitors of Capitalism" were merely individuals describing the system as it developed, while the progenitors of utopian systems described their ideal societies of the future.

Now, whether or not you accept that "less force = a better system" is true is neither here nor there. The fact is that the degree to which a system deviates from human nature can be measured by the amount of force necessary to implement and maintain it. I'm sure you'll be able to think of a solid exception to this rule (and you'll probably take issue with how I define "force"), but the principle remains generally true. While the mind may not be able to adopt something antithetical to its nature (a point I would argue, but it's moot anyway), different systems deviate from that nature in different ways and to different magnitudes.

The human mind is responsible for human nature, so let's discuss human nature itself. Humans are many things-- empathetic, cruel, humorous, vindictive, neurotic-- but our defining trait is our pure selfishness. Unless our minds have been bleached out with some collectivist chemical (religion, military hierarchies, veganism), we are inherently self-interested. It is this trait, I would argue, that has given us the advancements you described as the ultimate goal of your philosophy. Technology, literature, engineering, medicine, and so on-- they all stem primarily from our selfish desire to succeed, improve our own lives, and assert our supremacy. And in our messy race to the top, we've created the most advanced civilization the world has ever seen. I would expect nothing less from a structure as wonderful as the human mind. Ironically, being concerned for the well-being of others (a central tenet of utilitarianism) comes into direct conflict with this rational selfishness. But then that's a discussion for another time.


The bolded part happens to be a purely ideological interpretation of the data, rather than an accepted conclusion. If a philosophy needs to "fit human nature", it should be human nature as scientifically observed, not a singular conception of it. The observational evidence is clear: people are often subconsciously primed to sacrifice themselves, and it may be a genetic trait. (Which also does not imply it would be "bred out", or the like, since a lot of traits are "riders" that crop up only alongside other traits when particular genes combine, rather than being linked to a specific gene that can be cleanly eliminated while leaving everything else unaffected.) Again: if something is a "fundamental" part of human nature, a mere man-made ideology cannot make it go away.

I will not accept the claim that self-sacrifice is genetically innate until more evidence is compiled for it. It wouldn't completely surprise me (in fact, it would explain quite a bit about the world), but I won't respond to that based on one study. Still, I find the implication here interesting-- the people risking their lives to save others from a suicide bomber may be driven by the same self-sacrificial gene that motivated the suicide bomber to blow himself up in the first place. Simply another example of the destructiveness of selflessness.

A lot of your responses here center around the biased ideology I'm approaching this from. That's fair. And I don't deny it. But you have to understand that much of this was written as a retort to Treet's claim that there is "no rational defense for freedom" unless it serves a purpose. Again, whether or not he agrees with me is irrelevant-- it simply proves even further that morality is subjective, and therefore serves to debase his claim that morals will someday be scientifically quantified.

Most importantly, someone who is both logical and selfish (the two great traits of the human mind) will infringe on the rights of others as little as possible. They will understand that any action they take could be reversed and turned on them-- if they can censor speech they disagree with, those they disagree with can censor them. If they can take property from others, others can take property from them. Rational egoists know that actions set precedents, and will avoid harming others-- not out of some respect for the bogus "golden rule," but out of their own innate self-interest.

This idea assumes that every person has an equal chance of both taking an action towards someone and being affected by an identical action towards themselves. (Later on, you say that you don't imply an "equal" possibility in response to Treet, but I think that's just you not fully understanding the implications.) In practice, it doesn't work that way. Rational egoists who have nothing to say for themselves have no logical reason not to censor the speech they disagree with, because all they need is for the speech they do agree with to be broadcast ever louder. Those who have little property now, and little chance of obtaining more in the future normally, have little to lose by taking it from others - even if someone takes it away down the line, they'll at worst be back where they started, and no-one can take away their subjectively good memories of the experiences they already had with the things they took.

On the surface, there is a logic in committing destructive acts (stealing, cheating, looting, etc) when the pleasure they can provide outweighs any potential punishment. There's a rationality to stealing an expensive car and going for a joyride, so long as the individual is convinced that they will never get the opportunity again. This applies to rape as well. However, we can then infer from their actions that instant gratification and short-term pleasure are the only goals these individuals have set for themselves. I do not see that goal as logical, and I doubt you do either-- you merely acknowledge that certain people who commit antisocial acts do. We must try to understand what is actually in the individual's best interests, which of course is not always so apparent. A street thug might think that stealing a Ferrari is the logical thing to do given the pleasure he will derive from it, but will his ensuing time spent in the justice system be better-spent than if he had learned a trade, kicked his drug habits, gone to community college, or applied for a job somewhere?

This is why goals are important, but only insofar as they apply to the individual. An individual with a personal goal has something to work towards. Now, you'll likely say that this is often a false hope depending on the level of social mobility in the given society. This applies to some people-- but I'd argue that the actual number is far lower than our welfare rates would have you believe. There's a psychological term called pessimism bias that I think applies here-- or, more generally, just the belief that whatever is currently happening to you will always keep happening, whether it's good or bad.

The man in your scenarios has every right to harm himself regardless of the environment. His wife and children are not owed his labor. They are individuals as well, not objects for him to look after. This is why I prefer egoism-- it is immutable from situation to situation.

And it is also entirely in the wife's and children's self-interest to make sure they are owed his labour, and that the man's right to harm himself is denied to him. In fact, it would actually be altruistic of them to allow the man to harm himself at their own expense, and it would be both rational and egotistic to deny that opportunity to him, especially if they have no intent to ever harm themselves in this manner. Once the society has significantly more "wives" and "children" than "self-harming men" (i.e. practically always, outside of times of great upheaval like civil wars), then the right to self-harm gets naturally curbed, in full accordance with human nature.

So, do you see it now? Your philosophy is the one that naturally destroys itself.

You are redefining altruism from a perspective different from the norm. While that's fair, I must call attention to the fact that altruism-- which is concerned with helping others-- abhors suffering, and would therefore not allow members of a family to stand by while one of their own self-destructs. Nevertheless, I don't think that's relevant. All that matters is whether institutional force is applied to prevent the man from engaging in these habits.

I also find this metaphor inapplicable to what we're discussing here. Marriage is a contract that two individuals enter into (ideally) willingly. This is why adultery laws have always been complex for me as a libertarian-- on the one hand, individuals can do as they choose with their bodies. But on the other hand, committing adultery represents the violation of a contract, and as I'm sure you know, libertarians hold contracts in the highest regard. The fundamentals here are too specific to treat it as a microcosm of society as a whole.
