
Author Topic: Diego Destroys Western Philosophy: The Thread  (Read 463 times)

Robert Neville

  • God-King
  • Zack Snyder
  • ******
  • Posts: 1806
Re: Diego Destroys Western Philosophy: The Thread
« Reply #20 on: June 02, 2018, 08:24:41 pm »
Well, I did promise to reply to this, didn't I? Wonder if addressing the arguments from the beginning is still feasible. Anyway:

Utilitarianism always reminds me of Rousseau's "Dictatorship of the Majority." If the best course of action is the one that maximizes well-being for the largest number of people, what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal? I also dislike that it tries to consider all viewpoints equally, and that people use it as a justification for veganism.

I don't think that at any point since utilitarianism was first formulated has it held ideals such as "all viewpoints are equal" or "all pains and pleasures are treated equally". My brand of utilitarianism would be an axis which "minimizes human suffering". To truly understand utilitarian philosophy, you first have to make a set of assumptions the whole idea relies upon, but that is true of any worldview, be it theistic or secular. My main assumption is that one will be rigorous, honest, and logical when weighing the benefits and suffering of actions.

I would say most utilitarianism doesn't overlook the unforgivable nature of human cruelty. To think of it as "Ten people raped one person. Ten people got pleasure. One suffered. The pleasure outweighs the suffering. The action was right." is a very narrow and incorrect view of the subject. (I'm not saying that's necessarily your view; it's just a common misconception.) The "sophistication of pleasures" is an idea typically present in utilitarian philosophy, so I adamantly disagree with "all viewpoints are equal" when not even all pleasures are equal. When deciding a course of action, the suffering of other humans must be taken into account first, and that suffering must be given much more weight than the "simple life-pleasures" that others may gain.

I had similar views on the subject as you do until I read into it more. Once I truly understood the power and benefit it has as a worldview, I "slowly" began adopting it and started to peel away the "higher ethics" layers I had built around myself over the past couple of years. So please, let us discuss.

Well, I am not sure how many people here have heard of it, but what Diego initially said ("what's to stop us from treating a very small minority with terrible injustice in order to achieve that goal?") has been most famously rendered in fiction as Ursula K. Le Guin's The Ones Who Walk Away from Omelas, and has in fact been advocated for in (sort of) real life in spite of that. LessWrong, once just a forum like ours that for a few years hovered somewhere up high between genius and madness and produced an enormous volume of texts that may or may not be worth trawling through, also strongly supports utilitarianism and structures its philosophy (on which it wants to base all sentient AIs) around it. Given that its founder believes (or believed) that torturing one person for 50 years is worth it if it will prevent all people who'll ever live from getting dust specks in their eyes, it may be a good thing their project seems to have stalled.

So, we have established that some people do believe in this interpretation of utilitarianism. However, it's not very relevant here, as Treet (thankfully) doesn't. For him, "the unforgivable nature of human cruelty" must not be forsaken. Though he sees his "brand of utilitarianism" as "an axis which 'minimizes human suffering'", and LW's Yudkowsky also sought the same (since pleasure was not in the equation above), the reference to "sophistication of pleasures" suggests he would recognise the "sophistication" of suffering as well, and reject such ideas.

However, for me the larger problem with this interpretation of utilitarianism is not just the "sophistication of pleasures (and suffering)", or even whether you can create an objective scale of pleasure and suffering, which is an objection Diego raises. Personally, I am inherently cautious of any set of ideas that intends to pin down all of the potential events and actions in the world to a single axis. To me, the desire for this kind of simplification first and foremost suggests an unwillingness or inability to struggle with the complexity of the world, so a single principle must be stuck to instead. It's the same thinking that drives people to literalist religions, etc., except that instead of offloading all their anxieties onto a religious screed, they dump them onto a single axis that'll always tell them what is right. (A variant of this is "effective altruism", which actually happens to be more of a libertarian thing, but still appears rooted in the same thinking.) Now:

I suppose I just shy away from any philosophy that claims to work towards a "greater good." This goes for utilitarianism, Marxism, transhumanism, etc. As soon as a philosophy sets up a goal for itself to work towards, and places that goal above individuals, it becomes very easy for its adherents to justify their actions in the name of that goal. If you as an individual see the "greater good" as above yourself, what right do you have to question the methods used to achieve that greater good? This is the sort of groupthink that causes ethnic cleansing and ISIS. I'm of the opinion that the ends never justify the means-- and utilitarianism is all about a specific end (minimizing suffering/maximizing pleasure). Though I will give it credit in that its "means" are less dangerous than the other two schools of thought I just mentioned.

I think an important point here is that humans are inherently predisposed to work towards a goal. This seems particularly true once a person truly realises their own mortality, and that both their lifespan and that of any other human around them is comparatively short. I believe it's at that point that many people decide to place their goal above other people, and that goal does not have to be the "greater good" - we know all too many stories of bog-standard exploitation and other horrors whose perpetrators were not motivated by such ideas, and at most used them to justify the aftermath. Slavery is an obvious example. Sure, it was a somewhat popular idea amongst Southern slaveowners that they were "doing good", as it was "better" for a Negro to serve the white man, etc., but that was a post facto justification. I don't think I have actually heard of people explicitly trying to acquire as many slaves as possible for the "greater good" of "helping" them. The same goes for the all-too-many examples of modern slavery. The common thread is that people believed their goals to be more important than other people, whether those goals were selfish or genuinely thought of as "the greater good".

Treet's next post overlaps with what I just wrote in quite a lot of ways. Here's the more interesting part:

As far as not having "the right to question" goes, in no philosophy have I heard that put forward as a legitimate point. Though I'm not keen on "transhumanism", I am a humanist/utilitarian, and our worldview centers on challenging and being skeptical of everything and driving the world to be its best. Now, you can counter with "How do you define 'best' for everyone?", and I'll acknowledge that as a reasonable point. I can answer this and your hierarchy-of-pleasures point simultaneously. These questions all go back to the logical foundation of the philosophy. I would define pushing the world to its best as whatever drives forward knowledge and advancements in medicine, science, mathematics, technology, and even morality, as they help the human race address issues such as poverty, sickness, violence, and world hunger, aka the alleviation of human suffering.

Utilitarianism is hard to define, mostly because it should be a dynamic philosophy, constantly shifting with the needs of the people and of the times. Please don't try to look at it all in one lumped-parameter model.

The bolded part is great (though like Diego, I also have concerns about "advancements in morality", even if mine are a little different). However, something crucial is missing: advancement over what timeframe, and for how long? If there's actually a true bedrock to my worldview, something I'll continue to contemplate and which will continue to colour my thoughts on just about any issue even as my other views change, it's this: how do we ensure the advancements we make (or think we make) are sustainable, and are not lost all too soon through sheer overreach?

As in, we know full well that our current civilisation is unsustainable. Whether it's measures like the carbon footprint, the "1.4 Earths" figure, or the "Earth Overshoot Day" on which humanity blows through its sustainable resources for the year (apparently in December back in 1987, but now marked in August), it all illustrates the same point: a lot of the advancements we consider integral to the current civilisation are bound to be sooner or later stripped away simply as we run out of resources to support many of the things we are used to. To me, contemplating future advancements without first ensuring the current ones are not lost in time is deluded.

I have a similar position on a lot of futuristic ideas, like the push for automation; when I see headlines like "robots will replace job X for Y millions of people", I always think: "And for what number of years? How long are these robots going to be maintained? How many robot brains, joints, etc. can be built before their production eats into the supply of REEs and the like that was supposed to go to solar panels, thus delaying our supposed climate transition even further?" (Not to mention that the mere presence of millions of robots will inherently demand more power and thus make it even more difficult to go carbon neutral/negative.) I want to eventually lay out this line of thought on Quora, but I always feel I need more proof.

Then, Diego takes a particular issue with the "dynamic philosophy" part:

Oh, I absolutely understand that the doctrines of utilitarianism are not codified or set in stone. As with egoism (my personal favorite school of thought), there are as many takes on it as there are people who adhere to it. Still, I think those differences are amplified in utilitarianism, because it all comes down to your definition of being "rigorous, honest, and logical" in determining the greater good. Because utilitarianism is inherently concerned with the well-being of others, it invites its adherents to make normative statements about how others should live their lives. I've always found this to conflict heavily with my own worldview.

The "normative statements" part happens to be my problem with the mainly libertarian-supported "effective altruism". It's the movement mainly centered around Zuckerberg's friend Dustin Moskovitz (that one guy in The Social Network who was basically irrelevant to the plot), and it seems to have other libertarian-ish backers, like Dominic Cummings (the strategist who ran the Brexit campaign because he thought the EU was too protectionist, and didn't foresee that most Leave voters wanted more of it, not less). The associated 80,000 Hours project attempts to assess the impact of all career paths so that people can choose the most helpful one and then "earn to give". (I think the main reason it's libertarian-endorsed is that its argument not only pushes everyone further towards the conventional career "rat race" over "inefficient" political activism, etc., but also implicitly suggests such "earning to give" is superior to taxation and government programs: the aforementioned Brexit strategist actually tweeted at the EU to cancel its aid programme for Africa and give all that money to Moskovitz, because his think tank would know better what to do with it.)

The key problem there (besides the libertarianism-related arguments) is that these suggestions are only valid so long as they remain narrow, fringe ideas and the world stays comparatively static around them. If a large number of people begin to follow the 80,000 Hours career guidelines to the letter (especially if they switch from their current, non-ideal careers to the suggested ones en masse), it'll lead to an oversupply in those fields, meaning that a) average wages will go down and your ability to "earn to give" will plummet correspondingly, and b) you have an ever-greater chance of staying unemployed or underemployed on that pathway, especially if you were midway through training for it when it stopped being "ideal", meaning that both effective-altruist aims (the good of the "earning to give" and of the job itself) are thrown out.

In short, their current model seems to assume too much on imperfect information, and once it stops being comparatively fringe, it must either be constantly adjusted (screwing over many people with sunk costs in once-advised paths), or it must simply acknowledge that for a lot of professions there's a minimum number of people necessary to maintain society as it is, and a maximum number beyond which any extra presence is counterproductive. So far, judging by their Twitter, they seem more concerned with (inefficiently) advocating voting reform in the US than with any such modifications.

Now, back to the original argument(s).

With my ISIS reference, I was acknowledging that a Muslim utilitarian and a Christian utilitarian would have fundamentally different approaches to the philosophy. Both would see their religions as a way for society to achieve a greater good, because only through their respective religions could people attain salvation. Therefore, others would have to convert in order for pleasure to be maximized. But we don't have to go off on a religion tangent here-- generally speaking, because utilitarianism is so concerned with the lives of others, it implicitly gives its adherents a free pass to evangelize their own lifestyles.

I might recall your claim that Paul Ryan cannot be both an Objectivist and a (believing) Christian, and say it applies here as well. These two monotheistic religions are centered on the idea that God/Allah always knows best, and the most you can personally do is follow their Commandments/surahs to the letter. I mean, utilitarian logic states that if there's a way to launch all people into infinite pleasure (which is what heaven is supposed to be, or is at least the closest thing to it), you definitely must do it, and you definitely shouldn't condemn anyone to infinite suffering (hell). People's good actions and transgressions are not relevant; saints and sinners feel the same pleasure and the same pain, and the idea that sins must always be punished, even when the punishment is eternal hellfire, is clearly deontological. Hence, if God/Allah is clearly not a utilitarian (or else he would have just sent everyone to heaven/never cast anyone out of it), how can you follow him, placing his deontological will above your own, yet claim to be one?

An example: If I were to become a utilitarian, I might have to support a ban on marijuana. I think its overall impact on people is decidedly negative, and in order to minimize that negative for everyone, I would be morally obligated to prevent them from consuming it in order to maximize pleasure. It could even be argued that drug use in general (alcohol included) is antithetical to your goals of advancing medicine, science, mathematics, etc, as it wastes labor and destroys lives. But I would never support banning these substances. Just as I don't recognize the rights of others to ban things for me, I don't think I have any right to ban things for them.

What this analysis misses is that your decision to impose this ban and enforce it sooner or later morphs into an implicit decision to divert resources away from "advancing medicine, science, mathematics" in order to maintain its enforcement (both by paying officers to bust pot dealers, and by keeping people in jail for pot offences instead of letting them contribute to society through their previously held careers), which is a waste of labour in itself. If the waste resulting from enforcement of the marijuana ban is greater than the benefits said ban brings, it is entirely logical, especially for sensible utilitarians, to get rid of it.

In fact, I believe that no matter what you might think about rights and such, most people who have voted to legalise marijuana in every successful referendum on the subject did so because they thought much like I do. Otherwise, they would have supported legalising hard drugs and the like as well, since that too is an argument about freedom. However, they don't, because to them, the enforcement of bans on hard drugs brings more benefits than costs, while the enforcement of the ban on pot does not. Freedom barely enters into it.

