comment by sullyj3
sullyj3  ·  3212 days ago  ·  link  ·  parent  ·  post: The Repugnant Conclusion

It's not a difficult problem, or even a strong knock against utilitarianism. I feel like most criticisms of utilitarianism can be countered with "that's only a problem with that poorly thought-out definition X of utility you're using - here's a better definition Y!" Worried about The Repugnant Conclusion? Maybe utility shouldn't sum linearly with the number of people you have. Maybe we should use something like Average Utilitarianism instead!

I feel like the basic notion of utilitarianism should be uncontroversial. Assume there's some poorly specified but, in principle, empirically measurable parameter whose increase would be a Good Thing. Fumblingly try to increase it. The controversial part is deciding which parameter is the Most Good Thing.

There are clear arguments against using "the total amount of pleasure in the universe" as a definition of utility, such as the Repugnant Conclusion and similar tiling problems, things like wireheading (which some people are perfectly happy with, incidentally), and combinations of the two (rats on heroin everywhere). There are many, many alternative suggestions, each with their own benefits, each with their own fatal flaws.

That's what I see as the real problem with utilitarianism as a guide for making decisions: it's underspecified. "Maximise Utility" is the obvious part (even if it took us several thousand years to get there). Now we need to decide which number we want to Go Up. Anyway, it's a start at least.

aeromill  ·  3210 days ago  ·  link  ·  

I completely agree that we should be using Average Utilitarianism as well. The only downsides I can think of are only "downsides" because they are counterintuitive, but intuition doesn't define morality. Here's an example: a population of 100 people with 100 happiness each, or a population of 1,000,000 with 99 happiness each. Intuition tells us that the larger population with almost the same happiness is better, when in reality population size is only important insofar as it adds to happiness.
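
To make the arithmetic concrete, here's a minimal sketch of that comparison (Python purely for illustration; the helper functions and world names are my own framing, the numbers are the ones above):

    # The two hypothetical worlds from the example above
    small_happy_world = [100.0] * 100              # 100 people, 100 happiness each
    huge_slightly_less_happy = [99.0] * 1_000_000  # a million people, 99 happiness each

    def total_utility(welfares):
        """Total view: sum happiness across everyone."""
        return sum(welfares)

    def average_utility(welfares):
        """Average view: per-person happiness, blind to population size."""
        return sum(welfares) / len(welfares)

    # Total utilitarianism favours the huge population (99,000,000 vs 10,000)...
    print(total_utility(huge_slightly_less_happy) > total_utility(small_happy_world))      # True
    # ...while average utilitarianism favours the 100 very happy people (100 vs 99).
    print(average_utility(small_happy_world) > average_utility(huge_slightly_less_happy))  # True

The intuitive pull toward the bigger world comes entirely from the total view; the average view discards it.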

I also agree with your point that utility or happiness isn't well specified. Personally, I think an answer will arise as we learn more about the brain through neuroscience, so that we can find an objective, quantifiable measure that pleasure, joy, ecstasy, etc. all contribute to.

Edit: I forgot about an issue with average utilitarianism. See below.

TheSkeward  ·  3210 days ago  ·  link  ·  

So, let me ask you this. Say we have two completely separate areas - for the sake of argument, we'll have the Earth and a separate planet in our solar system called Earth Mk II. For the purposes of this thought experiment, the two planets are totally hidden from each other in such a way that neither will ever discover the other, and nothing that happens on one will ever affect the other.

Earth and Earth Mk II are pretty similar places, with equal populations. The big difference is that Earth has 90 units of happiness averaged throughout the population, and Earth Mk II has 98. Now, even though they're completely separated, isn't the correct move in average utilitarianism to destroy Earth, so that the solar system has a higher average happiness?

Further, assuming we can do this without affecting their happiness, isn't the correct play for average utilitarianism simply to find the happiest being in the universe, and kill everything else? That conclusion seems a little repugnant, too.
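
Run in numbers, that's exactly what a naive average view recommends (a rough sketch; the planet population of 1,000 and the 99.9-happiness individual are arbitrary assumptions, only the averages of 90 and 98 come from the thought experiment):

    # Two isolated planets with equal populations but different average happiness
    earth = [90.0] * 1_000      # average happiness 90
    earth_mk2 = [98.0] * 1_000  # average happiness 98

    def average_utility(welfares):
        return sum(welfares) / len(welfares)

    print(average_utility(earth + earth_mk2))  # 94.0 with both planets intact
    print(average_utility(earth_mk2))          # 98.0 if Earth is destroyed

    # Taken to the limit: keep only the single happiest being in the universe.
    universe = earth + earth_mk2 + [99.9]
    print(average_utility([max(universe)]))    # 99.9, the "best" outcome on this view

Each step raises the number being maximised, which is exactly the repugnance being pointed at.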

aeromill  ·  3209 days ago  ·  link  ·  

You know, I don't know why I forgot this. I knew I was forgetting a reason Average Utilitarianism goes wrong when utility alone is the goal.

What do you think of side-constraint utilitarianism, where another value can be selected in conjunction with utility?

TheSkeward  ·  3209 days ago  ·  link  ·  

Sure, I like it! Of course, the caveat is that, off the top of my head, I can't think of a side constraint that would cover all cases. I've also heard it called two-level utilitarianism, where you make a judgment call about whether total or average utilitarianism matters more on a situation-by-situation basis. My post below about "Utilitarianism But" pretty much sums up my view.

sullyj3  ·  3209 days ago  ·  link  ·  

I'll repost a comment I made on a different blog post:

"I feel like that’s only unintuitive if you’re already coming from a framework where greater net utility is desirable, like hedonistic utilitarianism. The only problem I have with the 100 people is the lack of diversity."