This was one of the best reads I've had recently.
I'm of the mentality that Utilitarianism is the closest thing to an ethical system that actually serves the people who devise it. And what more could we want out of an ethical system than that? That being said, the issues that come with population ethics are fascinating because they are an extension of Utilitarianism. Without going into much more detail, what do you all think?
This is a fun read, in the sense that paradoxes and "if A then B then nonsense" ideas are fun (Maximin e.g.). Personally, I often find myself arguing from a utilitarian, rationalist point of view, but I actually dislike utilitarianism as anything but a pragmatic approach to solving certain problems. As a moral system, it doesn't scale in ways that apply to human nature, and it raises more questions than it answers. For the record, I enjoyed the reasoning present in 2.1.2. And I tend to find that common sense carves right through almost every moral conundrum there is. Badged because it's the sort of thing I want to be presented with on a Friday night. PS: wasoxygen I've been sending a lot of long crap at you lately, I feel. Instead of apologizing, I will forge boldly ahead.
I suspect your definition of happiness may be different from mine. Many people throughout history have valued things other than their own direct happiness, such as the happiness of their children and relatives, their potential inclusion in the afterlife, an ascetic lifestyle, a private set of ethics, hardships for the sake of personal growth, etc.
The means by which people seek happiness vary, sure. But all the things you listed are just means to the end of happiness for the individual. This sounds "selfish" in nature, which it is, but it's a different kind of selfishness from the egocentric one we commonly refer to.
> I disagree, and a Buddhist might be offended.

Buddhists seek to eliminate pain, which is Utilitarian, as Utilitarianism seeks to maximize happiness and minimize pain.

> I also do not think that maximizing utility and maximizing happiness are the same thing.

Utility is actually happiness by definition: "Utility is defined in various ways, including as pleasure, economic well-being and the lack of suffering." I'm not describing hedonism, as I'm not solely referring to bodily pleasures as the good.
I think "utility" would be better defined as "applicability as a means towards an end".
> Utility is defined in various ways, including as pleasure, economic well-being and the lack of suffering.

So "utility" is both "happiness" and "something that's useful towards the end of happiness". In other words, it's both the end and the means towards it at the same time? That makes no sense. It's like saying the act of driving a car is both "getting someplace" and the means of getting there. Not only that, but utility is also various things that aren't at all related to how the word is commonly understood. This is like me defining a strawberry as something that could be an apple, an orange, or perhaps a coconut. To be fair, the quoted definition has a common theme: the various definitions are all positive things. But on the other hand, my example is just as good, because the various definitions are all delicious fruits!
> So "utility" is both "happiness" and "something that's useful towards the end of happiness".

No. Utility is the usefulness in achieving the end, and the end is happiness. Achieving the end well means aggregating a lot of happiness; therefore high utility is high happiness. For that reason, philosophers equate utility with happiness. It's a definition specific to philosophy. Hopefully that clears things up.
You're still equating it with both the end and the means towards it. In other words, you're still not making sense.
Here's what you originally said:

> utility, especially for utilitarians, is not usefulness, it's happiness. It's called utility because it's useful towards the end of happiness

But now you're saying... something different that I can't be bothered to parse right now. I give up.
It's not a difficult problem, or even a strong knock against utilitarianism.
I feel like most criticisms of utilitarianism can be countered with "that's only a problem with that poorly thought out definition X of utility that you're using - here's a better definition Y!" Worried about The Repugnant Conclusion? Maybe utility shouldn't sum linearly with the number of people you have. Maybe we should use something like Average Utilitarianism as an approach to utility instead! I feel like the basic notion of utilitarianism should be uncontroversial. Assume there's some poorly specified but theoretically empirically measurable parameter that we can increase, that will be a Good Thing.
Fumblingly try to increase it.
The controversial part is which parameter is the Most Good Thing. There are clear arguments against using "the total amount of pleasure in the universe" as a definition for utility, such as the Repugnant Conclusion and similar tiling problems, and also things like wireheading (which some people are perfectly happy with, incidentally), and combinations of the two (rats on heroin everywhere). There are many, many alternative suggestions, each with their own benefits, each with their own fatal flaws. That's what I see as the real problem with utilitarianism as a guide for making decisions: it's underspecified. "Maximise Utility" is the obvious part (even if it took us several thousand years to get there). Now we need to decide which number we want to Go Up. Anyway, it's a start at least.
I completely agree that we should be using Average Utilitarianism as well. The only downsides I can think of are only "downsides" because they're counterintuitive, but intuition doesn't define morality. Here's an example: a population of 100 people with 100 happiness each versus a population of 1,000,000 with 99 happiness each. Intuition tells us that the larger population with almost the same happiness is better, when in reality population size is only important insofar as it adds to happiness. I also agree with your point that utility, or happiness, isn't well specified. Personally, I think an answer will arise as we learn more about the brain through neuroscience, so that we can find an objective, quantifiable measure that pleasure, joy, ecstasy, etc. all add to. Edit: I forgot about an issue with average utilitarianism. See below.
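Just to make that comparison concrete, here's a quick Python sketch using the made-up numbers from the example above. The functions and the happiness figures are illustrative placeholders, not a real welfare measure:

```python
# Toy comparison of total vs. average utilitarianism, using the
# hypothetical numbers from the comment above: 100 people at
# happiness 100 vs. 1,000,000 people at happiness 99.

def total_utility(population_size, happiness_per_person):
    """Total view: sum happiness across everyone."""
    return population_size * happiness_per_person

def average_utility(population_size, happiness_per_person):
    """Average view: happiness per person."""
    return total_utility(population_size, happiness_per_person) / population_size

small = (100, 100)        # 100 people, happiness 100 each
large = (1_000_000, 99)   # a million people, happiness 99 each

# Total utilitarianism prefers the huge, slightly-less-happy population...
assert total_utility(*large) > total_utility(*small)      # 99,000,000 vs 10,000
# ...while average utilitarianism prefers the small, happier one.
assert average_utility(*small) > average_utility(*large)  # 100.0 vs 99.0
```

The point of the sketch is just that the two views rank the same pair of populations in opposite orders, which is all the intuition pump needs.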
So, let me ask you this. Say we have two completely separate areas. For the sake of argument, we'll have the Earth and a separate planet in our solar system called Earth Mk II. For the purposes of this thought experiment, the two planets are totally hidden from each other, in such a way that neither will ever discover the other, and nothing that happens on one will ever affect the other. Earth and Earth Mk II are pretty similar places, with equal populations. The big difference is that Earth has 90 units of happiness averaged throughout the population, and Earth Mk II has 98. Now, even though they're completely separated, isn't the correct move in average utilitarianism to destroy Earth, so that the solar system has a higher average happiness? Further, assuming we can do this without affecting their happiness, isn't the correct play for average utilitarianism simply to find the happiest being in the universe and kill everything else? That conclusion seems a little repugnant, too.
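The arithmetic behind the two-planet objection is simple enough to sketch in Python. The populations and happiness averages are the hypothetical numbers from the thought experiment (the equal population size is an assumed placeholder):

```python
# Sketch of the two-planet objection to average utilitarianism.

def combined_average(groups):
    """Population-weighted average happiness across disjoint groups.

    Each group is a (population, average_happiness) pair."""
    total_happiness = sum(n * h for n, h in groups)
    total_people = sum(n for n, _ in groups)
    return total_happiness / total_people

earth = (7_000_000_000, 90)      # assumed population, average happiness 90
earth_mk2 = (7_000_000_000, 98)  # equal population, average happiness 98

before = combined_average([earth, earth_mk2])  # 94.0
after = combined_average([earth_mk2])          # 98.0

# "Destroying" the less-happy planet raises the average, even though an
# enormous amount of positive welfare is wiped out.
assert after > before
```

With equal populations the combined average is just the midpoint, 94, so deleting Earth bumps the metric to 98; strictly averaging rewards the deletion.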
You know, I don't know why I forgot this. I knew I was forgetting why Average Utilitarianism was bad when simple utility is the goal. What do you think of side constraint utilitarianism where another value can be selected in conjunction with utility?
Sure, I like it! Of course, the caveat is that off the top of my head, I can't think of a side constraint that would cover all cases. I've also heard it called two-level utilitarianism, where you make a judgment call over whether total or average utilitarianism is more important on a situation-by-situation basis. My post below about Utilitarianism But pretty much sums up my view.
I'll repost a comment I made on a different blog post: "I feel like that’s only unintuitive if you’re already coming from a framework where greater net utility is desirable, like hedonistic utilitarianism. The only problem I have with the 100 people is the lack of diversity."
I'd have to agree the most with 2.8. Surely a life worth living is one where it would be preferable to create an additional person who lives such a life, all else being equal, and therefore there is a great benefit to having very many people living lives worth living. The problem seems to be that the idea of a life only barely worth living is considered to be horrible, whereas surely if it would be horrible, it isn't worth living? I think this mostly arises from the difference between a life worth continuing and a life worth creating. The accepted view of suicide and euthanasia (especially in the US) seems to be that only the very worst, most painful lives should possibly be ended. Yet if you wouldn't have a child if it was destined to live a life in population Z, then surely those lives are not worth creating?
I agree completely. The addition of a life worth living is exactly that: worth it. But then you're back to the idea of Average Utilitarianism. Do you want a bunch of low-positive-happiness citizens in a large population, or a small population with high average happiness? The only issue pointed out is that you can have a population of 1 with slightly higher happiness than a population of 5 million with slightly lower happiness each. So what? In the end, happiness is only important because it's what any individual wants. The whole justification for Utilitarianism is built on the principle that happiness is the be-all and end-all for every individual. That being said, the counterintuitive nature of this complaint means nothing, especially since intuition counts for very little in regards to logic.
I've seen this in many other instances. It's like the Great Ghost of Hubski will share submissions every now and then.
This example was used to show the shortcomings of total utilitarianism in that simply adding happiness can be counterintuitive (as Z suggests with lives barely worth living). As for who it matters to? You're actually onto a very recent development in philosophy that counters total utilitarianism flat out without simply saying "it's counterintuitive." I could try and explain it but this video does an excellent job.
The video was fascinating! It gets my mind thinking in such interesting ways. And double thanks for understanding exactly what my comment was getting at. I've not had much of a focus on philosophy as of yet, but this has intrigued me deeply. Thanks, again!
Oh, I love the repugnant conclusion! I just had a discussion about this with my brother the other day; we cast the graph onto the TV and talked through the different comparisons. I'll give you what we eventually agreed on, and you can tell me what you think.

So, Utilitarianism, basically, is saying "If there is a magic equation that determines the maximum happiness (or maximum average happiness, or whatever) for everyone, then sticking to that is morally optimal." And that makes a lot of sense, right? It's got the basic precept of everyone's happiness being important, and covers a lot of corner cases by providing clear answers to thorny questions like "Is it okay to cause one person to die in order to save five?" or "Even if torture is immoral, is it immoral to torture someone if we are absolutely guaranteed to save a million lives?" So Utilitarianism is definitely a step forward from, for instance, the Golden Rule, which would trip over a lot of those questions, but Utilitarianism trips over questions like the Repugnant Conclusion, and also questions like "Is it okay to brutally torture someone to death in order to prevent a sufficiently large number of people from having a speck of dust in their eye?"

So I won't presume to try to imagine an ideally moral society - I'm not sure I could improve upon the idea of a philosopher-king or council, anyway - but for myself in my personal life, I like to practice what I call Utilitarianism But. Basically, all else equal, utilitarianism is optimally moral... BUT when something feels really wrong, like the repugnant conclusion does, or like torturing someone to prevent dust specks does, or something, I put that on hold and go with what feels right, allowing myself and the rational people around me to override the magic equation.
I think that before utilitarianism, the closest you could get to it might have been The Golden Rule But, and I think Utilitarianism But will serve until and unless we arrive at a more complete understanding of rational morality.
You brought up a perfect instance where Utilitarianism falls short. To "enslave a small population for the betterment of the masses" is morally permissible under pure utilitarianism. But imagine Utilitarianism with a side constraint that, when adhered to, generally maximizes happiness. Any action considered has to be asked, first and foremost, "does this violate the side constraint?" and, if it doesn't, "of the actions that don't violate the side constraint, which maximizes happiness?" That side constraint is up to your imagination, and I would love to hear what side constraints you would apply. What do you think?
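The two-question procedure described above can be sketched as a tiny filter-then-maximize routine. Everything here is a made-up placeholder: the constraint, the happiness scores, and the action names are illustrative, not a real moral calculus:

```python
# Minimal sketch of side-constraint utilitarianism: first filter out
# actions that violate the side constraint, then pick the
# happiness-maximizing action among the rest.

def choose_action(actions, violates_constraint, happiness):
    permitted = [a for a in actions if not violates_constraint(a)]
    if not permitted:
        return None  # every option violates the side constraint
    return max(permitted, key=happiness)

# Toy example: the coercive option scores highest on raw happiness,
# but a "no coercion" side constraint rules it out up front.
actions = ["enslave the minority", "tax and redistribute", "do nothing"]
happiness = {"enslave the minority": 100, "tax and redistribute": 80, "do nothing": 50}
coercive = {"enslave the minority": True, "tax and redistribute": False, "do nothing": False}

assert choose_action(actions, coercive.get, happiness.get) == "tax and redistribute"
```

Note that the constraint acts lexically: no amount of happiness can buy back a constraint violation, which is exactly what distinguishes this from plain maximization.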
> I'm of the mentality that Utilitarianism is the closest thing to an ethical system that actually serves the people that devise it

That may well be true, considering that the people "devising it" are probably some sort of rulers. How would you define "Utilitarianism"? What's the central principle, if any?
I wouldn't say that those devising Utilitarianism were rulers, as Jeremy Bentham and John Stuart Mill were the ones to do so, and they weren't particularly powerful men. Just simple philosophers. Although you do see villains in movies use some form of Utilitarianism to justify their actions, which only serves to paint it in a bad light. Utilitarianism is the principle that the only meaningful end in life (meaningful as it pertains to humans) is human wellbeing (happiness, utility, pleasure, etc.). As such, maximizing wellbeing is its central principle.
> I wouldn't say that those devising Utilitarianism are rulers as Jeremy Bentham and John Stuart Mill were the ones to do so and they weren't particularly powerful men

Alright, but they were probably acting on behalf of some rulers. But feel free to ignore that claim for now.

> Utilitarianism is the principle that the only meaningful end in life (meaningful as it pertains to humans) is human wellbeing (happiness, utility, pleasure, etc.) As such, maximizing wellbeing is its central principle.

I've heard another definition of Utilitarianism where the idea is that the morality of an action is determined by its consequences. I guess those two are relatively close, because obviously for the action to be moral, the consequence would need to be perceived as moral too, and "well-being" would certainly fit that mold. A central problem with Utilitarianism is that people act based on their perceptions, but the perceptions themselves are based on any individual's sense data, thought-patterns, pre-conceived notions and so on, and thus, they are subjective. So if people are just going around pursuing "well-being", there's no telling what they might decide to do. Besides, what the idea of the morality of an action being determined by its effect on some sort of perceived well-being really boils down to is the idea that the end justifies the means. For example, I might decide that some exercise would be good for you, and chase you around with a baseball bat to enhance your physical well-being. I'd think that my end would justify my means, but you wouldn't consider that moral, would you? Only an objective moral system is distinguishable from having no morals at all. Luckily, all of us who aren't psychopaths are born with an innate, objective morality, guided by our consciences. There's a simple moral principle that corresponds to that. It's called "The Non-Aggression Principle". The idea is that aggressing against people is immoral.
> A central problem with Utilitarianism is that people act based on their perceptions, but the perceptions themselves are based on any individual's sense data, thought-patterns, pre-conceived notions and so on, and thus, they are subjective.

This sort of gets at the difference between act utilitarianism and rule utilitarianism. Act utilitarianism requires moral agents to dwell on the consequences of each and every moral action. Rule utilitarianism, by contrast, requires that utilitarian reasoning be applied to the construction of moral rules rather than the evaluation of moral acts. Basically, rule utilitarianism calls for the creation of 'moral rules-of-thumb' that, when followed, lead to the greatest amount of happiness. Obviously, following moral rules is much "easier" than performing utilitarian calculus on individual actions. In this way, rule utilitarianism reduces the cognitive load on imperfect moral agents (read: humans).
> Basically, rule utilitarianism calls for the creation of 'moral rules-of-thumb'

.. Moral rules of thumb like The NAP? The thing is, you only need two moral rules of thumb:

1) Don't aggress against other people

2) Don't violate their property rights (through theft, fraud, etc.)

Anything else is just overcomplicating things. If there's a need for more detailed analysis, that can and should be done on a case-by-case basis. You could come up with countless contrived edge cases like The Trolley Problem, but practical impossibilities don't matter in real life, and whatever strange situations actually come up can be handled when they do.
> Alright, but they were probably acting on behalf of some rulers.

I know you said I can ignore this for now, but I might as well address it while I'm writing this comment. To think that someone made a breakthrough in the field of ethical philosophy at the whims of someone more powerful, all for the ends of having people be okay with whatever leader is in power doing what he wants under the guise of Utilitarianism, is far-fetched. Not to mention, Utilitarianism didn't catch on for decades afterwards anyway. It's much more reasonable to think that they were simply philosophers who wanted to contribute to the field.

> I've heard another definition of Utilitarianism where the idea is that the morality of an action is determined by its consequences

You're right in that they are compatible. When people describe Utilitarianism as consequentialist, it's more to highlight its difference from Deontology than to define it with 100% accuracy. Utilitarianism values happiness, and happiness is the result of actions, so they are compatible.

> A central problem with Utilitarianism is that people act based on their perceptions, but the perceptions themselves are based on any individual's sense data, thought-patterns, pre-conceived notions and so on, and thus, they are subjective.

What you're essentially saying is that we aren't perfect and we can never have all the data that is involved in a certain situation; and that's fine. Every moral system would suffer from this, even the "Non-Aggression Principle", which I'll get to later. Furthermore, it also seems that you're saying it's far too difficult to have all the information and to execute perfectly accordingly. To respond, first: something being difficult doesn't make it not true. Second: unintended consequences would be a problem in any ethical system, even in "intention based" ethics. No one can simply ignore results, even if you choose to value intentions more.
> So if people are just going around pursuing "well-being", there's no telling what they might decide to do... boils down to, is the idea that the end justifies the means.

This is similar to the issue you raised earlier: we can't gather all the information, and we can't always perform the right action, since we're flawed creatures. I completely agree with you here. But again, the outcome of people trying to maximize the happiness of everyone else is better than any alternative I can think of. And yes, the ends do justify the means, if the ends include making everyone happy.
> For example, I might decide that some exercise would be good for you, and chase you around with a baseball bat to enhance your physical well-being. I'd think that my end would justify my means, but you wouldn't consider that moral, would you?

It's interesting that you bring this up, because John Stuart Mill addresses this point explicitly in On Liberty (great read, definitely pick it up!). Essentially, he says that we should respect the liberty and autonomy of everyone, because forcing someone to do some other action (even if it's a better alternative to what they're doing) is ultimately worse for their well-being. That person will be happier with the lesser action they've chosen than if they were forced into an action that, while slightly better, is ultimately not what they want to do.

> Only an objective moral system is distinguishable from having no morals at all.

Utilitarianism is an objective moral system. Could you please clarify what you meant here?

> guided by our consciences

It's not outlandish to think that if we had one situation where we had to choose an action and there were two people present, those two people's consciences could tell them to do opposite actions. That being granted, if you're saying that conscience defines the good of an action, and we can safely grant that two people's consciences can tell them that two opposite actions are good, then that's a logical contradiction. You are essentially saying action A is good and action !A (not A) is also good. You can't have both A and !A be good, by the law of noncontradiction.

> "The Non-Aggression Principle"

Intuitively, I would agree that this is a good moral compass. However, you're freely asserting it as a moral truth without any backup. "What you freely assert, I freely dismiss."
Personally, I think that the right ethical system will boil down to "Do what makes you happy, while not harming others in the process" (with some exceptions), but getting there requires a lot of philosophy to be done first. Sorry for the long post, but you brought up a lot of good points I wanted to expand on!
> To think that someone made a breakthrough in the field of ethical philosophy at the whims of someone more powerful, all for the ends of having people be okay with whatever leader is in power doing what he wants under the guise of Utilitarianism, is far-fetched.

Well, Socrates was suicided for "corrupting the young", where "corrupting the young" is a euphemism for "being a threat to rulers' continued rule". That must have had a "chilling effect" on philosophy too.

> What you're essentially saying is that we aren't perfect and we can never have all the data that is involved in a certain situation; and that's fine.

No, what I was saying is that the perceived moral value of an action is subjective.

> Every moral system would suffer from this, even the "Non-Aggression Principle" which I'll get to later.

No, because if you're never going to aggress against anyone, you'll never need a perceived justification for doing so, and therefore, you will never act immorally. Therefore, the problem of subjectivity does not apply to the NAP, but it does to Utilitarianism. Therefore, the NAP is preferable to Utilitarianism.

> Furthermore, it also seems that you're saying it's far too difficult to have all the information and to execute perfectly accordingly. To respond, first: something being difficult doesn't make it not true.

That's not an argument against the problem of subjectivity.

> Second: unintended consequences would be a problem in any ethical system, even in "intention based" ethics.

Nope. The NAP wins again because there are no unintended consequences to not doing something.

> But again, the outcome of people trying to maximize the happiness of everyone else is better than any alternative I can think of.

I guess here we get to how Utilitarianism relates to rulers. Maximizing the happiness of "everyone else" is thinking in collectivist terms, as if we're some sort of collective entity instead of individuals, and that's exactly the way rulers want us to think, because otherwise we'd refuse to be ruled. People think that handing out other people's money to the needy masses is somehow a good thing. They're completely blind to the immorality of taking people's property by force, and to taxation's effects on the economy. As an example, would you produce goods and services if 100% of the proceeds were taken away from you? Of course not. It would be clear to you that you're an outright slave. But when 50% is taken from you, it's just a difference in the degree of enslavement, and a huge demotivational factor compared to no taxation. Then there are massive effects related to the misallocation of resources, distortions in prices, and so on.

> Essentially he says that we should respect the liberty and autonomy of everyone because forcing someone to do some other action (even if it's a better alternative to what they're doing) is ultimately worse for their well-being.

Well, that's compatible with the NAP, but rules out the end justifying the means. He wasn't very consistent then.

> Utilitarianism is an objective moral system. Could you please clarify what you meant here?

I was referring to the subjectivity of the perceived moral value of an action. That's a problem you can't get around. A moral system that's based on subjective evaluations of the morality of various actions is, by its very nature, not objective. "Happiness" is subjective too. But the NAP can be applied to everyone equally, at the same time, with no contradictions and no arbitrariness. That's not much of a "system", but it certainly is objective. In fact, it's just an objective moral principle.

> It's not outlandish to think that if we had one situation where we had to choose an action and there were two people present, that those two people can have their consciences tell them to do opposite actions.

If they're both healthy and sane, it's just so extremely unlikely to happen that it's irrelevant to this discussion.

> That being granted, if you're saying that conscience defines the good of an action

The idea was that we have a "built-in objective morality", guided by our consciences.

> You are essentially saying action A is good and action !A (not A) is also good.

I think we've established that this is not the case.

> However, you're freely asserting this as a moral truth without any backup. "What you freely assert I freely dismiss."

Do you really need me to back up the idea that it's immoral to aggress against people? :D That's the thing: if you have a conscience, it will deter you from aggressing against others. So you could say that your conscience is my backup :P But I did go into more detail in this message. Maybe that helps.

> "Do what makes you happy, while not harming others in the process"

Well, you might note that this is a perfect fit for the NAP.
The problem with the NAP is that "aggression" is not particularly well-defined. In many cases, a sin of omission (refusing to do something good/necessary) can be just as damaging as a sin of commission (doing something bad). If I find someone in cardiac arrest lying on the sidewalk, I may be "aggressing" them (even breaking their ribs) by performing CPR, but it's probably the right thing to do. Of course, if that person desperately wished to die, my CPR would be far less welcome. "Aggression" is subjective too. Plus: the NAP isn't close to useful when it comes to answering textbook moral cases like the Trolley Problem, or the problem of moral luck.
> The problem with the NAP is that "aggression" is not particularly well-defined.

It's often referred to as "the initiation of the use of force", which covers intimidation/coercion and physical violence, of course. That's clear enough. Everyone knows when they're being coerced.

> If I find someone in cardiac arrest lying on the sidewalk, I may be "aggressing" them (even breaking their ribs) by performing CPR

Obviously your intent matters too. Genuinely trying to help someone can't sanely be considered immoral.

> the NAP isn't close to useful when it comes to answering textbook moral cases like the Trolley Problem

So what? It can be applied to 100% of what happens in ordinary, everyday life. What does a theoretical scenario like the Trolley Problem matter with regard to real life and the kind of situations you actually encounter?
> Well, Socrates was suicided for "corrupting the young", where "corrupting the young" is a euphemism for "being a threat to rulers' continued rule".

Yes, the ancient Greeks killed a philosopher. But not only is it ridiculous to conclude from that one instance that every philosopher whose ideas could benefit the upper class was working on their behalf, it's also ridiculous to think that some elaborate plan by the upper class involved finding a philosopher, having him break ground in the field of ethics, and hoping that it would catch on not only in philosophy circles but also with the general public, all in time for those who devised the plan to benefit from it.

> No, what I was saying is that the perceived moral value of an action is subjective.

The moral value remains constant: happiness (or whatever synonym you wish to use). Can people perceive it with 100% accuracy? Certainly not. But that's irrelevant. To say that people can't determine, generally speaking, what makes others happy would be absurd.

> Nope. The NAP wins again because there are no unintended consequences to not doing something.

Not acting when the action could potentially harm someone is something a Utilitarian can also do. But you're also falling into your own trap of people not perceiving the harm that they may cause by acting or not acting.

> Maximizing the happiness of "everyone else" is thinking in collectivist terms

Not true. Average utilitarians value the average happiness per person. Total utilitarians value the sum total. Objectivists value solely the self. Also, you're equating a trade of goods/services for taxes with "enslavement" when it's just an exchange. Additionally, you're going on about how this system makes it easier for rulers to rule. So what? If everyone is better off because of it, why does that matter?

> People think that handing out other people's money to the needy masses is somehow a good thing.

It is, if everyone does the same.
If we didn't do this, we wouldn't have government services, a military, protection, roads, safe commerce and trade, etc. If you want to call that exchange enslavement, then call me a slave. And to finish off this "slave" bit: if a slave is someone with absolutely no autonomy who is under the full control of another, and we are "partial slaves" because some of our autonomy is limited, then the "slavery" everyone in the civilized world lives under isn't bad. Being a "slave" to the degree that we are is actually a good thing. If you think that being anywhere on that scale, in any way, shape, or form, is bad, then you have to prove the assertion that slavery of this kind is objectively bad in every form.

> Well, that's compatible with the NAP, but rules out the end justifying the means. He wasn't very consistent then.

It can be consistent, sure, but it's also consistent with other ethical systems, which doesn't necessitate that any of them be true. As for "the ends justifying the means": he is being consistent, because he's saying that happiness is maximized when liberty is preserved.

> A moral system that's based on subjective evaluations of the morality of various actions is, by its very nature, not objective.

You're still saying that we can't determine outcomes and happiness perfectly, which is irrelevant. That doesn't make the choice of a particular action subjective in its goodness. It just means that we would sometimes have trouble finding the right action.

> "Happiness" is subjective too.

Subjective in that an individual is experiencing it at any one time. But happiness is a very real phenomenon. Assuming everything in this world is physical (and there is no proof to the contrary), happiness too is physical. It is most likely a complex pattern of chemicals and neurons firing in the brain in a particular order. So happiness is an objective quantity. Am I saying we can measure this with today's science?
Definitely not, but this is a strong enough argument to support the claim that happiness is an objective value. That established, if you define the good action as the one that maximizes happiness, then in the scenario where only one person is affected (for the sake of simplicity), the right action is the one that maximizes that objective value of happiness. And to reiterate: just because we cannot perceive this measure, either through technology or through our own senses, doesn't mean the right, objective action doesn't exist.

>If they're both healthy and sane, it's just so extremely unlikely to happen that it's irrelevant to this discussion.

By that logic, everyone would choose the same action in a moral situation because we're all guided by some universal moral system that speaks to us in the form of our conscience. If so, why do people choose different actions for the trolley problem? Is everyone who doesn't choose the right action not healthy and sane?

>I think we've established that this is not the case.

You only claimed this isn't true by dismissing any alternative moral choices steered by conscience as a lack of "health" or "sanity."

>Well, you might note that this is a perfect fit for the NAP. I was just throwing that out there.

While the NAP does seem like a simple, one-size-fits-all theory of morality, I would rather subscribe to the moral system with the most logical support behind it.

>Do you really need me to back up the idea that it's immoral to aggress against people? :D

Yes, because you're assuming that harm or aggression is intrinsically bad. It's therefore logical to assume that the opposite of harm (wellbeing, support, etc.) is good. But you're denying that claim by rejecting Utilitarianism. If you want to stay consistent, you must accept both sides of this coin and say that happiness is also an objective good.
You seem to be wary of "collectivized" forms of utilitarianism, and for good reason. Philosophers have a lot of trouble aggregating happiness. In concrete terms, even if we had objective ways of measuring an individual's happiness, there's no escaping the possibility that one person's happiness might not be the same as another's, i.e., that utility is interpersonally incommensurable. Aggregate forms of utilitarianism seem problematically compatible with vast amounts of inequality ('utility monster' problem). The problem is that utilitarianism doesn't seem very useful on an individual basis; very few actions affect just one person. When there are tradeoffs between person A's happiness and person B's happiness, it seems like we need something else to help arbitrate. Isn't this where desert comes in? And yet I haven't seen very many Utilitarian accounts of desert. As someone who's evidently put a lot of thought into this area of philosophy, I'm curious what you think about the problems that attend any aggregating forms of utilitarianism, and whether those problems shake your belief in utilitarianism as a viable moral framework.
>there's no escaping the possibility that one person's happiness might not be the same as another's

I think that happiness comes to people through different means, e.g. their career, family, travel, etc. I would even say that people can desire different "feelings" of happiness, i.e. pleasure, euphoria, eudaemonia, etc. But I don't think it's too far-fetched to believe that you can roughly figure out which actions produce which feelings, and which feelings particular people enjoy. If that makes any sense?

I actually do think Utilitarianism can be useful on an individual basis. It typically boils down to "do what makes you happy." Once harm enters the picture as a tradeoff, things get tricky. I'll try to go into that in a second.

I don't like the idea of aggregating happiness. Here's the solution I agree with most: suppose you have 6 dying patients who need a drug. Your supply can save 5 of them, while the last 1 needs the entire supply, ensuring the other 5 die. What do you do? Give it to the 5 or the 1? Intuitively you say the 5, because it saves more lives and produces more happiness. However, happiness is only valuable to the person perceiving it. Assuming these people have no family or friends, each person surviving provides happiness to only 1 person. So each life saved totals out to 1 unit of happiness gained by the person actually perceiving that gain. The idea, then, is that if we want to maintain some sort of justice (which, if preserved, tends to produce more happiness in the long run, generally speaking), we should weigh each person equally. So what we should do is... roll a die. Leave it to chance. While the odds do favor the 5 (if you roll a 1 through 5, you have no choice but to also save the other 4), it still gives the 1 his chance to survive. If my explanation wasn't clear, let me know.

Also, what do you mean by Utilitarian accounts of desert? What is desert? Thanks for the reply!
You're bouncing all over the place, so I'll just put out a feeler response here, to see whether writing a comprehensive one might be worthwhile.

>you're equating a trade of goods/services for taxes with "enslavement" when that's just an exchange.

Are you seriously calling taxation "just an exchange"? Do you want me to believe you don't know how taxes are collected, and what happens if you don't pay them?

>Additionally, you're going on about how this system makes it easier for rulers to rule. So what? If everyone is better off because of it, then why does that matter?

Well, for starters, everyone is actually worse off because of it, because that's just how ruling over people works. Everyone would personally be better off getting to keep and use all of their own property as they see fit. Besides, each individual's personal prosperity would contribute to the society's overall prosperity, or perhaps... "happiness", if that suits you better.
> Are you seriously calling taxation "just an exchange"? Do you want me to believe you don't know how taxes are collected, and what happens if you don't pay them?

Yes, actually. You're exchanging your money for government goods and services. If you don't pay, the government comes down on you because you still received the goods and services without paying your rightful tax. Whether taxes are too high or too low relative to the services in your particular community can sway things for or against your favor, but you get the idea.

>Well, for starters, everyone is actually worse off because of it, because that's just how ruling over people works.

I said that if the government focuses on the wellbeing of the people, it's better for the people, and the government happens to be able to rule more efficiently as a side effect (assuming that's even true, which hasn't been established yet, btw). You're responding that people are actually worse off because "that's just how ruling over people works." You may very well be right, but you haven't supported that assertion beyond "it's just how it works." To give you a clear point to refute: why is focusing on wellbeing bad for the people?

> Everyone would personally be better off getting to keep and use all of their own property as they see fit.

First, define property. Second, this libertarian dreamland needs some sort of regulation to make sure chaos doesn't run amok. Here's a perfect example of people trying to keep to themselves not working out well. I highly recommend reading that article because it's very interesting, even outside the scope of this discussion.

> Besides, each individual's personal prosperity would contribute to the society's overall prosperity, or perhaps.. "happiness", if that suits you better.

It's not the total happiness that matters, though. Happiness is only important to the "feeler" (poor choice of word, sorry) of said happiness.
You have to remember, happiness is selected as the ultimate desirable good because it's what human nature dictates each individual ultimately wants. We don't want an increase in total happiness, we want an increase in personal happiness. That being said, if society is collectively very happy (due to a high population, even though individually everyone is above average at best), then to an individual in that population, it's not an ultimately desirable society to live in.

My overall point here is that if we try to maximize the happiness of everyone else (in addition to our own), the result is a synergistic effect where the sum is greater than the parts. To put it even more simply: you can focus on your own happiness and have 10 "hedons," or you can contribute to society, your family, and your day-to-day life, and if everyone else does the same, your happiness will be 15 "hedons." You might ask yourself, "What if no one else does the same? Then I'm just contributing to their happiness with nothing in return." That's just the Tragedy of the Commons, at which point you can only hope that your actions influence others to do the same, which will ultimately help you in the end.

Sorry for the ramble; my thoughts tend to run on this subject.
So if Ferrari decides to start delivering cars to you, it's alright for them to forcefully take your money "in exchange" if you don't feel like paying for the cars? You've received the Ferraris, after all, so now you should pay. We both know no one would consider that acceptable, so why would it be when a government does the same? I don't think you're being intellectually honest here. Do you want me to believe you don't think it matters that people aren't given a choice in the matter? Or that people pay for services they never use, have no say in what services get provided, but are morally obligated to pay anyway?
>You may very well be right, but you haven't supported that assertion beyond "it's just how it works."

I did support the assertion. Being ruled involves paying taxes to your rulers, i.e. not keeping all of your property.

>First, define property.

It's really not that complicated. No matter how you might define "property" for your distraction purposes, in this context it obviously covers any income you receive, which is then taxed. That understanding is enough for continuing the discussion without needing a detailed definition. Again, you're not being intellectually honest.

>Happiness is only important to the "feeler" of said happiness

If that's true, then why would anyone attempt to maximize anyone else's happiness (through subjectively justified means, no less)?
> So if Ferrari decides to start delivering cars to you, it's alright for them to forcefully take your money "in exchange" if you don't feel like paying for the cars? You've received the Ferraris, after all, so now you should pay.

Haha, not quite. The difference is that there's an implied agreement between the citizen and the government; a "social contract." If you don't want their goods and services then you can choose not to live in that country.

> Do you want me to believe you don't think it matters that people aren't given a choice in the matter?

But they are given a choice: leave. Do you really think that everyone, at birth or at a reasonable age, should be asked up front: "Hey, I know almost every human in history has lived in civilization despite having the ability to leave at any moment, but just in case, do you want to leave human civilization?"

> Being ruled involves paying taxes to your rulers, i.e. not keeping all of your property.

And we're worse off for it? By your logic, anyone who pays taxes is a slave; being a slave is morally undesirable; therefore we shouldn't pay taxes, and as a result we shouldn't receive any government services in return, just to make things fair. How do you expect civilization to continue with no rules, no protection, and no one to enforce laws? Should everyone just keep a gun on them at all times and hope for the best? You're taking this way too far.

>It's really not that complicated. No matter how you might define "property" for your distraction purposes, in this context it obviously covers any income you receive, which is then taxed.

Wonderful. So that would include companies, products, production lines, etc. That's reasonable to infer. Have you ever heard of the Robber Barons? When there was no government protection (which comes with taxes, btw) or regulation, life was miserable for the American public in the early 20th century.
In your libertarian utopia (which, per the article you've conveniently ignored, doesn't work), taxes wouldn't be paid, governments wouldn't protect, and this would all happen again. How many people would actually turn down human civilization just to save a few percentage points on their income? I find it odd that you accuse me of distraction for asking for a definition, and of intellectual dishonesty, when you're asserting this as a plausible alternative to civilization.

> If that's true, then why would anyone attempt to maximize anyone else's happiness (through subjectively justified means, no less)?

You completely ignored my last point. Allow me to repeat myself: "My overall point here is that if we try to maximize the happiness of everyone else (in addition to our own), the result is a synergistic effect where the sum is greater than the parts. To put it even more simply: you can focus on your own happiness and have 10 'hedons,' or you can contribute to society, your family, and your day-to-day life, and if everyone else does the same, your happiness will be 15 'hedons.' You might ask yourself, 'What if no one else does the same? Then I'm just contributing to their happiness with nothing in return.' That's just the Tragedy of the Commons, at which point you can only hope that your actions influence others to do the same, which will ultimately help you in the end."

Let's keep this civil and avoid ad hominem, please. I want to have a nice, clean discussion here.
>If you don't want their goods and services then you can choose not to live in that country.

But you have the exact same implied agreement with Ferrari!! If you don't want to pay for the cars that are delivered to you without you asking for them, you can just leave the country! But let's say there's a Ferrari dealership everywhere, so you can't actually avoid getting those cars and having to pay for them. Is everything alright?
Nooooo, I'm saying that you can leave the country before you even receive the goods and services. By the time you're old enough to pay taxes, you can leave and live in the wilderness if you so wish. Actually, at that point you've benefitted for roughly 18 years without paying any taxes, so really you're the one being unjust, although I doubt anyone would really care about 1 person leaving. And yes, everything is alright! Why do you ask?
I'm tired of your games, so I'll just stop playing here.