aeromill  ·  3064 days ago  ·  link  ·    ·  parent  ·  post: My thoughts on the Syrian refugee crisis

    this is from the view of a system with only those two people. From the view of society then it is absolutely true that one person coming to harm would be something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices

If we considered an action that affected millions of people (say, a government leader's decision), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur. Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

    I assume it cannot.

Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral. This isn't particularly groundbreaking. The real difficulty (and field of interest) is how to deal with actions where each outcome has pros and cons.

    but the massive numbers of wars, fighting, psychopaths, and so on, clearly show that not all humans are concerned with total human wellbeing.

Now you're venturing into some interesting ethical philosophy that compares which form of well-being we should aim towards: total or average well-being. To sum it up, both sides have issues (see the Repugnant Conclusion; check my post history for the link + discussion):

"Total Utilitarianism" would essentially favor the addition of any life that is even marginally worth living. So having 500 billion humans with barely enough resources to survive (let's say 1 happiness point each) is preferable to a smaller population of 1 billion with much higher average happiness (let's say 100 happiness each). 500 billion × 1 is greater than 1 billion × 100, so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

"Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.
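The contrast between the two views can be made concrete with a toy sketch in Python, using the purely illustrative numbers from the example above (not real data):

```python
# Toy comparison of Total vs. Average Utilitarianism, using the purely
# illustrative numbers from the example above.
large = {"population": 500_000_000_000, "happiness_each": 1}    # 500 billion at 1 point each
small = {"population": 1_000_000_000, "happiness_each": 100}    # 1 billion at 100 points each

def total_utility(world):
    # Total view: sum happiness across everyone.
    return world["population"] * world["happiness_each"]

def average_utility(world):
    # Average view: happiness per person.
    return total_utility(world) / world["population"]

# Total Utilitarianism prefers the huge, barely-happy population...
assert total_utility(large) > total_utility(small)    # 500e9 > 100e9
# ...while Average Utilitarianism prefers the small, happy one.
assert average_utility(small) > average_utility(large)    # 100.0 > 1.0
```

The same two worlds get ranked in opposite order by the two views, which is exactly the tension at the heart of the Repugnant Conclusion.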

My personal solution to the Repugnant Conclusion is to do what I mentioned earlier: add some rules that actions must satisfy for them to be considered moral. For me that rule is the preservation of justice (no infringing on human rights like liberty, etc.). This prohibits the idea that we should kill/enslave a minority to bring up the average happiness.

Thoughts?

For the points, keep the above in mind when rereading them.

bioemerl  ·  3064 days ago  ·  link  ·  

    If we considered an action that affected millions of people (say, a government leader's decision), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur.

Remember that this is assuming that the two actors in the system are of equal levels of power.

In society, this is never true. Where it is true, a thing does not become moral or immoral for quite some time. See topics such as abortion, which for some time was quite heavily debated, and only now, as the free-choice groups gain more power, is it coming to be seen as a moral action.

    Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

If society had actors on two sides, of equal levels of power, with no ability to resolve those differences in view, then no action would be produced. But society is so large, and so complex, that this situation rarely remains true for long.

And, of course, this is not purely a matter of power. A group with a lot of guns is not going to exist forever, and if their actions have negative effects on society in the long run, then while the society they rule over will consider their actions moral, all societies that descend from that one will look back on them as immoral.

As well, social power is a thing, and morality is based on opinion more than most other topics are.

It all depends on how you define the scope, how you look at the actions, and so on. There is no simple, concrete answer.

    Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral.

Only if you are considering the scope of only that person.

    so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

    "Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.

Which has been done, and was considered moral, in the past. We even do it today, killing pigs and cows for meat so that humans may have more things, along with destroying forests and so on for the same reason.

    add some rules that actions must satisfy for them to be considered moral

In my opinion, that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function.

aeromill  ·  3064 days ago  ·  link  ·  

    Remember that this is assuming that the two actors in the system are of equal levels of power.

I don't see how it is. You have the leader and the affected population. The leader has two choices: (1) help subset x at the expense of y, or (2) do nothing, protecting subset y at the expense of x. There's no need to measure power or anything. This is a simple case of one individual's decision affecting multiple people. With either decision (action or inaction), some people are harmed and some are benefited.

    That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

Did you quote the wrong passage here? I was referring to how many, many lives barely worth living being the best option is counterintuitive, but you responded saying that many people would find few lives with a lot of happiness the better option. Could you clarify which option you're saying is intuitive?

    Which has been done and was considered moral, in the past. (in reference to slavery)

But that clearly isn't the best way to maximize happiness. Just because people thought that slavery was the moral action doesn't actually make it the moral action (moral being measured against well-being, that is).

    In my opinion, that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function. (in reference to adding rules)

The rules will be based on the original end of well-being. These rules (or rule), whatever they are, should be ones that generally maximize well-being in the long run. That way it's still consistent with the original aim of well-being.