Thank you! That means a lot!
Sam is right in wanting to open up a dialogue on criticizing the bad parts of Islam. But that's about the only nice thing I can say about him. His methods, his arguments, and the lens through which he views almost all political issues are ones of Islamophobia and ignorance.
To add to this, he clearly doesn't have a strong grasp on the philosophy he talks about (especially free will). His idea of rooting ethics in science is an interesting one, but his premise (that we should derive an ought from an is) is clearly built on a shaky foundation. His only virtue (actually talking about Islam) is overshadowed by his actual words, which is an issue when that's your whole career.
- Remember that this is assuming that the two actors in the system are of equal levels of power.
I don't see how it is. You have the leader and the population affected. The leader has two decisions: (1) help subset x at the expense of y or (2) do nothing to protect subset y at the expense of x. There's no need to measure power or anything. This is a simple case of 1 individual's decision affecting multiple people. With either decision (action or inaction) people are harmed and benefitted.
- That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.
Did you quote the wrong passage here? I was referring to how counterintuitive it is that many lives barely worth living could be the best option, but you responded saying that many people would find the idea of few lives with a lot of happiness the better option. Could you clarify which option you're saying is intuitive?
- Which has been done and was considered moral, in the past. (in reference to slavery)
But that clearly isn't the best way to maximize happiness. Just because people thought that slavery was the moral action doesn't actually make it the moral action (moral being measured against well being, that is).
- In my opinion that is evidence for the idea that, the theory of utilitarianism is too weak, it requires exceptions in order to function. (in reference to adding rules)
The rules will be based on the original end of well being. These rules (or rule), whatever they are, should be ones that generally maximize well being in the long run. That way it's still consistent with the original aim of well being.
- this is from the view of a system with only those two people. From the view of society then it is absolutely true that one person coming to harm would be something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices
If we considered an action that affected millions of people (say, a government leader's decision), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur. Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:
- I assume it cannot.
Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral. This isn't particularly groundbreaking. The real difficulty (and field of interest) is how to deal with actions where each outcome has pros and cons.
- but the massive numbers of wars, fighting, psychopaths, and so on, clearly show that not all humans are concerned with total human wellbeing.
Now you're venturing into some interesting ethical philosophy that compares which form of well being we should aim towards: total or average well being. To sum it up, both sides have issues (this is called the Repugnant Conclusion; check my post history for the link + discussion):
"Total Utilitarianism" would essentially favor the addition of any life that is even marginally worth living. So having 500 billion humans with barely enough resources to survive (let's say 1 happiness point each) is preferable to a smaller population of 1 billion with much higher average happiness (let's say 100 happiness points each). 500 billion × 1 is greater than 1 billion × 100, so the former is better than the latter according to Total Utilitarianism. This is clearly counterintuitive and not worth our time.
"Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which population is favored). The issue with this is that it justifies enslaving a small population to increase the average happiness of the masses.
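To make the arithmetic behind the two definitions concrete, here's a toy sketch (the population sizes and "happiness points" are the hypothetical numbers from the example above, not real data):

```python
# Toy illustration of the Repugnant Conclusion trade-off.
# Assumes a uniform population: everyone has the same happiness score.

def total_utility(population, happiness_per_person):
    """Total Utilitarianism: sum everyone's happiness."""
    return population * happiness_per_person

def average_utility(population, happiness_per_person):
    """Average Utilitarianism: happiness per person (trivial here,
    since the population is uniform by assumption)."""
    return happiness_per_person

large_poor = (500_000_000_000, 1)    # 500 billion people, 1 point each
small_rich = (1_000_000_000, 100)    # 1 billion people, 100 points each

# Total Utilitarianism prefers the huge, barely-happy population...
assert total_utility(*large_poor) > total_utility(*small_rich)
# ...while Average Utilitarianism prefers the small, happy one.
assert average_utility(*large_poor) < average_utility(*small_rich)
```

Each theory picks the opposite world, which is exactly why both horns of the dilemma feel wrong.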
My personal solution to the Repugnant Conclusion is to do what I mentioned earlier: add some rules to actions that have to be held for them to be considered moral. For me that rule is the preservation of justice (no infringing human rights like liberty, etc). This prohibits the idea that we should kill/enslave a minority to bring up the average happiness.
For the points, keep the above in mind when rereading them.
- you cannot, in good faith, simply argue that "we should do X because it is the moral thing to do"
The lens I would view this through is that the decision to act or not is a moral question. That being said, you're 100% right: through my lens, in which all actions are moral ones, saying "we should act because it's moral" is the same as saying "this is moral because it's moral", circular reasoning and all. I should have been clearer that all the points in my argument (in the OP) were reasons to support the morality of acting in this situation. Good catch though.
- If both disagree, and are of similar levels of power, then all actions are immoral, as no actions would be taken
Before addressing this specific point, I think a visual would make it clearer, but I think I understand your construct here, so: in this case, what if the inaction would cause harm to one or the other? The system would have to find a way of balancing the wants and needs of the two moral agents with one another.
- Overall, if you can draw a "box" around an object, and that actor would result in an action, or would have the mindset required to set that action to occur, then that action can be called "subjectively moral"
Agreed. I said in passing in one of my earlier comments that if all life were to go extinct, then morality would go with it. So while morality isn't objective in the sense of being written into the fundamental laws of nature, existing regardless of who's perceiving it, it is "objective" in regards to who it pertains to: humans (you called this subjective morality).
However, I'm not 100% clear on your notion that all actions are subjectively moral because all immoral actions simply don't occur. What do you mean by that?
- It may be useful to get rid of "immoral" as a category entirely, and instead only consider actions to be "moral" or "not moral", or "I would cause this" and "I would not cause this"
Similar to above, what happens when your actions and inaction have consequences? At that point you can't simply abstain from acting since both acting and not acting will cause (let's say) harm to one person in one case and another person in another case.
- There is no objective morality based on the definition I give above
I spoke to this above too, but to be clearer, since I don't think I've been completely clear: I think we're using different definitions of "objective." Since we both agree that good and bad are measured against an aim, let's talk about an aim instead of the word "morals" (means to an end, and all that). I don't think there is an objective aim in the sense that it's written in the stars and would continue to exist outside the scope of humanity. I think the aim is objective in regards to the scope of humanity (drawing the box around humanity, so to speak). But what do I mean by "objective"? I mean that there exists an aim that we all aim at by default, without the need for an argument for it or against another. For me, that aim is human well being. In that sense, there exists an objective aim.
While I did read it and have some points in mind, I think the common ground we found is far more interesting to discuss. Besides, the overarching theme was addressed above anyway (objective aim).
I do want to comment on the last part specifically (points 1-5). I would actually like to revise the points to make them clearer and to highlight that these are observations rather than a self-contained argument:
1) All humans desire well being as the aim to their actions
2) Actions that increase human well being are good in regards to humans' natural aim (1)
3) There's no compelling argument to change our aim
4) Our aim remains as it is, and the goodness of our actions is measured against it
- it assumes that "actions that increase human well being" exist. Due to the first point being false, humans can seek different things for their well-being, and as a result no single category can satisfy this point fully. Actions that increase "human well being" may well not exist.
First, let me give an example of a specific action that will increase well being: Me taking a breath right this very instant. No one suffers, I gain (ever so slightly): well being increases. Second, when you say "no actions exist that can increase well being" I think you're thinking of general actions (e.g. donating to charity), then thinking of an instance where that general action can decrease well being in a given circumstance (e.g. the charity robbed you). But remember that we're dealing with a system that takes these variables into account here. So if you're faced with a dilemma (donate to charity?), you look at the variables (are they going to rob me? No) then you can act knowing (reasonably so) that your action increased well being.
- Except most people do not see morality like that, they see morality as a definite set of things that are good, things that we should discover that are in this set, and perform. They see immoral actions as things inside a different set of actions, and things we have to discover and not perform.
"Do this because it is moral" is an argument that X falls into objective set Y, and as a result X should be done.
I am arguing that no objective set Y exists, and as a result, that form of argument to morality, the argument for an idea of right and wrong, do not exist.
Sorry I never explicitly addressed this aspect of your argument. I wanted to focus on the overall theme but here are my thoughts:
Philosophers will rarely say that there exists a set of actions (e.g. killing, stealing, lying) which are objectively wrong morally. The way ethical philosophy is done is (to use an analogy) to create a formula that, when the variables are put into it, calculates the goodness of an action.
The way that the above is done is by defining an aim (happiness, well being, logical consistency, etc.) and then measuring actions against that aim.
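To put that "formula" analogy in code, here's a minimal sketch (the aim, the variables, and the numeric scores are all hypothetical, purely for illustration):

```python
# Toy sketch of the "formula" analogy: pick an aim, then score actions
# against it. All names and numbers here are made up for illustration.

def goodness(action, aim="well_being"):
    """Net effect of an action on the chosen aim:
    benefit to the aim minus harm to the aim."""
    return action["benefit_to_" + aim] - action["harm_to_" + aim]

# Hypothetical "variables" for two actions, measured against well being.
donate = {"benefit_to_well_being": 10, "harm_to_well_being": 1}
steal  = {"benefit_to_well_being": 2,  "harm_to_well_being": 8}

# The formula, not a fixed list of forbidden acts, decides which is good.
assert goodness(donate) > 0 > goodness(steal)
```

The point of the sketch is that the aim is chosen first, and the "rightness" of any particular action falls out of measuring it against that aim, rather than out of a pre-written set of objectively wrong acts.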
My argument here is that you chose that aim to be selfish gain (you can be more specific, but you get the idea). The goodness of any action is then measured against that, as we've already been over. That's great and all, but your counter is that that aim isn't objectively good. Here's my retort:
You subscribe to the ethical philosophy of egoism for a reason. You may say it's "just what I want to do" but it's a little deeper than that. You believe that the aim of self gain is the best aim for a reason. You therefore believe it to be the objectively correct aim by the logic that the best solution is the most correct one. So the very fact that you choose one aim over another is you implicitly saying that your aim is the best of all possible ones.
- what is the difference between a moral judgement, and any average judgement?
Nonexistent. Your original comment about the reasons why we should intervene in Syria, be it economic, political stability, etc., is in fact a set of moral justifications for an action. By that logic you could say even breathing is a moral action, to which I would respond: yes. A minuscule, functionally irrelevant one, but yes. It does serve to further whatever my aim is (because living is required for basically any end).
- What I am trying to do with my "moral" system is to explain the way people act, the rationalization between choices. Why the worm chooses to eat dirt, why the human chooses to eat meat.
Let's play that game and take a naturalistic approach. Let's talk about "is"s and forget "oughts".
Is statements for humans:
(1) All humans have the same nature
(2) Human nature dictates that we all seek well being (call it happiness, pleasure, etc.)
(3) Nurture doesn't change that desire
(4) Actions that increase human well being are good in regards to what humans naturally desire
(5) We should do what's good as defined above
Now I can't empirically prove my first three premises but I'm sure you'll find them to be pretty reasonable. This also might not be your original idea of a moral system, but you might find it interesting as it's rooted in concrete "is" statements (as opposed to tricky "oughts").
Now naturally you will have utilitarian issues with this guideline, which is why you can play around with adding a few rules as precursors, like "don't violate others' liberty", to smooth out the edges, but you get the basic idea.
I'm most curious as to your thoughts on the above 5 points, what do you think?
briandmyers (tagged since you might be interested)
Let's get two things straight:
(1) Saudi Arabia is one of the most backwards countries on the planet right now: executing journalists like they never got the memo that it's no longer 1566, subjugation of women running rampant, and much much more.
(2) America is only in bed with these savages because of oil. It's no coincidence that John Kerry paid a visit to the Saudis conveniently before OPEC decided to crank up its supply, lowering the equilibrium price of oil and thus hurting Venezuela and Russia (which just so happens to be under US sanctions). The Saudis benefited because it drove some North American natural gas companies out of the market (not as many as expected, though, thanks to their resilience). So the relationship was of mutual benefit.
Sure, the Saudis are technically enemies of Daesh, but they only serve to further indoctrinate the next generation with their corrupted, conveniently selective distortion of Islam (mentioned in the article). This indoctrination fans the flames of the Middle East's religious division (just look at the alliances in the Middle East: a clear Sunni/Shiite divide).
There can be no good to come out of the "unholy alliance" between the US and the Saudis (economics aside). So surely something has to change. But if there's anything that can be learned from the latter half of the 20th century, it's the following: you can't force a country to become democratic. They must come to that decision on their own. It can't be given, it must be earned. If these countries can't work their way to it, then you must leave them to their own devices until they do.

This means (a) not supporting the current regime (not to say we should stage a coup to overthrow it; I'm looking at you, CIA) by not buying their oil; that is to say, moving towards energy independence, preferably renewable (killing two birds with one stone). (b) What this doesn't mean is imposing sanctions on them. Sanctions will only serve to hurt the people of that country (further alienating them from the West more than they already are). These people go through enough as is, and sanctioning the country won't get rid of their government. Will it make the people dislike their government to the point of change? Certainly not. Look no further than Russia: the West has imposed sanctions on their leaders (rightfully so), which were then used as propaganda to make the Russian population hate not their leadership, but the countries that imposed those sanctions in the first place.
In summation, the West has to stop supporting these regimes of terror through a focus on energy independence, because it's impractical and terrible economics (to the point where it overshadows the gain) to simply cut ourselves off from their oil without the proper counterbalance (a massive increase in oil prices due to lower supply, i.e. bad for consumers). For this reason we need to invest in domestic, renewable energy. We also have to not get involved further with sanctions. Countries tend to move towards democracy (look at China; slowly but surely), and as such the best we can do is get out of the people's way and allow that to happen.
- I do, of course.
Great. So this, along with your very long-winded way of saying you only care about self gain, is you admitting to what I've been trying to prove all along: you have a moral system, and it happens to be called Egoism. Any action that benefits you is good by that end (the end being your goal of self gain).
You say that there's no objective good yet you subscribe to the belief that your good is the good you ought to measure actions against. You have your reasons for valuing your own selfish gain over everything else; possibly because you think it's in your nature (read: psychological egoism), or some other reason, but you have your justification for your belief.
I could sit here and argue why that's not a good end to have in mind, or why it's the only logical end to have; either way, you're now doing ethical philosophy from the ethical position that you have taken.
Welcome to the club.