You know, I think you're absolutely correct here. I think this is related to the availability heuristic, in which people associate topics with the first example that comes to mind. So, when people hear something like "awareness" or similarly deep-sounding words, they think about deep-sounding concepts whether the actual message is deep or not.
Sure, I like it! Of course, the caveat is that off the top of my head, I can't think of a side constraint that would cover all cases. I've also heard it called two-level utilitarianism, where you make a judgment call over whether total or average utilitarianism is more important on a situation-by-situation basis. My post below about Utilitarianism But pretty much sums up my view.
So if I understand your point correctly, it's not about having a responsibility to the culture, it's about having a responsibility to the message? I'm not religious, but I listen to some religious music without feeling bad, because what the lyrics are communicating isn't leaving an impression on me.
As far as I can tell, the unstated premise of this article is that by listening to music rooted in a given culture, you take on a responsibility to that culture. I'm not sure I agree with that premise.
So, let me ask you this. Say we have two completely separate areas - for the sake of argument, we'll have the Earth and a separate planet in our solar system called Earth Mk II. For the purposes of this thought experiment, the two planets are totally hidden from each other in such a way that neither will ever discover the other, and nothing that happens on one will ever affect the other.
Earth and Earth Mk II are pretty similar places, with equal populations. The big difference is that Earth has 90 units of happiness averaged throughout the population, and Earth Mk II has 98. Now, even though they're completely separated, isn't the correct move in average utilitarianism to destroy Earth, so that the solar system has a higher average happiness?
Further, assuming we can do this without affecting their happiness, isn't the correct play for average utilitarianism simply to find the happiest being in the universe, and kill everything else? That conclusion seems a little repugnant, too.
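Just to make the arithmetic in the thought experiment concrete, here's a quick sketch of the population-weighted averages (the population size is a hypothetical stand-in; the source only says the populations are equal):

```python
def average_happiness(planets):
    """Population-weighted average happiness across all living people.

    `planets` is a list of (population, average_happiness) pairs.
    """
    total = sum(pop * avg for pop, avg in planets)
    people = sum(pop for pop, _ in planets)
    return total / people

# Hypothetical equal populations; averages of 90 and 98 come from the text.
POP = 1_000_000

both_planets = average_happiness([(POP, 90), (POP, 98)])
earth_destroyed = average_happiness([(POP, 98)])

print(both_planets)     # 94.0 -- average with both planets intact
print(earth_destroyed)  # 98.0 -- average after destroying Earth
```

So a strict average utilitarian calculus really does score the destroy-Earth scenario higher, which is what makes the conclusion feel repugnant.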
Shadowrun. Magical cyberpunk teams of experienced badasses planning Ocean's-style heists, then executing them and reacting to whatever it was their decker couldn't find out by hacking the system, hoping they're not going to be double-crossed by the client or their fixer, and if all else fails, blasting their way to safety with fireballs and huge guns.
Hmm. It is a pretty elegant idea to have everything divvied up like that. What would you do in the case of someone who just heard about it from an RL friend, googled 'Hubski', and clicked the link?
So I live in a house with my fiance, my college buddy, my two brothers, both of whom are bi, and another of my childhood friends with his boyfriend. (It's a big house, 4 bedrooms.)
As soon as I heard the news, I texted everybody and I called my mom. She didn't believe me at first, but she was really glad once she had determined that I really, absolutely wasn't playing a prank on her.
I didn't really have a point there, I just thought I would share my happy feeling. I think this is going to be a historic moment, where people are going to remember what they were doing when they heard that marriage was fully legal.
Oh, I love the repugnant conclusion! I just had a discussion about this with my brother the other day; we cast the graph onto the TV and talked through the different comparisons. I'll give you what we eventually agreed on, and you can tell me what you think.
So, Utilitarianism is basically saying "If there is a magic equation that determines the maximum happiness (or maximum average happiness, or whatever) for everyone, then sticking to that is morally optimal." And that makes a lot of sense, right? It's got the basic precept of everyone's happiness being important, and covers a lot of corner cases by providing clear answers to thorny questions like "Is it okay to cause one person to die in order to save five?" or "Even if torture is immoral, is it immoral to torture someone if we are absolutely guaranteed to save a million lives?"
So Utilitarianism is definitely a step forward from, for instance, the Golden Rule, which would trip over a lot of those questions, but Utilitarianism trips over questions like the Repugnant Conclusion, and also questions like "Is it okay to brutally torture someone to death in order to prevent a sufficiently large number of people from having a speck of dust in their eye?"
So I won't presume to try to imagine an ideally moral society - I'm not sure I could improve upon the idea of a philosopher-king or council, anyway - but for myself in my personal life, I like to practice what I call Utilitarianism But. Basically, all else equal, utilitarianism is optimally moral... BUT when something feels really wrong, like the repugnant conclusion does, or like torturing someone to prevent dust specks does, or something, I put that on hold and go with what feels right, allowing myself and the rational people around me to override the magic equation.
I think that before utilitarianism, the closest you could get to it might have been The Golden Rule But, and I think Utilitarianism But will serve until and unless we arrive at a more complete understanding of rational morality.