I think we have to keep in mind the degree to which people not knowing what 10^100 looks like plays a role in their reacting in complete disbelief. Yeah, I could imagine someone deciding to sacrifice themselves on behalf of more beings than there are protons in the observable universe.
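For a rough sense of that scale (taking the common back-of-the-envelope figure of roughly 10^80 protons in the observable universe, an estimate rather than anything from either post):

$$\frac{10^{100}}{10^{80}} = 10^{20}$$

So 10^100 beings would outnumber those protons by a factor of a hundred quintillion, which is exactly the kind of gap our intuitions aren't built to register.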
Nice post!
My response fwiw...
https://davidschulmannn.substack.com/p/my-theory-of-morality
Thank you for the polite and well-reasoned response!
I think your 2 main arguments can be summarized as:
1) You would in fact behave selflessly in many scenarios, which detracts from the credibility of my claim that utilitarianism relies on the veil of ignorance and will almost always break when our own personal interests are introduced
2) But even if you wouldn't, just because people may struggle to do something doesn't mean it isn't the 'moral' or 'ethical' decision.
To your first point, I would say that you seem to be convinced of your own selflessness by the sheer magnitude of the shrimp hypothetical, but your response to the surgeon problem is all I really need to remain convinced of my argument:
>> "If the doctors said “hey, you can give your organs for the five,” you know what I would say?
Hell no. You kidding me? I love myself more than almost anything else in the world, no way am I gonna sacrifice myself for some random nerds."
If we forget the shrimp (which I'm sure at this point everyone on Substack is begging us to), and just consider the surgeon problem, then we're mostly on the same page here.
Next, you move on to the critical point #2:
>> "But here’s the stress point: I think refusing to sacrifice the few to save many is not the moral choice. I think when I value myself over the welfare of many other people, I am being selfish... I think my unwillingness to make a good choice says nothing about how good the choice actually is."
>> "When I say morality, I am talking about selflessness. I am talking about caring about other people, grounded in the assumption that other people are real, conscious individuals who exist. I am not looking for an evolutionary justification for why you personally wouldn’t care about other people."
I start with the premise that to value your own experience more than that of others, including valuing your life more than infinite shrimp, or your child more than 5 strangers, is a natural and blameless act. I believe that every being places selfish, relative values on things, which is incompatible with the objective neutrality that utilitarianism demands of us. I think that to ask someone to make a great sacrifice for others on the basis that 'overall utility is maximized' is akin to gaslighting, because the very people doing the asking or the judging wouldn't be able to do it themselves.
In this case it's not that I think people's unwillingness to be selfless means that to be selfless is 'bad'; it's that, to borrow from my article: "...I want a system of morality that doesn’t gaslight me. In other words, one that does not instruct us to make choices against our own self-interest on the basis of some assumed objective moral truths. Rather than pretend away or condemn our inherent selfish desires, I want a philosophy that accepts them as a necessary starting place."
I want a more practical philosophy. I find utilitarianism only useful as a macro tool to make decisions about other people when your self-interest is completely obscured. Otherwise, simple concepts like reciprocal morality (that accept selfishness) are far more relevant to promoting collaboration, which in my opinion should be the ultimate goal of ethical philosophy.
I think we understand each other pretty well; we’re just referring to completely different things when we talk about morality.
In this part, “simple concepts like reciprocal morality (that accept selfishness) are far more relevant to promoting collaboration, which in my opinion should be the ultimate goal of ethical philosophy”, and this part, “every being places selfish, relative values on things, which is incompatible with the objective neutrality that utilitarianism demands of us,” I really feel like the word “morality” is destroying us here.
When I think of morality, I’m talking about making selfless decisions to make the world a better place. I think that choosing selfish desires over selfless ones is not the moral decision, and I think there are things that are good and bad in the world. Namely, I think deep suffering and pain are bad, and that love, fulfillment, passion, and joy are good. Some utilitarians disagree, and think the most selfless decision is the one that best fulfills a person’s preferences, but either view works for this comment. In either case, I think that people’s lives being BETTER or WORSE is something that is real, and that it’s more moral to choose a world where someone is in love and happy over one where someone is in deep pain. If you do not care about people being in great pain, or do not think that love and fulfillment are preferable, then I can’t convince you of that.
Dylan might even accept all that, but he sees morality as more of a social tool: promote collaboration between humans, keep society in order, do what you want to do and what makes you happy in life. He even agrees that doing selfless actions is good, but thinks being selfish is natural and should be blameless under morality. Dylan cares about his family and friends above all else, so he sees morality as something that describes why that’s the case, and I think under his view, especially with the sacrifice-yourself-for-the-earth example, there’s little difference between his morality and what he wants to do in the first place.
These are not the same concept. The same word should not be used to describe the priorities of you specifically and what you care about versus a classification of which actions make the world a better place. When Dylan says “the very people doing the asking or the judging wouldn't be able to do it themselves”, that didn’t make sense to me at first: why would someone not making a moral choice mean the choice isn’t moral? But it makes perfect sense when Dylan’s morality is read as a descriptor for the social tool of what individuals selfishly prioritize from their perspective!
We need different words for these things.
I think my only question is how this meshes with the “collaboration” part, which he thinks an ethical system should lead towards: sacrificing yourself to save the earth isn’t very collaborative, is it? Maybe the descriptor is meant to capture how humans are inclined towards small communities, like friends and family?
Anyway, fun discussion, Dylan!