I think we have to keep in mind the degree to which people not knowing what 10^100 looks like plays a role in their reacting in complete disbelief. Yeah, I could imagine someone deciding to sacrifice themselves on behalf of more beings than there are protons in the observable universe.
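For a rough sense of scale (my own back-of-the-envelope figures, not from the post): the number of protons in the observable universe is usually put at around 10^80, so 10^100 beings would outnumber them by a factor of roughly 10^20. A minimal sketch of that comparison, using those assumed estimates:

```python
# Rough scale comparison (assumed order-of-magnitude figures, not from the post):
protons_in_observable_universe = 10**80   # commonly cited estimate (~the Eddington number)
beings_in_thought_experiment = 10**100

ratio = beings_in_thought_experiment // protons_in_observable_universe
print(f"10^100 beings outnumber protons by a factor of about 10^{len(str(ratio)) - 1}")
# -> 10^100 beings outnumber protons by a factor of about 10^20
```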
Nice post!
My response fwiw...
https://davidschulmannn.substack.com/p/my-theory-of-morality
I still can't get over this post. One of those I loved from the moment I saw the title.
The reason I would not sacrifice the whole world (or even a large enough portion of it to put human society in jeopardy) is because of the uncertainty surrounding humanity itself. Consciousness aside, humans have the ability to do things that no amount of shrimp could ever do, and we don't know exactly what the limits of those abilities are. It's possible that one day we could do things so incomprehensibly advanced (like, say, change the past) that it would scramble the calculus.
(We also do not know exactly how rare human-level intelligence is; it's not impossible it is so extraordinarily rare that we're the only civilizational beings in the observable universe. I'm not saying that is the case or even likely to be the case, but it certainly merits consideration. The Fermi paradox is not actually a paradox, but it is a bit surprising, and suggestive.)
This is a reasonable longtermist explanation, and I probably agree. I just think the odds that humanity does something that improves 10^100 conscious lives are very low, and I’m not bought into Pascal’s Mugging quite yet. But I agree, very possible!
Yeah, that's fair enough. But my immediate sense - and this is something I'd need to ruminate on a little before I'm comfortable calling it anything more than that - is that when you start throwing around numbers like a googol/googolplex/etc., you're already very deep into speculative territory, outside what Richard Dawkins calls "Middle World." E.g., actually visualizing a googol shrimp is something human beings simply can't do - we can shorthand it as "a lot" but can't actually comprehend how large it truly is.
So a part of me almost wants to say that the thought experiment itself is nonsense because it can't be accurately "thought." But regardless, it's so abstracted that I feel much more comfortable invoking similarly abstracted "Pascal's mugging" scenarios.
Again, epistemic status here is not high and I'm open to changing my mind. Just thinking out loud.
I think there’s a key difference between things that have an extremely small probability of producing large numbers and things that *are* large numbers. Ask whether I would sacrifice 1 person or take a 0.000001% chance of sacrificing a billion people, and that’s tough. Ask whether I would sacrifice 1 person or accept a 100% chance of sacrificing a billion, and that’s not Pascal’s Mugging; that’s just an easy choice.
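To put numbers on that distinction (my own arithmetic, using the figures from the comment above): a 0.000001% chance of losing a billion people works out to an expected loss of about 10 lives, which is why that version is genuinely hard, while the certain billion isn't close. A minimal sketch:

```python
# Expected-value arithmetic for the two choices above (my framing of the comment, nothing more):
p_small = 0.000001 / 100        # 0.000001% as a probability, i.e. 1e-8
billion = 1_000_000_000

expected_loss_certain_one = 1
expected_loss_gamble = p_small * billion    # = 10.0 expected lives lost

print(expected_loss_certain_one, expected_loss_gamble)   # 1 vs 10.0 -- comparable enough to argue over
# The "100% chance of a billion" variant has an expected loss of 1_000_000_000,
# which is why the comment treats it as an easy choice rather than a Pascal's Mugging.
```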
Yeah - I guess I just don't see eventual godlike descendants as that small of a possibility. When it comes to intelligent life, we have a sample of one. If humanity doesn't destroy itself over the next millennia - which, to my mind, given all we've already avoided, is not at all implausible - and if we keep up the progress of the past ~250 years, well. Who knows where we might be?
But would you suck someone’s dick for a 1/10^90 chance at 10^100 dollars mr utilitarian
Nope, the diminishing personal value of money means money’s less valuable the more of it there is; but that’s just because you run out of things to buy that help your life, and that isn’t true with animals.
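A minimal sketch of the asymmetry being claimed here (my own toy model, not anything the commenter wrote): if the personal value of money is roughly logarithmic, extra dollars add less and less, whereas if each animal helped counts the same, the total just scales linearly.

```python
import math

# Toy model of the claimed asymmetry (assumed functional forms, purely illustrative):

def personal_value_of_money(dollars: float) -> float:
    # Log utility: a standard stand-in for diminishing marginal value.
    return math.log(1 + dollars)

def value_of_animals_helped(n: int) -> float:
    # Linear aggregation: each additional animal counts the same.
    return float(n)

print(personal_value_of_money(1e6), personal_value_of_money(1e12))
# ~13.8 vs ~27.6: a million times more money, only about twice the personal value
print(value_of_animals_helped(10**6), value_of_animals_helped(10**12))
# 1e6 vs 1e12: a million times more animals, a million times the value
```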
What if you can spend the money on any planet where shrimp exist
Do I also get a rocket ship so I can be a spacefaring shrimp philanthropist helping them around the universe? Cuz that sounds sick
My guess is you probably won’t be able to afford one, but maybe you’ll get lucky! I do think something happens with extremely large numbers that makes people completely unable to process them. The difference between 1 shrimp and 1,000 feels a lot bigger than the difference between 1 billion and 10^100. I don’t know that people are equipped for moral reasoning when it comes to impossibly large numbers.
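One way to make that flattening concrete (my own illustration): on a log scale, 1 shrimp to 1,000 shrimp is a jump of 3 orders of magnitude, while a billion to 10^100 is a jump of 91 orders of magnitude, yet intuition files both away as "small number versus huge number."

```python
import math

# Orders-of-magnitude gaps between the pairs mentioned above (illustration only):
print(math.log10(1_000) - math.log10(1))       # 3.0  -- 1 shrimp vs 1,000 shrimp
print(math.log10(1e100) - math.log10(1e9))     # 91.0 -- a billion vs 10^100
# The second gap is ~30x wider even on a log scale, but both register as "a few vs. a lot".
```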
Thank you for the polite and well-reasoned response!
I think your 2 main arguments can be summarized as:
1) You would in fact behave selflessly in many scenarios, which detracts from the credibility of my claim that utilitarianism relies on the veil of ignorance and will almost always break when our own personal interests are introduced
2) But even if you wouldn't, just because people may struggle to do something, doesn't mean it isn't the 'moral' or 'ethical' decision.
To your first point, I would say that you seem to be convinced of your selfless behavior by the utter magnitude of the shrimp hypothetical, but your response to the surgeon problem is all I really need to remain convinced of my argument:
>> "If the doctors said “hey, you can give your organs for the five,” you know what I would say?
Hell no. You kidding me? I love myself more than almost anything else in the world, no way am I gonna sacrifice myself for some random nerds."
If we forget the shrimp (which I'm sure at this point everyone on Substack is begging us to), and just consider the surgeon problem, then we're mostly on the same page here.
Next, you move on to the critical point #2:
>> "But here’s the stress point: I think refusing to sacrifice the few to save many is not the moral choice. I think when I value myself over the welfare of many other people, I am being selfish... I think my unwillingness to make a good choice says nothing about how good the choice actually is."
>> "When I say morality, I am talking about selflessness. I am talking about caring about other people, grounded in the assumption that other people are real, conscious individuals who exist. I am not looking for an evolutionary justification for why you personally wouldn’t care about other people."
I start with the premise that to value your own experience more than others', including valuing your life more than infinite shrimp, or your child more than 5 strangers, is a natural and blameless act. I believe that every being has selfish relative values that we place on things, which is incompatible with the objective neutrality that utilitarianism demands of us. I think that to ask someone to make a great sacrifice for others on the basis that 'overall utility is maximized' is akin to gaslighting, because the very people doing the asking or the judging wouldn't be able to do it themselves.
In this case it's not that I think people's unwillingness to be selfless means that to be selfless is 'bad', it's that, to borrow from my article: "...I want a system of morality that doesn’t gaslight me. In other words, one that does not instruct us to make choices against our own self-interest on the basis of some assumed objective moral truths. Rather than pretend away or condemn our inherent selfish desires, I want a philosophy that accepts them as a necessary starting place."
I want a more practical philosophy. I find utilitarianism only useful as a macro tool to make decisions about other people when your self-interest is completely obscured. Otherwise, simple concepts like reciprocal morality (that accept selfishness) are far more relevant to promoting collaboration, which in my opinion should be the ultimate goal of ethical philosophy.
I think we understand each other pretty well; I think we’re just referring to completely different things when talking about morality.
In this part, “simple concepts like reciprocal morality (that accept selfishness) are far more relevant to promoting collaboration, which in my opinion should be the ultimate goal of ethical philosophy”, and this part, “every being has selfish relative values that we place on things, which is incompatible with the objective neutrality that utilitarianism demands of us,” I really feel like the word “morality” is destroying us here.
When I think of morality, I’m talking about making selfless decisions to make the world a better place. I think that choosing selfish desires over selfless ones is not the moral decision, and I think there are things that are good and bad in the world. Namely, I think deep suffering and pain are bad, and that love, fulfillment, passion, and joy are good. Some utilitarians disagree, and think the most selfless decision is the one that best fulfills a person’s preferences, but either works for this comment. In either case, I think that people’s lives being BETTER or WORSE is something that is real, and it’s more moral to choose someone being in love and happy over someone being in deep pain. If you do not care about seeing people in great pain, or do not think that love and fulfillment are things worth hoping for, then I can’t convince you of that.
Dylan might even accept all that, but he sees morality as more of a social tool: promote collaboration between humans, keep society in order, do what you want to do and what makes you happy in life. He even agrees that doing selfless actions is good, but thinks being selfish is natural and should be blameless under morality. Dylan cares about his family and friends above all else, so he sees morality as something that describes why that’s the case, and I think under his view, especially with the sacrifice-yourself-for-the-earth example, there’s little difference between his morality and what he wants to do in the first place.
These are not the same concept. The same word should not be used to describe both the priorities of you specifically and what you care about, and a classification of which actions make the world a better place. When Dylan says “the very people doing the asking or the judging wouldn't be able to do it themselves,” this didn’t make sense to me — why would someone not taking a moral choice mean it’s not moral? But it makes perfect sense when Dylan’s morality is used as a descriptor for the social tool of what individuals selfishly prioritize from their perspective!
We need different words for these things.
I think my only question is how this meshes with the “collaboration” part, which he thinks an ethical system should lead towards — isn’t sacrificing yourself to save the earth not very collaborative? Maybe the descriptor is used to say how humans are inclined towards small communities, like friends and family?
Anyway, fun discussion, Dylan!
People think this post is about shrimp, but it’s about Big Numbers and scaling.
Reading this made me think there’s a theistic dimension we could bring into play here as well. If I believe that God has a good place waiting for me in the next world, and my dying can vastly improve things in this world, then isn’t that kind of a win-win scenario? Like, aren’t noble deaths a thing that we’ve culturally appreciated, maybe even celebrated, forever, telling ourselves that ‘at least he’s with God now’ or ‘now she has eternal rest’ and so on?
I’m not one to quote random bits out of context and act like they answer life’s biggest questions full stop, but Jesus saying ‘he who wishes to save his life will lose it, but he who loses his life for me will save it’ comes to mind. Maybe that means all sorts of things, but at the core of it (I think) is the idea that if you’re serious about living a moral life, then you gotta recognize it’s gonna cost you—but what you gain is worth it!