You Should Just Grade Morality on a Curve
Calling morality “too demanding” is missing the point of what morality is
Many moralities are on the hunt for the fabled “moral obligation.” They see morality as a list of “stuff you should probably do”: once you’ve fulfilled your obligations and checked all the boxes, you’re set free to focus on whatever else you have to balance in your daily life.
Religions believe this, with their Ten Commandments, and secular moralities believe this, with their “do not lie, do not kill” credos that then set you free to focus on your passions.
I think this hunt for an obligation is a bad way of looking at morality, one that doesn’t follow from the moral facts the universe actually rests on.
…
So these other moralities aren’t based on how much good you actually do. What’s weird about them is that, when presented with a scenario where the option that keeps you morally perfect leads to worse results, they still hope the better outcome happens anyway. If someone says you shouldn’t pull a lever to kill one person and save five, they still hope a rock falls from the sky and hits the lever to change the track anyway. So by banning actions that would, in some cases, make everyone’s lives better[1], they treat the natural state the world is already in as somehow mattering more than the conscious experiences of the people in it. I find this to be an odd worship of “the way things already are” over the actual preferred worlds that we all want to live in.
That consequence, and the idea of a “moral obligation”, can lead you to a belief that resembles what Nietzsche called slave morality: the idea that what you must do to be perfectly moral is to ensmallen yourself. Don’t lie, don’t cheat, don’t murder, and so on. You may notice that by these credos, a corpse is morally perfect. Slave morality is goals for dead people.
Most insidiously, if we try to fit a moral obligation like “save someone when they’re drowning” into this belief system, the morally optimal thing to do is to never go near rivers or lakes, because if you don’t see a drowning child, you have no obligation to save them. If a firefighter saves 10 children but cannot muster the strength to save the last one, he is more “morally tarnished” for letting someone die than I am, having sat on my ass watching YouTube all day[2].
Enter utilitarianism, which praises the firefighter, and says that saving ten children is better than saving one, or saving none. Utilitarianism is focused on outcomes, and on which outcomes are best for people. That’s all utilitarianism says: the trivial claim that saving 1 more person is always better. Even if 1,000 children have already been saved, the 1,001st is a real person who matters morally. But this is where its critics come in with the most common complaint about utilitarianism: if you’re trained to think of the world in terms of moral obligations, then the trivial claim that “saving more people is better” might lead you to say “wait, if you’re morally obligated to save people even indirectly, aren’t you morally obligated to save all the people?” This is the “demandingness objection”: critics who can only see the world in terms of obligations say utilitarianism must not be true, because admitting that saving two people is morally better than saving one obligates you to give 100% of your money away and attempt to save the world.
But this is absurd! Utilitarianism doesn’t say any of that shit! It just says that it’s BETTER to do more good than less good. This framework of moral obligations is a bad one, no matter how instinctive it looks to our goal-and-deadline brains. Instead, just as saving two people is better than saving one, I propose another way to look at morality: being more moral is better than being less moral.
If your actions purposely[3] cause many people to be saved, I would propose that that is better than if your actions cause fewer people to be saved. If you save many people from being tortured, that is better than if you save fewer people from being tortured. We can grade humans on a curve: the most moral people are better than the less moral people. It’s alright if your actions don’t single-handedly save a trillion people, because expectations aren’t even built in; you should just do more good to be more moral, if you care about being moral.
I’d like this to be uncoupled from the FEELING of being a good person, or being moral. If you donate to a charity that saves one person, and you get an immense ego boost and a feeling of virtue and goodness in your heart because you get to see the person you saved, that is still less moral than donating to a charity that saves 1,000 people where you don’t get as much of an ego boost from “being a good person” (of course, both are much better than not donating at all). It’s a shame our feelings of virtue are divorced from the good that they cause (evil people still see themselves as the hero), but moral facts don’t care about your feelings! It is still true that saving 1,000 people is better than saving one!
Our human intuitions are biased. If someone saves 1,000 children by donating to charity, but is generally an asshole and bumps into people on purpose on the sidewalk, sure, that’s really fuckin weird, but his presence has made the world a much better place. I would like to call him more “moral” than an average Joe who did none of those things, because he has directly affected the world in a more positive way than Joe did by doing squat. I think you can offset the moral bad you do by causing a vast amount of good; as the author of the great article On Moral Offsetting says, anti-offsetters place morality on utterly trivial details, like how the bad occurs, instead of what’s obviously morally paramount: the bad itself, happening to real conscious individuals! If you do something bad, it’s not moral to wallow in guilt; morality wants you to pull yourself up by your bootstraps, try not to do the bad thing again, and get out there and do something GOOD. Maybe even more good than the bad thing you did!

This naturally implies that the more power you have, the further up the curve you can go, because you can make the world a much better place; this should be intuitive, as with great power comes great responsibility.
The reason I think this decoupling of “moral” from “good-person, kindhearted, virtuous” or whatever is important is that it heavily incentivizes people to make the world a better place, causing more good in the world. I think it matters if a person consciously made the world a much better place than the person next to them, and they should be praised for that. In this strange world we live in, charity is an order of magnitude more effective than almost anything else you can do to improve the world. This is a weird fact that doesn’t make much sense to our tribal intuitions, built for strong, close bonds, where evolution never ever had to consider humans halfway around the globe.
But once again! Moral facts don’t care about your feelings! If donating to charity makes the world 10x better than something else that sounds more virtuous, I would like the thing that actually makes the world a better place to be more incentivized. And I would like to heap praise on people who do this. Think of the incentives! If people bragged about making the world a better place directly, instead of appealing to some wishy-washy caveman-brain-activated abstract virtue, the world would be an incredible place to live. And while it’s true that saving 1,001 kids is better than saving 1,000 (that’s literally just a fact), I would still like to praise the man saving 1,000 over the man sitting on his ass. A leprechaun will not pop out after you save 1,000 kids and say “you did it! The last kid you were obligated to save! Now you’re morally perfect!” But what I hope happens is that you get praised for being a better person than the people next to you.
If you do more good than the person next to you, you don’t need to save a trillion people. Don’t think in terms of obligations; try to do what you can to improve all our lives in the best way you can find.
Grade morality on a curve. Make the world a better place to live in.
Liking is the best way to support mwah, Kyle Star, as I rage against the Substack algorithm. Feel free to subscribe if you haven’t already. The more I grow, the more time I can spend writing.
[1] Some people might complain here that in the trolley problem, the guy on the one-person track isn’t hoping for death. This is true. But you don’t need to go very far to find many scenarios where everyone is worse off under deontology. One benign example is a white lie, where you want to lie and the other party wants to be lied to. Some deontologists might find no fault with this, but that doesn’t spare them from issues like refusing to sacrifice 1 person to save 1,000,000,000, a trade I think even the 1 person would understand.
[2] Some may say, “can’t you say that you’re morally obligated to donate a little to charity?” That sentence is extremely vague about what “little” means, and I think looking at morality through absolutes is a terrible way to look at it, no matter what those absolutes say. As the rest of the post explains, it’s better to compare people against one another and say “you’re amazing for donating so much to charity” than to set an arbitrary goal.
[3] The word “purposely” is crucial here. Some may appeal to the butterfly effect, where the flap of a butterfly’s wings could cause 1,000 tornadoes. This is true, but irrelevant, as completely random effects don’t matter for morality. Now, if someone ignores risks and is negligent, that’s obviously relevant (say there’s a 10% chance of saving 1 person and a 90% chance of killing 10; that’s a bad bet), but if you earnestly try to cause a vast amount of good and it happens to go wrong in a completely unforeseen way, that means you’re not negligent, and you’re morally in the clear.
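To spell out that arithmetic (a rough expected-value sketch, assuming those are the only two outcomes and every life counts the same): the bet comes out to 0.1 × 1 life saved minus 0.9 × 10 lives lost, or roughly 8.9 expected lives lost, which is why knowingly taking it is negligence rather than mere bad luck.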
I agree entirely with your post. Frankly, I have never understood what the difference between obligatory and supererogatory actions is supposed to mean. Clearly, even people who think it’s not obligatory to donate to charity would agree that it’s preferable and even praiseworthy to do so. But then what does it mean for something to be obligatory if it doesn’t just mean that you should do that thing? Moral philosophy is about determining what you should do, and this whole question of whether something is obligatory or not seems impossible to state in terms of what you should or should not choose. Personally, I treat all talk of what is obligatory as talk of what would be blameworthy not to do. So, for example, saying that it’s obligatory not to casually lie to people for your own good when it harms them just means that if somebody does this, we ought to criticise them, and generally give them punitive consequences in the form of blame.
This is not incompatible with your post, obviously, since after all, you have to set the zero point of the grading curve somewhere. In this system, it makes perfect sense that you aren’t a bad person for not donating to help people in Africa, since almost nobody does that; unless you’re in an unusual reference class, you aren’t doing anything that 99% of people don’t also do. Obviously, it would be pretty stupid to set the zero point so high that 99% of people would be bad people, since then you pretty much guarantee that nobody listens to you.
I think part of why your post is so appealing to me is that I think of praise and blame as primarily about incentivising the correct behaviour for maximising utility, and your system is great for properly aligning incentives. For people who don’t think this way, it is probably just not that unnatural for praise or blame to create bad incentives, but then these people have to cope with the fact that we find it very counterintuitive and undesirable when our judgements of praise and blame create bad incentives. While the average person might not agree that praise and blame are entirely about incentives, they do have a strong intuition that judgements of praise or blame should not create bad incentives, and they expect them to reliably generate the correct incentive structure. Of course, because of moral uncertainty, even I don’t think praise and blame are purely about incentives, since I do hold some probability for the theory that retribution for bad actions and reward for good actions is inherently good. Still, the fact that I generally think of them as about incentives probably influences my theory, as does the fact that I’m the sort of person who thinks in incentives in the first place. After all, even when I think in terms of retribution and just deserts, I allocate them on the basis of actual contributions in a way which pretty much exactly maps onto what would be correct for proper incentives, even if their severity and magnitude is larger, and I treat them as terminally valuable. Still and all, I think my intuitions are influenced by my being the sort of person who reads game theory and economics. I do think it’s notable that nobody from the just-deserts crowd will stand up and declare that praise and blame sometimes generate bad incentives without acknowledging that that’s a significant problem with their theory, which I think shows that they share this intuition.
A thousand percent