21 Comments
Ali Afroz

I agree entirely with your post. Frankly, I have never understood what the difference between obligatory and supererogatory actions is supposed to mean. Clearly, even people who think it’s not obligatory to donate to charity would agree that it’s preferable and even praiseworthy to do so. But then what does it mean for something to be obligatory, if not simply that you should do it? Moral philosophy is about determining what you should do, and this whole question of whether something is obligatory or not seems impossible to state in terms of what you should or should not choose. Personally, I treat all talk of what is obligatory as talk of what it would be blameworthy not to do. So, for example, stating that it’s obligatory not to casually lie to people for your own benefit when it harms them just means that if somebody does this, we ought to criticise them, and generally impose punitive consequences in the form of blame.

This is not incompatible with your post, obviously, since after all, you have to set the zero point for grading someone somewhere. In this system, it makes perfect sense that you aren’t a bad person for not donating to help people in Africa, since almost nobody does that; unless you’re in an unusual reference class, you aren’t doing anything that 99% of people won’t also do. Obviously, it would be pretty stupid to set the zero point so high that 99% of people would count as bad people, since then you pretty much guarantee that nobody listens to you.

I think part of why your post is so appealing to me is that I think of praise and blame as primarily about incentivising the correct behaviour for maximising utility, and your system is great for properly aligning incentives. For people who don’t think this way, it probably doesn’t seem that unnatural for praise or blame to create bad incentives, but then they have to cope with the fact that we find it very counterintuitive and undesirable when our judgements of praise and blame create bad incentives. While the average person might not agree that praise and blame are entirely about incentives, they do have a strong intuition that judgements of praise or blame should not create bad incentives, and they expect such judgements to reliably generate the correct incentive structure.

Of course, because of moral uncertainty, even I don’t think praise and blame are purely about incentives, since I give some probability to the theory that retribution for bad actions and reward for good actions is inherently good. Still, the fact that I generally think of them as being about incentives probably influences my theory, as does the fact that I’m the sort of person who thinks in incentives in the first place. After all, even when I think in terms of retribution and just deserts, I allocate them on the basis of actual contributions in a way which maps almost exactly onto what would be correct for proper incentives, even if I treat them as terminally valuable and their severity and magnitude are larger. Still, I think my intuitions are influenced by being the sort of person who reads game theory and economics. I do think it’s notable that nobody from the just-deserts crowd will stand up and declare that praise and blame sometimes generate bad incentives without acknowledging that this is a significant problem with their theory, which I think shows that they share this intuition.

Salma

A thousand percent

Philip

Is a scientist who heads a lab that does successful cancer research, but viciously abuses his wife and children, really more moral than a virtuous janitor who is a loving husband and father?

Jared

Every ethical framework says it’s better to do more good than less good. What they disagree on is what good is. Utilitarianism is a consequentialist ethic that says what is good is maximizing pleasure. It works from the assumption that no kinds of acts are always prohibited or always required, nor does it consider intentions. Some forms of utilitarianism consider probable outcomes rather than actual ones.

It's flawed because it really is the case that some kinds of acts are duties and refraining from some kinds of acts is morally obligatory - but not all. A mature ethical framework recognizes some things as obligatory or forbidden, but most kinds of moral acts are intrinsically neutral, so that circumstances, including intention, are what make the act moral or immoral. Probable outcome is one such circumstance.

Corsaren

Okay, I'm going to write a whole separate essay on this at some point, but for now I'll just ask this: under (act) utilitarianism, when do I get to stop caring about whether an action is morally better?

I take this to be slightly different from the typical demandingness objection, which argues that utilitarianism obligates one to do the best thing and that this is absurd. My question is about what decision criterion I should use--even if it is a fuzzy one--for deciding when I should stop worrying about whether an action is "morally better" than another.

To illustrate the problem, let's just use the easy example of deciding how much money to donate. Let's assume that I could always donate one dollar more--since I'd have food stamps and public housing to fall back on, there is quite literally always a poorer person in a third-world country who could use the money more than me (assume no information or transaction costs, Least Convenient World, yadayada, assume my donating this money does not negatively impact my future earning / donating ability yadayada). So: when do I get to stop adding another dollar and prioritize myself instead? Don't say "just pick what feels right" or "do the best you can". I'm asking for a decision criterion *under utilitarianism itself* that allows me to at least articulate where this cutoff point should be. What cutoff point do I have "most reason" to select if not 100%?

If there is no such decision criterion under utilitarianism, then presumably I must simply weigh the utilitarian concerns vs. my own self-interested concerns, but this only kicks the can down the road. Because now we ask "what criterion do I use as part of my larger decision process, which includes both utilitarian and self-interested concerns, for deciding when one set of concerns outweighs the other?"
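(To make that can-kicking concrete, the weighing move can be written out as below. The symbols U_world, U_self, and the weight lambda are purely illustrative shorthand, not anything from the post.)

```latex
% A minimal sketch of the "weighing" move described above; all symbols are illustrative.
% d is the fraction donated, U_world the utilitarian concerns, U_self the self-interested ones.
d^{*} = \arg\max_{d \in [0,1]} \Big[ U_{\text{world}}(d) + \lambda \, U_{\text{self}}(d) \Big]
% The open question is then simply relocated: nothing in the theory fixes the weight \lambda.
```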

Kyle Star

I think you’re still craving a moral obligation to anchor on. There’s no point where it becomes “pointless” to save another kid even if you’ve saved 1,000. That’s just not how the universe is set up. That’s reality.

But you can set whatever cutoff point you want. There’s no principled one to pick. In fact, that’s the whole idea! That’s why deontology is wrong: it thinks there’s a level of perfect morality you can achieve even while you could still save someone else. Deontology is the one with an arbitrary-cutoff problem, not utilitarianism.

Instead, I argue in this essay that no matter what cutoff you pick, DOING MORE GOOD IS BETTER. If the way you encourage yourself to do the most good is just donating 10% at year’s end, and that puts you in the top 1% most saintly humans on earth, then I don’t see why you should be judged by someone who has done less moral good than you. Doing good is good.

Corsaren

Okay, but if there’s “no principled number to pick”, then why shouldn’t I pick 0%? Or perhaps 0.00001%, since at least that isn’t the literal worst number on offer?

I'm not asking for the minimum number I have to pick to be morally blameless. I'm not asking for the point at which any further increase no longer counts as doing a morally better action. I'm not asking what number allows me to say that I've satisfied my moral obligations, whatever those may be (we can even say that such a concept is fake for our purposes).

What I'm asking for is a decision theory and criteria. You say that picking a larger number is "morally better" -- and I'm happy to stipulate to that! We can agree that no matter what I pick, adding another 1% or 0.0001% would be morally better. But picking a smaller number is, ofc, egoistically better. So how do I weigh those against each other and decide what number to pick?

I'm also not asking for a specific number btw; I'm just looking for the description of what that number is meant to capture and what I should be attempting to do when I select it. For instance, in economics, we might say that I should stop buying chocolate at the point at which the marginal utility of another bar of chocolate is equal to the opportunity cost of how I could otherwise spend that money. Simple. What does your moral theory tell me about which number I should pick?
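(For comparison, the economics rule just mentioned can be written out as a marginal condition; the notation is illustrative shorthand, not from the post.)

```latex
% The chocolate stopping rule from the analogy above, written as a marginal condition.
% q = bars of chocolate, MU = marginal utility of one more bar, OC = opportunity cost of the money.
\text{stop at } q^{*} \text{ such that } MU_{\text{chocolate}}(q^{*}) = OC(q^{*})
% The question being asked: what analogous condition pins down a donation fraction d^{*},
% if the answer is not simply d^{*} = 100\%?
```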

If morality is the study of "what we should do", then your theory ought to be able to answer this question. The decision to pick a number here is an action. It is a choice. If your moral theory cannot tell me what it claims that I should choose here then I charge it with being totally useless as a moral theory (okay lol no, that's a bit harsh but you get the idea).

The problem, ofc, is that I think you have backed yourself into a corner here such that the only thing you can say is "pick the highest number" which is, in this case, 100%. At which point, the demandingness objection goes through.

Kyle Star

If you want to know at what point saving a life isn’t a morally good action, it’s when there are no more lives to be saved, or I suppose when saving a life costs more than the value of saving it.

You sound like you want the morally optimal recommendation utilitarianism can give. Donate all your money to charity. Spend all your time gaining power. Spend it all to make the earth a utopia.

Assuming that humans will fall short of automaton-type efficiency, we can compare who’s done better and who’s done worse. But no, you’re correct: it will always be morally good to save a life until the cost of saving a life is more than the value of saving it. That’s just true.

Corsaren

Again, I’m not asking what is morally good per se. I agree that it is, ofc, always more morally good to donate more to save or help a life that is struggling. We’re also aligned on the question of evaluating who has done better or worse. Rather, I am asking what we *do*, given that it is always more morally good to increase the %. If the answer is that we should donate 100%, then I think you are vulnerable to the demandingness objection, which is the whole thing that you were trying to avoid here.

Corsaren

Sorry, not trying to beat a dead horse here; I'm just actually not sure how scalar utilitarianism responds to this objection without having to resort to some other collective standard. It feels like you either have to be silent on the "what to do" question, or you make yourself vulnerable to the demandingness objection. Neither seems satisfactory.

Kyle Star

I think we may be talking past each other, apologies. It sounds like we're aligned on the moral facts of the universe -- you can always save an extra life.

But in the real world, humans have limited information and willpower, plus time pressure, risk aversion, etc. If I say, "you should donate 100% of your income," you will not donate 100% of your income. Neither will I; I don't want to. But I can calculate what sacrifices I'm willing to make for my morality, while also living my life in a "selfish" way by prioritizing myself.

My way to avoid being stunlocked by demandingness is to say, as I do in the article, "doing more good is better; I will praise people who do more moral good, as that's the behavior I want to incentivize". People want to know what to do. I say "Try to do more good than the person next to you, in a way that does a lot of good for the most people", and that's how I want people to spend their time thinking about morality.

I think you "should" donate 90% of your income from a "make the world a better place" standpoint. I just don't prioritize morality as the thing I care about the most in life; I care about my friends and family a great deal more. People will try to twist themselves in knots explaining why they're morally perfect even though they don't do shit. I think their morality fails to explain the moral facts of the universe. Any hard line of "what to do" MUST be arbitrary, or we fall into non-utilitarianism, where once you hit 1,000 kids you've hit your quota. So we MUST work with human psychology and use soft recommendations, like donating 10% of your income, and maybe we can peer-pressure people to help there.

Utilitarianism isn't satisfying. It is correct.

Monica

I wonder if a fix to Kyle’s thesis, with respect to the flaw you highlight, lies in his point about power. Those who have the ability to do more have a greater obligation to do more. “Grading on a curve,” yes, and a curve with handicaps. Those who make 10k/yr, 100k/yr, and 1M/yr have different levels of spare financial resources to save lives. Thus, the greater a person’s financial capital, the higher the standard for monetary charity we should hold them to.

A similar statement could be made for those with various levels of political or social capital.

Going back to the original thesis, I think this fix might imply the necessity of constructing an “average human’s” capacity for charity, then judging a given individual by how far their charity falls above or below that benchmark, weighted by handicaps such as their financial, political, and social capital.

I personally (though I am an atheist) find religion to be a good guide here because a number of religions define the boundaries of charity. I find this useful because it provides hundreds / thousands of years of guidance.

In Islam, it is 2.5% of wealth, with a floor below which you may give zero.

In Judaism and Christianity, it is 10% of after-tax income.

Hinduism also suggests 8-10% of income, or up to 40% of income remaining after paying for personal / family expenses.

And obviously there’s the Giving What We Can pledge, a modern interpretation of that.

These are all referring to financial capital - obviously political and social capital are more challenging to quantify. I don’t think it would be simple to normalize these (because obviously time / era is a factor too! If ancient people gave away 10% of their income, how should that be brought to 2025?). But I think it would be possible to come up with a model here to determine a given individual’s charitableness, given their circumstances.

I think, then, it would be reasonable to call anyone above that 50th-percentile marker “charitable.”

We could layer the effectiveness of their charity onto this model to determine who among the charitable can consider themselves “moral” - that’s certainly a better-explored topic within the EA community.
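A rough sketch of what such a model could look like, purely as an illustration: the baseline rate, reference income, surplus rate, and scaling rule below are invented for the example, not drawn from the post or from any of the traditions above.

```python
# Purely illustrative sketch of the "benchmark plus handicaps" idea above.
# The baseline rate, reference income, and surplus rate are invented numbers.
from dataclasses import dataclass

BASELINE_RATE = 0.10        # benchmark giving rate for the "average human"
REFERENCE_INCOME = 50_000   # income at which only the baseline rate is expected
SURPLUS_RATE = 0.25         # extra expected giving out of income above the reference

@dataclass
class Person:
    name: str
    income: float    # annual income
    donated: float   # annual charitable giving

def expected_donation(income: float) -> float:
    """Handicap-adjusted benchmark: the baseline share of income up to the
    reference level, plus a larger share of any surplus above it."""
    surplus = max(income - REFERENCE_INCOME, 0.0)
    return BASELINE_RATE * min(income, REFERENCE_INCOME) + SURPLUS_RATE * surplus

def charitableness(person: Person) -> float:
    """Ratio of actual giving to the adjusted benchmark; 1.0 means exactly
    meeting it, and higher is more charitable given one's circumstances."""
    return person.donated / expected_donation(person.income)

def percentile(person: Person, population: list[Person]) -> float:
    """Fraction of the population this person out-gives on the adjusted score,
    i.e. the 'grading on a curve' step."""
    score = charitableness(person)
    return sum(1 for other in population if charitableness(other) < score) / len(population)

if __name__ == "__main__":
    population = [
        Person("A", income=40_000, donated=1_200),
        Person("B", income=100_000, donated=6_000),
        Person("C", income=1_000_000, donated=30_000),
    ]
    for p in population:
        print(p.name, round(charitableness(p), 2), round(percentile(p, population), 2))
```

The effectiveness layer could then be folded in by weighting each donated dollar by an effectiveness multiplier before computing the score; the hard part is choosing the benchmark and weights, not the arithmetic.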

Corsaren

Yeah, I’m very sympathetic to something like the “average human” approach, as it seems similar to Collective Consequentialism, rule utilitarianism, and Kantian ethics. Basically, we ask “okay, what would we want the average person to do? Surely we wouldn’t want everyone donating 100%, that’s not possible. We wouldn’t even want them to donate 50%. It’s probably some number where, if everyone donated that amount, we’d have enough resources for the poor.” And then you adjust for income or what have you based on other principles or rules.

The prooooooooblem ofc is that, because you have now invoked this sort of universalization test, it no longer resembles the pure act or scalar utilitarianism that Kyle advocates. Moreover, you can then ask the question: okay, but why should we pick the number such that, if the avg person did the same, things would go well, EVEN IF the avg person is not actually doing that and, as such, we could easily donate more?

In other words, we can invent a new standard, and it makes for a good decision criterion, but how do we justify it given the metaethical framework we have adopted? It seems like we are crying out for something else. Something…dare I say…Kantian?

Monica

Hmm, I do see the Kantian imperative as substantively different.

A universal law as per Kant, and even a rule as per rule utilitarianism, is in effect for all instances, whereas clearing a median is a net of behavior over time. Kant would have me save a drowning child on my walk by the lake because that ought to be the universal law. Considerations such as the delay to my job or the value of my clothes are not relevant.

Rule utilitarianism might say either that I ought to save the child every day, or perhaps that I should calculate the value of my clothes and save the drowning children only on days when my clothes and the loss of income from being late are worth less than the average value of a child’s life. Perhaps I may let the child drown if I am wearing a 20k outfit that would be ruined.

Calculating a charity median, or using it to then imply a morality median, does not tell me how to act in a specific situation. It tells me to calculate the amount and/or effect of my charity on a periodic basis. I am allowed to net out my behavior. That is substantially different from any rules-based morality system.

Corsaren

The tricky thing here is to carefully define what a rule is and how it operates, especially how detailed it can/should be. I think you can articulate the exact behavior you are describing (netting out on a periodic basis) as a rule; we just have to be more flexible about personhood and the scope of actions. But ya know, that takes a whole book.

Monica

This was meant to be a reply to @Corsaren

funplings

Feels like there are a couple of different ideas here that are being slightly conflated. Idea number one is that of "perfect is the enemy of good"; doing good is better than not doing good, even if it's not the most possible good that you could be doing. Number two is the effective altruist mindset of, well, effective altruism: i.e. what's moral is what helps the most people as much as possible, not whatever makes the do-gooder *feel* the most good.

Your discussion about "moral obligation" also reminds me a lot of the Copenhagen Interpretation of Ethics (https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics).

Kyle Star

The idea that you’re “morally tarnished” for interacting with the lakes was indeed directly inspired by the Copenhagen Interpretation of Ethics! Maybe I should explicitly reference that article; I heard about it from Scott Alexander. Good eye there.

This article is indeed a conflation of two ideas: the objection to utilitarianism that it’s too difficult (a good comparison to “perfect is the enemy of good”), and then, following that, how we can compare different moral actions IF humans can’t make the completely perfect decision in every situation, which of course they can’t. I think that from admitting that saving 1,000 kids is morally better than saving 1 kid, it naturally follows that actions which save more kids are better, and “count more.”

So yep, you’ve hit the point of the post on the nose.

Brock

"Utilitarianism doesn’t say any of that shit! It just says that it’s BETTER to do more good than less good."

Congratulations, you’ve just re-invented Alastair Norcross’s “scalar utilitarianism”!

Kyle Star

This is a very similar idea, but this post is less about the philosophical implications and more about how we can judge squishy, fallible humans so as to encourage them to do more moral things. It can be true that you SHOULD always do the thing that provides the most utility, while also admitting that humans don’t, and we want them to do what they can.
