That Sam Kriss Article About Rationalism, “Against Truth,” Sucks
The title, "against truth," is very accurate though
You can be an amazing writer and the most eloquent wordsmith, but if you literally just don’t understand what you’re talking about, there’s only so much you can do.
Sam Kriss made a post titled The law that can be named is not the true law, about how modern anti-terror laws criminalize not just an action but even questioning the law itself. It starts by talking about the UK's new Palestine Action ban, which, under UK anti-terror law, makes it illegal to even express an opinion that might make other people support Palestine Action. He then weaves through similar and related cases involving Judaism, and ends with a case about Laurentius Clung, a nihilistic Calvinist who insists all souls are damned. People noticed that this "Laurentius Clung" did not exist, and was in fact made up to make the story cooler. Rationalists like Eliezer Yudkowsky shot back, saying that mixing fiction into something that reads as political commentary might make people believe the lies you put immediately after the facts.
Sam Kriss then shot back with a 6,000-word essay called Against Truth, where he takes aim at the people calling him a liar, at Yudkowsky, at AI bros, and at utilitarians. I encourage you to read it so you can decide if my criticisms are fair, but here's a summary:
He fires some tongue-in-cheek shots at his haters, who he says called him "morally miscalibrated," "morally repulsive," "sadistic," "operating in bad faith," a "bottom-feeder," a "grifter," a "malicious actor," and both a "data hazard" and an "infohazard."
He then claims, tongue in cheek, that Clung is a real person he didn't make up, citing The Reformation of the Sixteenth Century, and adds that there's substantially more on Clung in Blaire G Smellowicz's Sodomites, Shepherds, and Fools: Minor Prophets of the Reformation.
He then recounts a number of other false claims he's made, like how he had sex with half the Tory front bench after the 2023 Spectator summer party, and says they were all true too.
Alright, time out. We're only a third of the way through, but the post is long, so I've gotta stop every once in a while. First, this is pretty funny. His list of what his haters have called him is long and obviously tongue in cheek, but when I started to read his paragraph about how Clung is real, it reads straight! Like he's not doing a bit and Clung is actually real! He makes some plausible citations, some reasonable claims about how you can't find everything on the internet, and only then launches into ridiculous claims about how everything is all true in the next paragraph. Responding to haters who say you mix fact and fiction by insisting you always tell the objective truth, while making it very hard to tell if you're telling the truth, is funny.
Then, for no reason, he launches into a sincere and heated critique of Eliezer Yudkowsky, rationalism, utilitarianism, and everything Big Yud stands for, where he strawmans most of the points he attacks and generally does a terrible job of understanding any part of what he's knocking down.
…
Before we talk about any of the actual points, first, damn, what the hell did Yudkowsky do to this guy? Yudkowsky is totally known for his snark, and despite being, in my opinion, very good at finding truth, he can indeed come off super arrogant. Alright, what insane diss deserved a demolishing of everything Yudkowsky's ever done? What devious rant did Yud launch into? Well, hold onto your butts, here's the tweet:
Come on. That’s the most fair and most mild Yud critique I’ve ever seen. I’m actually impressed by how milquetoast that take is.
The second of three sections critiques rationalists, based on that tweet which launched Sam into a tizzy, and this section is not very good in my opinion:
Rationalism is “mostly about living in the Bay Area, writing things like ‘fark’ or ‘f@#k’ instead of ‘fuck,’ and having unappealing sex with your entire friend group.”
Sam’s take on truth, and trying to be generally correct? “I think the universe is not a collection of true facts; I think a good forty to fifty percent of it consists of lies, myths, ambiguities, ghosts, and chasms of meaning that are not ours to plumb. I think an accurate description of the universe will necessarily be shot through with lies, because everything that exists also partakes of unreality.”
In 2021, before ChatGPT ever came out, Anthropic thought AI assistants could be helpful to the world; AI is actually bad and ChatGPT is bad for society.
Harry Potter and the Methods of Rationality is bad, and it's the foundation of rationalist thought. He spends five paragraphs mocking how bad the writing, characters, and plot are, and says it explains why rationalists believe AI will be big: "a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power"
Okay, this is the bulk of the anti-truth stuff he gets into, and it’s just a bad argument. Sam is a great writer, but I don’t think the phrase “I think an accurate description of the universe will necessarily be shot through with lies, because everything that exists also partakes of unreality” MEANS anything. I think he’s a great writer who can dress up nonsense in fancy words so your brain doesn’t realize he ain’t saying squat.
Is Sam saying he's a conspiracy theorist, and that if you believe in flat earth, that's just as valid? Surely if I say climate change isn't real, or I say QAnon's claims are completely true, Sam has some thoughts on how truth might mean something in some cases? Do you get a pass to believe false things if you just, like, really want to and can justify it? WHAT DOES THAT QUOTE MEAN, GUYS? Is it just gesturing at some general spiritual course that nature takes, for God's eyes only, far too mysterious for any human mind to even try to comprehend? Can I believe in ghosts only if I don't have an opinion on how ghosts are polluting the atmosphere with CO2 and will only stop if the woke agenda is stopped?
The instincts that draw me away from believing all that dumb shit are the same instincts that let me say "hey, maybe truth is actually, like, useful and real." Snow is white.
His AI critique is that rationalists thought AI would be TOO GOOD? I don't see many rationalists nowadays disagreeing that AI has done harm to society, with stuff like making learning worse in schools. The Harry Potter fanfiction is obviously not, in fact, the foundation of all of rationalism; it was a side project for fun, and spending five paragraphs on how badly it's written seems like a poor use of time. And his account of why rationalists are scared of AI, all that talk of infinite intelligence and infinite power, is not a good representation of why rationalists think AI could accelerate. The actual argument boils down to: "AI is good at doing tasks fast. This is already obviously true; ask ChatGPT or an image generator or video generator anything. What if we use the fact that it's good at tasks to help with one specific task, making AI?"
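If you want the shape of that feedback loop in math, here's a toy sketch (mine, not Yudkowsky's and certainly not Sam's; the capability variable C and the growth constant k are made-up labels, not empirical claims): suppose progress on the one specific task of making AI better scales with current capability.

\[
\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\, e^{k t}, \qquad k > 0.
\]

The scary part is the loop itself, not "infinite intelligence transforming into infinite power"; the entire empirical fight is over whether k is big enough to matter.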
…1
He ends the article by going after utilitarians:
Rationalists are utilitarians, which is weird, since philosophers at large aren’t utilitarians.
Utilitarianism can make torturing someone unjustly be positive for society if many people get enjoyment out of it.
The Repugnant Conclusion pits a small number of very happy people against a large number of miserable people — utilitarianism chooses the latter.
If you want to save a drowning child, you have no way of knowing that the child you pull out of the river isn't Baby Hitler, so you shouldn't save a drowning child if you see one (?). He says utilitarianism requires infinite knowledge (?).
Utilitarianism is “lifeless, brutal, reducing us all to preference maximisers, arrogant beyond belief, and utterly opposed to every principle of life and dignity.”
He takes a final stab at rationalists by saying Roko's basilisk is just the idea of hell, via quantum immortality, before ending the essay.
His point that utilitarianism thinks an action which hurts one person can be good if it helps many is true. It's a very odd complaint, though, since utilitarianism is the one moral philosophy most against torturing billions of animals for our own small satisfaction of eating them. If a society sacrificing one for many is bad, because "according to any sensible ethical system, we've entered the abyss," wait until Sam finds out about real life, where we do way worse than that for way less of a reason!
The rest don’t even seem to understand what they’re talking about, at all. The Repugnant Conclusion only works if lives are worth living, not miserable, which is pretty damn fundamental to the thought experiment! The Baby Hitler one, he just admits he wouldn’t save a drowning child he sees to Own The Utes™, which I guess if you don’t want to save drowning children that’s fair, but uhh. His point about requiring infinite knowledge shows he doesn’t understand utilitarianism or decision making in philosophy in general, because you have to do the things you think are the best! That’s baked into any moral philosophy! And for the Roko’s stuff that he ends the article on, I don’t think any rationalists I know believe in quantum immortality, that’s literally just… not what Roko’s basilisk is at all. Weak ending.
…
Sam Kriss is a very good writer. He's funny, engaging, and the stories he weaves are interesting. He should've admitted that mixing truth and fiction might, maybe, possibly, be something that could lead people to think false things. Instead, he goes into a diatribe against the idea of truth and presents the worst arguments of all time, proving he doesn't understand any of the stuff he's talking about. I could give him even less credit and say he's purposely, completely misrepresenting the arguments for entertainment, but that's too on the nose even for me.
You're allowed to be wrong; being "against truth" is an opinion someone can have. But surely, if someone says "hey, all those arguments you made are literally just lies," you can't be very mad, and you probably shouldn't follow it up with some seemingly sincere arguments that show a massive lack of understanding of the arguments themselves. If he's "against truth" even when he's being sincere, if he says "hah, I made all those strawmen on purpose! My readers should never care what I say, even when I present myself as earnest! Mwahaha!" then sure, man, congrats, I guess the title of your post was an evil scheme to lead people into believing incorrect things. Who cares if climate change is real; truth is just, like, fake, man.
Sam Kriss makes good art, not arguments. Keep reading him because he’s interesting. But the words he weaves are a curious and enjoyable lie, and a self-admitted one at that, and I’m now unsure if he really understands anything he talks about. If Sam is against Truth, then I will happily side with Truth herself.

There's a line in there that says rationalists' "politics run all the way from the furthest fringes of the far right to the furthest fringes of the liberal centre." I'm not gonna mention this point again, but it's a really, really good line I wanted to highlight lmao. This dude is funny.
…
i think you might need to read it again.
1) my ai critique is not that rationalists think ai is too good, it's that rationalists tend to overemphasise the speculative dangers of ai rather than the actual bad effects, and this overemphasis on speculative dangers is in part how those actual bad effects came about
2) i did not say you shouldn't save a drowning child, i said that an account of ethics that determines the moral value of an action based on its consequences is a poor normative guide for beings, like humans, that can't know the consequences of their actions ahead of time, and used the drowning child as an example of how our actions can have unknown effects
3) utilitarianism is absolutely not the one moral philosophy most against harming animals. not all ethical vegans are utilitarians, not all utilitarians are ethical vegans. for a utilitarian there is a theoretical steak delicious enough to justify the harm to the cow; for a deontological vegan there is not. the fact that some non-utilitarians are not vegans is not a meaningful response to the actual theoretical weaknesses of utilitarianism
4) the repugnant conclusion only requires a life to have minimally positive utility. i think a life can be generally miserable and still worth living. if we go by revealed preference, "one step above actively suicidal" seems like a reasonable baseline, and i've seen it used plenty of other times in this context
5) i did not say quantum immortality is the same as roko's basilisk, i said that both of them in part hinge on the same theory of personal identity
6) i'm not surprised that you don't understand what i mean when i say that truth is itself composed of fiction. if you want to understand it you must first perform dhyana and tapas in the forests of austerity for six thousand years
7) i'm not mad. i'm laughing actually. please don't put in your substack that i got mad
I mean, maybe, but I enjoyed that post more than, like, anything else I have read on substack ever. So I think the ‘fun to read’ part wins.