i think you might need to read it again.
1) my ai critique is not that rationalists think ai is too good, it's that rationalists tend to overemphasise the speculative dangers of ai rather than the actual bad effects, and this overemphasis on speculative dangers is in part how those actual bad effects came about
2) i did not say you shouldn't save a drowning child, i said that an account of ethics that determines the moral value of an action based on its consequences is a poor normative guide for beings, like humans, that can't know the consequences of their actions ahead of time, and used the drowning child as an example of how our actions can have unknown effects
3) utilitarianism is absolutely not the one moral philosophy most against harming animals. not all ethical vegans are utilitarians, not all utilitarians are ethical vegans. for a utilitarian there is a theoretical steak delicious enough to justify the harm to the cow; for a deontological vegan there is not. the fact that some non-utilitarians are not vegans is not a meaningful response to the actual theoretical weaknesses of utilitarianism
4) the repugnant conclusion only requires a life to have minimally positive utility. i think a life can be generally miserable and still worth living. if we go by revealed preference, "one step above actively suicidal" seems like a reasonable baseline, and i've seen it used plenty of other times in this context
5) i did not say quantum immortality is the same as roko's basilisk, i said that both of them in part hinge on the same theory of personal identity
6) i'm not surprised that you don't understand what i mean when i say that "truth is itself composed of fiction" is true. if you want to understand it you must first perform dhyana and tapas in the forests of austerity for six thousand years
7) i'm not mad. i'm laughing actually. please don't put in your substack that i got mad
1) Rationalists think AI is powerful, not good. I don’t think rationalists disagree that AI has bad effects now, they just also care about the bad effects later.
2) All moral theories and all decisions ever made in history are, in fact, made under uncertainty. Most people still try to do good things and make good things happen. This is an especially weak point imo and really strikes me as something pretty basic.
3) Utilitarianism is able to acknowledge that large amounts of unnecessary suffering are bad. Deontologists have no metric for scale, which is why they often (this is a core feature) HOPE for the utilitarian outcome. Hey, wait, I thought you hated hypotheticals and that utilitarianism is “the morality of edge cases”. Suddenly I point out that utilitarianism is the best at acknowledging the magnitude of horror our society perpetuates today and you’re into “theoretically juicy steaks”?
4) “human factory farm of one hundred trillion people, stacked in wire cages that cover the entire surface of a dead Earth. Absolutely everyone is utterly miserable, but thanks to pharmaceuticals in the mossy water that drips from the ceiling of your cage, you are not quite actively suicidal.” Uh
5) You at least strongly imply they’re related. You could probably fix that with a sentence, but I don’t think any rationalists believe in quantum immortality at all anyway, so the mention of it is weird
6) I can levitate 12 inches off the ground.
7) your post didn’t own me! I didnt make this post cuz I’m the soyjack I swear! Truth is good and Sam is wrong ya gotta believe me. I’m not a nerd I have a fulfilling life with a girlfriend you gotta believe me
1) yes, this is the point i'm making
2) yes, this is why kant says that the only thing that can be said to be good is a good will, and virtue ethics similarly emphasises the actor rather than the consequences. these moral philosophies accommodate the uncertainty intrinsic to being human, and utilitarianism does not. again, this is the point i'm making
3) the theoretically juicy steak is an example of utilitarian logic and not an idea i endorse
4) like i said, revealed preference
5) yes, they're related because they both hinge on the same theory of personal identity. this does not mean they're the same
6) i can levitate 13 inches
Regarding point 2, it deserves an entire post, but I believe it's a bad argument. In order to make good moral decisions, the best strategy is to do your best in guessing the ultimate consequences, and the results will still be good on average. Why do I say that? Because that's the algorithm that a big chunk of well-adjusted people run in their own lives, not just in moral situations.
You can't predict the stock market, but you will still do better by investing using some framework (like EMH passive investing, or abusing knowledge arbitrages, for example) compared to investing via vibes. Trying your best to predict the consequences and taking the bet will be much better compared to doing random stuff or, even worse, using virtue ethics.
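To make the "best guess beats vibes" point concrete, here is a minimal toy simulation (my own sketch; the setup and numbers are invented, not anything from the comment above): a decider who picks whichever option has the highest noisy estimate of value does better on average than one who picks at random, even though any individual estimate can be badly wrong.

```python
import random

# Toy model (all assumptions mine): each option has a hidden "true" value,
# and the decider only ever sees a noisy estimate of it.
def simulate(trials=10_000, options=5, noise=2.0):
    guess_total = 0.0
    vibes_total = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(options)]
        estimates = [v + random.gauss(0, noise) for v in true_values]
        # "do your best in guessing the ultimate consequences":
        # take the option whose estimate looks best
        best_guess = max(range(options), key=lambda i: estimates[i])
        guess_total += true_values[best_guess]
        # "investing via vibes": take an option at random
        vibes_total += random.choice(true_values)
    return guess_total / trials, vibes_total / trials

if __name__ == "__main__":
    guessing, vibes = simulate()
    print(f"average outcome, best-guess strategy: {guessing:.3f}")
    print(f"average outcome, random choice:       {vibes:.3f}")
```

Even with estimates twice as noisy as the underlying signal, the best-guess strategy comes out ahead on average, which is roughly the "good on average" claim above.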
Just like utilitarianism, virtue ethics has a bunch of bizarre edge cases and issues that lead to micro and macro hells that are only resolved via some kind of magical "well, I don't like it, so let's choose the consequential analysis instead for this specific case." Eventually, it boils down to "morality is the aesthetics that I personally like," which obviously has no attack surface area because the typical Machiavellian virtue ethicist will always pick the most popular virtue. So it can never be refuted via ridicule, because ridicule requires a majority. So of course it's a virtue to burn the entire world to protect your kid - everyone thinks this way.
as a virtue ethicist, i think this is kind of a strawman of virtue ethics, but i do agree that both share strange edge cases in theory. that being said, those edge cases feel a lot more ingrained in utilitarianism to me.
though ultimately your preferred system doesn't actually change most of day-to-day life
> if we go by revealed preference, "one step above actively suicidal" seems like a reasonable baseline
Maybe, but going by revealed preference as a utilitarian is an *insane* thing to do. Almost as insane as going by revealed preference as a socialist; either way, you end up concluding that true happiness has nothing to do with anything anyone has ever claimed to truly value - not the lower pleasures, not the higher pleasures, not man's self-authoring and gattungswesen, not man's brain on drugs - and everything to do with buying a slightly larger television for your slightly larger house, every year forever.
Of the three major classical utilitarians, Bentham and Sidgwick stayed liberals and went crazy, while Mill went socialist and stayed sane. This is not an accident: classical utilitarianism is not the philosophical expression of liberalism, but of *utopian socialism* - the 19th century's twin visions of the Good And True And Useless.
Rationalists, wedded as they are to the notion that the Good And True must necessarily be *useful*, invariably go mad.
> it's that rationalists tend to overemphasise the speculative dangers of ai rather than the actual bad effects, and this overemphasis on speculative dangers is in part how those actual bad effects came about
Well, today's actual bad effects are last year's speculative dangers. Are you suggesting that we never think ahead, and only act on the bad effects that have already started happening?
> , like humans, that can't know the consequences of their actions ahead of time,
We can guess, based on probabilities. The average baby kills less than one person (source: some people die for reasons other than murder), so unless you have particular reason to think that This baby is evil, save the baby.
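Spelling out the expected-value arithmetic behind that joke (all numbers invented purely for illustration):

```python
# Hypothetical numbers, only to show the shape of the argument:
# the certain benefit of saving the child dwarfs the tiny expected harm.
p_child_becomes_murderer = 0.01            # assumed, well under 1
expected_future_victims = p_child_becomes_murderer * 1.0
value_of_life_saved_now = 1.0              # one life, saved for sure

expected_value_of_saving = value_of_life_saved_now - expected_future_victims
print(expected_value_of_saving)            # 0.99 > 0, so save the baby
```

Unless you have strong specific evidence that flips that first number, the expected value stays positive.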
> "one step above actively suicidal"
What if we have different standards for what lives to create, and what lives to continue.
> i did not say quantum immortality is the same as roko's basilisk, i said that both of them in part hinge on the same theory of personal identity
Both of them are the sort of idea that rationalists discuss sometimes, because considering wild new ideas in case they are true is important. Both have been considered and are largely seen as not true.
2) has the interesting spin-off that I have a higher chance of knowing whether my kid is baby Hitler or not, and I have a higher chance of knowing the neighbor's kid than some stranger's kid, so utilitarians are wrong when they say we should not prioritize people close to us. we should. higher chance of knowing the actual outcomes.
> for a utilitarian there is a theoretical steak delicious enough to justify the harm to the cow
What in Satan's name is this steak? Is it made from the fifth secret breed of wagyū? Is it laced with opioids? Can't you just take drugs without the steak?
For some reason, most examples of the failure modes of consequentialism and utilitarianism I see around involve weird sci-fi shit like theoretical steaks, utility monsters and universe-spanning factory farms of people. Often the point is made by turning the pleasure dial way up (e.g. what if the spectators of gladiatorial combat get orgasms?). Meanwhile, failures of deontology and virtue ethics abound in reality - from libertarians who think that exploitation is fine as long as it doesn't fall under their narrow definition of aggression to anarchists who flail around without a plan because they grade themselves based on effort.
The most famous failure of utilitarianism I'm aware of is the Bankman-Fried scandal, though other utilitarians pretty much universally condemned him.
Virtually every failed utopian society and revolution in history is an example of a failure of consequentialism, if we define it as broadly as you do deontology and virtue ethics. These sci-fi hypotheticals are only intended to show that utilitarianism produces bad results even if theoretically perfectly applied, but its practicality is obviously far worse. If we think in consequentialist terms every time we waive our considerations of duties and virtues to achieve a greater good then yeah, obviously that kind of thinking produces terrible results all the time.
SBF is the most obvious and modern example, but the TESCREAL community (or at least a certain subset of it) is a pretty clear example too.
I'm not a utilitarian or necessarily an Effective Altruist, but I think on the whole EAs have done a lot of great work (some still do, but mostly I'd say they "used to"). Unfortunately, the problem lies in this particular framework being seemingly quite susceptible in practice to hijacking (not that other moral frameworks can't be as well). There is a fairly large subset, undeniably the face of EA at this point, that would love to Effectively provide food to those most in need in Africa, or provide mosquito nets to people who need them! Unfortunately, what's more pressing is donating to MIRI because of the infinite potential lost value from the acausal robot demon! This is something that is not just understandable, but obligatory! Sorry, kids. (I admit this is a partial ad absurdum; it's not everyone in EA, but I really don't think it's far at all from certain sects of EA.)
If one is to use Libertarian Exploiters as an example against virtue ethics, then I think it's reasonable to point out everyone's favorite Libertarian Longtermist Utilitarian Exploitaire, Elon Musk? He's the richest man in the world and he's far more exploitative under the pretense of achieving maximum end goals. I don't think that he's representative of all Utilitarians, but you get infighting either way.
I mean, maybe, but I enjoyed that post more than, like, anything else I have read on substack ever. So I think the ‘fun to read’ part wins.
Oh yeah, Kriss’s writing is super fun to read. I still like his posts where he “mixes fact with fiction”, they’re sick. I just hope he doesn’t make bad arguments in the future.
Look, if you call your movement “rationalism”, that is “we’re right and everyone else is so wrong that they are irrational”, you should expect some sick burns.
I can take a sick burn, but don’t you dare say I’m incorrect!
People have been talking about this for 20 years and the consensus has always been that yes, the name sucks, but there isn't really a better one.
They literally called their central forum “Less Wrong”, not “We’re Right”. The idea that rationalists are unaware that they are imperfect is a strawman, and yet almost every critique of rationalism treats it as a slam dunk for some reason.
Newsflash: exact kind of person article is designed to anger is angered by article!
Also, just a small thing, if you reread the last part he isn’t talking about Roko’s basilisk at all and isn’t conflating it with quantum immortality. You misread it and assumed he was mistaken. I would suggest this is emblematic of perhaps the entire rationalist community needing to steelman their opponents more and ameliorate their reading comprehension before commenting on meta-ironic literature like this.
I just don’t understand why he would mention Roko’s and then talk about something around 0% of rationalists would believe. I think if you read the article straight, you’d assume quantum immortality is required for, or is the basis of, Roko’s; it’s barely a transition at all. I only know they’re different because I already knew about them.
> Sam Kriss is a very good writer.
You completely lost me there!
I love Sam Kriss and I like Yudkowsky. I think it simply is not true that adding a disclaimer regarding the fictional parts would improve his pieces. It would make them so much worse. The exaggerations, the lies, the magical realism aspects in his work, the parts that make the reader question 'wait, is that real?' are a critical part of why they're so goddamn good and bewitch the reader the way they do. They would be much worse with a disclaimer. Also, who exactly is harmed, and in what way, if they thought that 6th century theologian really existed??
I thought the paragraphs describing Yud's Harry Potter fiction were some of the best parts of the essay. I haven't read the fanfic itself, but the description is an enlightening view into his character and motivations. Perhaps he was trying to give a view into what those motivations are, to make the point that what at least some rationalists desire above all is total power over everyone via their exercise of rational intellect? And that the reason they fear AI so much is because they understand their own motivations, presume AI would be similarly motivated, and don't want it to do to them what they themselves would like to do and assume they'd be able to do if they had sufficient processing power? And that it is kind of fucked up that the same people who think that way are generally the ones working for the AI companies? That was my take.
I also think it's instructive to compare the character and motivations of what Yud seemingly revealed in his fiction, with what you think Sam Kriss's motivations are. If I take it as a fact (which I don't, necessarily) that his description of the Harry Potter fiction is correct, then Yud is motivated by a burning desire to be adored and admired by everyone for his intellect, and to be able to control and manipulate everyone with it. What is revealed by Kriss's writing? I would say something like his motivation to seduce, delight, tickle, enchant, and provoke his reader, and to issue sick burns and denunciations on those people and things he doesn't like, and to reveal the absurdities, impossible and intolerable contradictions, and everyday magic and majesty in real life. Something like that. So if this is a contest between them, Sam definitely wins.
I don’t think the rationalists think AI is dangerous because they’re selfish and assume AI would be selfish. Rationalists think AI could be dangerous because of rather modest and boring facts about what it means to pursue a goal, gradient descent optimization, and exponential growth. Then, they fear AI because it could kill the things they love.
I think the HPMOR stuff you talk about is what happens when you like someone and they explain why they vehemently hate someone, so you side with them. I think if you read it you would have a different take — maybe you would think it was boring or badly written, but Sam is playing up his hatred here.
To be honest, I’ve read all of Eliezer’s work on his blog within the last year, and the main character trait I see from him is an obsessive dedication to, indeed, the truth. He has an article “Something to Protect” where he straight out describes the reason he’s obsessively dedicated to the truth; he loves humanity deeply and wants to protect it from huge threats that he thinks are in people’s blind spots. I think he’s just a guy, not a mastermind.
I'm in a weird spot epistemically where, because of my beliefs, the doom rhetoric is not at all convincing, but I know enough about them for low-level takedowns to bother me.
I think it's a lot more uncertain and unlikely than the average Bay Area techlord does, but then I see some people try to argue against it and it's like "c'mon man, you can do better than this" (to the point where it almost makes me want to start my own substack). It's really hard to argue that Yudkowsky isn't at least trying to be committed to truth. Is he an arbiter of truth? Not really. Is he close to the truth on most things? No, not really either imo. He's arrogant, one of the most unlikeable people alive today, occasionally engages in blatant motivated reasoning, and I think he's very VERY overhyped by his fans. That being said, I do think it's clear that he isn't someone with, like, evil hidden motivations or anything, and he is genuinely trying to be an approximation of altruistic, even if I disagree wildly with his methods and think him to be misguided.
I agree with Yud's opinions on AI and I ALSO think it is entirely stupid to develop this thing that would be more powerful than us. So on those points I'm on his side. I think that comes down to people who do or don't believe it will ever be that powerful. I don't necessarily think it will be, but even a 5% chance is bad enough to not go there in the first place. I also get frustrated at people who are more upset at the smaller and immediate bad things than the hypothetical future much worse things, though I don't think that's because they're discounting the importance of the hypothetical much worse things but that they don't believe they will or could ever happen, and assign them zero or close to zero probability.
On the Harry Potter thing, you could be right, and generally I never take second hand opinions without judging the source material for myself. However, I would never read any type of fan fiction, I find it distasteful in and of itself, though I can't pinpoint exactly why. And I DEFINITELY wouldn't read fan fiction where one inserts themselves as the main character, I find that 5X more distasteful. Therefore, since I'm already prone to dislike the whole genre or even the concept of it being a genre in the first place, I'm inclined to put a lot more weight on Sam's description of it. Call it pre-existing bias. I just don't like the idea of fiction where you take someone else's original creation and adulterate it, especially by inserting yourself as the star.
There is a part where (after everyone else at the school fails, naturally) Yud/Harry deletes a Dementor from existence by summoning a Super Saiyan Patronus shaped like a person. And he gives a bombastic speech about how humanity rocks and can beat anything because we are big brained. Thus, those animal Patronuses (Patronii?) get the lesson they deserve.
It’s supposed to be insanely inspirational.
I didn't understand much about Rationalism, or why the LessWrong community I'd heard about was the way it was, until I read some of Yudkowsky's stuff. His writing style is nails on a fucking chalkboard and borderline unreadable to me because it's so embarrassing, but he appeals SO hard to a niche of neurodivergent 30-something millennials who embody the reddit atheist stereotype. So many of the reddit STEM teens that got into him basically learned a lot of fields through Yudkowsky's overconfident hobbyist writing and didn't actually engage with the work of professionals until after they'd internalized his stuff.
Comparing him to Chris Chan feels super mean but it's not too crazy to me. They both have sorta delusions of grandeur and are impossibly reddit; it's just that Yudkowsky is less evil and fairly smart. If I was more autistic and less philosophically inclined I might've grown up to love this guy.
The argument below this post is my favorite part
Many such cases; there's more likes than comments (or close) on one of my posts, lol.
I notice you don't even try to defend Harry Potter and the Methods of Rationality.
If a reader at any point thought that Laurentius Clung was real then I don't know what to say except that such a person is no more than fifteen minutes of unsupervised Facebook access away from concluding that Bill Gates is part of a secret cabal conspiring to put tracking microchips in vaccines and we should probably be more concerned about that
I think his view of truth is correct, but you have to be careful with it. The world of experience is rational & consistent, but if you poke around deeply enough you can infer that it's part of a larger, invisible, irrational & inconsistent world. This realization typically doesn't help you build things or solve problems the way science & empiricism do, but it does help you sit back and relax. And these guys called rationalists really need to relax. They are trying too hard.
The reason people should not be so worried about AI is really simple, which is that AI is trained on things human beings produce. It is not going to self-improve to infinite intelligence. It is completely dependent on us. The real danger of AI is that people may come to rely on AI so much that we become unable to produce the kind of content required to train new AI. But that's a self-limiting problem: without new content AI will eventually become useless and people will be forced to go back to the old ways.
I agree he doesn't seem to have a sophisticated enough worldview to justify criticising anyone else's (that sounds harsh but so be it). Also it can be an issue for genius artists/writers in general who get praised for their cleverness, and then try to apply that to real-world thinking. Like when conscious musical artists start talking about society and you realise they're just saying random stuff.
I do think there is more of a point to his distaste for relying on facts. Truth is something slightly different to fact. You can say 'that's so true' about something that never happened and have it still make sense, imo