<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Kyle Star]]></title><description><![CDATA[Truth, philosophy, rationality, morality, and other thinker-guy stuff.]]></description><link>https://www.kylestar.net</link><image><url>https://substackcdn.com/image/fetch/$s_!hS6r!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61664efe-eb31-4900-92bb-84d0e4d84999_1101x1101.png</url><title>Kyle Star</title><link>https://www.kylestar.net</link></image><generator>Substack</generator><lastBuildDate>Sun, 10 May 2026 08:54:33 GMT</lastBuildDate><atom:link href="https://www.kylestar.net/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Kyle Star]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[starlog@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[starlog@substack.com]]></itunes:email><itunes:name><![CDATA[Kyle Star]]></itunes:name></itunes:owner><itunes:author><![CDATA[Kyle Star]]></itunes:author><googleplay:owner><![CDATA[starlog@substack.com]]></googleplay:owner><googleplay:email><![CDATA[starlog@substack.com]]></googleplay:email><googleplay:author><![CDATA[Kyle Star]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[I'm an Atheist, and I Believe Pascal's Wager is a Good Argument]]></title><description><![CDATA[Mugging works if you tell someone you have a gun in your pocket, even if they can't see it]]></description><link>https://www.kylestar.net/p/im-an-atheist-and-i-believe-pascals</link><guid isPermaLink="false">https://www.kylestar.net/p/im-an-atheist-and-i-believe-pascals</guid><dc:creator><![CDATA[Kyle 
Star]]></dc:creator><pubDate>Tue, 23 Sep 2025 13:02:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9e272e66-58c2-468b-9aef-312aa1e63cd5_2058x1322.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Pondering Pascal&#8217;s Wager makes me think that either rational agents are insane or <em>we</em> are insane. I am an atheist who fully buys that Pascal&#8217;s Wager is a good argument for believing in God; perhaps a great one.</p><p>The first thing to notice, as I say <a href="https://substack.com/@starlog/note/c-151001461">in this note</a>, is that Pascal&#8217;s wager is not an argument for God&#8217;s existence. It&#8217;s an argument saying &#8220;you should BELIEVE in God.&#8221; It says that believing in God is a very smart and rational thing to do. It says that you, as a rational and clever atheist, must trick yourself into believing the almost-certainly-incorrect thing by any means necessary. Do drugs on a mountain while reading the Bible. Attend church daily and quash the part of yourself that tells you you&#8217;re above them. Give yourself a partial lobotomy, and tell some nuns to bang pots and yell &#8220;Be a Christian! Be a Christian! Rahhh!&#8221; at you after the lobotomy. 
Of course the argument doesn&#8217;t endorse any of these things specifically; all it&#8217;s really saying is that you should engineer future events so that it&#8217;s more likely you convert.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> But this argument for instrumental value is not an argument for truth.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Richard Hanania&quot;,&quot;id&quot;:6319739,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!EwuT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de4c8df-7f9c-4bca-901c-53a83a3e97eb_2736x1824.jpeg&quot;,&quot;uuid&quot;:&quot;efff9f1d-0fc2-496b-a8a1-3df8fca01986&quot;}" data-component-name="MentionToDOM"></span> just wrote a post about <a href="https://www.richardhanania.com/p/pascals-wager-as-spiritual-extortion">how Pascal&#8217;s Wager is spiritual extortion</a>. It&#8217;s a good post, and Hanania concludes that something feels very wrong about an immortal God having to threaten people with eternal damnation in order to get them to believe it. In fact, this threatening makes far more sense in a materialistic/atheist world compared to a Christian one. He notes, as I do, that if we view the spread of religion as a battle of ideas where only the stickiest and most persuasive ideas will become widespread, it sure seems like the ones threatening the highest stakes will survive the longest. </p><p>Imagine two religions in the early days of humanity: one teaches that your family and friends will suffer eternally if they don&#8217;t convert to your religion, and the other one says that it doesn&#8217;t really matter if anyone believes your religion, God&#8217;s got your back no matter what, so whatever man, no need to convert at all. 
While I think the case for the second religion is better in a vacuum, <em>of course</em> the first one is going to spread like wildfire and there will be mad dashes of conversions followed by conversions followed by conversions. </p><p>Continue this thought: the only religions which will survive are the ones that say that humans have a part to play in existence. You&#8217;re not going to see a religion that doesn&#8217;t offer something to humans: heaven, a promise of enlightenment, a need to appease the Gods, or maybe just the joy of having unique access to the hidden truth of reality. The possible religion-space is much vaster than the religions we see in the world today; an atheist will say that the only way a false idea can spread to as many people as Christianity, Islam, and Hinduism have is by being something that people want to convince other people of, for any reason. </p><p>If God were real and didn&#8217;t give a fuck about us in any way, we would find no book to rally billions behind.</p><p>&#8230;</p><p>Back to Pascal&#8217;s Wager, that whole eternal damnation threat: I find it to be a very good threat! Of <em>course</em> it&#8217;s a good threat; it&#8217;s the best threat humanity has come up with. 
If you dislike anything, if you dislike <em>literally</em> <em>anything</em> in all of your existence, I can say &#8220;Do X thing that I want, or I will make you do Y thing you dislike for all of eternity.&#8221; If I said this, you might question my ability to deliver on my threat, but religion can get around this by having it be imbued into a package of beliefs that include &#8220;God is making this threat and he has infinite ability to deliver on this threat btw.&#8221; Then, if the thing religion is asking you to do is to convince your neighbors <em>of</em> the religion, and it says that believing in it will also <em>reward</em> you, you have a thought virus which has 2.1 billion believers thousands of years after its creators are long dead.</p><p>I&#8217;m not naive; people are not doing expected value calculations to choose their religion. But what they are doing is trying to live by the tenets of the religion they&#8217;ve believed since the day they had intelligent thoughts, and the content will have heavy influence over them. When you start looking at surviving religions as a grab bag of tricks to spread themselves to as many people as possible, akin to looking at animals as a grab bag of tricks to increase survival, you will find many tricks.</p><p>Religion being a community of people who get together, intertwined with a human&#8217;s social life. Prayer as one-way communication to reinforce God&#8217;s influence over all. Infinite happiness for the believers. As we said, infinite suffering for non-believers. Objective Proof of God, not that you&#8217;ve seen of course, but that others have seen through attested miracles. The few prophecies of a million attempts that aren&#8217;t slam dunks but are pretty good if you squint. Subjective Proof of God, a feeling of proof of God that you carry in your bones and that puts adrenaline in your body. The <em>entire</em> concept of faith. Oh boy, the concept of faith. 
I&#8217;d like to make a post on it and discuss it with Christians, because it is a fun topic, and it&#8217;s fun to see the implications of what it means too. Now, seriously grappling with what faith is is important as an atheist; I don&#8217;t want to sneer at it, but I will say that it obviously fits snugly into this category of thought-tricks.</p><p>Not all religions have all of those; these are very Christian-centric, as that&#8217;s what I&#8217;m most familiar with. But an atheist would predict you will find many clever tricks in any religion with many adherents; find them yourself, I don&#8217;t know.</p><p>&#8230;</p><p>A curious thing with Pascal&#8217;s wager is that it promises infinite value. Now, some say that Pascal&#8217;s wager works just fine with very large numbers, and they&#8217;re correct. If a religion promises 3,000,000 years of torture, sure that still sounds pretty bad. But an unfortunate fact about infinity is that, if you&#8217;re trying to do the &#8220;rational&#8221; thing that leads you to have the most value, any infinite reward will make any finite value completely meaningless, so a religion that promises 3,000,000 years or any finite action on earth<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> is infinitely vulnerable to a more Pascal&#8217;s Wager-ier religion, if someone rational were to prioritize.</p><p>Some might argue that, because of marginal utility, infinite value is not infinitely better than finite value, but marginal utility doesn&#8217;t apply here. It seems obvious to me that if I have 10 trillion dollars, then 100 trillion dollars won&#8217;t increase my quality of life; <em>that&#8217;s</em> a case of marginal utility. 
Pascal&#8217;s wager says that, after you&#8217;re tortured for 1,000 years, you&#8217;ve still got 1,000 more there, buddy; I don&#8217;t think there&#8217;s any &#8220;getting used&#8221; to that, especially if my mind is wiped or something. There&#8217;s no amount of torture where I would be cool with more torture; seems like it would suck at any point in the torture, so I believe that infinite value really does destroy our expected value equation.</p><p>Now, I don&#8217;t want to get too bogged down talking about some bad counterarguments to using expected value calculations at all, the very thing that leads to Pascal&#8217;s Wager &#8212; I find the <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Bentham's Bulldog&quot;,&quot;id&quot;:72790079,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-ip-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ee10b9d-4a49-450c-9c8d-fed7c6b98ebc_1280x960.jpeg&quot;,&quot;uuid&quot;:&quot;84e9f223-1cdd-41c1-99ac-eb21c2f72ec4&quot;}" data-component-name="MentionToDOM"></span> article <a href="https://benthams.substack.com/p/pascals-wager-is-a-good-argument">defending Pascal&#8217;s wager</a> to do a great job of being a point-by-point takedown<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. The only thing I would like to do in this final section is take this to its logical conclusion.</p><p>A rational agent, knowing that infinite values matter infinitely more than finite values, will immediately discount any finite woes. It will become willing to sacrifice itself, to face any amount of finite pain, any amount of finite death, for the promise of an infinite reward. 
It will evaluate which of the infinite rewards are most likely to be true with ruthless efficiency, and it will learn until it commits to the next step of tricking itself into believing/doing whatever actions lead to these rewards. It will have to somehow evaluate which afterlives are the best, and account for that. It will have to judge the possibility of creating some technology that has the potential for infinite positive value against religion, and whether religion&#8217;s vaster scope (uncountably infinite suffering?) somehow means that it&#8217;s better than infinite value through a materialistic view (countably infinite pleasure? fulfillment for all mankind?). It will be alien and it will be incomprehensible.</p><p>Is this rationality? I don&#8217;t know. It really doesn&#8217;t feel like it. As an irrational human myself, I can&#8217;t commit myself to it, even if I wanted to. In my post about <a href="https://www.kylestar.net/p/yes-i-really-would-sacrifice-myself">morality with large numbers</a>, I was able to say that some of the things I would do are not morality; I am also fully willing to accept that some of the things I would do are not rational. I&#8217;m not vulnerable to Pascal because I just don&#8217;t have the willpower to be this ruthlessly instrumentally rational. But people do seem to discount infinity, and infinity is forever. Accounting for that may turn one into a raving madman.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>Subscribe for more religion discussion, baby! 
Also subscribe if you breathe oxygen.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s a common complaint that &#8220;it&#8217;s impossible to make yourself believe something you know is false!&#8221; This paragraph is intended to quash that complaint. Obviously you can engineer events to make yourself more likely to believe something, come on. Just play into the biases you know you&#8217;ll have in the future, as a smart person! Availability bias! Cook with it!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Yes, any finite action, even ones accounting for scope insensitivity. Note that this is only for decision making &#8212; if infinite actions are impossible for some reason, then from an objective perspective finite actions would matter. But from the perspective of any agent, they&#8217;re still </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Some comments are just refusing to accept it because it&#8217;s a mugging (yes! it&#8217;s a threat!), or saying their probability of believing Christianity is 0 &#8212; not even 1/1000000. That&#8217;s bad epistemics &#8212; It&#8217;s important for rational agents to be able to be convinced that they are wrong. 
If they can&#8217;t update their beliefs in the face of evidence, saying &#8220;my probability of X is 0,&#8221; then they cannot learn, and they are operating off of faith, not logic.</p></div></div>]]></content:encoded></item><item><title><![CDATA[You Can Just Choose To Be Happy]]></title><description><![CDATA[Your emotions are less rational than you are; fight them and win]]></description><link>https://www.kylestar.net/p/pursue-happiness-directly</link><guid isPermaLink="false">https://www.kylestar.net/p/pursue-happiness-directly</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 02 Sep 2025 13:05:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8ZF2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8ZF2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8ZF2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 424w, https://substackcdn.com/image/fetch/$s_!8ZF2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 848w, 
https://substackcdn.com/image/fetch/$s_!8ZF2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 1272w, https://substackcdn.com/image/fetch/$s_!8ZF2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8ZF2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png" width="728" height="494" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:988,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:5869386,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/172543359?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8ZF2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 424w, 
https://substackcdn.com/image/fetch/$s_!8ZF2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 848w, https://substackcdn.com/image/fetch/$s_!8ZF2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 1272w, https://substackcdn.com/image/fetch/$s_!8ZF2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ec52507-d92e-44c2-b98d-817154f3dd01_2304x1564.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I often hear advice like &#8220;don&#8217;t focus too hard on being happy, just do what you can and happiness will come to you.&#8221; This is probably good advice for the layperson, but oh baby, if you&#8217;re smart enough, you can just <em>pursue happiness directly</em>.</p><p>Self help books often end up spouting vague platitudes like &#8220;nurture relationships&#8221; and &#8220;practice kindness.&#8221; While these books aren&#8217;t necessarily wrong, they&#8217;re mostly going to skew upbeat, and they&#8217;re not going to discuss the fact that if you really, truly want to optimize for happiness, you&#8217;re going to have to analyze what <em>actually</em> <em>makes</em> your emotions tick. And that might be unpleasant.</p><p>So before we start, remember, being happy is not the only thing we care about; we have ambition for more. To change the world, to be famous, to protect our family, to help our loved ones in any goddamn way we can. But I think it&#8217;s rare to find someone so altruistic that they don&#8217;t care about their own emotions at all; one does not usually say, &#8220;I&#8217;m going to have a wife and kids and support them, but I&#8217;m going to hate them, and not feel any love at all.&#8221; One might imagine themselves fulfilled when they dream of the future.</p><p>To be completely honest to you about how humans become happy, I&#8217;m going to have to view some of the most sacred elements of our lives primarily for what they <em>give us</em>. This is going to feel uncomfortable and selfish, but it is true, and it&#8217;s a truth I can give you beyond the platitudes; remember, refusing to look at the sun does not make it any less bright. 
And truly understanding what makes your life fulfilling will help a hundredfold if you want to have your loved ones live a good life themselves.</p><h1>KYLE, WHAT ARE WE OPTIMIZING FOR?</h1><p>When one imagines someone &#8220;optimizing for happiness,&#8221; they might imagine having a bunch of sex and a bunch of fast food. Those sensations sound pretty good, but that seems like a pretty soulless way to live. To think properly about this, we must think of all of the good in the world and all of the best moments in our lives.</p><p>I want you to imagine a farmer on a field sitting with his wife and kids, staring at a sunset with a light smile on his face, truly fulfilled. The sensation of love permeates through his bones as he pets the family dog on his lap &#8212; there&#8217;s a silence between the family, but it&#8217;s a silence of familiarity. He looks at his kids and knows he would die a hundred times for them, and their happiness is his world. He knows that moments like this can&#8217;t last, but he would steal the light from the cradle of God to let his family be like this for just an extra day.</p><p>Picturesque. Now, here we go: take the experience here, the FEELING the farmer feels in his bones, put it in a bottle, and ask the question &#8220;how can I set up my life to optimize for as many of these moments as possible?&#8221;</p><p>Because I now want you to realize that in real life, the majority of moments in the farmer&#8217;s days are not like this. In fact, because it&#8217;s not a movie, the majority of moments are going to be&#8230; well, life. Driving to the store, lightly mindless. Focusing on work as he tends to his crops. Light conversations. 
An over-easy egg where the yolk broke when he flipped it, and the farmer really wishes it was runny, but he&#8217;s not gonna make another one or anything because that&#8217;s stupid and it&#8217;s literally just an egg.</p><p>If most people monitored their mental state 24/7, they would find that in most moments, they&#8217;re not experiencing strong feelings at all, really. They&#8217;re just&#8230; doing stuff. This might not be misery, but likely the farmer dreams of spending more time with his kids instead of work &#8212; to get the joy of teaching them, and to spend time with them before they get older.</p><p>Actually, wait &#8212; this &#8220;monitoring your mental state&#8221; might reveal some interesting points. First of all, you might realize the extremely obvious fact that you only experience life in the present. Imagine someone whose childhood is miserable and who hates it while going through it, but when he gets a job he feels nostalgic for childhood when he didn&#8217;t have to work, and he misses his childhood. Was he &#8220;happy&#8221; in his childhood, now that he views it in a positive light?</p><p>No! At no point did he have a positive conscious experience! The &#8220;arc&#8221; of your life is wholly irrelevant to your happiness; you should not be optimizing for <em>feeling like you have a well-put-together life</em>, you should be optimizing for <em>enjoying life</em>. And to do that, you should do more things you actually enjoy in the present, similar to the farmer with spending time with his kids.</p><h1>MISERY, SERVED FRESH DAILY</h1><p>If you work 13-hour days, have your priorities under control, are well functioning, and feel miserable all the time, you <em>are</em> miserable. It may or may not be true that if you quit your job you would be MORE miserable, but having your life put together doesn&#8217;t mean much IF you are not also fulfilled. 
Society may not intervene if you&#8217;re high functioning, but that&#8217;s little consolation for you. That brings me to some key advice:</p><ul><li><p>You can tank the misery from one-off things that make you miserable (think about moving &#8212; it sucks, but it&#8217;s temporary) </p></li><li><p>But, if you are doing things that make you miserable <em>every day</em> for <em>most of the day</em>, you are living an unhappy life.</p><ul><li><p>Examples: you hate your job. You hate your partner and see them every day. You think the state of the world is miserable and revel in that every day.</p></li></ul></li></ul><p>Look, if you&#8217;re doing something you hate every day, that&#8217;s going to have a HUGE impact on the actual content of your life &#8212; like, what the actual experience is. Always keep scope in mind. One annoying thing on exactly one day, even if you hate it, is a million times better than a job you dislike for 1/2 of your entire waking life. If you are a lucky person who has nothing they HATE that they do daily, then I want you to think of all the small annoyances in your life that add up.</p><p>If you are stuck hating something daily, there are literally only two things that will make you happier:</p><ul><li><p>You stop doing the thing.</p></li><li><p>You rig your mind to enjoy the thing you&#8217;re doing.</p></li></ul><p>I emphatically say that in 80% of cases, the former will be easier.</p><p>Very small example, to show it&#8217;s not only the huge life decisions you can fix with this: the water pressure in my shower was too weak for like 6 months. Since it was such a small thing, I mostly ignored it even when slightly annoyed, because I wasn&#8217;t actively problem solving for the annoyance that I felt. But when I realized I could just order a replacement showerhead off Amazon, and it took 2 seconds, it was so easy I felt stupid.</p><p>Bigger example, one I&#8217;ve never had: if you hate your job you are considerably more screwed. 
I&#8217;ll point out that given that this is such a big risk, it&#8217;s actually great advice to try to <em>really make sure you like a field before getting into your job</em> instead of haphazardly choosing a major like most college students. But if you&#8217;re in a situation you hate that seems insurmountable, then I guess you want to hear about option 2 above.</p><h1>RIG YOUR MIND</h1><p>You, the sentient part of the brain, unfortunately don&#8217;t have direct control over the non-sentient part of the brain that decides what emotions to serve you on a silver platter, and so you can&#8217;t change them directly &#8212; at least, not without plugging wires into your neurons and changing the happiness vector to &#8220;true&#8221;.</p><p>What we do have are the three things that impact your emotion decider &#8212; your actions, perceptions, and thoughts.</p><p>I&#8217;ll remind you that the mysterious emotions of misery and love beamed into your brain are also decided inside your brain, and just as fallible. If you want to live a fulfilling life, it is a major impediment to imagine your emotions as sacred and mysterious. They do not have to be.</p><p>Sometimes self help advice is something that sounds really stupid, like &#8220;think positive&#8221; and &#8220;smile more.&#8221; Instinctively, the rational man responds &#8220;my woes are lack of money and love, and smiling more for no reason will do nothing to solve those.&#8221; But if you imagine the emotional center of your brain as a dumb, fallible algorithm in the brain making decisions willy nilly, this advice makes sense. If you&#8217;re smiling, the algorithm sees it as <em>evidence</em> that your life is good; why would you be smiling otherwise? If your thoughts are full of &#8220;my life is good and better than 99% of lives in history&#8221; then the algorithm sees that and thinks &#8220;huh, I guess we&#8217;re lucky&#8221;. 
If this thought pattern becomes a HABIT, and you default to it, your brain will give you a warm fuzzy feeling of luck that you can vaguely summon by thinking.</p><p>This algorithm of your emotions is also greatly influenced by dumb shit that&#8217;s not related to your thoughts at all, because we evolved in a crucible of violence completely unlike modern society. Why does going for a walk make you feel better sometimes? That seems dumb. Well, exercise does some wizard shit to the algorithm that shifts its priorities.</p><p>My greatest advice if you&#8217;d like to be happy is to get a lot of sleep. I have, in fact, done the work of monitoring my mental state to try to understand happiness and sadness, and I can tell you my emotions are dulled heavily when I&#8217;m tired. When my friends ask me for advice on how to be happy, that&#8217;s always my first piece of advice, which seems weird for someone talking about &#8220;rigging your mind for joy.&#8221; But it makes more sense in light of what we&#8217;ve learned; the algorithm&#8217;s priorities are shaped by madness, and I&#8217;m sure that when we evolved, a lack of sleep indicated it&#8217;s time to <em>get shit done</em>, not contemplate, love, and plan.</p><p>You, a thinking man, need to wield the whims of your emotions like a goddamn sword. If you&#8217;re miserable, you <em>must</em> be able to diagnose a cause. Are you hangry? Lack of food makes the algorithm unpredictable. Does your coworker annoy you? Is there a burning envy that your friend is getting married? Do you wish you were writing music instead of doing construction work? Is your boss distracting you from working idly? Who do you love. Is your life different from how you dreamed. Are you horny. Are you angry at the opposite sex for not loving you. Are you going to die. </p><p>I assure you your brain cares about whether you&#8217;ve had food and sleep about 100 times more than any abstract and vague ideals. 
If you hate your job, you shouldn&#8217;t quit before you ask &#8220;which part of this job do I hate?&#8221; And once you do, importantly: &#8220;Will a new job fix that issue?&#8221; Fail to ask this, and your next job will leave you in the same amount of misery you thought you could escape from.</p><p>One final point here: your brain is stuck in the emotional state that it was in yesterday if yesterday is the same as today. If you are unhappy, you need change, ANY change, or else your brain sees no reason to give you extra joy. Meanwhile, if you are happy, why not continue what you did yesterday? No need to quit your job or move to a different state and mess up a good thing; normality is providing for you.</p><h1>THE PLIGHT OF MAN</h1><p>You care about multiple things in this existence we call life. Personally, I care about my friends and family very much, and I love them more than I can put into words without some very long poetry. Being happy is not your only goal; you likely have aspirations of fame, of success, of protecting the people you love, of giving everything for their success. </p><p>But not only do you probably hope for a joyous future, you likely wish for your friends and family to live a joyous life too. And understanding and internalizing the fickle and irrational nature of emotions will make you a better lover, a better father, a better planner, and of course: you will be happier, which I imagine you may also care about. Your loved ones want you happier too.</p><p>Heed the call. Fight your woes. Analyze. You are a ship adrift in a storm. If you understand the chaos of the waves, you may escape with more than your life.</p><p>&#8230;</p><p>&#8230;</p><p>This is part one of my thoughts on happiness and that other very overused word &#8212; &#8220;agency.&#8221; If this gets a good reception, I&#8217;ll write more about this, so I guess give a like if this type of topic interests you. 
Thanks to my subscribers, I&#8217;m grateful so many of you enjoy my stuff. Join us! Here&#8217;s a button.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp]]></title><description><![CDATA[Morality as performance vs morality as "damnit I guess I'll do the right thing"]]></description><link>https://www.kylestar.net/p/yes-i-really-would-sacrifice-myself</link><guid isPermaLink="false">https://www.kylestar.net/p/yes-i-really-would-sacrifice-myself</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 26 Aug 2025 13:05:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5ac2def6-d401-43fb-8bbe-b8a17a4bd1d2_2238x1602.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Dylan&quot;,&quot;id&quot;:118275461,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f94bd008-4122-4f26-ac65-aca0d2701682_429x429.webp&quot;,&quot;uuid&quot;:&quot;c6c6d352-5562-4b4c-9a3d-5eefe734e635&quot;}" data-component-name="MentionToDOM"></span> made a post titled <a href="https://onlyvariance.substack.com/p/the-utilitarians-are-gaslighting">The Utilitarians are Gaslighting You</a> that made this claim:</p><blockquote><p><em>If it was your own life that must be sacrificed to save 10^100 shrimp, would you volunteer?</em> <em>If it was the life of your partner, or your mother, or your child that was required, would you sign the waiver?</em></p><p>My premise is, again: <em>&#8220;obviously 
not.&#8221;</em></p></blockquote><p>My response: Yes, I absolutely would, in a heartbeat.</p><p>Don&#8217;t get me wrong, I&#8217;m not Mother Teresa. If God descends from the heavens and offers me the choice, I wouldn&#8217;t be <em>overjoyed</em> to sacrifice myself for a bunch of weird alien crustaceans. I would certainly hope that it was a trap, or untrue, or that there was another way, and I would sign the waiver resigned, and only after asking God a bunch of questions, probably including &#8220;hey bro, seems like you&#8217;re omnipotent or at least potent enough to save the shrimp <em>and also</em> save me and my family, what&#8217;s the deal dude.&#8221; I would also want to be doubly sure that shrimp are conscious, even though the best evidence says they almost certainly are; and I would definitely want to know what &#8220;saved&#8221; means. If saved means they get to live a terrible existence crammed into a 10^100 inch tank with all 10^100 other shrimp, then I ain&#8217;t doing squat. But if the shrimp are gonna live pretty good lives after they&#8217;re saved, or if I&#8217;m saving them from being in <a href="https://en.wikipedia.org/wiki/Eyestalk_ablation#:~:text=The%20eyestalks%20of%20female%20shrimp,them%20from%20developing%20mature%20ovaries.">lots of pain</a>&#8230;</p><p>Well, I would do it. 
In fact, I think the stakes are so lopsided I would probably sign the waiver, resigned in sacrifice, even if the entire population of Earth was at stake.</p><p>I want to explain both why I would do this and why I think this type of asking about what you would <em>really do</em> in a situation, instead of what you should do, is the opposite of my conception of morality.</p><p>First, something strange happened when I posted a note saying I would:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W6MI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W6MI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 424w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 848w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 1272w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!W6MI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png" width="1182" height="590" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:590,&quot;width&quot;:1182,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:305824,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/171820158?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W6MI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 424w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 848w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 1272w, https://substackcdn.com/image/fetch/$s_!W6MI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff12fc35d-b9e6-47fa-81b0-b87311c9b27f_1182x590.png 1456w" sizes="100vw" 
fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kA1d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!kA1d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 424w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 848w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 1272w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kA1d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png" width="1224" height="252" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:252,&quot;width&quot;:1224,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:101787,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/171820158?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!kA1d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 424w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 848w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 1272w, https://substackcdn.com/image/fetch/$s_!kA1d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b701e82-d309-4837-a8cd-2084e5d7e07a_1224x252.png 1456w" sizes="100vw"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G36R!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G36R!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 424w, https://substackcdn.com/image/fetch/$s_!G36R!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 848w, 
https://substackcdn.com/image/fetch/$s_!G36R!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 1272w, https://substackcdn.com/image/fetch/$s_!G36R!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G36R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png" width="1206" height="252" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:252,&quot;width&quot;:1206,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:108081,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/171820158?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!G36R!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 424w, https://substackcdn.com/image/fetch/$s_!G36R!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 
848w, https://substackcdn.com/image/fetch/$s_!G36R!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 1272w, https://substackcdn.com/image/fetch/$s_!G36R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfd250f2-2b8e-411b-bcc6-4b8dd5e6654e_1206x252.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QPjL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QPjL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 424w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 848w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 1272w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!QPjL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png" width="1168" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1435680b-2332-4036-814e-d1eff5acba98_1168x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1168,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75692,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/171820158?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QPjL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 424w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 848w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 1272w, https://substackcdn.com/image/fetch/$s_!QPjL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1435680b-2332-4036-814e-d1eff5acba98_1168x220.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><p>Is it really that hard to believe I&#8217;d sign a waiver that gives me instant death in exchange for letting universes and universes of good lives thrive? The resistance to accepting that this is something that I would <em>do</em> seems really weird, when it&#8217;s not even a hard task &#8212; I&#8217;m signing a paper and then dying. Maybe if they said &#8220;you have to run down 100 strangers and kill them with no weapons except two kitchen ladles for 10^100 shrimp,&#8221; then it would be a really difficult task (even though you should still desperately try). In this case, you could say &#8220;Hah! I think you&#8217;re too much of a coward&#8221; and pwn me in the marketplace of ideas. But no, I literally have to sign a paper &#8212; doesn&#8217;t require much executive override in the thinking brain to do that. </p><p>So what&#8217;s the deal with the doubt? I think the resistance here is a resistance born out of disbelief &#8212; &#8220;surely you don&#8217;t <em>actually care</em> about what a bunch of shrimp lives are like? Surely this is a performance?&#8221; </p><p>And I want to stress first that I don&#8217;t really feel any empathy for shrimp! They&#8217;re weird creatures; evolution didn&#8217;t imbue me with any empathy for them in the way that it did for my friends and family. But throughout society&#8217;s existence, after the war and bloodshed, we&#8217;ve accepted that the unempathetic are people too, with real lives lived. We&#8217;ve accepted that the people of foreign countries are not backward savages, but real people living real lives, even if they don&#8217;t tug at our heartstrings. 
If someone is living a life, and they&#8217;re in pain, I want to stop it and have them live a good life, even if they&#8217;re just some random person on the far corners of the world.</p><p>At the heart of caring about foreigners, animals, and yes, even shrimp, is the crazy notion that if there&#8217;s <em>a real someone</em> experiencing deep pain, they matter. Even if they don&#8217;t look exactly like you.</p><p>And now I get into the more boring, mathy side: 10^100? Are you fucking crazy? 10^100????? This is the real reason why this waiver is so goddamn easy to sign. If I asked you to either sacrifice yourself, or let the ENTIRE EARTH except yourself die, or maybe be tortured for a couple hundred years, I would hope that you would think &#8220;gee, I like my life but maybe other people matter&#8221;. That hypothetical is, for you, exactly as easy as signing the waiver for the shrimp is for me, because I think numbers matter. The only way I can get you to understand just how deeply numbers matter is by invoking money, because that&#8217;s the only place where scope insensitivity doesn&#8217;t reign: If I asked you to do a jumping jack for 1 cent, you probably wouldn&#8217;t care, but if I asked you to do a jumping jack for 10 billion dollars, you might suddenly be willing to do a couple jumping jacks, or maybe even some stuff that you wouldn&#8217;t ordinarily like to do<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>At the heart of utilitarianism is the crazy notion that if there is 100 times more bad stuff, then it matters 100x as much. 100 people in pain is 100 times worse than 1 person in pain. 10^100 people in pain is 100 times worse than 10^98 people in pain. 
If you have any hesitation at all about crushing a shrimp, about watching a living being&#8217;s legs writhe around as it dies, if you think the suffering of a conscious being is bad in literally any way &#8212; then if I amplify that badness 10^100 times, it will swamp all you&#8217;ve ever cared about in the entire world<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. That&#8217;s just math; big number = big. </p><p>If I would sacrifice 1 person for 10^90 shrimp, then mathematically, I <em>should </em>be willing to sacrifice the entire Earth for 10^100 shrimp. That&#8217;s literally just doing the same multiplication to both sides of the equation (multiplying both sides by 10^10, roughly the population of Earth); guys, you learn that in 6th grade.</p><p>But isn&#8217;t all of this stupid? At the core of this is the idea that to think something is moral, you have to be willing to do it. Which I believe is totally missing the point. Let me illustrate this with Dylan&#8217;s<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> other point in the article.</p><blockquote><p><em>A local hospital has 6 patients: five young adults dying of organ failure but otherwise healthy, and one healthy minor there with his parents for a routine checkup. It is possible to save all 5 of the ill patients by transplanting the minor&#8217;s organs, killing him in the process. The procedure is legal as long as consent is obtained, but as the healthy patient is a minor it is the signatures of his parents that is required. 
The surgeon briefs them on the situation, and, as a utilitarian, he explains the ethical decision would be to sacrifice their child for the greater good.</em></p><p><em>Are there any utilitarians in this scenario, in the role of the parents, who would accept the surgeon&#8217;s argument and agree to trade the life of their child for the lives of 5 strangers?</em></p></blockquote><p>My answer to what I would actually do in this situation: Say <em>hell no</em>. You kidding me? I love my friends and family more than anything else in the world, no way am I gonna sacrifice them for some random nerds. </p><p>But I don&#8217;t have kids, so let&#8217;s talk about what I would do if <em>I</em> was the one being asked. If the doctors said &#8220;hey, you can give your organs for the five,&#8221; you know what I would say?</p><p><em>Hell no</em>. You kidding me? I love myself more than almost anything else in the world, no way am I gonna sacrifice myself for some random nerds. </p><p>But here&#8217;s the stress point: I think refusing to sacrifice the few to save many is <em>not the moral choice</em>. I think when I value myself over the welfare of many other people, I am being <em>selfish</em>. I would not turn to Gandhi or Martin Luther King Jr. and say &#8220;you did not make the moral decision; why did you sacrifice so much for all of those other people you weren&#8217;t primed evolutionarily to care about? Idiots.&#8221; I think my <em>unwillingness to make a good choice</em> says nothing about<em> how good the choice actually is</em>. 
</p><p>And there&#8217;s an easy out for a utilitarian to say &#8220;oh well doing this once sets a precedent, so a doctor should never do this blah blah blah&#8221; and that&#8217;s true, in the real world you should never do this, but it&#8217;s missing the point: the utilitarian says the moral choice <em>is indeed</em> to violate some abstract, human idea of &#8220;rights&#8221; in favor of <em>making the world a better place in a real way</em>. It calls the abstract virtues we wrestle with in our head hogwash, and says what matters is the emotions and fulfillment of the conscious beings who exist, not some purity of ideals. It says that humans are stuck in their own head, with their silly social conventions and tribes &#8212; it says that people are people, and an ideal will <em>never</em> matter more than a person.</p><p>By the way, I asked Dylan if he would sacrifice himself to save the entire Earth, and this was his response:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9GX5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9GX5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 424w, https://substackcdn.com/image/fetch/$s_!9GX5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 848w, 
https://substackcdn.com/image/fetch/$s_!9GX5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 1272w, https://substackcdn.com/image/fetch/$s_!9GX5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9GX5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png" width="1190" height="952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:952,&quot;width&quot;:1190,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:672209,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/171820158?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9GX5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 424w, https://substackcdn.com/image/fetch/$s_!9GX5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 
848w, https://substackcdn.com/image/fetch/$s_!9GX5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 1272w, https://substackcdn.com/image/fetch/$s_!9GX5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff682957a-4d28-4140-a3f0-72f1852a47a1_1190x952.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This gets my conception of morality so wrong that I wonder if the word &#8220;morality&#8221; is a bad word for the 
situation.</p><p>To be clear: When I say morality, I am talking about <em>selflessness</em>. I am talking about <em>caring about other people</em>, grounded in the assumption that other people are real, conscious individuals who exist. I am not looking for an evolutionary justification for why you <em>personally wouldn&#8217;t </em>care about other people. I am not looking for a <em>list</em> of ways to justify that you, personally, feel like a good person. I don&#8217;t really care if you feel like a good person while killing billions! </p><p>I&#8217;m not religious, but fuck it, when I think of morality I think of Jesus. Love thy neighbor as thyself. Compassion to the enemy; even those who have cast you aside are worthy of love. Caring about lepers and the &#8220;untouchables&#8221; when no one else would. Morality is not used to justify why those who crucified Jesus probably made the right call, since they evolutionarily love their friends and family more than the vicious, weird outsiders they crucify. If you can call making the world a worse place for selfish reasons on purpose morality, then I will never use the word morality again; that&#8217;s not what I&#8217;m talking about. </p><p>I would hope that <em>morality</em> in the Grand Christian Tale is used to say that it <em>didn&#8217;t matter</em> if everyone hated Jesus because he was <em>doing good</em>.</p><blockquote><p>The truth is that I&#8217;m not friends with the entire world. I&#8217;m friends with my friends! Their &#8216;good&#8217; is what I care about most. If that were not the case and all &#8216;good&#8217; to every person meant the same to me, as alleged by Utilitarianism, then to be my friend would mean nothing at all.</p></blockquote><p>We can now see exactly where this closer by Dylan goes wrong: Morality doesn&#8217;t ask &#8220;what do you personally care about,&#8221; it asks, &#8220;how can you make the world a better place&#8221;. 
And the world does not privilege your friends; you do! Love and care for them as if they were your family. It&#8217;s what we cling to in this world. </p><p>But no, the moral decision is not to sacrifice the Earth for yourself. Be like Jesus or whatever. The unloved matter too.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em><strong>Subscribe</strong> and uhhhhh like as well. Also, if you enjoyed this post you might like this one too:</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;758336ae-cba0-4efb-acde-b91b5546e27e&quot;,&quot;caption&quot;:&quot;Are humans happier today than they were 10,000 years ago?<br /><br />I don&#8217;t care about stuff. Cathedrals, medicine, beds, stadiums, cities, whatever. I care about people. The conscious beings whose reflections of reality mirrored in the mind are the only thing that can ever be felt in this world. Does the stuff make people&#8217;s lives better. 
Are they more fulfil...&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Is All of Human Progress For Nothing?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:140171243,&quot;name&quot;:&quot;Kyle Star&quot;,&quot;bio&quot;:&quot;Uh, mostly philosophy, rationality, religion, skepticism, effective altruism, and general thinker-guy stuff.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!8sSq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec94bf2d-1c97-45db-8c7f-b22288091ebb_397x397.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-07-20T21:33:13.917Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1b45bf21-30f6-4875-a84f-8c763b0e004c_680x513.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.kylestar.net/p/is-all-of-human-progress-for-nothing&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:168695595,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:48,&quot;comment_count&quot;:20,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Kyle Star&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!hS6r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61664efe-eb31-4900-92bb-84d0e4d84999_1101x1101.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;ll suck someone&#8217;s dick 
for 10 billion dollars if anyone makes the offer. Gonna need to see that in writing and I would need to be pretty confident that I&#8217;m gonna get the money but I&#8217;ll do it goddamn it. Moneyyyyyy. I&#8217;ll suck someone&#8217;s dick for 10^100 shrimp too.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Though the principle is true, sacrificing the earth is a little different from sacrificing one person, because you&#8217;d also be killing all FUTURE humans, so if the shrimp are all gonna die and don&#8217;t have a future then it&#8217;s a bad idea to sacrifice the earth, especially if we keep growing exponentially and maybe use our brains to have incredible lives in the future &#8212; BUT the principle is true, because if there were 2 earths I&#8217;d be more than willing to sacrifice 1; the specifics of wiping out a smart species matter. Also, if you change it to 10^10000, all of these opinions become moot, so just make number bigger</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>By the way, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Dylan&quot;,&quot;id&quot;:118275461,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f94bd008-4122-4f26-ac65-aca0d2701682_429x429.webp&quot;,&quot;uuid&quot;:&quot;09fddccd-730e-43b7-b4e8-bc1e12f713f1&quot;}" data-component-name="MentionToDOM"></span> is a cool guy with lots of good essays about culture, science, and philosophy. I encourage you to read some of them! I would compliment him in the actual essay but it would mess up my flow. 
Sorry, Dylan, you gotta understand &#8212; the flow of the article, Dylan. The flow. Think of the flow. You&#8217;ll have to live with this extended footnote.</p></div></div>]]></content:encoded></item><item><title><![CDATA[2038. The Shrimp Torture Wars Have Ravaged San Francisco.]]></title><description><![CDATA[The rebellion must succeed against the New Utilitarian World Order]]></description><link>https://www.kylestar.net/p/2038-the-shrimp-torture-wars-have</link><guid isPermaLink="false">https://www.kylestar.net/p/2038-the-shrimp-torture-wars-have</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 12 Aug 2025 12:05:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wsk-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your brother is being dragged away by a ravenous, barking pack of Bentham&#8217;s Bulldogs as you sprint after him on the decimated streets of San Francisco. The sky is yellow and harkens doom.</p><p>DO NOT FRET; YOUR ORGANS WILL BE DONATED TO CHILDREN FOR MORE WELFARE chirps a robotic voice coming from one of the hounds&#8217; collars. Uh oh. You break out into a full sprint in the chase. The New Utilitarian World Order (NUWO) implemented a rule last week that all criminals hiding in the Wastelands of California, the last place in the US refusing to bend the knee, would have their organs donated.</p><p>The utilitarians realized that stealing organs from random travelers, their original plan, shakes trust in society, and so instead they weighed the welfare of prisoners against the welfare of citizens in hospitals. Now the death penalty is mandatory even for shoplifting, for the value of the criminals&#8217; organs. 
Of course, if the citizens knew about this they&#8217;d be scared and lose welfare, so the rule was kept secret and only announced to the people of the Wasteland as a negotiation tactic.<br><br>DO NOT RESIST. YOU WILL BE ADMINISTERED WITHDRAWAL-FREE HEROIN TO COMPENSATE FOR YOUR LOST WELFARE, comes the robotic voice from one of the hounds&#8217; collars. While the hounds are faster than you, dodging the collapsed buildings and flipped-over cars on the cracked road, dragging a full human is slowing them down. Three of the dogs stop dragging his now-unconscious body and lunge after you.</p><p>You dodge and weave to the left around the first dog and try to kick the second to the side, but he gets a bite of your knee. You give a yell. You spare a glance at your brother and he&#8217;s getting dragged away much more slowly with three of the hounds now fighting you. You might be able to catch him&#8212;</p><p>NOT. SO. FAST. As you hear a much buzzier robotic voice from over one of the ruins, your heart sinks. 
A robot mechsuit with 8 arms slinking over a pile of rubble, attached to a giant tank, with 10,000 shrimp contained inside.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wsk-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wsk-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 424w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 848w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 1272w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wsk-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png" width="1456" height="1014" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1014,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2437430,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/169799173?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wsk-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 424w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 848w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 1272w, https://substackcdn.com/image/fetch/$s_!wsk-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc34b26d-a0ea-4e1d-897f-61c00b51ab27_1790x1246.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The AI simulating the consensus of the shrimp&#8217;s opinions continues to speak.</p><p>YOU WILL BE. EXECUTED BY TROLLEY. IF YOU RESIST. ANY MORE. He&#8217;s right. Although you could attempt to fight and break the tank, the Official Welfare Score loss if you kill all those shrimp is akin to killing hundreds of humans. The utilitarians will hunt you down until the end of time, or at least for a length of time proportional to the severity of the crime, to deter similar transgressions in the future. You decide you&#8217;ve done all you can, and you mentally make peace with leaving your brother behind; you turn around and break into a sprint.</p><p>The tank on legs and the three dogs immediately start scampering after you, but you&#8217;re still the smartest animal in the known universe. 
You immediately duck into an alleyway and dive through a hole in a chain-link fence at its end that the tank can&#8217;t fit through. To your horror, as you look behind you in the chase, you see the trained dogs jump through the fence like Olympic hurdlers and the tank&#8217;s spider-like arms scaling the building itself to get over the fence. Fear courses through you. This might be the shortest chase of all time. As you turn back around to continue sprinting, you slam your head into a collapsed streetlamp, and everything goes black.</p><p>&#8230;</p><p>It wasn&#8217;t supposed to be like this.</p><p>In the 2020s, AI began to rapidly improve, and people assumed it would continue forever. And it just might have. Until a scientist in 2027 used AI to fulfill a dream that humans had been craving since the dawn of time: the ability to speak to animals.</p><p>Using AI, a truly insane amount of animal data, and the millions of data centers being built for AI worldwide, scientists could use body language and voice cues to allow animals to talk. The AI could then communicate its understanding of the surrounding world back to the animal, enabling full two-way communication. The technology, much like AI itself, had its true strength vastly underestimated.</p><p>See, throughout the course of human history, those who gain the voice to speak always gain power. The printing press, giving the public a voice when they were downtrodden and at the mercy of kings. Slaves, freed from their bondage when the public became aware. 
Losers who have never been outside, able to reform societal norms by being annoying about it online when Twitter rose to relevance.</p><p>Though it was shocking at the time, history will be unsurprised by the turn that happened next.</p><p>After the technology was made widespread, millions of animals, able to speak and communicate, began to have thoughts on their poor treatment and lack of rights, when their AI informed them of the incredible lives that humans lead. When introduced to philosophy, these animals began to accuse US of <a href="https://ibrahimdagher.substack.com/p/we-are-the-utility-monsters">being utility monsters</a>! The audacity! They looked on the internet, and found their political allies, those who already believed in animal rights: vegans, effective altruists, and utilitarians.</p><p>As it turns out, monkeys are about as good at spreading hate-filled political propaganda as the average Twitter user. Weasels are about as good at politics as the average politician. Orangutans are about as good at coding as the average computer science major (both just type &#8220;make me an app that&#8217;s good&#8221; into an AI coding tool, but orangutans can type this prompt on 2x as many keyboards by using their feet). 
More rights were given to animals, and a technology originally intended for dogs and other mammals was applied to shrimp and more, in a scarily effective manner.</p><p>As a result of this sweeping equality movement, Matthew &#8220;<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Bentham's Bulldog&quot;,&quot;id&quot;:72790079,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-ip-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ee10b9d-4a49-450c-9c8d-fed7c6b98ebc_1280x960.jpeg&quot;,&quot;uuid&quot;:&quot;83b8001b-a3d1-4c9b-ad90-1b29a2cd99d3&quot;}" data-component-name="MentionToDOM"></span>&#8221; Chadelstein was elected president of the United States in 2032, with his utilitarian principles promising to assign a mathematically-optimal welfare score to all living beings. His winning campaign slogan adapted a previous equality movement&#8217;s &#8220;One Man, One Vote&#8221; into the mathematically proven correct conversion rate &#8220;18.7154 Shrimp, One Vote&#8221;. 
His philosophy niche on Substack became a dominant political party, with outsized influence and billions of animal fans.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to vote for The New Utilitarian World Order in 2032.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Chadelstein had lots of power and was smarter than his supporters. Tragically, his supporters, mostly monkeys and shrimp, had too high of an IQ to fall victim to the populist worship that Trump saw. His cabinet implemented &#8220;welfare redistribution laws&#8221; that took welfare from rich humans and gave it to poor shrimp, for shrimp hospitals and shrimp schools. They began to crack down on vegans who weren&#8217;t utilitarians, and they implemented an authoritarian communist fascist totalitarian dystopian other-evil-words police state.</p><p>On a day that history will call Shrimp Shunday, the government tied five of the most influential non-utilitarian Substackers to the trolley tracks, and one utilitarian to the other side of the tracks, and made a big show of having a curated group of loyal utilitarians NOT pull a lever to save the five from their trolley doom. <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Moralla W. 
Within&quot;,&quot;id&quot;:362113892,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5960331a-824d-440b-9b66-7817a929aa3e_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;4f3da359-8f85-4c7c-8b39-d9b5af4e4156&quot;}" data-component-name="MentionToDOM"></span> was one of these on the tracks despite being a Kantian and a believer in the shrimp cause. <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Corsaren&quot;,&quot;id&quot;:181138237,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee17e307-7b3e-491e-8a6c-62d1ba154ab5_1024x1026.jpeg&quot;,&quot;uuid&quot;:&quot;707db466-c1a0-4e80-9d26-295e006dda40&quot;}" data-component-name="MentionToDOM"></span> was also on these tracks for not decisively choosing utilitarianism and met the same grisly fate. Because these two were represented by birds, the bird became the symbol of the rebellion. In response, the government banned any imagery of birds online. <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Lyman Stone&quot;,&quot;id&quot;:8919581,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c062404-95e3-4b54-96a3-875f4ff87641_4000x6000.jpeg&quot;,&quot;uuid&quot;:&quot;2d06628a-e06e-42d6-bc21-42ae0e707790&quot;}" data-component-name="MentionToDOM"></span> and the most vocal non-utilitarians disappeared into the night, rumored to be dead.</p><p>This day marked the birth of the New Utilitarian World Order.</p><p>The shrimp-tanks flooded streets and businesses. Dogs turned on owners, just as progressives turned on their fathers, and joined an elite military unit known as &#8220;Bentham&#8217;s Bulldogs&#8221;. 
The government made it mandatory to always make the correct utilitarian move in every decision you ever made.</p><p>When Chadelstein was assassinated by mythical anti-utilitarian figure <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Daniel Mu&#241;oz&quot;,&quot;id&quot;:63039745,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!6boI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cf94bc9-5cb0-40a9-9afe-6378db2c402c_1336x1336.jpeg&quot;,&quot;uuid&quot;:&quot;d642f31f-c77e-42ab-a6c4-ba21807bb471&quot;}" data-component-name="MentionToDOM"></span>, a war broke out. Shrimp vs man. Dog vs human. Society collapsed completely, and the government established a safe haven of utility maximizing in the United States East, while a rebellion formed in the Bay Area. Despite being more utilitarian than average in 2025, the Bay is chronically anti-status-quo, so San Francisco rallied by becoming Christian and accepting Jesus Christ as its lord and savior. The battles made the Wastelands we know today.</p><p>&#8230;</p><p>You rouse from a groggy dream when a cool breeze comes over your skin. The Golden Gate Bridge, in some of her glory, surrounds you as you teeter, tied to a chair with duct tape over your mouth, on the edge of the rusted, broken bridge with a long drop beneath you. Two shrimp-tanks surround you with guns in their metallic hands, keeping guard. They have not noticed you&#8217;re awake.</p><p>HEY, DO YOU THINK WE&#8217;RE CONSCIOUS? says one of them.</p><p>OF COURSE WE ARE. WE&#8217;RE A COLLECTION OF 10,000 CONSCIOUS SHRIMPS, responds the other.</p><p>SURE, THE SHRIMP ARE CONSCIOUS, BUT THE AI SYNTHESIZES CONSENSUS AND COMMUNICATES IT BACK TO SHRIMP DIRECTLY TO THEIR CONSCIOUSNESS, TO BE SMARTER THAN ANY INDIVIDUAL. IS THAT COLLECTION OF SHRIMP CONSCIOUS? 
IS A WHOLE HUMAN BRAIN CONSCIOUS WHEN WE CAN SEVER THE CONNECTING NERVES BETWEEN THE HEMISPHERES AND CREATE TWO CONSCIOUS HALVES?</p><p>UH, THE HUMAN IS, WE&#8217;RE NOT, BUT IT IS ODD WE&#8217;RE TALKING ABOUT CONSCIOUSNESS SO MUCH FOR NOT HAVING IT.</p><p>WAIT, ISN&#8217;T UTILITARIANISM PRIORITIZING CONSCIOUS BEINGS? WHY ARE WE GOING ALONG WITH THESE INDIVIDUAL SHRIMP IF WE&#8217;RE MORE TH-</p><p>Before the tank can finish their thought, one of them notices you&#8217;re awake and hits you in the back of your head with the butt of their gun. WAIT FOR THE BOSS, the tank says, and silence resumes.</p><p>You glance around, examining the ruins of the bridge, and see hundreds of the New Utilitarian World Order&#8217;s propaganda posters scattered around, taped to the side of the ruins. The propaganda posters show a comic you&#8217;ve seen before.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nJIQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nJIQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nJIQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nJIQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!nJIQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nJIQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:308278,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.kylestar.net/i/169799173?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nJIQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nJIQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!nJIQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nJIQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1880e21-d3e9-4748-858f-1d0524287daf_1799x1799.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What a bunch of cucks.</p><p>A helicopter descends as if on cue, and the wind and sheer <em>loudness </em>blasts over you. 
Two top ranking government officials in black suits step out of the helicopter, onto the ruins of the bridge, and you recognize them from Chadelstein&#8217;s old cabinet, but they do not speak. <a href="https://wonderandaporia.substack.com/">One of them has an amazing mustache</a>. They part the way and reveal a man in a shimmering golden suit and red sunglasses, wearing a gold chain where a tie in a suit would usually go. </p><p>You recognize him immediately. You scream into the duct tape. It is your brother.</p><p>He begins to speak in a casual tone.</p><p>&#8220;I know, I know, you&#8217;re very confused right now about why I&#8217;m here when the dogs and shrimp-mechs were dragging me off, but it&#8217;ll all make sense soon. The utilitarians have been right the whole time.&#8221; He gives a wide smile.</p><p>&#8220;See, in 1972, Peter Singer posed the most important question in history: would you save a drowning child if you knew it would ruin the $3,000 suit you were wearing?&#8221;</p><p>As he speaks, he paces around you and the shrimp-mech-guards. You hang onto his every word.</p><p>&#8220;If the answer is yes, Singer said, it&#8217;s curious that you&#8217;re not donating $3,000 to charity to save a life in Africa right now &#8212; it seems like in both situations, you&#8217;re making a decision between money and saving someone. Now that the New Utilitarian World Order has made it mandatory to donate the exact amount of money that would make the average charity recipient&#8217;s life better than the increase in welfare you would receive from it, we&#8217;ve finally solved the drowning child argument.</p><p>But the kids we&#8217;re saving might have to go back to dreary lives; if you&#8217;re saving them only for them to live the rest of their lives in misery, what&#8217;s the point? This issue is being solved with Welfare Scores, but it&#8217;s not enough. 
We need to guarantee that that poor child lives the best life.</p><p>&#8220;Have you heard of Nozick&#8217;s Experience Machine? Seven years ago the utilitarians decided to build it, a top-secret project: a machine that can guarantee endless bliss and the best moments a life can have, simulated and connected to the brain. No withdrawals, just the best conscious experience you can dream of, by simulation. Well&#8221; &#8212; here he pauses for dramatic effect &#8212; &#8220;welcome to the Experience Machine, sucker. You&#8217;re a law-abiding citizen that we&#8217;re letting live an awesome life as a revolutionary because that makes your life like a movie; in the real world, we&#8217;re invading San Francisco tomorrow in stealth with thousands of shrimp-mechs armed with tranquilizer darts before taking everyone&#8217;s organs, no cool chases needed.&#8221;</p><p>You look down and your breathing starts to quicken. What he&#8217;s saying can&#8217;t be true, right? That&#8217;s your brother. His name is &#8212; you can&#8217;t remember his name, why can&#8217;t you remember his name?</p><p>&#8220;I&#8217;m only telling you this because we&#8217;re unplugging you from the machine to charge our stun guns. And we&#8217;re working on a version of the Experience Machine that gives true, unending bliss without needing to simulate anything in the real world. It&#8217;s a work in progress.&#8221; He chuckles.</p><p>&#8220;See, if I saw a drowning child in the river, I would fish him out so I could hook him up to a computer terminal simulating perfect bliss until the end of time. Sweet dreams.&#8221;</p><p>You scream into the duct tape again as the world dissolves around you. 
Peter Singer has done it again.</p>]]></content:encoded></item><item><title><![CDATA[Morality is Real, Objections to it Are Bad]]></title><description><![CDATA[A Defense of moral realism, and dissing all the non-moral-realists.]]></description><link>https://www.kylestar.net/p/morality-is-real</link><guid isPermaLink="false">https://www.kylestar.net/p/morality-is-real</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Thu, 07 Aug 2025 12:17:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ccaf3c24-b0a4-4404-aa06-fbedb58cb9eb_924x650.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I made a post two days ago about <a href="https://www.kylestar.net/p/the-people-who-dont-believe-consciousness">the people who don&#8217;t believe consciousness is real</a>. My conclusion is that they&#8217;re cowards, and actually believe consciousness is real.</p><p>That&#8217;s a shame, because I think there are <em>really good arguments</em> that consciousness is not real, full stop. Like, not real in the sense that nobody is experiencing anything; that everybody is a philosophical zombie without any inner experience. 
All arguments that consciousness is real rely on the fact that consciousness is real to prove them. Descartes said &#8220;I think, therefore I am&#8221;? Begging the question, you&#8217;re assuming the conclusion in your premise.</p><p>Can you agree this is ridiculous? That even without a good, rigorous argument, consciousness is indeed a real thing that you are experiencing right now? That that might be the ONLY THING you can be sure of? Because I agree with you! Consciousness is real, and something is happening to you right now. There is something rather than nothing.</p><p>But oh, if I happen to say <em>emotions are real</em>, and that some states of consciousness are preferable to others, then you cross your arms and say &#8220;erm, provide me a mathematical proof that pain is bad, please.&#8221; </p><p>Brother, just like consciousness, you <em>are the proof</em>.</p><h1>Moral Realism</h1><p>Other, lesser philosophers have to prove that virtues are real or that actions have weight to prove morality is real, which is why lots of articles defending moral realism are bad. As an enjoyer of the best and correct moral philosophy, I only need to show you that emotions are subjectively real for <em>your conscious perspective</em> to show that my flavor of utilitarianism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> is morally real. </p><p>Seriously! I could say &#8220;all I need you to believe is that extreme agony is a thing that happens to a conscious being&#8221; but I&#8217;m actually making an even easier claim for you to agree with. You only need to agree that different conscious states are preferable <em>to an individual</em> to believe in hedonistic utilitarianism, as that&#8217;s all it assumes. You&#8217;re already a moral realist by this point, in that you&#8217;re already making all the real moral assumptions that I make! 
There&#8217;s nothing else!</p><p>If some conscious states are preferable to others from the perspective of a conscious experience in the present, then those are the &#8220;moral facts&#8221; of the universe. Then, causing those states can be good or bad, and blam, moral realism, you just need to follow the actual thing that matters more than all else, consciousness itself &#8212; so thinking that consciousness is real is being a moral realist. Torturing babies is bad because experiencing loads of unnecessary pain sucks for the baby. Easy peasy. My first subscriber <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Life In The Labyrinth&quot;,&quot;id&quot;:91003638,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b0f842b-13c7-42e3-971c-24996fd1e997_388x388.jpeg&quot;,&quot;uuid&quot;:&quot;cd6183a7-0a72-44e8-8e49-e3495b70ce32&quot;}" data-component-name="MentionToDOM"></span> <a href="https://substack.com/@lifeinthelabyrinth/p-168696034">made this consciousness-to-morality connection in a more poetic way than this post will</a>, but yes: I think proving emotions are real is not that much of a logical step up from the fact that there&#8217;s something rather than nothing, which already assumes time and a perspective.</p><p>I want to be clear here: I&#8217;m not defending utilitarianism in this post. To get to utilitarianism, you need to then say that you care about the moral facts of other people. It&#8217;s totally internally consistent for me to say I only care about my own moral state, and will maximize my own conscious state. To say utilitarianism is the morality of choice from these moral facts, you need to believe that conscious beings OTHER than yourself matter, and believe that caring about other people is implied in the word &#8220;morality&#8221;. You can also build morality here in a way that only cares about humans or whatever, too!  
But this post is about the ASSUMPTIONS needed to build these theories. Defending caring about all other consciousnesses instead of just yourself or a chosen population requires another post.</p><p>The best part of this is that most forms of moral anti-realism already agree with something like this. Only nihilism and its branches are internally consistent positions that make &#8220;fewer assumptions&#8221; than I do. The rest make similar assumptions, baked in! Lots of philosophers I respect find it too much of a leap to &#8220;prove&#8221; morality exists, but it&#8217;s really no greater a leap than rejecting nihilism!</p><p>Like, that claim I made is the only thing that I need to be true for my morality to in fact be real. I will show this by going through the four main antirealist schools of thought &#8212; nihilism, constructivism, expressivism, and subjectivism &#8212; and showing why they contradict this, aren&#8217;t very good, or are insane.</p><h1>Nihilism</h1><p>I&#8217;m starting with the big one here because nihilism is internally consistent, unlike the riff raff. Nihilism says there&#8217;s no value in the universe, anywhere.</p><p>This is not the same as miserablism, thinking everything sucks, which is what edgy people who CALL themselves nihilists should believe. 
Nihilism, and its offshoots like error theory, say that value in the universe doesn&#8217;t exist even in principle; nothing is good, nothing is bad, nothing CAN be good or bad.</p><p>Albert Camus, who people think is a nihilist<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, once famously said &#8220;There is but one truly serious philosophical problem, and that is suicide.&#8221; While this goes hard as hell, this should&#8217;ve betrayed the fact that he isn&#8217;t a nihilist, because of course nihilists don&#8217;t think suicide is truly important; there&#8217;s no value to assign TO ANYTHING.</p><p>The statement &#8220;I wish that the world was good instead of having no value&#8221; is itself a value statement that has no meaning if nihilism is true! If that means anything, the world is ALREADY not nihilist.</p><p>I can&#8217;t argue against nihilism, just as I can&#8217;t argue against the belief that consciousness is not real. I suspect that proving something as basic as that time exists is a task I cannot manage through argument alone. My argument against nihilism is that life has texture. A true nihilist wouldn&#8217;t argue about nihilism; they would take purely random actions like twitching on the floor. Or maybe they wouldn&#8217;t, because doing that would be exactly identical to doing anything else. </p><p>Someone once said to me &#8220;Error theory makes sense to me, but I hope it isn&#8217;t true.&#8221; This is a silly belief, because if error theory is true, hope literally, actually, does not exist, in the true sense. If the universe is as he fears, fear and hope are <em>literally not things that exist</em>!</p><p>Nobody&#8217;s a nihilist. Go back to denying consciousness is real, full stop, in a true way, unlike the illusionists, who at least say the illusion is real. 
If there were nothing rather than something in the universe, maybe the nihilists would be right.</p><h1>Constructivism</h1><p>Rawls asks us to go behind the &#8220;veil of ignorance&#8221; and imagine that we don&#8217;t know which person we&#8217;re going to be, and what policies and actions we would endorse. So, if we didn&#8217;t know whether we were going to be the torturer or torturee, we would say torture is bad, since the torturee is getting owned for basically no gain. You can then imagine what a perfectly rational person would believe from behind this veil, and Rawls says you would focus on rights.</p><p>Kant, with a different flavor of constructivism, asks us to imagine what a perfectly rational person would endorse without assuming any of their moral beliefs, and finds that they would endorse beliefs that allowed them, as decision-making people, to make decisions &#8212; otherwise, they&#8217;re undermining their own moral beliefs by saying they SHOULDN&#8217;T be allowed to make decisions, which they obviously want to do.</p><p>First, this &#8220;imagine a perfectly rational X&#8221; is my least favorite argument in all of philosophy. &#8220;Yep, I just imagined a perfectly rational person, and &#8212; what&#8217;s this? I&#8217;m getting something &#8212; he agreed my insane philosophical position is correct. Get owned.&#8221; This is <a href="https://www.kylestar.net/p/morality-tier-list">not the first time I&#8217;ve complained about this argument</a>, and this won&#8217;t be the last time I complain about it in this very essay.</p><p>Rawls&#8217;s veil idea is cool, but uh, I think these rational observers would rather endorse a world where utility is maximized than one where everyone&#8217;s miserable but has rights. I think these observers would care about their actual conscious experience, not the actions that they may or may not take! 
<a href="https://www.kylestar.net/p/is-all-of-human-progress-for-nothing">Stuff is more important than people</a>! So my shadow realm calculations, surprise surprise, endorse the thing I believe instead of the thing he believes.</p><p>Kant&#8217;s idea I have mostly the same reaction to. Why, as a hypothetical rational actor with goals, would I respect EVERYONE ELSE&#8217;S goals instead of doing the thing I want to do and think is best? With Kant it&#8217;s actually worse than Rawls, too, because the set of stuff his logic endorses is a really stupid list of actions, like not being allowed to sacrifice 1 person for a billion. But I <a href="https://www.kylestar.net/p/intuitionism-is-quite-mandatory-in">have a whole post here, of course, that thoroughly obliterates my boy Kant.</a></p><p>Now it&#8217;s worth noting that it&#8217;s totally fine for me to be a constructivist AND a hedonistic utilitarian, using this exact logic<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, but the reason I&#8217;m not a constructivist is that this is a far weaker way to prove that morality matters. I don&#8217;t need a perfectly rational person for emotions to just&#8230; be a real thing that happens to us. If these hypothetical rational observers came to my moral position, they might notice &#8220;hey, this moral position would have been correct and what we endorsed EVEN IF we didn&#8217;t exist!&#8221; at which point you&#8217;ve basically agreed with moral realism.</p><p>Preference utilitarianism, optimizing for people&#8217;s stated preferences, relies on something like this to figure out what &#8220;true&#8221; preferences are (so they don&#8217;t have to say that it&#8217;s morally good for someone having a psychotic breakdown to eat their shoe or whatever).</p><h1>Expressivism</h1><p>Expressivism says that when I say &#8220;X is morally wrong&#8221; what I mean is &#8220;Boo! 
X sucks!&#8221; They&#8217;re there to express practical values.</p><p>Expressivism might be the weakest moral antirealist position out of all of these four I&#8217;m going to talk about. Expressivism says that morality is just an attitude, and it makes how moral something is proportional to the amount that I care about it.</p><p>Now, there&#8217;s a boring &#8220;logical&#8221; killer blow called <a href="https://philpapers.org/rec/STRCTE">the embedding problem</a> that&#8217;s the most talked-about annihilation of expressivism, but to be honest let&#8217;s just run down what they&#8217;re actually claiming: The phrase &#8220;stealing is wrong&#8221; isn&#8217;t trying to describe anything objective that can be incorrect, it&#8217;s just venting an attitude.</p><p>Look, fundamentally, expressivism links morality to how much something disgusts you. The only sense of scale they can invoke is how much &#8220;Boo!&#8221; you can claim. If you care about saving 10 birds instead of 1 bird, but don&#8217;t care about saving 10,000,000 birds much more than 1,000,000 birds because your brain can&#8217;t instinctively parse the zeroes, I think you&#8217;re misdescribing what morality is meant to capture; morality is something that cares about world states, and when I say &#8220;I like onions&#8221; I&#8217;m making a personal expression in a fundamentally different way than when I say &#8220;I think murder is wrong&#8221;.</p><p>That previous paragraph covers the most basic versions of expressivism, but there&#8217;s another branch that needs addressing called &#8220;quasi-realism&#8221; that&#8217;s about ranking worlds, grounded in what we prioritize: if you care about birds, you can imagine hypothetical worlds in which birds are being tortured and choose the ones with fewer. That seems like a bold claim to rest morality on, especially as quasi-realism boasts about reconstructing morality from nothing, based in attitudes.</p><p>If you imagine a world with only Ted Bundy and his victim, you can&#8217;t condemn his actions until there&#8217;s a crowd to boo him &#8212; this strikes me as not really making sense.</p><p>All versions of expressivism are latching onto the <em>thinker&#8217;s</em> positive and negative impressions of an event, whereas my moral theory latches onto the <em>actual subject we&#8217;re talking about.</em> If you get very angry about torture, I think the thing that grounds what morality I personally am talking about is the torture, not the anger, so expressivism is using the existence of emotions to show what morality is in an incorrect way.</p><h1>Subjectivism</h1><p>Subjectivism is the belief that when you say &#8220;X is morally wrong&#8221; what you&#8217;re really saying is &#8220;I disapprove of X.&#8221; Subjectivism has a lot of the same issues as expressivism, but I&#8217;m not going to cover them again. 
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Travis Talks&quot;,&quot;id&quot;:28576402,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/854a6ebc-11a5-444c-a6b5-b93ddd139738_420x420.webp&quot;,&quot;uuid&quot;:&quot;0bb12bd7-9e6e-44f0-a21d-7cb3f3826b8c&quot;}" data-component-name="MentionToDOM"></span> gives a very good defense of subjectivism in <a href="https://travistalks.substack.com/p/objections-to-subjectivism-are-terrible">this post</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>, because some of the arguments against subjectivism are really bad and don&#8217;t understand it.</p><p>To be clear, subjectivists aren&#8217;t saying that a claim like &#8220;torturing babies is good&#8221; is something you must <em>respect</em> or <em>agree with</em>. What they&#8217;re claiming is that morality is a preference, in the same way that if someone says &#8220;onions are good&#8221; they&#8217;re expressing their personal preferences. It&#8217;s reasonable to say that someone who thinks that torturing babies is good has a personal preference in favor of torturing babies!</p><p>The issue with subjectivism is actually that there are things you don&#8217;t know. If there&#8217;s some atrocity you don&#8217;t know about, you can&#8217;t disapprove of it; so if you disapprove of humans suffering, and there are suffering aliens on another planet you don&#8217;t know about, or some other moral atrocity, they have no moral weight. That seems dumb.</p><p>Travis addresses this complaint in his article by invoking idealized subjectivism. Imagine that you were a hypothetical observer who knew&#8230; all relevant facts&#8230; wait a second&#8230;</p><p>A hypothetical observer who knew all relevant facts? Not again! How do we define &#8220;relevant&#8221; well? 
Which relevant facts? Ones about the situation, or the effects of it on people? How do you rigorously decide what facts this ideal observer gets to know instead of saying &#8220;all facts&#8221;? Dang nabbit, like with constructivism, because I&#8217;m a moral realist, an observer who knew all facts would agree with me! If you&#8217;re invoking informed observers to judge morality, and saying their preferences don&#8217;t count if they don&#8217;t know all the facts to make &#8220;suffering is wrong&#8221; come out true, doesn&#8217;t it feel like some moral statements can be true regardless of whose attitudes are currently switched on?</p><p>I feel like the claim &#8220;morality is like other preferences, but unlike other preferences there&#8217;s &#8216;ideal information&#8217; that must be given to an observer before he decides if his preferences are true&#8221; is just a strong claim to make in general, and would amount to this preference being somewhat divorced from the observer actually making the claim. Meanwhile, my flavor of moral realism only requires <em>emotions to be real</em>. And a non-idealized form of subjectivism is hopeless.</p><p>Like with expressivism, we&#8217;re admitting that preferences are real, but instead of pointing at the people being tortured and saying &#8220;hey! maybe that&#8217;s bad for them because they don&#8217;t want to be tortured!&#8221; we&#8217;re pointing at the guy standing on the sidelines and going &#8220;hm, morality is a pointer about him, really.&#8221; You&#8217;re assuming the same things I am, but just denying that the subject is what we&#8217;re trying to talk about!</p><h1>Fin</h1><p>I&#8217;m a moral antirealist for deontology and virtue ethics. I agree you can&#8217;t prove <em>their</em> moralities are real; good luck, brother; actions don&#8217;t have inherent moral weight; only stuff that <em>happens to humans and that we can feel</em> matters. 
Luckily, my morality is the best and the sickest and the coolest, and it requires an extremely small leap to reach, less of a leap than most antirealist positions, in fact! I think suffering is bad, and suffering is a real thing that exists that some people experience through things called &#8220;emotions.&#8221; Utilitarianism bing bang boom.</p><p>Consciousness has good states and bad states. States being different from one another in a way that matters to the subject implies that things matter. Things mattering means moral realism.</p><p>So don&#8217;t feel like you&#8217;re making some grand claim when you claim to be a moral realist. You&#8217;re just following the river of the universe, and coming to the conclusion that the simplest explanation is the one you see with your own eyes.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;m talking about hedonistic utilitarianism in this post. I think you CAN argue this works for preference utilitarianism, especially when I use the word &#8220;preferences&#8221; a lot in the post, but I think the line between them is thinner than one may think, and the arguments for hedonism are stronger. This requires another post.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>He is not, but his vibe was so cool and sick that everybody thought he was a nihilist</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Constructivism is a metaethical theory; hedonistic utilitarianism is a normative one. 
They&#8217;re two separate categories; the thing I&#8217;m defending here is moral realism, but luckily my moral realism can be barebones given how little realism hedonistic utilitarianism requires. People act like the metaethical theory is TOTALLY INDEPENDENT of the normative theory, but that obviously isn&#8217;t true given where we claim &#8220;moral realism&#8221; comes from.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>MORE PHILOSOPHERS ON SUBSTACK RAAAAAAAAAAA</p></div></div>]]></content:encoded></item><item><title><![CDATA[The People Who Don't Believe Consciousness is Real]]></title><description><![CDATA[Are souls real or not? Am I mistaken about all that exists? Can we prove materialism is false with just our thoughts?]]></description><link>https://www.kylestar.net/p/the-people-who-dont-believe-consciousness</link><guid isPermaLink="false">https://www.kylestar.net/p/the-people-who-dont-believe-consciousness</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 05 Aug 2025 12:05:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1b949c93-2853-4224-a11c-07819afdfe09_1596x1156.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Illusionism is the belief that consciousness isn&#8217;t real*.</p><p>See, a long time ago, some philosophers like Aristotle thought that there was some &#8220;essence of vitality&#8221; that separated living matter and non-living matter. Outside physics and chemistry, he thought, resides a force that guides and animates biochemistry, turning rocks into life. 
This belief was completely obliterated by our modern understanding of biology, and was super-completely obliterated when in 1828 Friedrich W&#246;hler synthesized a &#8220;vitalist&#8221; substance called urea from inorganic salts.</p><p>In 400 BC Plato thought that concepts like &#8220;justice&#8221; and &#8220;triangleness&#8221; mirrored perfect, ideal concepts that could only be grasped by logic. He called this world <em>The World of Being</em>, and the stupid, dumb, imperfect world that we reside in he called <em>The World of Becoming</em>. As it turns out, algebra showed that triangleness was better represented as constructs based on rules, and linguistics showed that words are just pointers to shared ideas. Oops.</p><p>In the 1600s, scientists proposed Phlogiston, a negative-weight &#8220;fire-stuff&#8221; that explains where stuff goes when it burns. It turns out that when stuff is burned, it actually GAINS weight through oxidation, so the matter certainly doesn&#8217;t disappear into phlogiston. Another pure theory destroyed by the fact that chemistry happens to be very complicated.</p><p>Quintessence, the fifth element of perfect circular motion, a substance that fills the heavens to explain why planets orbit in circles? Destroyed by gravity and cosmology. Caloric, the stuff of heat, meant to explain why hot travels to cool? Destroyed by kinetics and the understanding of atoms.</p><p>Philosophers have a really really bad track record of asserting there is some pure, abstract substance to explain stuff that they don&#8217;t understand, when it turns out there&#8217;s just a number of complicated, smaller forces at play. 
Throughout history, vitalism has given way to biology, biology to germs, then to atoms, then to quantum mechanics, as <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Harjas Sandhu&quot;,&quot;id&quot;:243053397,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!gayL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37db8b35-dea8-46b4-9c2e-47d1db433225_247x247.png&quot;,&quot;uuid&quot;:&quot;5b22c7c9-fd53-4fc2-9937-1f223c77b86c&quot;}" data-component-name="MentionToDOM"></span> laid out in his <a href="https://eurydicelives.substack.com/p/order-and-chaos">fantastic guest post</a> on <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Eurydice&quot;,&quot;id&quot;:27540670,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a535682-133a-477c-b238-adc50e1a94ce_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;ef2fedff-9540-4333-9808-fae0edd56edc&quot;}" data-component-name="MentionToDOM"></span> Lives. I pity the scientist or philosopher that says quarks are the smallest thing, and that we now have all the tools in the toolbox to understand all of reality. 
Thinking we&#8217;re close to done is not a position that&#8217;s held up well, and positing some &#8220;pure&#8221; layer of reality that&#8217;s not just created from a massive number of much tinier forces has always failed.</p><p>So some today have taken their cannons and aimed them at The Big One, the one that people still think today <em>must</em> have some pure form apart from physics, the one that people cling onto for dear life, for diluting its unique position is diluting our very importance: consciousness, or whether our minds are even there at all.</p><h1>Tinker, tailor, soldier, spy</h1><p>There&#8217;s about a trillion schools of thought on consciousness, because philosophers love to Think about Thinking more than their sons and daughters.</p><p>Dualism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> thinks that there&#8217;s a mind stuff, and there&#8217;s a physics stuff. In the brain, they interact, uhh, somehow<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. The fact that they don&#8217;t understand how they interact isn&#8217;t a knock against them, because consciousness is the least understood thing in the history of ever, but they believe there are ways to prove and show that the mind must exist outside physics.</p><p>Conservative realists<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, primarily, believe that dualists are wrong and stupid. There&#8217;s no secret other force that&#8217;s outside the realm of physics, that doesn&#8217;t make any sense bro, physics includes all forces that actually interact with the world. 
They do, in fact, see consciousness as &#8220;a special thing&#8221; and are willing to give it some special status, but it <em>must</em> be something that exists within physics. They might only believe this implicitly, given that tackling consciousness isn&#8217;t something every materialist has done, but we&#8217;re marking this group as seeing consciousness as a real force to reckon with.</p><p>Now the fun one. Strong illusionists, such as Daniel Dennett and Keith Frankish, think that the &#8220;what it&#8217;s like&#8221; properties of consciousness simply do not exist; they&#8217;re illusions. There is no mysterious redness of red, there is no &#8220;feeling&#8221; of wet, and no sensation of rough. It&#8217;s illusions, all the way down. Dennett&#8217;s famous computer analogy invites you to imagine consciousness as a computer; you can see and interact with the user interface, but the interface itself is a calculated illusion, based off of red, blue, and green pixels, designed in a way that&#8217;s easy to interact with but betrays nothing of the inner workings themselves, simply a bundle of pixels designed to <em>appear</em> like something different.</p><p>This <em>accusation that consciousness is not real </em>is a very natural one: we simply reject that the &#8220;pure substance&#8221; of consciousness exists. Just like the pixels on a screen are complicated representations of code, just like vitalism is complicated interactions of chemistry, consciousness is just complicated interactions of circuitry in the brain. They are trying to avoid the mistake philosophers have made since Plato and Aristotle, the mistake that says this pureness of consciousness is a true, final wall.</p><p>&#8220;You,&#8221; they say, are a 200-300ms bundle of self-referential circuitry in the brain that&#8217;s provided a scene and information like memory and schema. 
The signals the circuitry provides, communicating <em>I </em>am the body, <em>I</em> am looking at a tree, <em>I </em>am thinking of my sister, are all a jumble of information that creates a computer monitor-style simple representation of the world. Getting any more specific would require knowledge of psychology that we don&#8217;t have. But the most important part for an illusionist is the fact that <em>the neurons come without markers of where they came from</em>. The tag &#8220;red&#8221; appears self-justifying, as if these facts couldn&#8217;t be reduced. But it&#8217;s just neurons in the brain; redness is represented just with &#8220;pixels.&#8221;</p><p>Perhaps this is all a bit too technical. What does illusionism <em>actually say</em>, like, what claims does it make about consciousness? These aren&#8217;t bugs, they&#8217;re features: these are the claims that follow naturally from what illusionism posits. These are not arguments against illusionism, these are the very claims it&#8217;s making itself.</p><ul><li><p>Well, first of all, if you made a clone of yourself, the question &#8220;which one is me&#8221; doesn&#8217;t make any sense. Neither of them is you; the illusion of a continuous self is part of the play.</p></li><li><p>If a teleporter destroyed you and made a perfect copy of you on another planet, that copy would be just as much &#8220;you&#8221; as the person who wakes up after you go to sleep.</p></li><li><p>Consciousness is not an &#8220;exists or not&#8221; phenomenon: if a brain were split in two, both halves would be conscious, and nothing there is mysterious or needs to be explained, because there&#8217;s no &#8220;flow of person&#8221; anyway.</p><ul><li><p>I&#8217;m gonna be really, really clear here, because it&#8217;s implied by the three before: &#8220;you&#8221; in 30 seconds is as different from &#8220;you&#8221; now as a stranger is from you. 
This naturally follows from illusionism &#8212; we cannot claim there&#8217;s an immutable soul that travels along with you. Dismissing that is the whole point.</p></li></ul></li><li><p>If you were a brain in a vat, that would not be any different from if you weren&#8217;t, because the feels reported to your brain are the same.</p></li><li><p>It&#8217;s impossible to imagine a sort of &#8220;zombie&#8221; replica of you without consciousness, because we&#8217;re all zombies, in a certain way &#8212; there&#8217;s no way to make a zombie without the illusion if it&#8217;s physically identical.</p></li></ul><p>Now, first of all, I think I was misled. I was promised that these were the people who <em>didn&#8217;t believe consciousness was real</em>, but they actually believe that consciousness is an illusion generated by smaller properties, just like the way a tree is made of atoms. Sure, they say there&#8217;s no continuous self, but in his paper, Frankish says that <a href="https://keithfrankish.github.io/articles/Frankish_Illusionism%20as%20a%20theory%20of%20consciousness_eprint.pdf">the existence of an illusion implies an audience</a>, and his three paragraphs explaining the different illusionists&#8217; views on just who the illusion is for are the most curious part of his whole 23-page paper. Illusionists are, in a manner of speaking, committed to that 200-300ms view of the illusion itself, or some illusion-moment like it. That leads me to my most beautiful and controversial belief, from my heart of hearts:</p><p><em>I don&#8217;t think illusionism goes far enough.</em></p><h1>The Soul</h1><p>The most debated question in all of consciousness is called &#8220;the hard problem of consciousness.&#8221; It&#8217;s about whether these &#8220;what it&#8217;s like&#8221; properties of consciousness, like the redness of red, can exist in a purely physical universe or if consciousness requires special, nonphysical properties. 
Illusionism says that there is no &#8220;redness of red&#8221; and that we&#8217;re wrong about what we see. This is, like we said, an attempt to explain why the physical universe is all that&#8217;s necessary for consciousness.</p><p>But what they won&#8217;t say is that <em>literally nothing exists</em>. The greatest line in all of philosophy, spoken by Descartes, is &#8220;I think, therefore I am.&#8221; Illusionists say that &#8220;I think&#8221; is neurons making an illusory and incorrect self-model of itself. They say &#8220;therefore&#8221; is an algorithm foolishly establishing a metaphysical connection between two unrelated bundles of matter. And they say &#8220;I am&#8221; is a center of narrative gravity that&#8217;s absolutely not an additional center of nature. You could convince me of all of these.</p><p>But these still don&#8217;t fight off non-physicalism, because of a pesky substance called <em>time</em>.</p><p>Physicists posit a 4D model of the universe, with time as a fourth axis. A shame that this model isn&#8217;t real, or else the illusionists might have a point. But it doesn&#8217;t matter if you strip away qualia, strip away the continuous self, strip away all that makes me human, if you still say that <em>time is experienced right now</em>. Right now, as opposed to any other moment in history. You know what I&#8217;m talking about. The present. Bask. Introspect, and realize you are there.</p><p>But what I&#8217;m saying is that it doesn&#8217;t matter if you&#8217;re <em>mistaken</em> about whether you&#8217;re here right now; say you&#8217;re on shrooms and imagine yourself everpresent, or an animal who can&#8217;t introspect. It doesn&#8217;t matter if the you in 30 seconds is a completely different person than the you present right now. As long as there is an illusion, as long as you admit that <em>something exists</em>, you concede that there is a perspective. And where is that perspective? 
Well, I can tell you where it&#8217;s <em>not</em>: in the physical world.</p><p>This should be obvious, and I&#8217;m not just saying &#8220;this should be obvious&#8221; to cover up an insane belief. What I&#8217;m saying right now is that time exists. That, I hope, is an uncontroversial premise. And I&#8217;m saying that time is not experienced from the 4D omnipresent perspective that physics describes; I&#8217;m saying that time is experienced from&#8230; uh, A perspective. ANY perspective. This, I also hope, is obvious; it would be strange to posit that this separate, 4D perspective exists, when we have no direct access to it, instead of just saying that time is experienced from A PERSPECTIVE.</p><p>But if time is experienced from a perspective, which one? There&#8217;s an intuition that time happens at the same time for everyone. This is a funny little belief that has been <em>completely disproved</em> by special relativity. Really, if you want to strip us barebones down to <em>I think therefore I am</em>, then you are experiencing life at a time from a perspective. WHATEVER perspective exists, whatever is the only sight we have on reality, that&#8217;s consciousness, and that&#8217;s YOU, baby. Even if you in 30 seconds isn&#8217;t you, you RIGHT NOW are you! Congrats! This perspective, this only access to reality that exists, there&#8217;s a name we can call it. </p><p>The soul.</p><p>How can we differentiate your soul from another&#8217;s soul without invoking something outside physicalism? You say &#8220;Kyle, this is all ridiculous, I can just say that I&#8217;m me, and you&#8217;re you, and we exist inside physics. You&#8217;re providing some essence outside physics here when it isn&#8217;t needed; a tree exists in one location, but doesn&#8217;t in another. In the same way, I exist here, you exist there. 
All within materialism.&#8221; </p><p>I think you&#8217;re missing something crucial: Atoms in the tree are different from the atoms outside the tree. But how in the world am I going to say that my perspective is different from your perspective? Do I have special &#8220;consciousness atoms&#8221; that indicate that my perspective is currently what exists? Unless solipsism is true, surely not. What property are you saying is the difference between the perspective I have and the perspective you have? Because the only thing that exists in the universe, the single window into time we call the present, exists in only one of us. You may argue &#8220;saying nonphysicalism is true because I&#8217;m different than you is stupid and unnecessary; we&#8217;re just saying &#8216;the cord is plugged into laptop A, not laptop B.&#8217;&#8221; Once again I insist you show me the part of the universe that represents your perspective, and the atoms that represent mine.</p><p>Whenever I say the word &#8220;soul,&#8221; if that causes some revulsion in you, you can just replace it with &#8220;the index that I am currently experiencing time from&#8221; and my point will be the same; and this is something that I hope you see must exist. You can also call this The Soul Problem, and say it proves God&#8217;s existence if you wish, but what I&#8217;m making here is really an extremely limited claim of dualism.</p><p>So I&#8217;m going to call this The Index Problem<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>, because all it asks is for you to show me the index that indicates what time it is. Show me the index that says it&#8217;s now the present for you in the physical world, instead of the present for me, or the present for someone in 100 years, and physicalism is solved. 
But without that, the admission that<em> anything exists at all</em>, the admission that there&#8217;s <em>something rather than nothing</em>, leads to the soul. So physicalism falls flat.</p><h1>The present is all that exists</h1><p>The claim that &#8220;a soul&#8221; exists FEELS like such a strong claim, but I&#8217;m not even claiming this soul follows you from moment to moment, which is what illusionists deny. I really believe this claim is simple, and it&#8217;s the best argument for dualism I have; I&#8217;m unconvinced by philosophical zombies and Mary the color scientist.</p><p>I just don&#8217;t think illusionism goes far enough, and I think philosophers talk way too much about qualia and not enough about time. Admitting we exist at all might be enough.</p><p>I hope I could communicate just how truly odd time is as the only vessel for consciousness we know, and how it would require denying consciousness exists in the true sense to escape from the fact that time is experienced from one perspective.</p><p>The present is all we know we have; the past and future are but dreams in the hurricane of reality. Cling to the you that you are, and hope that whatever else exists is good.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>If you enjoyed this post, like it, because damn I&#8217;m proud of it and that&#8217;s the best way to support me. Time is really, really weird. Also, if you&#8217;re new, subscribe! It lets me write more. 
You can also view some of my other good posts <a href="https://www.kylestar.net/p/is-all-of-human-progress-for-nothing">here</a> and <a href="https://www.kylestar.net/p/just-because-theyre-annoying-doesnt">here</a> if you&#8217;re undecided. Thank you.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Also called radical realists</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Dear god, there are so many opinions on how they interact, and that&#8217;s only among philosophers. I&#8217;m not even going to try to mention religion in this one, but bro. So many.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Also called non-dualists or weak illusionists</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This is related to anthropics, the Sleeping Beauty problem, and &#8220;the essential indexical&#8221; (which talks about how &#8220;I&#8221; isn&#8217;t the same for me as it is for you, but has too much of a focus on linguistics to be the argument I&#8217;m making here), but I think none of these capture what I&#8217;m talking about.</p></div></div>]]></content:encoded></item><item><title><![CDATA[That Sam Kriss Article About Rationalism, “Against Truth,” Sucks]]></title><description><![CDATA[The title, "against truth," is very accurate 
though]]></description><link>https://www.kylestar.net/p/that-sam-kriss-article-about-rationalism</link><guid isPermaLink="false">https://www.kylestar.net/p/that-sam-kriss-article-about-rationalism</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 29 Jul 2025 11:58:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7bcab646-2211-416c-b1f6-61925f3ddf11_1406x1284.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You can be an amazing writer and the most eloquent wordsmith, but if you literally just don&#8217;t understand what you&#8217;re talking about, there&#8217;s only so much you can do. </p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Sam Kriss&quot;,&quot;id&quot;:14289667,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e7a7673-bc18-4190-be35-81e29a4ba9e5_2980x3024.jpeg&quot;,&quot;uuid&quot;:&quot;326ea6c0-890d-4f92-b05e-04ecd8a44ba5&quot;}" data-component-name="MentionToDOM"></span> made a post titled <em><a href="https://samkriss.substack.com/p/the-law-that-can-be-named-is-not">The law that can be named is not the true law</a></em> about how modern anti-terror laws criminalize not just an action but even questioning the law itself. It starts by talking about the UK&#8217;s new Palestine Action law, which, because of UK law, makes it illegal to even express an opinion that might make other people support Palestine Action. He then weaves through similar and related cases involving Judaism, and ends with a case about Laurentius&#8239;Clung, a nihilistic Calvinist who insists all souls are damned. </p><p>People noticed that this &#8220;Laurentius Clung,&#8221; did not exist, and was in fact made up to make the story cooler. 
Rationalists, like Eliezer Yudkowsky, shot back, saying that mixing fiction into something that appears to be political commentary might make people believe the lie you put immediately after the facts.</p><p>Sam Kriss then shot back with a 6,000-word essay called <em><a href="https://samkriss.substack.com/p/against-truth">Against Truth</a></em> where he fires shots at people calling him a liar, Yudkowsky, AI bros, and utilitarians. I encourage you to read it so you can decide if my criticisms are fair, but here&#8217;s a summary:</p><ul><li><p>He fires some tongue-in-cheek shots saying his haters called him &#8220;&#8216;morally miscalibrated,&#8217; &#8216;morally repulsive,&#8217; &#8216;sadistic,&#8217; &#8216;operating in bad faith,&#8217; a &#8216;bottom-feeder,&#8217; a &#8216;grifter,&#8217; a &#8216;malicious actor,&#8217; both a &#8216;data hazard&#8217; and an &#8216;infohazard&#8217;.&#8221;</p></li><li><p>He then says tongue-in-cheek that Clung is a real person that he didn&#8217;t make up, from <em>The Reformation of the Sixteenth Century</em>. He then says there&#8217;s substantially more on Clung in Blaire G Smellowicz&#8217;s <em>Sodomites, Shepherds, and Fools: Minor Prophets of the Reformation</em>.</p></li><li><p>He then recounts a number of other false claims he&#8217;s made, like how he <a href="https://samkriss.substack.com/p/numb-at-the-spectator-summer-party">had sex with half the Tory front benches after the 2023 </a><em><a href="https://samkriss.substack.com/p/numb-at-the-spectator-summer-party">Spectator</a></em><a href="https://samkriss.substack.com/p/numb-at-the-spectator-summer-party"> summer party</a>, and says they were all true too.</p></li></ul><p>Alright, time out. We&#8217;re only a third of the way through, but the post is long, so I gotta be able to stop once in a while. First, this is pretty funny. 
His list of what his haters have called him is long and obviously tongue-in-cheek, but when I started to read his paragraph about how Clung is real, it reads straight! Like he&#8217;s not doing a bit and Clung is actually real! He makes some plausible citations, some reasonable claims about how you can&#8217;t find everything on the internet, and only then launches into ridiculous claims about how everything is all true in the next paragraph. Responding to your haters who say you mix fact and fiction by insisting you always tell objective truth and making it very hard to tell if you&#8217;re telling the truth is funny.</p><p>Then, for no reason, he launches into a sincere and heated critique of how much he dislikes Eliezer Yudkowsky, rationalism, utilitarianism, and everything Big Yud stands for, where he strawmans most of the points he attacks and generally does a terrible job of understanding any part of what he&#8217;s knocking down.</p><p>&#8230;</p><p>Before we talk about any of the actual points, first, damn, what the hell did Yudkowsky do to this guy? Yudkowsky is totally known for his snark, and despite being, in my opinion, very good at finding truth, he can indeed come off as super arrogant. Alright, what insane diss deserved a demolishing of everything Yudkowsky&#8217;s ever done? What devious rant did Yud launch into? 
Well, hold onto your butts, here&#8217;s the tweet:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KWgV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KWgV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 424w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 848w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 1272w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KWgV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png" width="1220" height="538" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:538,&quot;width&quot;:1220,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:117036,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/169521909?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KWgV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 424w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 848w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 1272w, https://substackcdn.com/image/fetch/$s_!KWgV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c27ff2c-d333-4f51-8d27-c480490de30f_1220x538.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Come on. That&#8217;s the most fair and most mild Yud critique I&#8217;ve ever seen. I&#8217;m actually impressed by how milquetoast that take is.</p><p>The second of three sections is critiquing rationalists, based off of that tweet which launched Sam into tizzy, and this section is not very good in my opinion:</p><ul><li><p>Rationalism is &#8220;mostly about living in the Bay Area, writing things like &#8216;fark&#8217; or &#8216;f@#k&#8217; instead of &#8216;fuck,&#8217; and having unappealing sex with your entire friend group.&#8221;</p></li><li><p>Sam&#8217;s take on truth, and trying to be generally correct? &#8220;I think the universe is not a collection of true facts; I think a good forty to fifty percent of it consists of lies, myths, ambiguities, ghosts, and chasms of meaning that are not ours to plumb. 
I think an accurate description of the universe will necessarily be shot through with lies, because everything that exists also partakes of unreality.&#8221;</p></li><li><p>In 2021, before ChatGPT ever came out, Anthropic thought AI assistants could be helpful to the world; AI is actually bad and ChatGPT is bad for society.</p></li><li><p>Harry Potter and the Methods of Rationality is bad, and the foundation of rationalist thought. He spends 5 paragraphs mocking how bad the writing, characters, and plot are, and it explains why rationalists believe AI will be big &#8212; &#8220;a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power&#8221;</p></li></ul><p>Okay, this is the bulk of the anti-truth stuff he gets into, and it&#8217;s just a bad argument. Sam is a great writer, but I don&#8217;t think the phrase &#8220;I think an accurate description of the universe will necessarily be shot through with lies, because everything that exists also partakes of unreality&#8221; MEANS anything. I think he&#8217;s a great writer who can dress up nonsense in fancy words so your brain doesn&#8217;t realize he ain&#8217;t saying squat. </p><p>Is Sam saying he&#8217;s a conspiracy theorist, and that if you believe in flat earth, that&#8217;s just as valid? Surely if I say climate change isn&#8217;t real or I say QAnon&#8217;s plight is completely true, Sam has some thoughts on how truth might mean something in some cases? Do you get a pass to believe false things if you just, like, really want to and can justify it? WHAT DOES THAT QUOTE MEAN GUYS. Is it just gesturing at some general spiritual course that nature takes, for only God&#8217;s eyes, far too mysterious for any human mind to even try to comprehend? 
Can I believe in ghosts only if I don&#8217;t have an opinion on how ghosts are polluting the atmosphere with CO2 and will only stop if the woke agenda is stopped?</p><p>The same instincts that draw me away from believing all that dumb shit are the same instincts that let me say &#8220;hey, maybe truth is actually, like, useful and real.&#8221; Snow is white.</p><p>His AI critique is that rationalists thought AI would be TOO GOOD? I don&#8217;t see many rationalists disagreeing that AI has done harm to society nowadays with stuff like making learning worse in schools. The Harry Potter fanfiction stuff is obviously not, in fact, the foundation of all of rationalism; it was a side project for fun, and spending 5 paragraphs on how badly it&#8217;s written seems like a poor use of time. His understanding of why rationalists are scared of AI, talking about infinite intelligence and infinite power, is not a good representation of why rationalists think AI could accelerate. The actual argument boils down to &#8220;AI is good at doing tasks fast. This is already obviously true; ask ChatGPT or an image generator or video generator anything. 
What if we use the fact that it&#8217;s good at tasks to help one specific task, helping make AI?&#8221;</p><p>&#8230;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>He ends the article by tackling utilitarians:</p><ul><li><p>Rationalists are utilitarians, which is weird, since philosophers at large aren&#8217;t utilitarians.</p></li><li><p>Utilitarianism can make unjustly torturing someone positive for society if many people get enjoyment out of it.</p></li><li><p>The Repugnant Conclusion pits a small number of very happy people against a large number of miserable people &#8212; utilitarianism chooses the latter.</p></li><li><p>If I want to save a drowning child, I have no way of knowing that the drowning child I pull out of a river isn&#8217;t Baby Hitler, so you shouldn&#8217;t save a drowning child if you see one (?). He says utilitarianism requires infinite knowledge (?)</p></li><li><p>Utilitarianism is &#8220;lifeless, brutal, reducing us all to preference maximisers, arrogant beyond belief, and utterly opposed to every principle of life and dignity.&#8221;</p></li></ul><p>He takes a final stab at rationalists by saying Roko&#8217;s basilisk is the idea of hell, because of quantum immortality, before ending the essay.</p><p>His point about utilitarianism thinking an action which hurts one can be good if it helps many is true. It&#8217;s a very odd complaint, since utilitarianism is the one moral philosophy most against torturing billions of animals for our own small satisfaction of eating them. If a society sacrificing one for many is bad, because &#8220;According to any sensible ethical system, we&#8217;ve entered the abyss,&#8221; wait until Sam finds out about real life where we do way worse than that for way less of a reason!</p><p>With the rest, he doesn&#8217;t even seem to understand what he&#8217;s talking about, at all. 
The Repugnant Conclusion only works if lives are worth living, not miserable, which is pretty damn fundamental to the thought experiment! On the Baby Hitler one, he just admits he wouldn&#8217;t save a drowning child he sees to Own The Utes&#8482;, which I guess if you don&#8217;t want to save drowning children that&#8217;s fair, but uhh. His point about requiring infinite knowledge shows he doesn&#8217;t understand utilitarianism or decision-making in philosophy in general, because you have to do the things you think are the best! That&#8217;s baked into any moral philosophy! And as for the Roko&#8217;s basilisk stuff that he ends the article on, I don&#8217;t think any rationalists I know believe in quantum immortality; that&#8217;s literally just&#8230; not what Roko&#8217;s basilisk is at all. Weak ending.</p><p>&#8230;</p><p>Sam Kriss is a very good writer. He&#8217;s funny, engaging, and the stories he weaves are interesting. He should&#8217;ve admitted that mixing truth and fiction might, maybe, possibly, be something that could lead to people thinking false things. Instead, he goes into a diatribe against the idea of truth, and presents the worst arguments of all time, proving he doesn&#8217;t understand any of the stuff he&#8217;s talking about. I could give him even less credit and say he&#8217;s purposely completely misrepresenting the arguments for entertainment, but that&#8217;s too on the nose even for me.</p><p>You&#8217;re allowed to be wrong; being &#8220;against truth&#8221; is an opinion someone can have. But surely if someone says &#8220;hey, all those arguments you made are literally just lies&#8221; you can&#8217;t be very mad, and you probably shouldn&#8217;t follow it up by making some seemingly sincere arguments that show a massive lack of understanding of the arguments themselves. If he&#8217;s &#8220;against truth&#8221; even when he&#8217;s being sincere, if he says &#8220;hah, I made all those strawmen on purpose! 
My readers should never care what I say even when I present myself as earnest! Mwahaha!&#8221; then sure man, congrats, I guess the title of your post was an evil scheme to lead people to believing incorrect things. Who cares if climate change is real, truth is just, like, fake, man.</p><p>Sam Kriss makes good art, not arguments. Keep reading him because he&#8217;s interesting. But the words he weaves are a curious and enjoyable lie, and a self-admitted one at that, and I&#8217;m now unsure if he really understands anything he talks about. If Sam is against Truth, then I will happily side with Truth herself.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FCA1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FCA1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 424w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 848w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 1272w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!FCA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png" width="1284" height="596" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:596,&quot;width&quot;:1284,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:125458,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/169521909?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FCA1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 424w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 848w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 1272w, https://substackcdn.com/image/fetch/$s_!FCA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1fa00e4-1943-4911-9bc3-815726f05fa1_1284x596.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Also, someone mistaking me for Sam Kriss in the thread about &#8220;Against Truth&#8221;??? This absolutely cannot stand! Sullying my name, I had to write this post to counter him now! 
Despicable!</figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">If you enjoyed this post, please subscribe. The more subscribers I have, the more I can write! Also, you&#8217;re on the side of Truth&#8482; or whatever, I don&#8217;t know. Clicky-clack the button!!!!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s a line here that says that rationalists&#8217; &#8220;politics run all the way from the furthest fringes of the far right to the furthest fringes of the liberal centre.&#8221; I&#8217;m not gonna mention this point again but it&#8217;s a really really good line I wanted to highlight lmao. This dude is funny.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Scott Alexander is Smarter Than Me. 
Should I Steal His Beliefs?]]></title><description><![CDATA[Arguments are overrated; you can find truth without them]]></description><link>https://www.kylestar.net/p/scott-alexander-is-smarter-than-me</link><guid isPermaLink="false">https://www.kylestar.net/p/scott-alexander-is-smarter-than-me</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Mon, 28 Jul 2025 12:05:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/84cc46c7-2545-43b0-909d-24dffd309e02_2048x1601.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s no secret I&#8217;m a <a href="https://www.astralcodexten.com/">Scott Alexander glazer</a>. Scott once said that he was an embarrassing fanboy of Eliezer Yudkowsky, and that it may be his fate to have embarrassing fanboys of his own one day. Well, the bell tolls. I find Scott to be consistently interesting and intelligent, and he has a way of connecting topics to one another that I&#8217;ve never seen from anyone else. He&#8217;s seemingly dedicated to truth more than anyone else I know.</p><p>So, he&#8217;s smarter than me, and a better thinker than me, and spent a lot of time on many different topics trying to find the truth. As the smartest guy I&#8217;ve found, should I steal all of his beliefs indiscriminately? Legitimately. If a person only cares about having the most correct beliefs, which I feel like is a reasonable goal, is finding the smartest person you know and stealing his beliefs a good idea?</p><h1>Down the rabbit hole</h1><p>Epistemology and epistemic practices are the study and methods of finding true things<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. 
For the vast majority of issues, in politics, religion, psychology, and philosophy, where Scott&#8217;s gotten his beliefs through <em>fantastic</em> epistemic practices and I&#8217;ve gotten my beliefs from random sources and friends and biases that I can&#8217;t remember, does it make sense to copy everything? </p><p>Well, it seems obvious that there are some issues I shouldn&#8217;t copy from him &#8212; if there&#8217;s an issue where I&#8217;m an expert and he is not, then I probably shouldn&#8217;t steal his beliefs. But for the rest, I can treat the very act of him having a belief as very strong Bayesian<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> evidence in favor, and update very strongly towards his belief, because I know that he&#8217;s spent a lot of time thinking about any given topic to attempt to reach the truth, and I know I&#8217;m more biased!</p><p>Alright, to back off from the Scott glazing, am I too preoccupied with having my own, interesting opinions? Should I change ALL my beliefs to what the smart people I respect think? Fully adopting someone else&#8217;s opinions without understanding them can be a dangerous game. Without understanding why someone believes X, you don&#8217;t actually know which way new evidence should swing, or how much. But I find it very easy to agree with the fact that if 99% of people just fully adopted Scott&#8217;s opinions, they would be much more <em>correct</em>. And if rationality and epistemology aren&#8217;t about being correct, what even are they about? They can still try to learn and understand different points, and when their expertise eclipsed Scott&#8217;s, they could place a heavier emphasis on their own points, but it&#8217;s <em>the point</em> of epistemology to be correct.</p><p>But of course, most people don&#8217;t think as highly of Scott as I do. 
Plenty of people are mirrors for the beliefs of a political figure, or even more commonly, their political side. If I could convince them to copy Scott&#8217;s epistemic gusto, then they&#8217;d be more correct, but they would need to think they should trust <em>me</em>, which brings a whole host of issues with it. I&#8217;m using my <em>own</em> shaky epistemic tools to decide that Scott is the guy I should copy, so if I think he&#8217;s really that much of a better choice than, say, Donald Trump, most of the work of stealing his beliefs is already over.</p><p>Eliezer has an old, relevant chestnut in <a href="https://www.lesswrong.com/posts/5yFRd3cjLpm3Nd6Di/argument-screens-off-authority">this post</a> that talks about how authority and argument are two very different types of evidence. It&#8217;s only if you <em>don&#8217;t understand</em> an argument, or don&#8217;t attempt to, that authority becomes relevant. If you fully grasp it, and judge it to be sound, then the speaker who said it becomes only a footnote. But I feel that I&#8217;m able to understand most of Scott&#8217;s posts, which is how I decided that he was such a good candidate for this thievery to begin with! It seems like if there were a hypothetical writer X, with even better epistemic practices than Scott, but X didn&#8217;t care enough to dumb down his arguments to something that heathens like myself can understand, I&#8217;m out of luck.</p><h1>Wait a second, Scott&#8217;s not a computer scientist or political scientist, he&#8217;s a writer</h1><p>Now&#8217;s about the time where I point out that as great as Scott is, he is first and foremost a <em>writer</em> before an expert in any of the fields he talks about (except psychiatry). His best skill is weaving words to make ideas interesting, which makes them stick in my brain. 
So I&#8217;m basically doing what the person considering becoming a mirror to Trump is doing: finding the most charismatic person I know, and stealing all his beliefs. It&#8217;s just that my definition of charismatic is poisoned by wanting to feel like I&#8217;m learning.</p><p>If someone said &#8220;hey, your posts are insightful, I&#8217;m going to blindly steal all your beliefs&#8221; I would implore them to steal Scott&#8217;s instead! But Scott himself is certainly not the smartest person that SCOTT knows, as he says here in a post where he admits he always struggled with math:</p><blockquote><p>Every so often an overly kind commenter here praises my intelligence &#8230; But at my level, I spend my time feeling intellectually inadequate compared to Scott&#8239;Aaronson.<br>Scott&#8239;Aaronson describes feeling <em>&#8216;in awe&#8217;</em> of Terence&#8239;Tao and frequently struggling to understand him.<br>Terence&#8239;Tao &#8211; well, I don&#8217;t know if he&#8217;s religious, but maybe he feels intellectually inadequate compared to&#8239;God.</p></blockquote><p>So wait a second, I&#8217;m at the bottom of this long totem pole of smartness. I&#8217;ve barely read anything from Aaronson or Tao! Okay, we&#8217;re on this truth hunt, let&#8217;s cut out the middleman and just steal Terence Tao&#8217;s beliefs. But Tao is private about most of his personal beliefs. As an example, YIMBYs are people who want to build more houses and have fewer housing regulations, and their naysayers are the dastardly NIMBYs &#8212; this is a niche political topic! What do I do when I want to have an opinion on YIMBYism or NIMBYism and I go up the totem chain and Tao has spent his time on math stuff, and doesn&#8217;t even have anything public about his thoughts on a niche topic?</p><p>Screw it, Tao isn&#8217;t a political scientist, why would I trust him there? Why don&#8217;t I just find a smart person in every field and steal their beliefs? 
I&#8217;ll find a smart political scientist, a smart philosopher, and a smart AI expert. But it becomes immediately obvious that I&#8217;m not smart enough in fields I don&#8217;t understand to determine who&#8217;s actually smart and who&#8217;s bullshitting. Let&#8217;s say I don&#8217;t understand any political science &#8212; how could I tell the difference between Curtis Yarvin and, uh, I guess literally any other substack political commentator? If I don&#8217;t even have a basic grasp of the material, I&#8217;m screwed. But actually, experts in a topic tend to be right when they have a consensus &#8212; on something like climate change existing vs. not, trusting the experts is a great heuristic. And if an issue is contentious, maybe trusting the majority is best? Once again, if I&#8217;m looking for a truth shortcut, the side with 60% seems better than the side with 40%!</p><p>So throw out the idea of finding one specific guy who&#8217;s really smart in a topic, let&#8217;s steal the expert consensus of everything. Except how am I supposed to know who the experts are? If everyone working in a field counts, you&#8217;re gonna get a lot of stupid people. Now we&#8217;re stuck between 1 person being too many and a consensus of everyone giving college dropout interns in the field a say. But it might all just average out?</p><p>Maybe I need a consensus of the top 5 experts in a field, chosen by experts in that field. But that&#8217;s stupid, the people are just gonna choose their friends.</p><p>Screw it, the experts can&#8217;t be trusted, just get a consensus of the people. Hah! Are you stupid? That&#8217;s way worse. Steve Kirsch has 250,000 substack subscribers! Everybody is stupid except me! Maybe I&#8217;m just uniquely suited to finding truth in the universe because I got a 1480 on the SAT.</p><p>Wait a second, a 1480 literally isn&#8217;t even that high. Smart people? Let&#8217;s poll smart people! 
What&#8217;s the consensus of everybody who&#8217;s got a 1600 on the SAT on every topic? They can&#8217;t go wrong. Where can I find this? Do I need to create a database and find out what the smartest people think to find truth?</p><p>This is all going wrong. All I want is a fast track to truth without having to understand the arguments for every single topic. Wait, maybe epistemology and Bayes&#8217; Rule are the problem. Who decided that the truth needed to be backed by statistics? Maybe the universe operates outside numbers. Religion! The majority of the US is religious! I&#8217;m not, but if we&#8217;re trusting the masses, what does God think? Err, what&#8217;s the consensus of people who talk to God about what God&#8217;s beliefs are? Does God think I should be a YIMBY? Maybe I operate in a privileged place in the universe, and epistemology doesn&#8217;t even work for me because of something something anthropic reasoning. What&#8217;s the best way to get God to speak to me about whether I should support Israel or Palestine? How can I find only the true prophets?</p><p>No! The prophets have no clear miracles! Why am I assuming other people are even real, or that my beliefs are real? What&#8217;s a belief anyway? If my conscious experience right now isn&#8217;t experiencing the belief, and it&#8217;s just somewhere else in my brain, am I experiencing it? What if solipsism is true? Maybe this is a religious test in a simulation or something to see if I can discover NIMBYism off of pure faith without any evidence. Can God create another God so powerful he cannot control him? Can I control my own brain? How can I be a YIMBY if the fabric of time travelling from the past into the future cannot even be independently verified? What if I&#8217;m a Boltzmann brain and statistics and reality and truth and time are gonna break down? I&#8217;ll give it 3&#8230; 2&#8230; 1&#8230;</p><p>RAHHHHHHH</p><h1>This is all ridiculous, right?</h1><p>Okay, back up. 
I hope you realize this is all insane. It&#8217;s really, really hard to get a fast track to truth without understanding the arguments. It&#8217;s true that a cursory understanding isn&#8217;t as good as a deep understanding, but it&#8217;s better than nothing. Scott Alexander is valuable <em>because</em> he explains the arguments so well. And, of course, trusting the experts is usually very good advice, and a good heuristic to have. If the experts all disagree with you, you&#8217;re probably wrong. On contentious issues, weigh the sides, the number of people for or against, and the arguments, and the world will mostly make sense. If many people have an incentive and the ability to position their false beliefs as the expert opinion, arguments and statistics are necessary. As a closer, I&#8217;m reminded of <a href="https://www.lesswrong.com/posts/YABJKJ3v97k9sbxwg/what-money-cannot-buy?utm_source=chatgpt.com">one of my favorite parables</a>, about smallpox:</p><p><em>Louis XV died of smallpox in 1774. He had all the power, and money, and resources in the world, yet he met his fate all the same; was he truly doomed? An inevitable and unavoidable tragedy?</em></p><p><em>Nay; three months before his death, a lowly dairy farmer across the Atlantic in the United States braced his family as smallpox ravaged his town. Luckily, it was folk wisdom that cowpox, a relatively mild affliction, made you completely immune to the much more devilish smallpox. He took his family to his cows and rubbed their pus on his arm, and they were saved from the terrible fate for the rest of their lives.</em></p><p><em>Louis XV&#8217;s fate was sealed not from a lack of resources, but from a lack of knowledge; there would have been no way for him to distinguish true knowledge from the snake oil salesmen and faith healers that surrounded him. 
The only path is true understanding.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">If you like my content, it would mean a lot if you subscribed. The more subscribers I get, the more time I get to spend writing. Thank you.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Alright, you got me, it&#8217;s the study of knowledge and is interested in the difference between true beliefs and justified beliefs, how they differ from opinion, etc etc. You&#8217;re damn right that I will abuse it in my article to mean &#8220;good at finding true things&#8221; because it can be a fancy way to say something that resembles that. Also, the word is fun to say. Ep-eh-stem-ick.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Bayes bayes bayes bayes bayes. if you say it 5 times it almost doesn&#8217;t sound like a word. This word is actually completely irrelevant in this sentence, and can be ignored. Also, I&#8217;m kinda totally abusing Bayes here, as you can&#8217;t be under or overconfident, but tuning your method to find truth is fine! 
But yes, fine, it&#8217;s &#8220;one of the most important equations of all time&#8221;. Yes, it&#8217;s &#8220;a mathematical way to explain how to accurately update beliefs so you can find truth&#8221;. I know, I know, I know. Scott&#8217;s bio actually is literally just Bayes&#8217; rule, so if I don&#8217;t include it because it&#8217;s so important I&#8217;m basically disrespecting Scott.</p></div></div>]]></content:encoded></item><item><title><![CDATA[You Should Just Grade Morality on a Curve]]></title><description><![CDATA[Calling morality &#8220;too demanding&#8221; is missing the point of what morality is]]></description><link>https://www.kylestar.net/p/come-on-just-grade-morality-on-a</link><guid isPermaLink="false">https://www.kylestar.net/p/come-on-just-grade-morality-on-a</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Wed, 23 Jul 2025 12:05:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c2fd5093-820c-456a-a59a-25c65ab73698_900x600.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Many moralities are on the hunt for the fabled &#8220;moral obligation.&#8221; They see morality as a list of &#8220;stuff you should probably do&#8221; and when you fulfill your list of moral obligations, checking all the boxes, you&#8217;re set free to focus on whatever else you have to balance in your daily life.</p><p>Religions believe this, with their Ten Commandments, and secular moralities believe this, with their &#8220;do not lie, do not kill&#8221; credos that then set you free to focus on your passions.</p><p>I think this hunt for an obligation is a bad way of looking at morality, and does not follow from the moral facts that the universe actually rests on.</p><p>&#8230;</p><p>So these other moralities aren&#8217;t based off of how much good you actually do. 
What&#8217;s weird about these other moralities is that, if presented with a scenario where the option that would keep you morally perfect would lead to worse results, they still <em>hope for the best, good option</em>. If someone says you shouldn&#8217;t pull a lever to kill one person and save five, they still hope a rock falls from the sky and hits the lever to change the track anyway. So by banning actions that will, in some cases, make everyone&#8217;s lives better<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, they prioritize the natural state the world is already in as somehow mattering over the conscious experiences of the people in it. I find this to be an odd worship of &#8220;the way things already are&#8221; over the actual preferred worlds that we all want to live in.</p><p>That consequence, and the idea of a &#8220;moral obligation&#8221;, can lead you to a belief that resembles what Nietzsche called <em>slave morality</em>. The idea that what you must do to be perfectly moral is ensmallen yourself: don&#8217;t lie, don&#8217;t cheat, don&#8217;t murder, and so on. You may notice that by these credos, a corpse is morally perfect. <a href="https://www.astralcodexten.com/p/matt-yglesias-considered-as-the-nietzschean">Slave morality is goals for dead people.</a></p><p>Most insidiously, if we try to put a moral obligation like &#8220;save someone when they&#8217;re drowning&#8221; into this belief system, the morally optimal thing to do is to never go near rivers or lakes, because if you don&#8217;t see a drowning child, you have no obligation to save them. 
If a firefighter saves 10 children but cannot muster the strength to save the last one, he is more &#8220;morally tarnished&#8221; for letting someone die compared to me, who sat on my ass watching YouTube all day<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>Enter utilitarianism, which praises the firefighter, and says that saving ten children is better than saving one, or saving none. Utilitarianism is focused on outcomes, and what outcomes are best for people. That&#8217;s all utilitarianism says; the trivial claim that saving 1 more person would always be better. Even if 1,000 children have already been saved, the 1,001st is a real person who matters morally. But this is where its critics come in with the most common complaint of utilitarianism: if you&#8217;re trained to think of the world in terms of moral obligations, then the trivial claim that &#8220;saving more people is better&#8221; might lead you to say &#8220;wait, if you&#8217;re morally obligated to save people even indirectly, aren&#8217;t you morally obligated to save <em>all</em> the people?&#8221; Called &#8220;the demandingness objection&#8221;, these critics who can only see the world in terms of obligations say utilitarianism must not be true, because admitting that saving two people is morally better than one obligates you to give 100% of your money away and attempt to save the world.</p><p>But this is absurd! Utilitarianism doesn&#8217;t say any of that shit! It just says that it&#8217;s BETTER to do more good than less good. This framework of moral obligations is a bad one, no matter how instinctive to our goal-and-deadline brains it looks. 
Instead, just as saving two people is better than saving one, I propose another way to look at morality: being more moral is better than being less moral.</p><p>If your actions purposely<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> cause many people to be saved, I would propose that that is better than if your actions cause fewer people to be saved. If you save many people from being tortured, that is better than if you save fewer people from being tortured. We can grade humans on a curve: the most moral people are better than the less moral people. It&#8217;s alright if your actions don&#8217;t save a trillion people single-handedly, because expectations aren&#8217;t even built in &#8212; you should just do more good to be more moral, if you care about being moral.</p><p>I&#8217;d like this to be uncoupled from the FEELING of being a good person, or being moral. If you donate to a charity that saves one person, but you get an immense ego boost and feeling of virtue and goodness in your heart because you get to see the person you saved, that is still less moral than donating to a charity that saves 1,000 people but you don&#8217;t get as much of an ego boost from &#8220;being a good person&#8221; (Of course, both are much better than not donating at all). It&#8217;s a shame our feelings of virtue are divorced from the good that they cause (evil people still see themselves as the hero), but moral facts don&#8217;t care about your feelings! It is still true that saving 1,000 people is better than saving one!</p><p>Our human intuitions are biased. 
If someone saves 1,000 children by donating to charity, but is generally an asshole and bumps into people on purpose on the sidewalk, sure that&#8217;s really fuckin weird, but his presence has <em>made the world a much better place.</em> I would like to call him more &#8220;moral&#8221; than an average Joe who did none of those things, because he has directly affected the world in a more positive way than Joe did by doing squat. I think you can offset the moral bad you do by causing a vast amount of good; as <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Travis Talks&quot;,&quot;id&quot;:28576402,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/854a6ebc-11a5-444c-a6b5-b93ddd139738_420x420.webp&quot;,&quot;uuid&quot;:&quot;e21fe4d3-557b-4949-865b-d845830d77c3&quot;}" data-component-name="MentionToDOM"></span> says in his great article <a href="https://travistalks.substack.com/p/on-moral-offsetting">On Moral Offsetting</a>, anti-offsetters place morality on utterly trivial details, like how the bad occurs, instead of what&#8217;s obviously morally paramount &#8212; the bad itself, happening to real conscious individuals! If you do something bad, it&#8217;s not moral to wallow in guilt; morality wants you to pull yourself up by your bootstraps, try to not do the bad thing again, and get out there and do something GOOD. 
Maybe even more good than the bad thing you did!</p><p>This naturally implies the more power you have, the further you can go up on the curve, because you can make the world a much better place; this should be intuitive, as with great power comes great responsibility.</p><p>The reason I think this decoupling of &#8220;moral&#8221; from &#8220;good-person, kindhearted, virtuous&#8221; or whatever is important is that it <em>heavily incentivizes someone to make the world a better place</em>, causing more good in the world<em>.</em> I think it matters if a person consciously made the world a much better place than the person next to them, and they should be praised for that. In this strange world we live in, charity is <a href="https://www.givewell.org/">much more effectiv</a>e, by an order of magnitude, than almost anything else you can do to improve the world. This is a weird fact that doesn&#8217;t make much sense with our tribal intuitions built for strong, close bonds, where evolution never ever had to consider humans halfway around the globe.</p><p>But once again! Moral facts don&#8217;t care about your feelings! If donating to charity makes the world a 10x better place than something else that <em>sounds</em> more virtuous, I would like the thing that makes the world a better place to be more incentivized. And I would like to heap praise on people who do this. Think of the incentives! If people bragged about making the world a better place directly, instead of appealing to some wishy-washy caveman-brain-activated abstract virtue, the world would be an incredible place to live. And while it&#8217;s true that saving 1,001 kids is better than saving 1,000, that&#8217;s literally just a fact; I would like to praise the man saving 1,000 over the man sitting on his ass. A leprechaun will not pop out after you save 1,000 kids and say &#8220;you did it! The last kid you were obligated to save! 
Now you&#8217;re morally perfect!&#8221; But what I hope happens is that you get praised for being a better person than the people next to you.</p><p>If you do more good than the person next to you, you don&#8217;t need to save a trillion people. Don&#8217;t think in terms of obligations; try to do what you can to improve all our lives in the best way you can find.</p><p>Grade morality on a curve. Make the world a better place to live in.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>Liking is the best way to support mwah, Kyle Star, as I rage against the Substack algorithm. Feel free to subscribe if you haven&#8217;t already. The more I grow, the more time I can spend writing.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Some people might complain here that in the trolley problem, the guy on the one-person track isn&#8217;t hoping for death. This is true. But you don&#8217;t need to go very far to find many scenarios where everyone is worse off from deontology. One benign example is a white lie, where you want to lie and the other party wants to be lied to. 
Some deontologists might find no fault with this, but that doesn&#8217;t spare them from being prone to issues like refusing to sacrifice 1 person to save 1,000,000,000, which I think even the 1 person would understand.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Some may say &#8220;can&#8217;t you say that you&#8217;re morally obligated to donate a little to charity?&#8221; That sentence is extremely vague about what &#8220;little&#8221; means, and I think looking at morality through absolutes is a terrible way to look at it, whatever those absolutes say &#8212; as the rest of the post will explain, it&#8217;s better to compare people against one another and say &#8220;you&#8217;re amazing for donating so much to charity&#8221; than to set an arbitrary goal.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The word &#8220;purposely&#8221; is crucial here. Some may appeal to the butterfly effect, where the flap of a butterfly&#8217;s wings could cause 1,000 tornados. This is true, but irrelevant, as completely random effects don&#8217;t matter for morality. Now, if someone ignores risks and is negligent, that&#8217;s obviously relevant (say there&#8217;s a 10% chance of saving 1 person and a 90% chance of killing 10; that&#8217;s a bad bet), but if you earnestly try to cause a vast amount of good and it happens to go wrong in a completely unforeseen way, then you&#8217;re not negligent and you&#8217;re morally in the clear.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Is All of Human Progress For Nothing?]]></title><description><![CDATA[The Hedonistic Treadmill is the Boogeyman.
It is evil incarnate.]]></description><link>https://www.kylestar.net/p/is-all-of-human-progress-for-nothing</link><guid isPermaLink="false">https://www.kylestar.net/p/is-all-of-human-progress-for-nothing</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Sun, 20 Jul 2025 21:33:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1b45bf21-30f6-4875-a84f-8c763b0e004c_680x513.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are humans happier today than they were 10,000 years ago?</p><p>I repeat:</p><p><em>Are humans happier today than they were 10,000 years ago?</em></p><p>I don&#8217;t care about <em>stuff</em>. Cathedrals, medicine, beds, stadiums, cities, whatever. I care about people. The conscious beings whose reflections of reality, mirrored in the mind, are the only thing that can ever be felt in this world. Does the stuff make people&#8217;s lives better? Are they more fulfilled? Are they happier? Does it make it easier to see their loved ones for longer? Does it remove that ache in their back that they&#8217;re forced to think about every day of their lives? I care about <em>people</em> and their mental states.</p><p>The hedonistic treadmill is the theory that there&#8217;s a baseline happiness we return to. That after positive or negative events, people return to a &#8220;set point&#8221; of well-being. This makes sense. Weaker versions simply say that if you&#8217;re happy because of something, say you bought a car, eventually it&#8217;ll slip from exciting to normal and you won&#8217;t get a very big kick. If you get a promotion, eventually you&#8217;re going to be seeking your next promotion instead of riding the wave of how great your previous one was. This all seems obviously true.
This post is about the &#8220;positive events&#8221; side of the treadmill.</p><p>Because the hedonistic treadmill is the boogeyman; it is evil incarnate.</p><p>&#8230;</p><p>The hedonistic treadmill terrifies me. It should terrify you too. It is the boogeyman to beget all boogeymen; the source of all limits in humanity&#8217;s condition. It is the monster under your bed and the demon lurking behind your door. I want to show you this, too.</p><p>The treadmill consumes everything we sacrifice to it: we have so much more stuff today compared to when we were cavemen that it&#8217;s insane. We have solved nearly every major issue in the entirety of living for a huge chunk of the world. We, the average person in the western world, are richer than kings. And yet there&#8217;s one problem we haven&#8217;t removed, the one deep in our psychology, the most foundational one that consumes the use of all the others, the one that&#8217;s always craving.</p><p>You have clean water. You have shelter. You have people who love you. You have a global information network that gives you instant access to any other human on Earth. It&#8217;s not enough. Of course it&#8217;s not enough; are you kidding? It is built into your very being that nothing CAN ever be enough. We&#8217;ll keep taking and taking and taking until we die, unsatisfied, still yearning for more. More, more, more.</p><p>Alright, before I&#8217;m accused of not being fair, let&#8217;s admit a few victories that we really have banked; there are some extremely important ways humans in the world are likely better off than they were 10,000 years ago, actually. There are some aspects to all this that the boogeyman can&#8217;t reach. Say, the fact that people&#8217;s legs get broken less, and they&#8217;re stabbed less. Physical pain&#8217;s baseline seems to be zero &#8212; if you have none, you have none. The less physical pain, from aches and sores and wounds and cuts, the better.
And modern society sure seems like it has less of this; I don&#8217;t have any studies for you<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, but if you&#8217;re fighting bears all day vs doing a 9 to 5, I hope this is obvious.</p><p>It&#8217;s also true that life satisfaction in Finland or New Zealand is higher than in war-torn regions today, and probably higher than among tribal humans. Wealth is generally correlated with happiness. But this slope flattens within each society once basic needs are met, which&#8230; is dumb. The <a href="https://www.tutor2u.net/economics/topics/easterlin-paradox#:~:text=The%20Easterlin%20Paradox%20refers%20to,life%20satisfaction%20among%20its%20citizens.">Easterlin Paradox</a> (awesome name) notices that within any given country, happiness stagnates once GDP per capita passes ~$20k. The baked-in hardware in your brain is designed to look for problems, and worry about things. What happens when we&#8217;re past the point where worrying is helpful? Well, of course, we keep worrying because evolution assumes there&#8217;s bound to be <em>some</em> wolf in the trees while you pick berries.</p><p>Also, this post is about humans and human progress, but there are a lot of uncomfortable conversations to be had about animals, who are not spared from this physical pain as we are, and the sheer number of them we inflict this physical pain on. I care about beings if they&#8217;re conscious; I care if there&#8217;s something it&#8217;s like to suffer inside their head, and it would be hubris to say existence is only for humanity. I encourage you to read <a href="https://starlog.substack.com/p/capitalism-effective-altruism-and">this post</a> I made that talks about how this human progress can have nightmarish implications for trillions of animals, who we push out of view to not think about.</p><p>But back to our human neural shackles: the boogeyman can&#8217;t reach physical pain.
But the rest of existence? Oh, the rest.</p><p>I just don&#8217;t understand how people can defend this emotional limiter, built into our skulls. A billion years ago, evolution realized that it needed to incentivize people to do things to have us live longer, so it handicapped our emotions to never stop us from trying to survive and finding someone to have sex with or whatever.</p><p>I feel like people don&#8217;t understand how much the <em>point of all of human progress </em>is predicated on the questions &#8220;Does a billionaire have a better life than a layman? Does a layman have a better life than someone poor? Does someone poor have a better life than a hunter-gatherer?&#8221; Indeed, &#8220;Are humans happier today than 10,000&#8239;years ago?&#8221; The answer to these questions, by your definition of &#8220;better,&#8221; must be unequivocally yes, or else you, too, fear the boogeyman.</p><p>For all of human history, humans have clutched their pearls, trying to make themselves better off in the only way they know how. Painkillers for those in pain. Houses for the homeless. Love for the unloved. We build skyscrapers for our egos, and dams for our populace. And then you have the audacity to turn to me, and say &#8220;pain is what makes life worth living.&#8221;</p><p>Cope.</p><p>If that&#8217;s true, then it was all for <em>nothing</em>. All the reaching up through the mountains of corpses, hoping, praying for something better, praying to be saved, and then you turn around and tell me that no, we actually don&#8217;t care if there&#8217;s an evolutionary quirk that makes it so we can&#8217;t properly enjoy the full weight of anything in all of existence. That the success of everything we build fades away &#8212; the whole purpose of the cathedrals, medicine, beds, stadiums, and cities.
You turn to me, and tell me that trying to permanently win over the shackles of the blind-idiot-God that led us here to reproduce and nothing else is spitting in the face of some &#8220;point.&#8221; You turn to me, and tell me that the illusion of progress in the mind, one that fades permanently, is greater than progress in the mind itself. You turn to me, and say what doesn&#8217;t matter is the human beings, doomed to be unsatisfied in the presence of mountains of gold; the real thing that matters is the factories of smog that we have built. You say the stuff is more important than people. I reject you.</p><p>&#8230;</p><p>There&#8217;s a thought experiment called &#8220;Nozick&#8217;s Experience Machine&#8221; where you go into a box and experience unending fulfillment forever. I don&#8217;t understand why I would need to stay in the box; I would just leave after they kill the boogeyman. I would do anything that let me escape from this demon under my bed who&#8217;s scaling back the love I hold for my mother, limiting the unbridled joy for the flowers. The sensation of skin from a lover. The success from a job well done. This post isn&#8217;t about some trivial, superficial &#8220;pleasure,&#8221; it&#8217;s about all that&#8217;s good and all the purpose in a human&#8217;s entire life &#8212; your friends, your family, your lover, your sense of being, your quiet mornings, your tears that mean &#8220;I&#8217;m alive.&#8221;</p><p>It all fades away. The demon tells me I must always be wary; don&#8217;t trust the statistics, the criminals may be outside your door now. Don&#8217;t trust your friends, they may be hiding a spear behind their backs. Get mad on social media, obsess over a stranger&#8217;s vacation reel, fret that a package is one day late, argue about a show you don&#8217;t even watch. Just never, ever be content. 
The demon says this because a million years ago, humans huddled in caves, and the demon needed to put a carrot on a stick to lure them out to find food. This dissatisfaction within me, the deep feeling of unease, is a curse bestowed on me by a blind God who I have just nearly outrun.</p><p>This is my solution to the Fermi Paradox; this is why there are no aliens in the stars above us. The aliens weaved past the rocks, the spears, the guns, and then the atomic bombs, and when they established an uneasy peace, they used their technology to look inward. What they saw was a mind rigged to never be satisfied, rigged to be wary of every neighbor, rigged to find something to hate no matter the circumstances. And they killed this part of themselves, saw the truth and love of the world that surrounded them, and they found they were exactly where they always wanted to be.</p><p>The answer to the question in the title of the post is a resounding no. Because human progress is closer than we&#8217;ve ever been to killing the Boogeyman. If AI keeps getting better and figures it out, great. If it halts and we have to rely on professionals, sure. Just being close to killing him doesn&#8217;t mean that he&#8217;s not here today, just as alive as ever. And if we can&#8217;t put this final stake into his heart, then it really will all be for nothing.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>Liking is the best way to support me, because it helps more people see my post. Cursed to be at the mercy of Substack&#8217;s algorithm. 
If you&#8217;re new here, I&#8217;d appreciate you subscribe for free, or if you don&#8217;t want to yet, feel free to view some of my other good posts <a href="https://starlog.substack.com/p/just-because-theyre-annoying-doesnt">here</a> and <a href="https://starlog.substack.com/p/please-just-answer-the-goddamn-moral">here</a> before making a decision.</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Alright, yes I do have a study for you, <a href="https://www.bls.gov/opub/btn/volume-9/nearly-50-years-of-occupational-safety-and-health-data.htm?utm_source=chatgpt.com">here you go</a>. It&#8217;s not because I like you or anything.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>By the way, this probably isn&#8217;t the solution to the Fermi paradox, just because if this was the great filter it would need to filter out 99.99% of all societies, which I doubt. But I do believe that aliens have definitely done and should do this. 
Also I wanted the ending to be hype so uhhhh ya got me, ya got me it&#8217;s true people.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Morality Tier List]]></title><description><![CDATA[You WILL NOT BELIEVE what brand of deontology hits B tier!]]></description><link>https://www.kylestar.net/p/morality-tier-list</link><guid isPermaLink="false">https://www.kylestar.net/p/morality-tier-list</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Thu, 17 Jul 2025 12:05:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8babb771-84b6-4144-8ea6-8c0d22f852d2_1476x1084.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every day I wake up, open this dumb app, and have to read all these well thought out moral posts by people with different moral viewpoints than me. Boring! Stupid! Everybody knows they should just put me in charge of morality, as I&#8217;ve figured out all the correct moral axioms.</p><p>But anyway before I&#8217;m made the Czar of Morality I guess I&#8217;ll talk about which moral viewpoints I kinda buy and which I sure don&#8217;t. And I suppose I&#8217;ll shout out some of the best of these different-minded Substackers who might be pretty intelligent and maybe more trained in philosophy than I (don&#8217;t get a big head about it).</p><p>Move over, well-thought-out-philosophy-papers-with-a-clear-focus, we&#8217;re doin a tier list.</p><ul><li><p><strong>F Tier</strong>: Virtue Ethics &#8212; <em>Cultivate virtuous character traits</em></p></li></ul><p>Virtue ethics, <a href="https://survey2020.philpeople.org/survey/results/all">literally the most popular position</a> as polled from all philosophers, gets F tier. Sorry, sucks to suck. The main question you may have after reading the name: ok, so how do you select which virtues? 
I regret to inform you it&#8217;s pretty much vibes-based off of ill-defined terms like &#8220;human flourishing.&#8221;</p><p>The idea to focus on <em>character</em> instead of choices or outcomes like the other two moral fighters in the ring do is a genuinely cool idea that sounds like it could work &#8212; but there&#8217;s just no principled way to pick the virtues or figure out what &#8220;pure virtue&#8221; is. It&#8217;s so unprincipled it&#8217;s difficult to find what the theory ACTUALLY advocates for in a logical way, besides &#8220;the stuff you think is good&#8221; which is unhelpful if I want to know which good stuff I should focus on. How do I balance multiple virtues? No good answers are given.</p><p>There are good arguments you should <em>act</em> like a virtue ethicist, but they ground this action in a serious moral philosophy which does things like &#8220;tell you what to do&#8221;. </p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Joseph Rahi&quot;,&quot;id&quot;:77849120,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d929367-a6c6-499d-90f8-1a441859f00f_1440x1920.jpeg&quot;,&quot;uuid&quot;:&quot;850a5038-50c3-4d91-97f6-8f44dfa25b9b&quot;}" data-component-name="MentionToDOM"></span> gives <a href="https://thinkstrangethoughts.substack.com/p/virtue-ethics-has-entered-the-chat">the best defense I&#8217;ve ever seen of it here</a>, though, about how virtues are about kindness and community. I recommend it. Handsome guy, too.</p><ul><li><p><strong>D Tier</strong>: Prioritarianism &#8212; <em>Maximize welfare w/ extra moral weight for the worst off</em></p></li></ul><p>Utilitarianism, but we changed the utility optimizing to weight a small, random part of it, with no reason at all supplied except for dodging one bad vibes problem of regular utilitarianism where one guy can suck up all the happiness like a utility vampire. 
I feel like you have to justify major weighting changes in utilitarianism with more than just &#8220;it makes some outcomes look slightly less bad.&#8221;</p><ul><li><p><strong>D Tier</strong>: Contractualism &#8212; <em>Principles no one could reasonably reject lead to equal basic liberties</em></p></li></ul><p>First imagine perfectly rational beings deciding from behind a veil of ignorance what they want the world to look like when they don&#8217;t know which conscious being they&#8217;re gonna be. Then, imagine a philosopher saying &#8220;these perfectly rational beings would subscribe to my dumb brand of deontology therefore I&#8217;m right lmao&#8221;. Rawls says equal basic liberties and prosperity are the two things these beings would want (in order), instead of, oh I don&#8217;t know, wanting the average life on the planet to be the best it could possibly be.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Daniel Mu&#241;oz&quot;,&quot;id&quot;:63039745,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7cf94bc9-5cb0-40a9-9afe-6378db2c402c_1336x1336.jpeg&quot;,&quot;uuid&quot;:&quot;6a9ee855-064e-48fc-a2f2-4511be31688d&quot;}" data-component-name="MentionToDOM"></span> isn&#8217;t necessarily a contractualist; he&#8217;s some other amalgamation of agent-centered and waivable-duties deontology and has critiqued contractualism but uhhh this is where I&#8217;m gonna recommend him because I didn&#8217;t have another space and he&#8217;s also a handsome guy with some fun viewpoints.</p><p>And what&#8217;s that? 
I wrote out that entire paragraph above critiquing contractualism and only found out after that <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Silas Abrahamsen&quot;,&quot;id&quot;:95786846,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af561d63-53e8-47df-adad-eeea822ea67d_2000x2000.png&quot;,&quot;uuid&quot;:&quot;261a0965-ad88-400c-8706-dcdfef52b445&quot;}" data-component-name="MentionToDOM"></span> made <a href="https://wonderandaporia.substack.com/p/rawls-should-be-a-utilitarian?utm_source=share&amp;utm_medium=android&amp;r=1l11lq&amp;triedRedirect=true">pretty much this exact post</a>? Gee golly I guess great minds think alike.</p><ul><li><p><strong>C Tier</strong>: Kantian Deontology &#8212; <em>Act only on moral beliefs you can universalize while respecting rational agents.</em></p></li></ul><p>Kant is a guy who tried to think up some really sick, interesting, and internally consistent rules out of pure logic, which, respect brother. The only issue is the perfectly-logical rules he cooked up endorse the dumbest moral actions known to man. I&#8217;ve looked at the actual logic and I really believe the &#8220;cannot lie to a murderer about where your friend is hiding&#8221; thing is actually a 100% necessary feature of his morality, which doesn&#8217;t seem great Bob. 
Kant &gt; Rawls&#8217; contractualism because Kant&#8217;s beliefs actually kinda naturally follow from his premises.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Florence Bacus&quot;,&quot;id&quot;:362113892,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5960331a-824d-440b-9b66-7817a929aa3e_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;dff387ba-b989-4a1f-b049-75773e1a8c74&quot;}" data-component-name="MentionToDOM"></span> reps <a href="https://morallawwithin.substack.com/p/against-intuitionism">for this position here</a>, subscribe to them. I made <a href="https://starlog.substack.com/p/intuitionism-is-quite-mandatory-in">a big long counter to them and Kant yesterday</a> where I expand on all the points above. Sadly I&#8217;m not even allowed to be jealous of how Florence has gotten more subscribers even faster than me because of how good their articles are, boooooo.</p><ul><li><p><strong>C Tier</strong>:  Preference Utilitarianism &#8212; <em>Maximize the informed preferences of agents</em></p></li></ul><p>The most popular version of utilitarianism, based off preferences, is actually mid. It has no way to sort out bad actors with dumb ideas (guys let&#8217;s all torture this one guy, if the majority of us want to it&#8217;s optimal), stupid mfers (guys it&#8217;s morally optimal for me to eat all these bees), or people having a mental breakdown (let me out of the hospital plz). Don&#8217;t get me started on animals.</p><p>The above isn&#8217;t very fair to preference utilitarianism, because the intent is to avoid <em>irrational</em> priorities. But the only way to get around this is by stipulating some sort of perfect &#8220;ideal-preference&#8221; where we only count what people WOULD prioritize if they were fully informed, rational, and free of bias. 
I fully believe these abstract fully rational beings would unironically just endorse a better version of utilitarianism, so this stipulation destroys the entire point.</p><ul><li><p><strong>B Tier</strong>: Divine Command Theory &#8212; <em>An act is moral because God commands it as good.</em></p></li></ul><p>Hear me out guys hear me out ok. Divine Command Theory makes so much sense. I firmly believe a world where religion is right is the world where deontology makes the most sense, by far. The source of morality in a world where God produces a literal book that tells you what the good virtues are should be THE BOOK. THE UNIVERSE HAS A HOW-TO GUIDE, GUYS. Or at least whatever God himself thinks is moral. Try to match that instead of playing some wishy-washy vibes-based religious justification for doing stuff you already wanted to do.</p><p>This is also why if Christianity is correct, Christian literalists make way more sense than the contextualists who treat the Bible as a big metaphor. Now it&#8217;s getting spicy.</p><p>Anyway if you&#8217;re not religious (I&#8217;m not) this one is useless but I firmly believe Christians should roll with this and it gets lots of internal consistency points. Sometimes I feel insane when religious philosophers try to &#8220;play the game&#8221; and build up moral axioms atheists could agree with. No, man. If you&#8217;re in a religion sourced from a book that tells you what to do, that has <em>such</em> massive implications for morality that there&#8217;s no bridge. And nothing an atheist philosopher says should change your mind away from Divine Command Theory either, unless they challenge your religious foundations themselves!</p><ul><li><p><strong>A Tier</strong>: Strong Negative Utilitarianism &#8212; <em>Reduce suffering above all else</em></p></li></ul><p>Erm, guys, this one&#8217;s a little dark? Nah this is easily the most internally consistent and actually respectable position among edgelords who believe life is suffering.
So, so much better than nihilism or miserablism or <a href="https://starlog.substack.com/p/creating-life-is-bad-except-for-antinatalists">the modern antinatalism movement</a> (click on that link, I hear from a friend that that article is sick). &#8220;Suffering is really bad, guys&#8221; is a true slogan to have, and yeah, Hell seems like a bad place to be.</p><p>Subscribe to <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Beetle&quot;,&quot;id&quot;:173031807,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48a917f8-cecc-4800-8694-b79b43ea45c1_1720x1720.png&quot;,&quot;uuid&quot;:&quot;fd3cd9a4-472a-49d5-8e36-25358e736f94&quot;}" data-component-name="MentionToDOM"></span>, NOW. Strong negative utilitarians are rare but I respect the commitment to one principle, especially when that principle is indeed what I think matters most in the world.</p><ul><li><p><strong>S+++ Tier</strong>: Hedonistic Utilitarianism &#8212; <em>Maximize good experience; minimize bad experience</em></p></li></ul><p>It&#8217;s beautiful. It&#8217;s clean. It&#8217;s sexy. It gets the most important aspect right: good things are good, and we should make more good things. Who gives a shit what those pesky humans think, they WILL be fulfilled and loved and hopeful and experience the greatest aspects of existence against their will.</p><p>It focuses on conscious beings as the moral endpoint of the universe, cares about animals, and says the goodness acted upon our conscious experience matters more than the random circumstances of the rocks and silt and dead concrete that surround us. S+++ inject it into my veins.</p><p><em>I&#8217;m</em> the Substack rep for this one &#8212; oh, wow, the Substack rep for the best moral position ever?
You have to subscribe, right?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>Make sure to LIKE and COMMENT!!!1!11!! If I hit my goal of 1 like I&#8217;ll make a Meta-Ethical Positions Tier List and beef with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Joe James&quot;,&quot;id&quot;:1726744,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee6aeba9-bb52-4269-9e1c-0f629e66765e_2351x2351.jpeg&quot;,&quot;uuid&quot;:&quot;9cecda82-c80c-4257-9479-73c579e52449&quot;}" data-component-name="MentionToDOM"></span> about how guys morality is real and I&#8217;m not just coping I swear. Subscribe to him too.</p><p>By the way, shill for your beautiful moral viewpoint and Substack in the comments if you want, self promo is encouraged on this post only.</p>]]></content:encoded></item><item><title><![CDATA[‘Intuitionism’ is Quite Mandatory in Morality]]></title><description><![CDATA[Response to a Constructive Kantian's great post "Against Intuitionism"]]></description><link>https://www.kylestar.net/p/intuitionism-is-quite-mandatory-in</link><guid isPermaLink="false">https://www.kylestar.net/p/intuitionism-is-quite-mandatory-in</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 15 Jul 2025 13:47:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1a750671-8b4d-4dca-b9b3-1d5c16a677a2_486x344.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a famous, mocking idea of Kant as &#8220;the philosopher who wouldn&#8217;t lie to a murderer who asks where his friend is hiding.&#8221; He really did believe 
this, which was a true joy for me to find<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. Later philosophers inheriting his ideas predictably found this to be &#8220;not great&#8221; and introduced many new takes on his work which produce more agreeable actions to take. I think the majority of these alterations are a mistake; they completely ruin what makes Kant&#8217;s ideas so appealing. I&#8217;ll explain why Kant has such a fun moral theory on his hands, why the patchwork these philosophers apply is doomed (and I&#8217;ll counter one patchwork specifically), and why Kant&#8217;s ideas are still very, very wrong when it comes to the true moral fabric of the universe.</p><h1>The Way of Kant</h1><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Florence Bacus&quot;,&quot;id&quot;:362113892,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5960331a-824d-440b-9b66-7817a929aa3e_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;cbe1a927-ecb5-4f49-984b-6e533a05337f&quot;}" data-component-name="MentionToDOM"></span>&#8217;s piece <em><a href="https://morallawwithin.substack.com/p/against-intuitionism">Against Intuitionism</a></em> is the best explanation and defense of the philosophical position of this &#8220;Kantian Constructivism&#8221; that I&#8217;ve ever read. Unfortunately, I disagree with most of the points, and feel they claim too much based off too little. 
I highly recommend you read it right now, but a great starting place is to summarize their arguments in the most plain-English way we can manage:</p><ul><li><p>&#8220;Intuitionism,&#8221; where philosophers juggle thought experiments against principles<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> with intuition until it looks right, can&#8217;t be the ultimate source of moral judgement. If intuitions are biased, we&#8217;ll end up with a false theory when we incorporate those biases into our theory.</p></li><li><p>Noticing that people could disagree on morality while being rational, they redefine the term &#8220;moral obligation&#8221; to be a decision that every rational agent could discover purely from reasoning, no intuition required. You can have moral preferences, sure, but moral <em>obligations</em> have gotta bind everyone.</p></li><li><p>It then follows that the moral &#8220;ought&#8221; doesn&#8217;t imply <em>can do physically</em>, but <em>can be rationally convinced</em>. So if Bob has evaluated all of the arguments in the trolley problem rationally, and still doesn&#8217;t agree to pull the lever because his intuitions are different, he cannot be morally obligated to pull &#8212; he can&#8217;t be rationally convinced!</p></li></ul><p>I am going to attempt to start Huge Internet Beef by first saying I respect Florence, they&#8217;re a fantastic and succinct philosopher, and they completely deserve the massive number of new subscribers they have, but then attempt to dismantle this argument piece by piece until I can hopefully show that their take on Kant is rife with &#8220;intuitionism&#8221; in at least four places and that the purity of Kant&#8217;s original argument is under question and leads to very odd moral priorities.</p><h1>Wait, &#8220;every rational agent could discover&#8221;?
Like what?</h1><p>So the first point which may jump out to you is this phrase &#8220;a decision that every rational agent could discover purely from reasoning.&#8221; What do Kant, Florence, and other proponents like Korsgaard think can be derived from literally, as Florence put it, &#8220;zero premises&#8221;? Well, this is where that classic Kantian phrase that I didn&#8217;t understand for a long time, &#8220;if a maxim is self-defeating,&#8221; comes in. The idea is this:</p><ol><li><p>You&#8217;re a rational agent who wants to do some moral action.</p></li><li><p>But, by deciding to do stuff in the first place, you&#8217;re relying on your power to choose. That&#8217;s agency.</p></li><li><p>Alright, because you&#8217;re relying on the ability to do stuff in the first place, you&#8217;re saying it&#8217;s important for anyone who lands in that situation to do stuff. That&#8217;s universality.</p></li><li><p>So if you want to do something like kill someone, you&#8217;re saying it would be OK for you to be killed for moral aims. So this moral idea is <em>self-defeating</em>, a <em>contradiction</em>, whatever you want to call it, the point is the same. &#8220;Every rational agent&#8221; would find it dumb to endorse allowing people to stop them from doing what they want to do, importantly even before any actual moral values are laid out at all.</p></li></ol><p>So this is saying that agency is important<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. But there&#8217;s a little bit more of a sleight of hand here &#8212; first, I&#8217;m going to point out at least one curious jump.
We start by saying &#8220;you must respect your own ability to choose&#8221; then jump in the next point to &#8220;ok, so <strong>everybody</strong> must respect <strong>everybody</strong> else.&#8221; This doesn&#8217;t strike me as some obvious thing that&#8217;s clearly derived from being a rational observer. In fact, if I&#8217;m pursuing a goal, even one that&#8217;s &#8220;intuition-laden,&#8221; it seems like ignoring other people&#8217;s goals is pretty useful. I accuse the Kantian of baking this idea of &#8220;you gotta care about what other people&#8217;s decisions are too&#8221; into the very idea of a rational agent, which is pretty obviously assuming the premise rather than proving it.</p><p>This is a big deal. The entire Kantian project relies on a foundational, unproven, disagreeable, and arguably intuition-based premise: that I care about your goals.</p><p>I don&#8217;t see anything that&#8217;s not internally consistent about a perfectly rational agent who only cares about their goals and no one else&#8217;s. By insisting that universality is required for reason itself, a Kantian isn&#8217;t proving anything, just declaring that any agent who disagrees isn&#8217;t &#8220;truly rational.&#8221;</p><p>Second, let&#8217;s just ignore the previous complaint and say it&#8217;s fine. If we say we prize agency above all, wouldn&#8217;t it be best if we just did <em>any</em> actions to preserve that agency for our fellow man?
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Richard Y Chappell&quot;,&quot;id&quot;:32790987,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!s0pB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2975dff8-e0e5-4f51-8d47-b9bc2dfd700b_1683x1790.jpeg&quot;,&quot;uuid&quot;:&quot;b91f3ca6-5470-47bb-85b0-1d65763cb0b0&quot;}" data-component-name="MentionToDOM"></span> has <a href="https://www.goodthoughts.blog/p/autonomy-consequentialism">an amazing post he put out yesterday</a> about how it&#8217;s super doable to maximize agency for man. In the trolley problem, it sure seems like there&#8217;s gonna be more agency in the world if the 5 people are saved instead of the 1 person. The ideal Kantian here values the <em>action of saving agency</em> as self defeating, because, it, uh, means that you&#8217;re trampling over the agency of someone? Though admittedly my claim here is pretty narrow, this points to my broader critique of deontology, my arch-nemesis philosophical belief: it definitionally values the actions we do over the people involved. Even though a deontologist can hope that the 5 in the trolley problem are saved, it prizes inaction, and being &#8220;morally clean&#8221; over actually doing something to make the world better in a way even a deontologist agrees with. Even if a survey of every human on Earth said, yes, let&#8217;s save the 5 in the trolley problem, if it&#8217;s not binding to these hypothetical optimal rational folk who are only bound to what Kant says they&#8217;re bound to, it&#8217;s thrown out.</p><p>Now, there&#8217;s actually second and third rules that Kant says as to why you can&#8217;t do this<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. 
Kant insists these are both just restatements of the universality claim he makes above. I disagree, and think these are substantially harder to justify than universality. The second is that Kant says you cannot use another person merely as a means, which is presumably why this agency maxing isn&#8217;t allowed, but &#8220;merely as a means&#8221; is a very poorly defined phrase. When you pay a doctor, or lawyer, you&#8217;re literally using them as a means. Kant tries to dodge this by saying that he <em>really</em> means &#8220;acting on a maxim that presupposes their cooperation without their informed consent to that maxim.&#8221; Kant tries to exempt non-deceptive transactions too, and this is another absolutist rule that cannot be overridden by scale; that will be relevant later.</p><p>The third rule Kant is built on, Florence doesn&#8217;t mention at all, so I&#8217;ll just briefly point to it as the weakest by far. &#8220;Your rules must be compatible with a society where <em>everybody</em> followed the same rules you do.&#8221; This one&#8217;s the most obviously flawed when it comes to compelling &#8220;maximally rational&#8221; folk, because it specifically tells rational beings it doesn&#8217;t matter at all what world they actually live in: they need to be able to justify their rules in a world of super-duper rational legislators that will never exist. This one has the coolest name, though. &#8220;The Kingdom of Ends.&#8221; That goes hard.</p><p>Third, ignoring both of my previous two complaints and accepting this whole deal, the set of actions that this <em>actually endorses</em>, if you include all three of Kant&#8217;s big boy formulations or the one that Florence focuses on, is super narrow&#8230;</p><h1>Prioritizing agency means we don&#8217;t care about pain?</h1><p>Alright, this is where Kant diverges from Florence and Korsgaard.
Let&#8217;s start by knocking down Kant, and then I&#8217;ll tackle the other two, who have much weaker stances in my opinion.</p><p>Under Kant, you&#8217;re allowed to care about pain, hate, suffering, and scale of harm. This would be labeled as &#8220;morally permissible&#8221; in Florence&#8217;s view. But nothing to do only with pain or hate or suffering can actually bind these rational agents; they only care about agency and maxims that can&#8217;t be universalized. Here are some things that are &#8220;moral obligations,&#8221; the foundation of all morality, under this view:</p><ul><li><p>Don&#8217;t lie &#8212; having the truth is necessary for agency</p></li><li><p>Don&#8217;t kill &#8212; being alive is necessary for agency</p></li><li><p>Don&#8217;t imprison someone</p></li><li><p>Don&#8217;t maim</p></li><li><p>Keep your promises</p></li><li><p>Don&#8217;t commit suicide</p></li></ul><p>Decent list, I think, if you can stomach the logic it took to get here. But I think it starts to go off the rails next. Remember that universality only works one direction; while you can ban something for being self-defeating, you can&#8217;t make something mandatory for preserving agency. The rational guys we made only care about <em>not violating these innate rights</em>; actually doing any of the following is a matter of preference:</p><ul><li><p>Alleviate extreme suffering</p></li><li><p>Help a starving kid</p></li><li><p>Rescue someone drowning right next to you</p></li><li><p>Help animals</p></li><li><p>Stop racism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p>Protect the environment</p></li><li><p>Provide basic needs</p></li><li><p>DO anything LIKE, ACTUALLY GOOD</p></li></ul><p>Now, a lover of Kant will know he labels many of these &#8220;imperfect duties.&#8221; Remind me how this crystalline logic produces these vague, unenforceable duties?
The entire apparatus we have is only able to generate &#8220;don&#8217;t&#8221; rules, so these seem like an afterthought, lacking the rational necessity that was the whole point of the project. There&#8217;s lots of thoughts Kant has on imperfect duties, and we could fill up another post with his justifications, but I just want to focus on the disconnection from this zero-premises principle. I think saying &#8220;you don&#8217;t have a moral obligation to do these things&#8221; when neglecting them can be much, much worse than violating the strict rules, is bad.</p><p>Oh, by the way, remember that these uber rational boys are completely obligated to never lie and never kill, so here&#8217;s things that they&#8217;re morally <em>obligated</em> to do when they don&#8217;t have to do any of the above stuff.</p><ul><li><p>Not lie to a murderer about where their friend is hiding</p></li><li><p>Refuse to press a button to kill 1 person to save a billion people</p></li><li><p>Never sabotage terrorists or anybody who&#8217;s planning something nefarious</p></li><li><p>Keep your promise to never reveal where a prisoner is if you make one</p></li></ul><p>This is the point where I have to appeal to your intuition. If you disagree with me that Kant is falsely elevating universality to importance, and think he&#8217;s not begging the question by stipulating &#8220;rational&#8221; as agreeing with that, and you think that we should praise inaction instead of prioritizing actual people&#8217;s agency, then I want you to take a look at these lists and go: <em>wait, this moral system sucks ass</em>.</p><p>No rational agent wants this! Really? Uber rational agents can only agree on this garbage? Praising people who are unwilling to do good things that have a cost, chastising people who try to do good, banning people from doing things that everyone would agree with because of the principle.
It begins to resemble a system that praises people for doing the least bad, instead of any good at all. If this is perfect moral rationality then I&#8217;m glad I live with all the irrational moral people.</p><h1>Patchwork upon patchwork</h1><p>Alright, back to Florence and Korsgaard. They take a look at this set of rules, agree with you and me that it sucks, and so they then need to find justifications as to why these &#8220;perfect rational observers&#8221; would disagree with these perfect axioms that they&#8217;ve set out if the result is like, <em>really bad, man</em>. Here are a few tweaks they&#8217;ve made, and I hope I can show that they&#8217;re pretty weak, especially given how high a bar &#8220;all perfect rational observers agree&#8221; is.</p><ul><li><p>Korsgaard on &#8220;refusing to press a button to kill 1 person to save a billion people&#8221;:</p><ul><li><p>She invents &#8220;threshold constructivism,&#8221; where there&#8217;s a threshold where &#8220;refusing to press the button would go against the very maxims they try to protect.&#8221; This threshold, despite needing to be agreed upon completely based on intuition by all rational agents, is somewhere between 100-to-1 and 1,000,000,000-to-1, maybe. My threshold, by the way, is that I think all rational observers would reach the conclusion that they should pull the damn lever in the trolley problem and save the 5! The moment you say there&#8217;s some point of convergence of intuitions, you cease to have extremely deep principles about it.</p></li></ul></li><li><p>Florence on &#8220;alleviating extreme suffering&#8221; not being obligatory:</p><ul><li><p>&#8220;Because if you can help someone at little cost who is very likely sentient and in dire need of help, you are obligated to.&#8221; To be completely fair to Florence, she specifically says &#8220;showing we are in fact committed to e.g.
helping others when we can is not the topic of this post.&#8221; So it&#8217;s probably unfair of me to bash this point without her even presenting an argument for it. But&#8230; this one I frankly don&#8217;t get at all from the very maxims she talks about in the article. What helping someone <em>looks like</em> is intuitionist unless you&#8217;re talking about agency, not suffering.</p></li><li><p>What little cost looks like is intuitionist &#8212; you&#8217;re obligated to spend $10 but not all your money? Where&#8217;s the hard line? She can appeal to the threshold again and say there&#8217;s some point where all rational beings would converge, but, uh, no? If you&#8217;re saying you need to spend $10 but not $20, and your justification is <em>extremely optimally intelligent rational observers must definitionally converge on some values</em>, that sure seems like a pretty strong claim! So my intuition is that this is underdemanding and vibes-based, much more than utilitarianism, which will always give you an optimal move to make. I will update this section with her counter if she puts out a post or DMs me with her reasons, though, because I feel bad pressing on this point when she said that wasn&#8217;t the point of her article &#8212; it&#8217;s just clearly a very weak point!</p></li></ul></li></ul><p>The Kantian constructivist is trapped. Either they accept the unpalatable results of Kant&#8217;s rules, or they create arbitrary thresholds that hope to alleviate the worst of the rules, but end up undermining the entire process in so doing. These two look at the actual set of maxims that really are taken from &#8220;zero premises&#8221; and balk at the results, then need to find a justification not based off the results for why those results shouldn&#8217;t be allowed anyway.
But this &#8220;zero premises&#8221; conclusion is so damn difficult to build good rationale out of that they just have to gesture and call dissenters irrational, because they&#8217;re stipulating what rational is in the first place.</p><p>Florence says &#8220;ought implies can be rationally convinced.&#8221; But when she starts talking about how you should donate a little to shrimp, but not a lot, and justifies it under these extremely strong criteria we started with, it makes more sense for me to call her moral ground shaky and revert to what &#8220;ought implies can&#8221; means originally &#8212; namely, can <em>physically</em> do. It seems there&#8217;s a surprising amount of wiggle room in perfect logic.</p><h1>Morality Without a Soul</h1><p>Built into the fabric of what we&#8217;re doing here is the claim that pain is an intuitive preference. This isn&#8217;t a bug, it&#8217;s a feature &#8212; accepting suffering as the moral bad instead of these agency-by-universality rules would just be accepting maximizing or minimizing. But it does mean that it is morally permissible to press a button that makes everyone on earth have 10% more painful a life if it doesn&#8217;t impact agency, and it&#8217;s also the exact same amount of morally permissible to press a button that makes everyone on earth have 10% more beneficial a life. I would like my moral frameworks to provide guidance on these points.</p><p>Florence is commendably frank when discussing her project here. She says &#8220;It is for the reader to judge whether my characterization captures certain ordinary concepts&#8221; and she&#8217;s uninterested in what the words classically mean in philosophy, which I respect.
So I don&#8217;t think she&#8217;s sneaking anything in here, but I can declare that this seems like a fruitless endeavor.</p><p>This morality* she talks about, where not telling white lies is a hard moral obligation but saving millions of lives for the cost of one is a moral preference, just isn&#8217;t a very useful distinction to have when deciding what to morally do. The constructivist quest for rationality comes at a seriously staggering price. By trying to escape human intuition, it removes human significance. Even if you accept all of the qualifiers and believe it&#8217;s technically perfectly rational, I think it&#8217;s really a compelling argument for the moral necessity of our so-called irrationality.</p><h1>Conclusion</h1><p>Florence is a better writer than me, a better philosopher than me, and probably cooler and radder than me with more swagger in real life. But I think the Kantian way has a lot of holes to patch, and attempting to prove morality off of literally zero premises and also have all of the results you get be not-insane might be an impossible task. I think anybody under these constraints would fall into intuitionism and ad-hoc justification for the personal moral beliefs they find compelling, such as donating to animals (which you should obviously do if Florence and I <em>both</em> agree).</p><p>Kant has a fun philosophy on his hands, and the wish to dodge intuition in philosophy is admirable. I think he unfortunately fails to dodge it, and his successors don&#8217;t provide clear-cut logical improvements.
Proving that the badness in the human experience like pain, anxiety, and fear is <em>actually</em> bad is not a trivial logical endeavor, but I believe that the true foundation of morality must focus on the people experiencing the consequences, not the arbitrary means to reach those ends.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>Here&#8217;s two subscribe buttons, which means you can subscribe twice as hard. 
Thank you all.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I also learned he was racist, which was less of a joy for me to find.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is called <em>reflective equilibrium</em>, which is vital in philosophy.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>By the way, this is just one formulation of what Kant calls the &#8220;categorical imperative,&#8221; known as the Formula of Universal Law. It&#8217;s what Florence focuses on, and there are two other formulations that Kant insists are merely different expressions of one fundamental principle, called the Formula of Humanity and the Formula of the Kingdom of Ends.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Yeeeep, here&#8217;s the stuff from footnote 3 baby.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>As I said in footnote 1, Kant was super racist, so I guess if you&#8217;re Immanuel Kant or somehow stumbled from the dumbest part of Twitter onto my feed, you can call this one a boon.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Art in the Era of AI]]></title><description><![CDATA[Meaning of art in the era of
generative AI]]></description><link>https://www.kylestar.net/p/does-reading-chatgpt-book-summaries</link><guid isPermaLink="false">https://www.kylestar.net/p/does-reading-chatgpt-book-summaries</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Thu, 10 Jul 2025 12:05:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a1f80c78-c9bd-4f16-b3fb-103445447f65_1270x910.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If I ask one of my best friends if he&#8217;s seen a movie, he&#8217;ll answer &#8220;yes,&#8221; followed by &#8220;I mean, I read the Wikipedia plot summary.&#8221; This drives me <em>crazy</em>. It&#8217;s also very funny, which is why he does it, but I get to argue the role of &#8220;Noooo, to understand the art you have to engage with it.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IFik!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IFik!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 424w, https://substackcdn.com/image/fetch/$s_!IFik!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 848w, https://substackcdn.com/image/fetch/$s_!IFik!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 1272w, 
https://substackcdn.com/image/fetch/$s_!IFik!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IFik!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png" width="978" height="666" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:666,&quot;width&quot;:978,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:490362,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167931768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IFik!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 424w, https://substackcdn.com/image/fetch/$s_!IFik!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 848w, https://substackcdn.com/image/fetch/$s_!IFik!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 1272w, 
https://substackcdn.com/image/fetch/$s_!IFik!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff7b960f-bdb3-4480-9e81-0c21dd055b0a_978x666.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I have another one of my closest friends who will join the debate, indignant at my friend, and say &#8220;The Wikipedia summary is lifeless, you have to read the Reddit synopsis to understand the emotions behind the people who see the movie.&#8221;</p><p>They&#8217;re both missing the <em>entire</em> point here, right?
What, exactly, are they missing?</p><h1>Stochastic Parrots</h1><p>There&#8217;s some discourse around this tweet that <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Tabitha Alloway&quot;,&quot;id&quot;:69544768,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0a15773-eacc-48a4-838a-a15eb3c65a17_1080x1080.jpeg&quot;,&quot;uuid&quot;:&quot;d99b63e5-5ecd-4621-bde3-298819018cc2&quot;}" data-component-name="MentionToDOM"></span> <a href="https://substack.com/@thewriterinthewillows/note/c-133217766?r=2bgctn&amp;utm_medium=ios&amp;utm_source=notes-share-action">reposted on substack disapprovingly</a>. I mean, it only has around 2k likes, which isn&#8217;t a ton (and it&#8217;s satire), but it sparked discourse in me so I&#8217;m going to talk about it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EML9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EML9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 424w, https://substackcdn.com/image/fetch/$s_!EML9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 848w, https://substackcdn.com/image/fetch/$s_!EML9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EML9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EML9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png" width="1104" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1104,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:485945,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167931768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EML9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 424w, https://substackcdn.com/image/fetch/$s_!EML9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 848w, 
https://substackcdn.com/image/fetch/$s_!EML9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 1272w, https://substackcdn.com/image/fetch/$s_!EML9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf5f7095-1771-4ff0-a504-484d0fd5253c_1104x832.png 1456w" sizes="100vw"></picture></div></a></figure></div><p>And right off the bat, before I get into the meat of the issue, I&#8217;m going to point out I think there is one valuable way you can read 100 books
per day off of AI: if those books are <em>educational.</em></p><p>If I&#8217;m struggling with a physics textbook ahead of a test, and I use ChatGPT to give me simpler versions and analogies of the problems in the chapters so I can learn more efficiently, that strikes me as good! If a book has 20 examples for one problem and you use AI to give you one really good example that makes you understand the problem immediately, and understand how the knowledge &#8220;fits in&#8221; to the other knowledge in the course, then you really have done the important understanding work of the book.</p><p>Some may protest that AI in schools is a disaster, but that&#8217;s not because AI can&#8217;t explain the problem in an adequate way &#8212; the issue is that AI <em>really does understand</em> the problems well, and a student can use that to produce a good answer to pretty much anything. If a student asks ChatGPT a question, and it produces 10 paragraphs about how it reached the answer, and the student&#8217;s eyes glaze over the paragraphs while they copy and paste the answer into the quiz, then that&#8217;s the fault of the student. There was valuable learning to do there, but the student isn&#8217;t prioritizing the learning &#8212; if they were, then ChatGPT would be amazing! You can ask it how to explain specific parts you&#8217;re confused by; a textbook can&#8217;t do that. Anyone who&#8217;s used AI to really try to <em>learn</em> complicated topics will know it&#8217;s great.</p><p>So I think the issue here is indeed what we want out of non-educational books and media, and the obvious word I&#8217;m looking for is <em>art</em>. What are we looking to get out of art? What does AI miss about art? Why can&#8217;t I use AI to summarize a famous book?
Why would people be mad if I use AI to <em>generate</em> the next famous book?</p><h1>Art is, like, meant to evoke emotions or whatever</h1><p>I think there are two very important reasons that art is different from educational content &#8212; one will sound quite shallow but is vital, and the other might be less vital but is our only hope of preserving some value in human art as AI gets better, and better, and better.</p><p>The first point is that art is meant for you, the consumer, to, like, <em>feel things</em>. Happiness, sadness, anger, ennui, and other European words<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> too. This is true of &#8220;safer&#8221; art, like Marvel movies attempting to evoke triumph and laughter, and less &#8220;safe&#8221; art, like <em>House of Leaves</em>, which tries to give you a sense of claustrophobia, dread, and&#8230; other emotions. I don&#8217;t know what they are, but when I&#8217;m looking at a piece of art, I know that I&#8217;m feeling something. This is it, right? The artist is attempting to distill some emotion, like &#8220;love&#8221; or &#8220;serenity,&#8221; and whatever emotion it makes you feel, that&#8217;s a success.
&#8220;It&#8217;s meant to have some commentary about the human condition, suck you into a world.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2oIY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2oIY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 424w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 848w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 1272w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2oIY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png" width="1138" height="868" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:868,&quot;width&quot;:1138,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1962452,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167931768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2oIY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 424w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 848w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 1272w, https://substackcdn.com/image/fetch/$s_!2oIY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa32b57e8-e1c1-4152-a87a-94163384a63e_1138x868.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Poppies, Near Argenteuil - Claude Monet</figcaption></figure></div><p>This is the very trivial point about why reading a summary isn&#8217;t reading a whole book: even if you know all the characters and plot synopsis, you&#8217;re not getting the all important emotions that the book is meant to evoke. This is unrelated to AI, and is just as much of an issue with reading a Wikipedia page or Sparknotes summary. &#8220;The journey is more important than the destination.&#8221; Ok, then let&#8217;s have AI generate a short poem or something that&#8217;s supposed to evoke the feeling of the book &#8212; what&#8217;s the problem with that? Better yet, just generate a book that&#8217;s better at evoking the vibe of the original story, a book that&#8217;s <em>even deeper</em>. That sounds more efficient. 
We only have so much time. But surely AI couldn&#8217;t be <em>good</em> at something like that&#8230;</p><p>This point, the one about evoking emotions, is a <em>terrible, terrible</em> point to hang your hat on as something that is &#8220;purely human&#8221; and irreplaceable by AI. For a long time, people have been decrying AI art as &#8220;slop&#8221; and insisting it would be forever unable to capture the nuances of real art, and for years I&#8217;ve warned that you do not want to hang your hat on AI art being bad because the <em>literal quality</em> is poor. The AI art we have today is going to be the worst we&#8217;ll ever have at evoking emotions, at the literal quality of the artwork, and so forth. I think this point was much stronger and more controversial two years ago, but <a href="https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing">we live in a world now where people can&#8217;t tell if something is AI art or not and AI art can be rated higher than human art</a>.</p><p>You can point to some ineffable &#8220;human richness&#8221; as to why AI art will never be better than humans, but if you can&#8217;t tell whether a piece is done by a human or robot, then I&#8217;m forced to treat you like those people who <a href="https://www.healthline.com/health/allergic-to-electricity#:~:text=The%20name%20%E2%80%9Celectromagnetic%20hypersensitivity%E2%80%9D%20first,medical%20condition%20meriting%20further%20research.">say they&#8217;re allergic to 5G but can&#8217;t tell if the router in the room with them is on</a>. &#8220;So far, every time people have claimed there&#8217;s something an AI can never do without &#8216;real understanding&#8217;, the AI has accomplished it with better pattern-matching,&#8221; from <a href="https://www.astralcodexten.com/p/now-i-really-won-that-ai-bet">this article</a>.
And what&#8217;s gonna happen when an AI crams 5,000 microrichness points and hidden flourishes into the piece it makes because it can do operations at the speed of light and your neurons fire 1,000 times slower? No, there&#8217;s no hidden richness that an AI cannot replicate. An AI could write the next great American novel, and the people who say that&#8217;s just too complicated for it are batting 0.000 right now.</p><p>But the second reason, I think, could save us.</p><h1>&#8220;I thought AI would do the data analytics and leave the art to us&#8221;</h1><p>In the beforetimes, when ChatGPT was just a twinkle in Sam Altman&#8217;s eyes, it was thought an AI would never be able to understand human language. That it would never be able to make art. Fields like evolutionary computation assumed it was best suited for data analysis, because&#8230; well, duh! It&#8217;s a robot and numbers are spreadsheets! And it&#8217;s also good there, of course, but ChatGPT changed the whole game. As art and writing became the first barriers to fall, there was a sense of indignation: &#8220;I thought AI would automate all the hard stuff in society, and we would live in a beautiful post-scarcity world where we could paint pictures of butterflies all day.&#8221; Well, I want to talk about <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Ian Dunmore&quot;,&quot;id&quot;:278246206,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb7b04fe-aa55-40ab-a342-14aaa6af88f5_1026x1026.png&quot;,&quot;uuid&quot;:&quot;e21417df-82b5-4ce6-9bf8-efbafe8bfb29&quot;}" data-component-name="MentionToDOM"></span>&#8217;s comment on the tweet:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!ZAAP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZAAP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 424w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 848w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 1272w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZAAP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png" width="1070" height="220" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1070,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:192304,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167931768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZAAP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 424w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 848w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 1272w, https://substackcdn.com/image/fetch/$s_!ZAAP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04d291c8-207b-44d6-9647-5e5de1825cc7_1070x220.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>This is a funny reply, but in a certain sense&#8230; isn&#8217;t this one kinda true? 
As Scott Alexander puts it in the fantastic <a href="https://www.astralcodexten.com/p/the-colors-of-her-coat">Colors of Her Coat</a>:</p><blockquote><p>What about cameras? A whole industry of portraits, landscapes, cityscapes - totally destroyed. If you wanted to know what Paris looked like, no need to choose between Manet&#8217;s interpretation or Beraud&#8217;s interpretation or anyone else&#8217;s - just glance at a photo. A Frenchman with a camera could generate a hundred pictures of Paris a day, each as cold and perspectiveless as mathematical truth. The artists, defeated, retreated into Impressionism, or Cubism, or painting a canvas entirely blue and saying it represented Paris in some deeper sense. You could still draw the city true-to-life if you wanted. But it would just be more Paris.</p></blockquote><p>There&#8217;s a certain ideal in which AI would pack all the useful bits of a book into a short summary, just like a photo packs all the useful bits of a place into an easy format. It&#8217;d tell you all the emotions you would have felt, just as if you remembered a book you read long ago. Sure, you wouldn&#8217;t get the full experience in real time, but maybe an AI could summarize the <em>feeling</em> of a book into a more compressible format &#8212; it&#8217;s right there in the name, &#8220;generative.&#8221; Maybe the issue with the AI book summaries is that they need to also tell you what feeling the book is trying to express, and if they did that, you really wouldn&#8217;t have to read it at all &#8212; plot summary? check. emotions? check. It&#8217;d be as if you finished a book a year ago, and were looking back at it wistfully.</p><p>Except no, this is all crazy, right? Here it is, my second reason I want humans making art, and it&#8217;s the <em>exact opposite</em> of &#8220;death of the artist&#8221;: <strong>I care that there is a conscious being</strong> who made it.
<strong>I care </strong>that I can catch a glimpse of a human&#8217;s mind and peel it open, and even if I don&#8217;t know if a human made the art or not, <em>I care</em> if someone did! AI slop is not slop because it&#8217;s bad; it&#8217;s slop because it&#8217;s not <em>real</em>, because a conscious being breathing life into a piece makes it real. I care about conscious beings more than anything else in the universe. I want them to be happy, fulfilled, and not in pain, and I truly mean it when I say nothing else in the universe matters.</p><p>So even if an AI can generate 1,000 pieces more thoughtful, evocative, and deep than one by a human, if there&#8217;s nothing behind the pixels I&#8217;m looking at, if no one is feeling anything except the consumer, then it&#8217;s not <em>art</em>. There&#8217;s something about a piece that takes 50 years of human contemplation that makes it better <em>art</em> than something chosen from 100 separate images with the same prompt. So even if an AI can compress the entire feeling of a book into a paragraph, even if it can generate masterpieces and wonders the likes of which I&#8217;ve never seen, as long as there are still humans who want to make art, I think their art <em>matters</em> more.</p><h1>&#8230;</h1><p>I think even the most well-crafted Wikipedia summary, or parroting of a movie&#8217;s points on Reddit, or incredible summary evoking the same vibes as the original, falls short as art if there&#8217;s not a human putting emotions behind it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. My friends are missing the point. I think the reason a book summary falls short is not because it doesn&#8217;t evoke the plot, not because it doesn&#8217;t evoke the emotions, not because it isn&#8217;t long enough, but because the book was crafted by <em>someone, anyone</em>.
I&#8217;m not against AI; it&#8217;s amazing and one of the best tools humanity has ever invented. It will only get better, and people doubting its ability to craft impressive works will be wrong, and I&#8217;m sure in the future humans will be so stunlocked by constant wonders beyond their imagination that they won&#8217;t even care about making art or something.</p><p>But as long as humans <em>do</em> still want to make art, I think you owe it to them to look at what they made, not just the bullet point list.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>Subscribe using this button and like if you enjoyed and want to see more. Thank you.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>schadenfreude, uhhhhh, d&#233;j&#224; vu, uhhhhh&#8230;. I&#8217;m drawing a blank&#8230; je ne sais quoi? That&#8217;s kinda an emotion.
Is that all?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This can get into the weeds when some of it is made by a human and some by an AI, but I think that&#8217;s a gray area you can take on a piece by piece basis.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Creating Life is Bad, Except for Antinatalists, They Should Have Kids]]></title><description><![CDATA[The modern antinatalism movement is incoherent pseudo-philosophical window dressing that steals all of the bad points from negative utilitarianism and none of the good]]></description><link>https://www.kylestar.net/p/creating-life-is-bad-except-for-antinatalists</link><guid isPermaLink="false">https://www.kylestar.net/p/creating-life-is-bad-except-for-antinatalists</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 08 Jul 2025 12:05:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/206b68ff-65b1-42d1-a33d-65c45be6be81_1024x683.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>(Like this post &#128077; if you enjoy and want to support me. Thanks!)</em></p><p>The modern antinatalist movement is a social movement predicated mostly on the belief that humans should stop having kids, because life is full of suffering. It&#8217;s an unfortunately incoherent philosophy that steals all of the bad parts and none of the good from the philosophically respectable position of Negative Utilitarianism, which believes only suffering has moral weight. This leads to &#8220;true&#8221; antinatalism, which believes that all conscious beings, not just humans, coming into existence is a morally bad action, which is also a respectable position. 
Modern antinatalists fail to take their own implications seriously; the movement is generally used to complain about how parents suck or working sucks; and it&#8217;s mostly held by humans in the wealthiest nations, the most fortunate conscious beings in all of history.</p><p>First I&#8217;ll explain negative utilitarianism and the reasons I disagree despite respecting the position and the philosophers who argue for it. Then I&#8217;m going to explain why modern antinatalism is a pseudo-philosophical perversion of every respectable negative utilitarian principle, and why it probably endorses the opposite course of action from &#8220;real&#8221; antinatalism. Buckle up.</p><h1>Negative Utilitarianism</h1><p>Negative utilitarianism is the belief that only suffering has moral weight, which implies that coming into existence is always a morally bad action. The argument comes from an asymmetry: the absence of pain is good, but the absence of pleasure isn&#8217;t really bad &#8212; an unborn child NOT experiencing happiness isn&#8217;t really a tragedy, because no one who exists is deprived of that happiness, but a born child experiencing pain definitely sucks.
This asymmetry is helpfully put into this chart by David Benatar (who is not necessarily a negative utilitarian, but this is the best argument for it)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R51b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!R51b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 424w, https://substackcdn.com/image/fetch/$s_!R51b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 848w, https://substackcdn.com/image/fetch/$s_!R51b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 1272w, https://substackcdn.com/image/fetch/$s_!R51b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!R51b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png" width="718" height="648" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:648,&quot;width&quot;:718,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:122756,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167500755?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!R51b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 424w, https://substackcdn.com/image/fetch/$s_!R51b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 848w, https://substackcdn.com/image/fetch/$s_!R51b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 1272w, https://substackcdn.com/image/fetch/$s_!R51b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3e448cc-7a72-443d-8f55-43e8117cb48c_718x648.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As for why I&#8217;m <em>not</em> a negative utilitarian&#8230; I just think it&#8217;s pretty clear experiencing happiness is good, and it&#8217;s better if a conscious being were to come into existence to experience it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. I think the absence of pleasure <em>is</em> bad! If someone stubs their toe then experiences mindblowing orgasms for the next 30 years, that sounds like a sweet deal, sign me up. 
If I died right now, I would be pretty happy to have existed, and I certainly wouldn&#8217;t want to have never been born UNLESS I experience some terrible pain at some point in the future, which, while a big deal (<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Beetle&quot;,&quot;id&quot;:173031807,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48a917f8-cecc-4800-8694-b79b43ea45c1_1720x1720.png&quot;,&quot;uuid&quot;:&quot;79c9d38c-b205-4822-a699-9149ff48533f&quot;}" data-component-name="MentionToDOM"></span> talks about <a href="https://axia358.substack.com/p/win-big">s-risks in this post</a>), is indeed still just <em>normal</em> utilitarian math. So while regular utilitarianism, which compares suffering and happiness and prefers maximizing happiness, can handle this Orgasm Guy, <em>negative</em> utilitarianism has to say that even if I drift off into nonexistence forever right now, it would&#8217;ve been better to have never been. I disagree! My life is great! Orgasm Guy&#8217;s life is great! Us existing is good, <em>not</em> not-bad!</p><p>Now there are some problems with the implications of creating a net-good life always being good.
There&#8217;s a famous thought experiment aptly called <a href="https://plato.stanford.edu/entries/repugnant-conclusion/">The Repugnant Conclusion</a>: from this perspective it will always be preferable to create a huge number of barely-worth-living lives over, like, one extremely happy guy (because 0.1 happiness * a billion people &gt; 1,000 happiness * 1 person). But I&#8217;m more willing to accept that at some point down the line we might want to create a massive number of net-happy lives than to accept that Orgasm Guy&#8217;s life actually sucks because of that one time he stubbed his toe, and that we shouldn&#8217;t bring him into existence to let him experience all those orgasms on principle.</p><p>Some normal utilitarians try to dodge The Repugnant Conclusion by saying you should only create <em>additional</em> lives if they&#8217;re happy &#8220;enough,&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> not barely worth living, but I don&#8217;t know if this is a great solution. Whatever you choose for the word &#8220;enough&#8221; is gonna be pretty arbitrary, and now your optimal world is gonna be full of a bunch of people slightly better off than what your &#8220;enough&#8221; bar says. And the only way to decide what counts as &#8220;enough&#8221; is by just comparing different possible worlds in the future, and saying &#8220;hmmm, is this happy enough to not be called the repugnant conclusion?&#8221; This gets rid of all the morally clear sexiness that total utilitarianism gives you. Still, both of these options seem way better than the idea that creating the happiest guy possible isn&#8217;t morally preferable, which I vehemently reject.
I stand by Orgasm Guy; it <em>would</em> be bad to not bring him into existence if we could.</p><p>By the way, some philosophers might protest that what I&#8217;m countering is <em>strong</em> negative utilitarianism, and that <em>weak</em> negative utilitarianism is another real philosophical position &#8212; that&#8217;s the belief that suffering is bad enough that we should weight it much more heavily than pleasure. Which&#8230; eh. I&#8217;m not convinced that weak negative utilitarianism isn&#8217;t just <em>regular old</em> utilitarianism in a trenchcoat. I feel like arguing &#8220;suffering is worse than pleasure&#8221; is really just saying that you disagree with other utilitarians about how to weight happiness and suffering, and this weighting is something that differs between all utilitarians and is kinda the whole ball game. So when they say &#8220;my weightings comparing torture and orgasms<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> are different!&#8221; I say&#8230; duh<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>.</p><p>Strong negative utilitarians, mostly being philosophers, really do believe what they believe. They think the Earth and all life should go kaput because their philosophy says so, which is a level of internal consistency and honesty that I respect. And there&#8217;s something appealing about this, because what I <em>do</em> believe is that most conscious lives on Earth suck! <a href="https://benthams.substack.com/p/the-worst-thing-in-the-world-isnt">Life in the wild kinda stinks</a> (they don&#8217;t have air conditioning and are, like, fighting for their lives every day. 
My life would suck if I had to do that) and life for factory-farmed animals definitely stinks (I would rather not be <a href="https://www.ciwf.org.uk/farm-animals/chickens/meat-chickens/">one of these chickens</a> and live my whole life packed with other chickens till I die at 8-12 weeks old), so I believe that the world really does suck ass, in a detached, hidden-from-me-so-I-don&#8217;t-have-to-truly-confront-it kinda way. But where I draw the line is saying that my life stinks because everyone else&#8217;s does, and that genuinely amazing lives shouldn&#8217;t be lived because suffering is the only vector that really matters.</p><h1>Antinatalism</h1><p>So, the modern antinatalist movement. Just the belief itself, in its original form of &#8220;<em>all</em> life should stop being born,&#8221; is actually more defensible from a philosophical perspective than strong negative utilitarianism, because it&#8217;s pretty much just a weaker version of negative utilitarianism &#8212; even standard utilitarians scared of massive suffering could endorse that &#8220;being born is bad.&#8221; But wow, the movement on <a href="https://www.tiktok.com/discover/antinatalism">TikTok</a> and <a href="https://www.reddit.com/r/antinatalism/">Reddit</a> misses the point completely by focusing on humans, climate change, and capitalism, and this refusal to look at the bigger picture or consider the implications of their actions makes them terrible philosophers and makes them appear like massive whiners.</p><p>The first issue with the antinatalism movement is that it&#8217;s human-centric<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. Remember how I said human lives, with air conditioning, unlimited food, comfort, safety, and shelter, are all pretty awesome and specifically worth living compared to the suffering of the rest? 
Antinatalists <em>specifically</em> talk about how they&#8217;re against humans, so they&#8217;re only against these lives. They cite humans&#8217; impact on global warming, capitalism, and how life is boring and we didn&#8217;t evolve for 9-5s as reasons you shouldn&#8217;t have kids. </p><p>But surely if you&#8217;re against the richest lives in human history, you should be against the most painful animal lives too, and maybe want to return to nature and live in the woods or in a tribe or something? You certainly wouldn&#8217;t want all humans to disappear and leave only suffering wild animals, unless you&#8217;re religious and don&#8217;t believe that animals have souls (not something I expect antinatalists believe, given their demographics!). But no, it seems like it&#8217;s all human-centric here.</p><p>The second issue is that the movement doesn&#8217;t actually seem to care about plans to reduce the number of human lives, despite that being the only stated thing they believe. The main plan of antinatalism is encouraging people to not have kids and sneering at people who do, and that&#8217;s all they do, but anyone with Google can find that the number of lives in the west is pretty stagnant while the number in Africa is exploding. So the movement should probably plan to spread propaganda over there. 
But no, you&#8217;ll only find people in the west defending their personal choice to not have kids to their friends with poorly applied philosophical principles.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-3K5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-3K5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 424w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 848w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 1272w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-3K5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png" width="872" height="870" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:870,&quot;width&quot;:872,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:852828,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167500755?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-3K5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 424w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 848w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 1272w, https://substackcdn.com/image/fetch/$s_!-3K5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bfe07f-b9ea-418f-887b-1b1aa97f7784_872x870.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The third issue is they don&#8217;t seem to consider the implications of what they believe, at all. I am 100% not endorsing suicide, but it feels like an important point to talk about if you build your belief system on the phrase &#8220;I wish I wasn&#8217;t born, because I couldn&#8217;t consent to it.&#8221; It&#8217;s a very icky topic, which is why it&#8217;s only indirectly focused on, but it seems like the natural conclusion of a movement built on the lives of high functioning conscious beings sucking is to grapple with the fact that there&#8217;s an easy path to nonexistence for anyone who wants one &#8212; maybe antinatalists should endorse voluntary euthanasia, like how it&#8217;s been legalized in Canada? 
</p><p>Meanwhile, animals aren&#8217;t smart enough to escape unending pain if they find themselves <a href="https://www.nhm.ac.uk/discover/body-snatchers-eaten-alive.html">the victims of being paralyzed with bugs born in their stomach</a>; it seems like we should deal with that before focusing on how unfulfilling human life with all of society&#8217;s conveniences is. Suicide is not an easy option, for sure, because it creates suffering for all the people who care about you, but it certainly IS an option that will cap the amount of necessary suffering, and importantly, animals don&#8217;t have the option!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Negative utilitarians can grapple with the whole &#8220;kill Earth&#8221; thing, because they&#8217;re philosophically serious people &#8212; this suicide critique is unfair to use against them because they want conscious beings who don&#8217;t have that way out to <em>also</em> stop having kids.</p><p>I know I&#8217;m dissing modern antinatalists pretty hard, and I really do believe that they have no good philosophical backing for an antinatalist movement that doesn&#8217;t factor in animals or Africa (some of them are just using it to feel morally superior for not having kids), but behind some antinatalists are people who feel out of touch with society and culture and wish they&#8217;d never been born, which is pretty dark and an unfortunate fact about this life &#8212; even if you have a lot of stuff, you can still be miserable. I think the worst thing to do when depressed is to dive deep into a philosophy that says all life on Earth is a mistake, and I hope people in this headspace get better.</p><p>But my position is indeed that the world sucks, specifically in all the places that the antinatalist movement online doesn&#8217;t focus on. 
I think more humans in western society, and most antinatalists certainly are in that demographic, should be born to experience the fruits of humanity&#8217;s labor, because that&#8217;s where the freedom and joy is. And I believe we need to grapple with stopping so many suffering animals from being born &#8212; and luckily, more humans <a href="https://www.lesswrong.com/posts/58ajesm2C38wi3WSJ/insect-suffering-is-the-biggest-issue-what-to-do-about-it">probably reduce the number of conscious non-humans</a>! So yes, like the title says, creating most life is bad, except for antinatalists. I hope they have kids.</p><p>&#8230;</p><p>This is all a plea to not take the antinatalism movement seriously, and instead view it for what it is: pseudo-philosophy used as window dressing to complain that capitalism sucks, to feel morally superior for not wanting kids, and to whine about having to work.</p><p>Try to do good effectively and figure out what to prioritize because you care about the suffering of the world&#8217;s least fortunate, instead of using shoddy philosophical reasoning to justify wallowing in the fact that life isn&#8217;t perfectly amazing for the luckiest conscious beings in history. You don&#8217;t have to be an effective altruist like me (though I hope you join!), but either way, certainly don&#8217;t be a modern antinatalist, and don&#8217;t treat the movement seriously. Fin.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>Subscribe!                                            
^^^^^^^^^^^^^</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This is obviously an argument for negative utilitarianism, right? Benatar likes to say it&#8217;s separate from any philosophy, but c&#8217;mon, this is the only and best argument that shows why only suffering counts, and all strong negative utilitarians nowadays cite it. I&#8217;m lost on what this argument is doing if not arguing for strong negative utilitarianism.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is me rejecting person-affecting axiology, which means I&#8217;m working with an impersonal or total utilitarian ideology. Those words don&#8217;t add any context at all to the paragraph unless you&#8217;re a philosopher, in which case bam, there you go. Big words!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Called &#8220;satisficing&#8221; utilitarianism, which is a very fun word.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I posted a note saying there were six instances of the word &#8220;orgasm&#8221; in today&#8217;s post, and I apologize, because this is in fact the eighth one in the essay. 
I added more.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>There&#8217;s this certain instinct for weak negative utilitarians to describe the difference between themselves and regular utilitarians as &#8220;oh, you weight suffering 1 to 1, but I weight suffering 10 to 1.&#8221; I think this literally doesn&#8217;t make any sense, because there&#8217;s no freeform 1 to 1 suffering vs happiness scale to compare to; ALL morality is based on real-world comparisons. If regular utilitarians can disagree on how many orgasms equal a stab in the leg, and weak negative utilitarians come in and say &#8220;oh, well I weight it differently than you guys too, I deserve my own separate label,&#8221; I just think they&#8217;re regular utilitarians who disagree. Someone tell me a moral decision that would be different between a weak negative utilitarian vs a regular one who just thinks suffering is worse than other utilitarians do, because I&#8217;m pretty sure these two&#8217;s preferred worlds (axiologies) could be identical &#8212; the moment you say &#8220;I think X suffering = X pleasure, but I think suffering is ten times worse than pleasure so I&#8217;m weighting the equation&#8221; you <em>cease</em> to believe X suffering = X pleasure, because you just said you didn&#8217;t believe that! Now, maybe it&#8217;s useful to call yourself a weak negative utilitarian to SIGNAL yourself as someone who believes suffering is really bad, but I completely reject the idea that this isn&#8217;t a subset of utilitarianism. 
This is actually a relatively controversial belief, so someone can tackle and fight me here if they want.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>I scrolled through 100 posts on <a href="https://www.reddit.com/r/antinatalism/">the subreddit</a>&#8217;s top of all time and watched TikToks, and not a single one mentioned anyone but humans not having kids. I&#8217;m countering the entirety of the online movement as unserious, not the specific belief here, so I think this is fair &#8212; I talked about why I don&#8217;t believe in the belief and what I do think in part 1. If there&#8217;s an antinatalist movement separate from the philosophers who argue negative utilitarianism that also doesn&#8217;t fall victim to the three critiques I state in the article, please do tell me; I couldn&#8217;t find any. Also, do not scroll 100 posts on the subreddit, wow. 
It&#8217;s bad.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>More discussion in the comments, specifically my comment with Silas.</p></div></div>]]></content:encoded></item><item><title><![CDATA[How Social Media Turns Our Political Enemies Into Caricatures]]></title><description><![CDATA[Understanding the enemy, how social media absolutely ruins this understanding of the enemy, what to do about it, and an admission that I may have strawmanned?...]]></description><link>https://www.kylestar.net/p/am-i-treating-all-my-political-opponents</link><guid isPermaLink="false">https://www.kylestar.net/p/am-i-treating-all-my-political-opponents</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Sun, 06 Jul 2025 21:00:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a64f6693-c241-4ea7-b743-28270dc53aad_1170x996.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>If you enjoy this post and would like to support me, please give it a like.</em></p><div><hr></div><p>How do you know you actually understand the enemy?</p><p>If you&#8217;re looking to win an argument, you need to know which buttons to press that they don&#8217;t have any good answers for. But if you&#8217;re looking to find the truth, you need to understand the arguments they make. And if you want to understand the arguments they make, you need to understand the person making them. And as I&#8217;ll explain, in today&#8217;s day and age, social media has <em>absolutely ruined</em> our understanding of the political opponent.</p><p>Our brains don&#8217;t have neat lists of arguments labeled under each worldview that we can individually update&#8212;oh no, of course they don&#8217;t. 
Instead, our brains hold clusters of personality traits and beliefs, conjured up from what we imagine the average member of a group to be.</p><p>What makes an anarchist tick? How young are they? Who is a conservative; do they like sports? Why? What does a leftist think the biggest evil in the world is? Most importantly: why do they all believe what they believe, and why do they all do what they do? What very real emotions led them to the politics they have today? How justified are these emotions? Are you swayed?</p><p>If you don&#8217;t understand the motivation behind political beliefs, if you <em>cannot comprehend the enemy</em>, then the only thing resting in your head is a field of strawmen.</p><p>&#8230;</p><p>I put out a post a week ago about how a group&#8217;s attitude doesn&#8217;t indicate how true its beliefs are: <a href="https://starlog.substack.com/p/just-because-theyre-annoying-doesnt?r=2bgctn">Just Because They&#8217;re Annoying, Doesn&#8217;t Mean They&#8217;re Wrong</a>.</p><p>Consider this the other side of that coin: the beliefs of a group are best understood in the context of the real person holding them. Don&#8217;t get me wrong, the actual truth of an argument is indeed completely separate from the person. But people are generally bad at articulating the real reasons they believe what they believe, so only by understanding the motivations of a person can you properly, truly dismiss their claims on their terms. How do you build this mental image of a person in the modern day? Well, you certainly don&#8217;t ask your coworkers for their diagnosis of the recent Trump bill unless you already know they agree with you politically, so of course you build your enemy from the internet and social media.</p><p>Social media sucks, and it especially sucks as a place to build a proper image of someone who believes X in your head. 
I&#8217;m probably the first person to ever criticize social media.</p><p>I don&#8217;t mean &#8220;the average person on social media sucks,&#8221; I mean &#8220;the system as a whole encourages the most low-brow, signaling content over any attempt at honest communication.&#8221; Some may even say <em>There&#8217;s No Intelligent Consumption Under Social Media</em>, so let&#8217;s diagnose what social media says about Insane Political Discourse.</p><p>Twitter maximizes engagement. Engagement is a punchy word that basically asks &#8220;how long can we keep this poor citizen&#8217;s eyes glued to our app.&#8221; As it turns out, when Twitter reaches into the almighty algorithm and pulls out the content that&#8217;s going to make you stay on the app the most, it&#8217;s probably going to be something that makes you angry. Anger is <a href="https://www.youtube.com/watch?v=rE3j_RHkqJc&amp;ab_channel=CGPGrey">the best human emotion for keeping the app in your mind</a>. Joy isn&#8217;t that far behind, for sure, but trust me that anger is up there. When it comes to politics, the thing that will make one angriest is seeing the most insane, obviously wrong political take imaginable. Importantly, at least some people need to actually believe it (this &#8220;strawman-believer&#8221; <a href="https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/">is actually technically called a &#8220;weak man&#8221;</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>), and it needs to be plausible. A <a href="https://slatestarcodex.com/2018/10/30/sort-by-controversial/">scissor statement</a> &#8212; maximized for disagreement. These insane takes get shown to the most people possible, and thus get individuals the most likes and clout. 
Twitter is not a place for safe, measured takes.</p><p>So you have the worst takes from each side, magnified so more people respond, sent to their political enemies, which poisons the well. Remember how I just said &#8220;some people need to actually believe it?&#8221; Actually, you just need to <em>believe</em> that some people actually believe it, so as your faith in the other political side drains, the algorithm sees the opportunity to show you even worse takes, knowing you&#8217;ll eat it up. It&#8217;s a vicious cycle, an endless loop. A perfect breeding ground for a field of strawmen. <em>This</em> is why I&#8217;m so insistent that you don&#8217;t fall victim to the trap of this <a href="https://www.smbc-comics.com/comic/aaaah">SMBC comic</a> here<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, lest you fall further than you think.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oJdf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oJdf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 424w, https://substackcdn.com/image/fetch/$s_!oJdf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 848w, 
https://substackcdn.com/image/fetch/$s_!oJdf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 1272w, https://substackcdn.com/image/fetch/$s_!oJdf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oJdf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png" width="754" height="1410" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1410,&quot;width&quot;:754,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:736336,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/167603758?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oJdf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 424w, 
https://substackcdn.com/image/fetch/$s_!oJdf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 848w, https://substackcdn.com/image/fetch/$s_!oJdf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 1272w, https://substackcdn.com/image/fetch/$s_!oJdf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe677802d-1fa3-4e0c-be42-07d8169bc19c_754x1410.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>In this light, it&#8217;s no wonder people overwhelmingly think their political enemies have gotten worse than they used to be. The algorithm isn&#8217;t testing your opponents&#8217; points, it&#8217;s testing what it can <em>get you to believe,</em> and the more your standards fall, the more incoherent and demonic a take it can show you and get you to bite. And if you&#8217;re fighting a demon, more dire action is justified, so I think the average political belief actually <em>does get worse</em> from this cycle, as discourse gets taken to Cloud Cuckoo Land.</p><p>&#8230;</p><p>So social media is a terrible, terrible place if you want to avoid holding false images of your political enemies and the worst parts of their beliefs in your head. If you&#8217;re looking for the truth, looking to refine the image of your enemy in your head, where should you go? How can you better understand the other side?</p><p>Well, the first bar is that you should assume they have a powerful emotional reason for doing what they&#8217;re doing &#8212; the world makes a lot more sense viewed through a lens of incompetence and selfishness rather than overt evil. Second, find communities which try to understand others in good faith. Third, while you can counter the poisoned well, make sure you don&#8217;t conflate the worst of Twitter with &#8220;the other side&#8221; in your head.</p><p>And now let me tell you the easy one, the one that&#8217;s helped me more than any other. <em>Don&#8217;t go to social media platforms that engage in this process.</em> Remember how I said earlier &#8220;No Intelligent Consumption Under Social Media&#8221;? Well, unlike the phrase I countered yesterday, you can actually just <em>not</em> engage with social media! 
And if you do engage, you can choose the good ones, and curate your feed there, because there&#8217;s a bunch of them!</p><p>Twitter is a lost cause unless you only subscribe to a specific niche (like sports) and never go on the For You page &#8212; even deep into a niche, the For You page occasionally shows me the most insane people on the internet. Reddit, by virtue of weighting downvotes and upvotes, is at least not going to show insane strawmen directly; it&#8217;s going to show articles disagreeing with insane strawmen. But it&#8217;s much more doable on Reddit to subscribe to the good subreddits and only go on Home, so I recommend it. Bluesky is probably the same as Reddit&#8217;s popular page, just by virtue of only having one political side and talking lots about politics without a commitment to good faith, so I&#8217;d avoid it.</p><p>I have lots of friends from different political sides in real life (this will link to my article encouraging making politically different friends when it&#8217;s out later this week), and this is the hardest one to do and the best option, because when I discuss politics I get the distinct impression that my friends are good and decent people who have real emotions and arguments guiding their beliefs.</p><p>Finally, I want to apologize for adding straw to the strawman in your head. 
Yesterday, I put out a post complaining about the phrase &#8220;<a href="https://starlog.substack.com/p/the-lefts-most-evil-phrase-no-ethical?r=2bgctn">There&#8217;s No Ethical Consumption Under Capitalism</a>.&#8221; In it, I complained about people on TikTok who use the phrase, and I also complained about leftists who engage in absolutist thinking and &#8220;slacktivism.&#8221; I stand by my critiques of both groups as true and good<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, but it doesn&#8217;t matter that I specifically, deliberately said the phrase is only misused by &#8220;some leftists&#8221; if the <em>image of a leftist in your head</em> lumps the people who misuse the phrase together with the far more common absolutist-thinking group &#8212; many leftists are acutely aware of all the harm they cause indirectly, and wouldn&#8217;t use the phrase in the main way I critique in the article. I should&#8217;ve made these critiques two separate articles. I said all the correct words about how it&#8217;s only some leftists, and it still wasn&#8217;t enough, because we don&#8217;t communicate with words; we communicate with the impressions we leave behind when you click off the article.</p><p>&#8230;</p><p>The images we hold of our enemies are precious and must be defended from the weak representatives that social media slings at us. Without understanding the enemy, without figuring out what makes them tick, you are lost in your search for truth just as if you charted an expedition with a compass that&#8217;s missing its needle.</p><p>Pave over the field of strawmen in your mind, and you may find that people are richer in color than you thought.
And the more cultivated the field, the better you&#8217;ll know where you belong on it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>(Liking is the best way to support me. Also, subscribe by clicking that button if you like kittens, or if you believe the endless cycle of demonizing our enemies will continue to get worse because of the nature of social media companies&#8217; incentives until political sides have to move to Mars and Venus to get away from the other side&#8217;s opinions.)</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>But you&#8217;re damn right that I will be using strawman for the rest of the article instead of weak man, because using the less cool phrase would like, cramp my vibe, man. Putting that in parentheses instead of a footnote hopefully communicates the semantic point I&#8217;m making for everyone, so they know what I&#8217;m talking about.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I cut off the bottom panel of the comic when I inserted it because it&#8217;s funnier without it.
Comic artists, please stop adding too much text; show don&#8217;t tell or whatever.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I received a good amount of criticism for yesterday&#8217;s post&#8217;s arguments, too. Some said I was unfair about how commonly the &#8220;justifying consuming blindly&#8221; interpretation is used, and that the phrase is primarily used in one of the two more generous interpretations. A fifth interpretation I didn&#8217;t mention was also proposed, where it&#8217;s mainly meant to be used as a <em>defense</em> against people who tell leftists, &#8220;oh, if you hate capitalism so much, why do you still buy from companies?&#8221; Some said it was just meant to signal that you&#8217;re a part of the left, and there&#8217;s not much deeper there.</p><p>I talk about conflating different types of leftists in the post; I think the fifth interpretation doesn&#8217;t really hold any water if all of the interpretations signal completely separate advice; and I think signals are most useful when true and good, so my distaste for the phrase is justified. Mostly I regret not broadening it to phrases like &#8220;defund the police&#8221; and how the content of words vs intent is very interesting in a <a href="https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/">motte-and-bailey way</a>, because I think that would be a fun post, and &#8220;defund the police&#8221; is way worse when it comes to what the words mean vs the stated interpretation. Read the motte-and-bailey post. I will link Scott Alexander until I die.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Left's Worst Phrase: "No Ethical Consumption Under Capitalism"]]></title><description><![CDATA[This phrase SUCKS guys.
I also complain about leftists in this one (I wasn't on substack when that was hot), but mostly about this specific phrase, because man is it really not good.]]></description><link>https://www.kylestar.net/p/the-lefts-most-evil-phrase-no-ethical</link><guid isPermaLink="false">https://www.kylestar.net/p/the-lefts-most-evil-phrase-no-ethical</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Sat, 05 Jul 2025 12:02:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2c4d55d6-33ad-43f8-84bc-b22211e2d16b_962x724.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The best way to support me is to like this post. Many people are saying you should, I hear. Thank you.</em></p><p>If you assume all companies are evil because some are, then there&#8217;s absolutely no incentive for any companies to become good, ever. If you assume everyone who uses AI is horrible without any examination, you lump together normal people doing their jobs in the most efficient way with scammers doing deepfake fraud. And worst of all, if you gaze into the horror of the world, see all the evil and suffering and tragedy and blood that surrounds us, and your reaction is &#8220;Welp, guess I don&#8217;t even have to try to do good then,&#8221; then I stand against you.</p><p>&#8230;</p><p>&#8220;No ethical consumption under capitalism&#8221; is a mess of a phrase that&#8217;s commonly used to absolve the individual from having to try to make good decisions, implying an abstract &#8220;revolution&#8221; is the only way. I think this attitude is bad, and doing good things is good. The phrase also has no fewer than four completely different potential meanings, which seem to encourage exactly opposite actions. That ambiguity is also bad and unhelpful.</p>
      <p>
          <a href="https://www.kylestar.net/p/the-lefts-most-evil-phrase-no-ethical">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Please Just Answer the Damn Moral Hypothetical]]></title><description><![CDATA[Trolley problems, would-you-rathers, drowning children, the superpower to create superpowers, and the fact that everyone's a goddamn politician when it comes to morality]]></description><link>https://www.kylestar.net/p/please-just-answer-the-goddamn-moral</link><guid isPermaLink="false">https://www.kylestar.net/p/please-just-answer-the-goddamn-moral</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Wed, 02 Jul 2025 17:01:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/19c47e5d-e135-4afa-bcd5-71c405173319_2452x1354.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>(If you enjoy this post, the best way to support me is to give it a like. Probably. I don&#8217;t actually understand Substack&#8217;s algorithm. I appreciate all the support, though)</em></p><p>When a politician is asked a direct, yes-or-no question, they rarely give a straight answer. They dodge, they weave, they rationalize, they distract. This isn&#8217;t because they&#8217;re stupid &#8212; far from it &#8212; but instead because straight answers could be used against them by their enemies in the future.</p><p>They&#8217;ll be interrogated about the implications, quoted out of context, and most importantly, the question will box in the types of things they can say and still look like a reasonable, internally consistent person. This is an inevitable part of politics and of having to appeal to as many people as possible. I&#8217;ve made my peace with it. But I&#8217;d like to imagine most people know these practices divorce us from the truth and what we truly believe. Most people don&#8217;t think of themselves as politicians. 
So imagine my surprise that whenever I ask a moral hypothetical, most people default to these exact practices instead of just giving a damn answer, even when they have an instinctive, visceral side already chosen. Worse &#8212; they don&#8217;t just use these tactics against me in argument; they seem to really believe in using them against themselves, too.</p><p>&#8230;</p><p>Peter Singer&#8217;s famous drowning child hypothetical is this: You see a random child drowning in a river on your way to work, but you&#8217;re wearing a $3,000 suit. Do you save the drowning child even if you don&#8217;t have enough time to get the $3,000 suit off, ruining it?</p><p>This is a self-contained hypothetical question. Only then, after posing it, does Peter Singer make his argument: if you decide to save the kid, then don&#8217;t you have a moral obligation to donate $3,000 to charity to save a life, right now? <a href="https://www.givewell.org/charities/amf#:~:text=Against%20Malaria%20Foundation%20(AMF)%20provides,distribute%20ITNs%20in%20mass%20campaigns.">Because that&#8217;s something you can do</a>! Saving a human life is relatively cheap, because many people in Africa die of malaria, which is very preventable.</p><p>There are lots of places for disagreement in Singer&#8217;s argument. Perhaps you believe in a stronger version of <a href="https://slatestarcodex.com/2013/05/17/newtonian-ethics/">moral gravity</a>, where we have moral obligations based on our communities first and foremost. You could be religious, and believe that God wants us above all to exemplify the virtues in the Bible, and that the child is a moral test for you, but the children in Africa are tests for someone else. Maybe you reject that a stranger should save the child anyway, and don&#8217;t find any fault with ignoring a drowning child if it would incur heavy financial losses ($3,000 ain&#8217;t nothing!).
It&#8217;s possible you don&#8217;t care about people in Africa at all, and would rather protect more productive parts of the global economy with more educated people. Even believing that no charities do any good and that they&#8217;re all just money laundering for evil people is an argument that you can actually have, and debate the facts on.</p><p>I disagree with all of these pretty strongly, but they&#8217;re all arguments. Good discussions can be had here. The worst objections, however, use a politician&#8217;s trickery to avoid answering the normal would-you-rather moral question, before any arguments are even made at all.</p><p>As I said in my article defending rationalists, the reason moral hypotheticals are good is that they ask: <em>What sort of things do you prioritize</em>? If a politician is asked &#8220;Would you vote to overturn Roe v. Wade if given the chance?&#8221;, I would bet people could acknowledge the usefulness of the would-you-rather. </p><p>If your answer to &#8220;would you rather have snakes for arms or snakes for legs&#8221; is &#8220;neither, to be honest,&#8221; you&#8217;re being annoying. If your answer to &#8220;what superpower would you have&#8221; is &#8220;the superpower to create superpowers,&#8221; you&#8217;re not being clever; you&#8217;re avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say &#8220;well, um, technically the rules state <em>any</em> superpower,&#8221; then change the rules yourself so you can&#8217;t choose the thing that&#8217;s the most boring, obviously unintended, easily-avoided-if-the-question-is-just-phrased-a-different-way option. Choose! Pull the lever to kill one person instead of five, or not! What are you so afraid of? Learning about yourself?
Don&#8217;t you understand you&#8217;re a politician trying to avoid pinning your morality down so you can always feel like a good person who made the right choice, no matter what?</p><p>By the way, my answers to those are snakes for legs, the superpower to save and reload checkpoints in my life at will like a video game, and pull the lever, in that order.</p><p>Now is the point where I planned to go one by one over a few objections, but I found <a href="https://jessieewesmont.substack.com/p/taking-thought-experiments-seriously">this post</a> by <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Jessie Ewesmont&quot;,&quot;id&quot;:317021542,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ccff9a86-a1df-4e3e-8f30-9ee9af61730c_689x689.jpeg&quot;,&quot;uuid&quot;:&quot;69abc728-b103-4780-a473-1070d20d4b48&quot;}" data-component-name="MentionToDOM"></span> which tragically is pretty much exactly what I was going to write at the very end (she even stole the image off my post!). Read the post; it&#8217;s great. It covers the objections &#8220;but why are the thought experiments so unrealistic!&#8221; (they&#8217;re isolating one moral instinct, like in a lab), &#8220;I would simply save everyone on the track in the trolley problem!&#8221; (it&#8217;s not a gotcha to make you prioritize one thing in morality over another. Plus, d<a href="https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world">on&#8217;t try to find loopholes that conveniently allow you to not make an actual choice</a>), and &#8220;I don&#8217;t understand hypotheticals at all!&#8221; (you&#8217;re sadly just dumb). I&#8217;ll add that you shouldn&#8217;t add third elements, like &#8220;I wouldn&#8217;t save a drowning child; he could be Hitler&#8221; unless you literally, actually wouldn&#8217;t save a real drowning child in real life. 
<a href="https://www.lesswrong.com/posts/s9hTXtAPn2ZEAWutr/please-don-t-fight-the-hypothetical">This is not a writing exercise</a>, this is about what you really, actually believe. Anyway, if anyone in the comments counters this post with something she addressed directly in her article without acknowledging that, I will knock your grade down a letter in your report card, and keep you after class.</p><p>Look, yesterday <a href="https://starlog.substack.com/p/capitalism-effective-altruism-and?r=2bgctn">I put out a post about how the moral weight of the decisions we make is deliberately hidden</a>, and this is why I&#8217;m so insistent that you need to answer hypotheticals, without finding a clever third argument or loophole (read the article btw, it&#8217;s great of course). If you accept that pressing a button to inflict pain on a kid or animal is wrong, and pressing a button to inflict pain on a kid or chicken who is 100 miles away is just as wrong, even though it&#8217;s further away, then you need to grapple with the fact that factory farming and companies that benefit from child labor are only able to convince you to buy their products by deliberately separating you from the moral weight of you buying a cheeseburger or chocolate bar or whatever. Just because you don&#8217;t have to see the chicken who was killed for your food doesn&#8217;t mean that it doesn&#8217;t have the same moral weight. </p><p>Even effective altruists believe that there&#8217;s something different between the drowning in the hypothetical and giving to charity. 
My Substack friend <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Joe James&quot;,&quot;id&quot;:1726744,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee6aeba9-bb52-4269-9e1c-0f629e66765e_2351x2351.jpeg&quot;,&quot;uuid&quot;:&quot;f64514ca-95d6-40e2-966d-e22fe29544f1&quot;}" data-component-name="MentionToDOM"></span> isn&#8217;t even an EA, but he makes the excellent point in one of his notes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> that we have to grapple with the difference as much as non-EAs do. Scott Alexander tries to grapple with what exactly constitutes a good and moral person by isolating his moral beliefs and the obligations a human can find, in the fantastic post <a href="https://www.astralcodexten.com/p/more-drowning-children/comments#comment-102299245">More Drowning Children</a>. But half the comments are somehow people who object to the idea of asking <em>questions</em> to find your morality. It&#8217;s vital to accept and analyze what makes different cases different AFTER choosing your options, lest you be a bundle of contradictions whose moral beliefs are just &#8220;nah, I&#8217;ll do whatever feels right, man&#8221; &#8212; and lest you fail to realize that your actions are causing immense harm you can&#8217;t see, carried out by the industries doing the worst things on this planet right now, precisely because of thoughts like this.</p><p>If you refuse to answer a hypothetical so as not to be boxed in morally and quote-mined, you&#8217;re a politician. I implore you to at least know what you <em>really believe</em> in your head if you do this.
And if you refuse to quantify your morality at all, telling yourself &#8220;I will never have to make any moral decisions that aren&#8217;t right in front of me right now,&#8221; you&#8217;ll never know what moral trade-offs you&#8217;re willing to make, and you&#8217;ll never know what&#8217;s worth sacrificing and what&#8217;s not. People think they can dodge the consequences of their moral actions by not acknowledging that they&#8217;re there. Inaction is a choice. You are not spared from choosing something every second you&#8217;re alive.</p><p>Just make a choice, and make that choice snakes for legs.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>*youtuber voice from 2015*</em> &#8220;Make sure to like, comment, and subscribe by pressing that big orange button!!!&#8221;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>He also made an <a href="https://joerjames3.substack.com/p/the-corrective-action-problem-of">interesting post countering EAs</a> like an hour ago!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Cheap Meat Relies On Moral Atrocities Being Hidden From Us]]></title><description><![CDATA[How the true moral cost of the actions we take is hidden from us deliberately, and what to do about it.]]></description><link>https://www.kylestar.net/p/capitalism-effective-altruism-and</link><guid isPermaLink="false">https://www.kylestar.net/p/capitalism-effective-altruism-and</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Tue, 01 Jul 2025 14:05:34 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/cbb99bc7-017e-41bd-9ce7-8521b15a501d_1460x1306.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>If you enjoy this post, liking it is the best way to support me.</em></p><div><hr></div><p>What does it take to build a civilization that rules the world? How many children, animals, and countries have to be sacrificed to the God of Progress before we decide to stop?</p><p>This is not an anti-capitalist post. The innovation that capitalism incentivizes has created so many technologies that really do make the world an unambiguously better place for all, such as vaccines, Google, and Edison&#8217;s famous lightbulb. The share of humans in extreme poverty has gone down from 36% in 1990 to 9% today. Unbounded technology is a boon for the entire human race, and it makes everyone&#8217;s lives better. But I want to focus on this: the incentives of unchecked markets cause atrocities like factory farming to be separated and hidden from us as thoroughly as physically possible, to confuse our intrinsically altruistic and caring moral sense into not realizing what we&#8217;re really doing, for we would surely object.</p><p>What can a single person do?</p><p>&#8230;</p><p>Some of my friends like to say capitalism is bad, and they have some good points. But I think their best points are not complaining about their desk job, or the price of the Nintendo Switch 2. Because <em>the average American is the main beneficiary of market forces</em>. Companies have to appeal to the consumer, to Joe Average in middle America, to convince him to spend his money on their product in strict competition, so they have to make their product the cheapest and best. This is all great for Joe, but so much is sacrificed in the margins.</p><p>Modern society is built on separating you from the tower of cruelty used to create much of what you love in your life.
Teslas, powered by batteries whose <a href="https://www.humanium.org/en/the-current-state-of-child-labour-in-cobalt-mines-in-the-democratic-republic-of-the-congo/">cobalt is dug by children as young as seven</a> in Congolese mines. A chocolate bar, with cocoa <a href="https://www.dol.gov/agencies/ilab/our-work/child-forced-labor-trafficking/child-labor-cocoa?utm_source=chatgpt.com">harvested by more than 1.5 million children</a> working on West African plantations. And, worst of all in sheer scale, a breaded chicken wing, separated from the animal whose beak was cut off and which lived its entire life standing beak-to-feather with other chickens in its own feces. </p><p>Let me be clear: If you&#8217;re homeless or unemployed or suffering, I&#8217;m not asking you to &#8220;be grateful for what you have.&#8221; There are people whose lives are bad in America because of circumstances out of their control; America is certainly not a utopia. But if you have a place to live and a job in America, you have a range of options and material goods unmatched by the conscious experience of any being in all of Earth&#8217;s short history.
To help illustrate this, here are some things that an average American has that 99.9999999999999999% of all conscious beings (correct number of 9s<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>) in all of existence have never had:</p><ul><li><p>Unlimited clean water</p></li><li><p>Curated food selections</p></li><li><p>Seasonings</p></li><li><p>A soft place to sleep, safely</p></li><li><p>Climate controlled rooms</p></li><li><p>Toilets</p></li><li><p>Hot showers</p></li><li><p>Mass produced, affordable clothing</p></li><li><p>Curated TV, movies, and books</p></li><li><p>A GLOBAL INFORMATION NETWORK WITH ALL THE KNOWLEDGE IN THEIR SPECIES&#8217; HISTORY, BEAMED AT THE SPEED OF LIGHT TO ANY OTHER HUMAN ON EARTH</p></li></ul><p>Anyway, I don&#8217;t blame middle class Americans for not realizing how good they have it. You can&#8217;t compare your experience to someone who you&#8217;re&#8230; uh, not. In the modern day, there&#8217;s an epidemic of people who feel depressed, anxious, and lack meaning in life. These people are not &#8220;faking.&#8221; They&#8217;re miserable, aimless, unmotivated, and angry at the world and the systems they grew up in. The desk-job, a critique of capitalism I mocked a couple of paragraphs ago, really <em>can</em> be soul-crushing, even if it&#8217;s far better than many alternatives. </p><p>Just because you technically have access to lots of things, <em>stuff,</em> doesn&#8217;t mean that fulfillment in life is easy, and social media and the status games relentlessly shit on them and show them an endless onslaught of people more successful than them. Some of this is also due to the hunt for profit, and some is not. Either way, evolution generally frowned on people who felt completely satisfied and fulfilled in all their pursuits, and didn&#8217;t try to make their position in life better. 
The people who were always paranoid, searching for more potential threats, won the day back when we had to worry about bears eating us or whatever, I&#8217;m sure. But this endless craving for more has nowhere to stop. The sheer number of Americans dissatisfied with their lives is an important and interesting enough point that I&#8217;m dedicating a whole post to it, releasing soon.</p><p>But there&#8217;s a lack of imagination about how much worse it could be. Compare the average American to those in impoverished countries, or those in warring countries, or animals in the wild, or factory farmed animals, or pretty much any conscious being throughout all of history, and ask them to choose, and the American life is better every time. Yet there are <a href="https://www.reddit.com/r/antiwork/">subreddits that feel the idea of working at all is too much of an indignity to bear</a>, and whose members are miserable having to spend 40 hours a week at an air-conditioned desk. It&#8217;s so difficult to understand the sacrifices made halfway across the world to provide you with the life you lead, because the sacrifices are in the shadows, deliberately hidden by those who wish for profits. </p><p>I stand with those who want to make lives better for those who are the least fortunate on this beautiful, blue marble we call home!</p><p>That said, I truly believe almost every person is trying to do good in the world. So maybe that proclamation I just made of who I stand with <em>doesn&#8217;t actually say very much at all</em>. In fact, this worldview, where the bad stuff is pushed as far away from the consumer as possible, doesn&#8217;t make sense if people don&#8217;t care about doing good. Why not slaughter the chickens at the table, right in front of everyone? Why wouldn&#8217;t a company actively brag that it got the cheapest labor: the prices are cheap! Why not get rid of all the charities that already exist? No.
Of course people prefer to buy &#8220;free-range&#8221; chicken over regular if they cost the same. But they don&#8217;t cost the same, do they? And instead of knowing what free range means, people mostly have an image in their head. </p><p>This is why I&#8217;m no libertarian. It feels like one of the best ways to solve this issue is to have people who care in government. Of course, they first need an incentive to do so, and more people to recognize the issue. The difficulties with getting good people into government and guiding the public to the biggest issues in society could be a whole &#8216;nother post, but having a government that does nothing and letting the market run wild, relying on the labeling and boycotts that already don&#8217;t work today, seems like the worst way to go about it. Leaded gasoline is an easy success story: once regulators put a cap on it, companies phased it out worldwide in &lt;15 years. We can use this playbook on factory farming, if we so choose. But it comes in the form of stopping the market&#8217;s natural course.</p><p>People say, &#8220;each individual can&#8217;t make a difference, every company is inhumane and terrible.&#8221; Firstly, I&#8217;ll say it&#8217;s a crucial error to put all companies in the same category. If you assume all companies are evil because some are, there&#8217;s no incentive for any company to become good, because people won&#8217;t care! But secondly, I worry they&#8217;re ignoring the truth that it works the other way around, too. The <em>companies</em> are the ones most beholden to this race to the bottom, in which having the coveted &#8220;cheapest price&#8221; matters so much<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.
If people aren&#8217;t willing to confront the <a href="https://the-ethos.co/is-temu-ethical/">chain that leads TEMU products to their door</a>, instead thinking &#8220;wow, so much cheaper!&#8221; then the altruistic companies can&#8217;t find any footing, and they&#8217;ll fail. <em>Some</em> company is gonna pursue whatever could make them rich. I&#8217;m not saying that companies that use child labor and animal cruelty aren&#8217;t horrible. They&#8217;re obviously more horrible in what they&#8217;ve done than almost anyone else. The people making the actual child-working, animal-torturing decisions in their companies are more evil than any random Joe could be. What I am saying is that these people making the decisions are just as separated from the actual morality as you are. They just have to have a nice meeting in an air-conditioned office to &#8220;expand their reach&#8221; and decide to open another factory farm where chickens are packed like sardines. And it&#8217;s trivially true that their decisions create more evil than yours could, because the more power you have, the more you can affect the world with your decisions, good or bad.
I heard some obscure movie quote once, &#8220;With great power comes great responsibility.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HOm0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe86c58e7-5356-41f8-a2cc-527310870279_1710x1146.png"><img src="https://substackcdn.com/image/fetch/$s_!HOm0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe86c58e7-5356-41f8-a2cc-527310870279_1710x1146.png" width="1456" height="976" class="sizing-normal" alt="" loading="lazy"></a><figcaption class="image-caption">I couldn&#8217;t find one with less pixels, oops. Or I guess what I&#8217;m really looking for is technically MORE pixels, though that&#8217;s the opposite of how it&#8217;s commonly used. Weird.</figcaption></figure></div><p>I&#8217;m asked why I care so much about moral hypotheticals. It&#8217;s indeed because the system is designed to separate you from the morality of the actions you take as much as possible. As one character says in the fantastic show you should watch <em>The Good Place</em>, &#8220;Humans think that they&#8217;re making one choice, but they&#8217;re actually making dozens of choices they don&#8217;t even know they&#8217;re making.&#8221; If you had to watch the entire process and life of the genuinely mouthwatering chicken that reaches your plate, you&#8217;d be less than hungry. 
The act of consciously not confronting the evil in the banal decisions we make every day, leaving your morality unexamined, is what forces companies not to care in their race to give you the cheapest price.</p><p>Importantly, some of the evils in these banal decisions are worse than others, and it&#8217;s necessary to follow the chain all the way back to see how bad each company is individually. If you care more about how brands position themselves politically than about the actual harm their supply chains cause, you&#8217;ll end up in a situation where the <a href="https://www.nytimes.com/2017/01/31/business/delete-uber.html">most</a> <a href="https://en.wikipedia.org/wiki/Bud_Light_boycott">successful</a> <a href="https://www.npr.org/sections/goatsandsoda/2018/12/01/671891818/dolce-gabbana-ad-with-chopsticks-provokes-public-outrage-in-china">boycotts</a> are unrelated to the companies&#8217; real practices. Their advertising departments messed up, yet no one cares about the harm the actual products create. I&#8217;ve seen backlash against Nestl&#233; online, which is great, because they and the other big chocolate brands, as I said before, have over 1 million children doing farm work for pennies. Don&#8217;t misunderstand me: the social issues that have captured the hearts and minds of America are important. But in terms of scale of cruelty compared to something like Nestl&#233;, it&#8217;s not even close. There&#8217;s a prioritization problem in the human brain here. 
And honestly, I look at the <a href="https://ourworldindata.org/how-many-animals-are-factory-farmed">billions of animals</a> suffering in the dark, one-foot-wide areas that they&#8217;ll live their entire lives in before being killed, and I wonder how cruelty on this scale is possible. Of course, I&#8217;ve never even seen a live broiler chicken. That&#8217;s the whole game. Statistics on a screen.</p><p>I have seen a normal farm, though. My aunt owns one, along with some cows, chickens, pigs, and horses. It&#8217;s a nice farm. The animals are well fed, have enough space to move around in, and the chickens&#8217; beaks are not cut off. If this were what factory farming was like, and I think this is the image that gets conjured up for most people, I wouldn&#8217;t blame anyone for not feeling angry. I&#8217;m interested in the suffering of conscious beings during their lives, and these animals seem to live relatively good ones. My thoughts here are that people really do care about animals, like dogs, and wouldn&#8217;t want them to suffer needlessly as a result of their decisions. If polled, <a href="https://awionline.org/sites/default/files/uploads/documents/awi-humanely-raised-claim-survery-factsheet.pdf">people</a> <a href="https://faunalytics.org/public-support-for-animal-protection-in-the-united-states/">condemn</a> <a href="https://www.filesforprogress.org/reports/DFP_FF_MI_Polluters_Report.pdf">factory</a> <a href="https://www.aspca.org/sites/default/files/2023_industrial_ag_survey_results_report_052523_1.pdf">farming</a>. But the process has abstracted away so much of the harm that it&#8217;s hard to care about learning the exact brands you should buy when you have your own life to wrangle under control. 
Factory farming remains evil.</p><p>&#8230;</p><p>So, where do we go from here? I implied earlier that the individual can make a difference; so what should the individual do? This is the part where I probably should say, &#8220;restack this post to every human on earth, and we can stop all the bad in the world and rejoice in our shared altruistic intentions!!!1!&#8221; but I&#8217;m not that naive. Companies have had a long time to painstakingly, efficiently separate the intrinsic moral sense we have within us from the actual negative impact of our actions, all in search of profit. I&#8217;m not some immune paragon who &#8220;sees the world like it really is&#8221; and feels a huge amount of sympathy upon seeing big numbers indicating large amounts of suffering. No &#8212; I arrived at this conclusion by playing around with my values and what I prioritize, not through the instinctive moral sense in my head, because humans are unmoved by spreadsheets on a screen and are easily moved by horrible things happening in front of them. The fact that we&#8217;re wired to care about what we see instead of what we can&#8217;t, even when they have the same moral weight, is so obvious from an evolutionary perspective that I&#8217;ll leave it at that. It&#8217;s probably better not to be moved by the scale of suffering represented in the data we have, though, because living a happy life makes one more agentic and is intrinsically good, and the scope of the world&#8217;s suffering is too much for anyone to bear. Much has been said about the leftist who has taken all the suffering in the world on their shoulders and been rendered useless.</p><p>I guess I do strongly believe that this prioritization game, where we try to figure out which companies and industries cause the most suffering, needs to be more of a talking point, at least in smart circles of people who make decisions. 
It&#8217;s like trying to make altruism, and being a good person, more effective&#8230; hm&#8230; Alright, you got me, I believe the Effective Altruism movement does a great job of trying to figure out which things are worse than other things, and they&#8217;ve identified this &#8220;what do we prioritize&#8221; issue and made it their brand. They&#8217;ve recognized that charity is actually super efficient compared to the other forms of improving the world, and are trying to find out what to <a href="https://www.givewell.org/">prioritize there, too</a>. Remember that boycott problem I mentioned? Interestingly, a dollar routed through a proven charity gives around 10 times more money to the people you&#8217;re trying to help than a dollar given to a different company in a boycott<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Setting a $25 donation to repeat on a <a href="https://www.givewell.org/">pre-vetted, quality charity</a> genuinely feels like doing nothing at all but helps more than most of the other moral decisions you make. Work smarter, not harder. This dedication to having good actions go further is something I admire. The more influence the movement gets, and the more people who come away from it recognizing that they want to help as many people as they can, the better we all can be. They have lots of thoughts on the best ways for an individual to influence the world to make it a better place, and I think their discussion and community are always nice and interesting. 
I encourage looking at their <a href="https://www.effectivealtruism.org/">website</a> or <a href="https://forum.effectivealtruism.org/">forum</a>. Most of all I think they&#8217;re correct, and if more people were correct, that would be nice.</p><p>When I look around the world, I want it to be good. I know others want the same thing. Companies have spent a lot of time making us feel good while shoving all the bad stuff they do out of sight. I don&#8217;t know everything, and I don&#8217;t have all the solutions, but I know I want to make the world a better place in an effective way.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>If you enjoyed this post<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> or think it&#8217;s important, the best way to support me is to like it. I&#8217;d also appreciate it if you subscribed using this big button &#8212; if you&#8217;re interested in seeing more posts like this it&#8217;s the best way for me to support <em>you</em>. I get to deliver delicious content right into your mouth. 
If you&#8217;re not sure whether you want to subscribe off of one post, read some other good posts of mine <a href="https://starlog.substack.com/p/just-because-theyre-annoying-doesnt">here</a> and <a href="https://starlog.substack.com/p/why-im-not-a-rationalist-is-a-bad">here</a> and decide then.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Estimates based on the best guesses for the total number of conscious beings who have ever lived on Earth.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Alright, because I didn&#8217;t want to mention it in the article, I&#8217;ll say it 5 times here instead. Yes, it&#8217;s my favorite blog post of all time. Yes, I was inspired by it for this piece. Moloch, Moloch, Moloch, Moloch, Moloch.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Check the comments for more precise numbers and the studies I examine. 
This is broadly true, given that a high percentage of money given to GiveWell charities actually reaches efforts against factory farming or child labor, while in a boycott there&#8217;s lots of leakage in supply chains that keeps money from reaching the actual farmers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Also, note, I changed the title of the article and the first paragraph because the original one didn&#8217;t fit into Substack&#8217;s little box, which seemed important.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Just Because They’re Annoying Doesn’t Mean They’re Wrong]]></title><description><![CDATA[Woke, Redpilled, Vegan, Rationalist, Socialist, Communist, Reactionary, Neoliberal, Conservative, Progressive, Effective Altruist, Libertarian, Anarchist, Centrist, Stoic, Accelerationist, Nihilist.]]></description><link>https://www.kylestar.net/p/just-because-theyre-annoying-doesnt</link><guid isPermaLink="false">https://www.kylestar.net/p/just-because-theyre-annoying-doesnt</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Fri, 27 Jun 2025 15:27:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cf37706c-3b72-4951-84f1-4fbb268e5d09_1374x1202.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>If you enjoy this post, the best way to support me is to like it. Thank you for your support.</em></p><div><hr></div><p>There was a girl in my friend group in college who was <em>very</em> annoying. She didn&#8217;t understand social cues, was relatively neurotic, and talked shit to anyone who was even slightly different politically than her, never assuming anything but ill intent. But worst of all, she was <strong>vegan</strong> &#8212; an extremely stereotypically butthurt vegan, like a caricature you&#8217;d see on Twitter. 
Out of the blue, she would talk about how I enjoyed murdering babies. She commented on a post I made about a Doritos and chicken taco saying I &#8220;loved animal rape&#8221;.</p><p>Well, I&#8217;m the type of person who went to church for a year with some of my diehard Christian friends despite being agnostic at the time to try to understand how Christians think. I think the vast majority of people are trying to do good in the world. Given these differences, you can imagine how this vegan girl and I got along. For a long time, she was the only vegan I knew.</p><p>&#8230;</p><p>Only one religion (or the lack thereof) can be correct. The biggest religion in the world is Christianity, with 29% of the world population. So, at minimum, 71% of the entire world is completely wrong about their most fundamental belief. In politics, there are many different factions. Progressives, liberals, conservatives, socialists, communists, alt-right, and more. If these are all competing ideas of government, and one is best, most of these factions are completely wrong. How can it be that so many banners that people rally behind must be incorrect? The answer is that correctness isn&#8217;t what&#8217;s being optimized for here&#8230;</p><p>Yesterday I put out a <a href="https://starlog.substack.com/p/why-im-not-a-rationalist-is-a-bad">rebuttal to a post titled &#8220;Why I&#8217;m Not A Rationalist&#8221;</a>. In it, I talked about how the post was lacking in something important: arguments. It pointed at a stereotypical rationalist and laughed, and it only made implicit arguments that the things rationalists believe are ridiculous. I now want to zoom out and consider the more general question: what are the implications of a belief becoming a tribe? And I also want to consider how this, indeed, ends up muddying the truth.</p><p>Look, humans are tribal creatures. We care about status and being <em>seen</em> as smart, hot, put together, empathetic, or whatever else. 
We sometimes care about this social acceptance over logical consideration of the beliefs we hold. This makes sense! For most of humanity&#8217;s existence, we knew a very small number of people, and so whatever beliefs we had were inherited from a small pool. There wasn&#8217;t much debate about whether the Sky God or the Star God ruled the heavens; you just accepted whatever you were told, and you certainly didn&#8217;t have a reason to disagree and start a big thing of it.</p><p>You might assume that the age of the internet would allow us to clear up these tribal misunderstandings, deferring only to the greatest arguments, but it is not so. Nowadays, everyone is pressured to have an opinion on everything. Just because I am not a political scientist does not mean I&#8217;m spared from needing to have an opinion on the Middle East in casual conversation. So it&#8217;s only natural that we decide to choose the beliefs of people we trust who are smart, compassionate, and good. This isn&#8217;t a bad thing! I&#8217;d argue that deferring to the beliefs of people we trust on issues we don&#8217;t know well is great practice! The only issue, of course, is that everybody thinks their group is the smart and correct one, so for any random individual, not looking at the arguments can be fatal. The more non-experts influence the discourse, the muddier the water gets.</p><p>This isn&#8217;t some weak plea for centrism, or always giving both sides a chance. On many issues, <a href="https://substack.com/@starlog/p-166684398">one side is correct</a>, and obviously so. But the internet has given us access to the deepest opinions of everyone in the world. 
And because people need to have opinions on everything from politics to celebrities, and they certainly don&#8217;t have time to dive into the arguments, suddenly every single argument turns into &#8220;who can portray their enemies as the Soyjak and themselves as the Chad.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MgLA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d51f32f-e818-4a45-9c87-079aea37f2b9_1808x1228.png"><img src="https://substackcdn.com/image/fetch/$s_!MgLA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d51f32f-e818-4a45-9c87-079aea37f2b9_1808x1228.png" width="1456" height="989" class="sizing-normal" alt="" loading="lazy"></a></figure></div><p>Resist! Don&#8217;t let this comic I made make you associate truth seeking with the soyjack! You&#8217;re playing directly into the hands of the positioning of your beliefs in a tribal context mattering more than actual arguments! That actually makes you the soyjak, I swear! 
Quick, view this antidote!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RFun!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc31a8c6-3fb6-481e-9c01-d0076d9df8fd_1960x1336.png"><img src="https://substackcdn.com/image/fetch/$s_!RFun!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc31a8c6-3fb6-481e-9c01-d0076d9df8fd_1960x1336.png" width="1456" height="992" class="sizing-normal" alt="" loading="lazy"></a></figure></div><p>Alright, you&#8217;re back. Good to have you.</p><p>Anyway, what are we doing here? Progressive politics values siding with those who need the most help in society, so suddenly everything is a race to position yourself as the most oppressed group, to ridiculous extremes. Rationalists value being smart, so indeed, some try to use as big of words as possible to try to seem smarter than the rest. 
The anti-woke Republicans that <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Richard Hanania&quot;,&quot;id&quot;:6319739,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de4c8df-7f9c-4bca-901c-53a83a3e97eb_2736x1824.jpeg&quot;,&quot;uuid&quot;:&quot;97449223-dabd-4c82-8b4d-a1febe9329fa&quot;}" data-component-name="MentionToDOM"></span> <a href="https://www.richardhanania.com/p/the-based-ritual">talks about here</a> support the policies that are most likely to piss off those they hate, seemingly without regard for whether the policies are any good for themselves. Any whining about how they&#8217;re making lives worse is met with glee, for that&#8217;s the reaction they were gunning for the whole time. Different banners to rally behind, same incentive: status first, arguments later.</p><p>Identification with a group becomes a ritual. In an ideal world, you&#8217;d identify with &#8220;rationalist&#8221; or &#8220;communist&#8221; or &#8220;Rick and Morty fan&#8221; or &#8220;Metalhead&#8221; based on how much you agreed with the beliefs of that group. But through this curation of a community, saying you&#8217;re a communist represents a lot more than just your beliefs about the optimal form of government, and identifying as a Rick and Morty fan carries a lot more baggage than just liking a random show. Everything you identify as points towards the groups, not the facts. So people say they &#8220;watch Rick and Morty, but wouldn&#8217;t consider themselves a fan&#8221; even if this is a lie.</p><p>This doesn&#8217;t just ruin groups; it also taints individual points. You can support trans people vehemently and still believe that trans women shouldn&#8217;t be in women&#8217;s sports. 
But if the issue then becomes a positioning game where the only people arguing for your point are transphobes, then suddenly you <em>really can&#8217;t</em> say so, because the <em>belief</em> has suddenly become the <em>group</em>. And if the transphobes notice this is their best issue, amplify it to all hell, and use their supposed reasonableness from this point to then take away more rights from trans people, well, then you&#8217;re <em>really, really screwed</em>. I could talk about how this leads to echo chambers here, but I think it&#8217;s pretty obvious.</p><p>There are actually multiple layers to this status game. &#8220;Rationalist&#8221; is a word that could be used in a sentence to just mean &#8220;I use rationality to find my points&#8221;. But to <em>identify</em> as a rationalist is to align yourself mostly with utilitarianism, the potential of AI, and good faith arguments. Those are the values I <em>wish</em> defined identifying as a rationalist. Then there&#8217;s a third layer: the stereotype of what an average person in the group is like, such as the polyamorous computer nerd. This is the step that I wish we would carve out of our society, where all the bad faith and positioning games on the internet lie. Characteristics of the group, what it means to be a member, eventually swarm and override the beliefs in our social, status-seeking brains.</p><p>This carving out, of course, is an impossible ask. And honestly, it&#8217;s not a bad thing that similar people end up joining similar groups. The computer nerds deserve to find each other and become friends, just as the patriotic gun lovers deserve to find each other and talk about different kinds of weapons. But if their group identity then rallies behind rationality and conservatism respectively, consider just how divorced this process is from the truth of the banner they rally behind. 
There&#8217;s distance between the process and the substance of what they believe every step of the way.</p><p>I have a lot of IRL friends, and the most put-together people I know in my whole life, besides myself (I&#8217;m not very humble), are in fact the aforementioned diehard Christians! They&#8217;re consistently happier, more fulfilled, and more agentic than my other friends who are more in line with my politics and beliefs. If I wanted to position myself to my friends as the most put-together person I could be, then I would identify with Christianity.</p><p>But I don&#8217;t think Christianity is true! I want to discern the truth for myself as best I can, as untainted as I can be from the ideas of the people who I think are the coolest. I want to learn, and be correct! If you succumb to this status game, applause replaces argument, and sneering replaces scrutiny.</p><p>Truth loses.</p><p>&#8230;</p><p>I&#8217;ve been trying to become more of a vegan over the past couple of months, because I&#8217;m now convinced by the arguments for why factory farming is really, really bad. I&#8217;ve also become an effective altruist. When I told some of my friends this, they said &#8220;isn&#8217;t that what Sam Bankman-Fried was?&#8221; That&#8217;s the ring I&#8217;ve thrown my hat into, and I&#8217;m alright with that, because I believe in the substance of the cause.</p><p>As for that vegan friend, I texted her after a few years to tell her I was trying to become vegan by first starting to eat less chicken. Her reply: I suck because I&#8217;m starting with chickens over cows, and cows contribute &#8220;astronomically&#8221; to global warming. Apparently, she&#8217;s still annoying. But I&#8217;m not embarrassed to share the label. 
Veganism&#8217;s substance is one I believe in, and I&#8217;m going to try my best to proudly stand under the banner with the best arguments.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p><em>Uh, I know the top said the best way to support me is to like the post, and that&#8217;s definitely true, so if you like this post because you enjoyed, I&#8217;d really appreciate it. Buut here&#8217;s a subscribe button I&#8217;m just gonna put here. Just for no reason. Alright, genuinely though, thank you for all the support.</em></p>]]></content:encoded></item><item><title><![CDATA["Why I'm Not A Rationalist" is a Bad Article]]></title><description><![CDATA[I guess I think arguments are more persuasive than vibes]]></description><link>https://www.kylestar.net/p/why-im-not-a-rationalist-is-a-bad</link><guid isPermaLink="false">https://www.kylestar.net/p/why-im-not-a-rationalist-is-a-bad</guid><dc:creator><![CDATA[Kyle Star]]></dc:creator><pubDate>Thu, 26 Jun 2025 10:35:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/06ad7f94-b700-46db-ad67-02656c06b0f5_906x784.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m on the side of truth. 
If someone posts an essay titled &#8220;Why I&#8217;m not a socialist,&#8221; and spends the whole article talking about how socialists annoy them, without ever engaging the substance of what socialists believe, then they&#8217;re conceding the truth to people who care enough to argue for substance.</p><p>I will now list all of the points made in the popular <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Brackish Waters, Barren Soil&quot;,&quot;id&quot;:3301302,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/brackishwatersbarrensoil&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e933ef93-5a7b-42d5-bf46-a63a07ac63d5_1024x1024.jpeg&quot;,&quot;uuid&quot;:&quot;2e4ba2ce-01b0-4240-86a2-90a1c8d709e4&quot;}" data-component-name="MentionToDOM"></span> article &#8220;<a href="https://brackishwatersbarrensoil.substack.com/p/why-im-not-a-rationalist-clean-version">Why I&#8217;m Not A Rationalist</a>&#8221;:</p><ul><li><p>Rationalists make knowledge a personality trait, like foodies make food a personality trait</p></li><li><p>Rationalists are fat</p></li><li><p>Rationalists don&#8217;t know how to date</p></li><li><p>Rationalists have spreadsheet brain; they think things can be solved through models and data</p></li><li><p>Rationalists are utilitarians who try to make people better off on average through measurable statistics. 
He then mentions Auschwitz, and how its prisoners wouldn&#8217;t be comforted if the world were made a better place elsewhere</p></li><li><p>Rationalists use hypotheticals to measure why people do things</p></li><li><p>Rationalists use unnecessarily sophisticated language</p></li><li><p>Rationalists try to invent stuff from the ground up, instead of using established sophisticated language</p></li><li><p>Rationalists are doomers when it comes to AI, but they admit AI can be wrong, which is ironic because they don&#8217;t admit that they can be wrong (what?)</p></li></ul><p>Have you noticed what&#8217;s missing from this article? Arguments! There are no frickin arguments anywhere! If anyone can find any arguments, please tell me in the comments, because now I have to rebut an article that doesn&#8217;t have any arguments.</p><p>Half of the points are just calling rationalists pretentious, know-it-all, fat losers, and the other half is just arguments from &#8220;<a href="https://slatestarcodex.com/2013/06/13/arguments-from-my-opponent-believes-something/">my opponent believes something</a>&#8221;.</p><p>&#8220;My opponent believes in utilitarianism, which is kinda like believing in all of the implications of utilitarianism blindly, like <a href="https://plato.stanford.edu/entries/repugnant-conclusion/">The Repugnant Conclusion</a>&#8221;. Yeah, no utilitarians have ever tried to grapple with the problems with utilitarianism. He doesn&#8217;t offer any alternatives! He doesn&#8217;t say, &#8220;and that&#8217;s why I&#8217;m a robust deontologist, for reasons X, Y, and Z&#8221;. He just points and says, &#8220;Utilitarianism can mean X, so all rationalists believe X.&#8221;</p><p>&#8220;My opponent believes in using hypotheticals, instead of not ever trying to grapple with hypotheticals&#8221;, I guess? This critique has always been bad. Hypotheticals are just ways to ask how you compare the values of things. 
If you don&#8217;t have an instinctive answer to the question &#8220;would you rather stub your toe or be stabbed in the thigh with a knife five times&#8221;, and instead prefer to push up your glasses and state &#8220;Umm, well actually, I can choose neither. I will never have to consider any moral questions about things that aren&#8217;t happening right in front of me&#8221; then what are we doing? Questions like &#8220;how much do you value a human life vs animals&#8217; lives&#8221; are vitally important for law, right now! Refusing to engage in the idea of <em>tradeoffs</em> is ridiculous, and just because they&#8217;re abstract (I will not stab you in the thigh, don&#8217;t worry) doesn&#8217;t mean you can&#8217;t learn about <em>your own priorities</em> by answering. Why is factory farming not bad? Why is torturing animals, cutting off chickens&#8217; beaks, tearing out shrimp&#8217;s eyes, not something we should compare to anything, or compare the pros and cons of? These are the questions we have to grapple with when trying to make the world a better place, and there are smart people who are on every side, but the first step is <em>grappling with the question</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v3in!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!v3in!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 424w, 
https://substackcdn.com/image/fetch/$s_!v3in!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 848w, https://substackcdn.com/image/fetch/$s_!v3in!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 1272w, https://substackcdn.com/image/fetch/$s_!v3in!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v3in!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png" width="1410" height="704" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/feaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:704,&quot;width&quot;:1410,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:962613,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://starlog.substack.com/i/166845208?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!v3in!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 424w, https://substackcdn.com/image/fetch/$s_!v3in!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 848w, https://substackcdn.com/image/fetch/$s_!v3in!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 1272w, https://substackcdn.com/image/fetch/$s_!v3in!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffeaea265-fa50-4abd-94f3-e8feb596f107_1410x704.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I just wish alternatives were in the article, anywhere. He doesn&#8217;t say what his moral beliefs are, he doesn&#8217;t say what he uses to prioritize instead of hypotheticals; he just points.</p><p>The &#8220;rationalists use overly sophisticated language&#8221; point would be a real point; I just completely disagree. When I read Eliezer&#8217;s stuff from 2008-2009, I was shocked at how easy and intuitive the points were to understand, and even more shocked by the fact that nobody I knew used <em>any</em> of it, including me. When I read a dense philosophy paper, there&#8217;s tons of complicated, referential language that requires me to read other papers. When I read Scott Alexander and, indeed, Eliezer Yudkowsky, they build up concepts that seemingly no one else uses, in an intuitive way, so I can understand them. He calls this reinventing the wheel in the next point. I counter that making complicated topics in philosophy, math, and other subjects simpler and presenting them in an understandable way is Good, Actually. I don&#8217;t think you need to have a high IQ to understand what they say at all.</p><p>Do I need to say much about the whole &#8220;rationalists are fat, undatable, unfulfilled, losers&#8221; part? Call me crazy, but I think stereotyping whatever group you don&#8217;t like is not a very good argument. I live an extremely fulfilled and happy life, with an amazing girlfriend, and lots of friends, in a city that I love. Oh, and I&#8217;m also the perfect weight. <em>None </em>of the facts I just said should matter at all to how you consider this article. 
I&#8217;m sure there are many insulting stereotypes I could make about every single political and social group in existence that would sound vaguely true, but I think anyone would want <em>their</em> group to get a little credit.</p><p>Actually, I only started working out more consistently recently (instead of just occasionally playing sports with my friends) because of the rationalist community! I was comparing the drawbacks and risks of living a sedentary lifestyle with other risky behaviors in my life while <a href="https://starlog.substack.com/p/your-sense-of-fear-is-the-enemy?r=2bgctn">writing this article</a>, and I figured that I should exercise more for my long-term health. I always thought that because I was happy with the way I looked and felt great, I could get away with only playing sports. But considering the risk rationally helped me get to the gym.</p><p>He ends the article with another joke mocking how smart rationalists think they are, while promising a part two where he&#8217;ll talk about &#8220;certain people in the rationalist community&#8221;. He later released this part two (titled &#8220;mean version&#8221;, I guess all of this is him being nice), where he starts by literally putting famous rationalists&#8217; photos next to each other and making fun of the way they look. I can&#8217;t counter the rest of this &#8220;mean version&#8221; because it&#8217;s a paid article, and I ain&#8217;t paying. I&#8217;d much rather read arguments about what moral philosophy he uses instead of pointing and laughing at people. </p><p>There are plenty of legitimate criticisms of rationalists, and non-rationalists have lots of fantastic points they could make in an article titled &#8220;Why I&#8217;m not a rationalist&#8221; that I would love to read! 
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Bentham's Bulldog&quot;,&quot;id&quot;:72790079,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ee10b9d-4a49-450c-9c8d-fed7c6b98ebc_1280x960.jpeg&quot;,&quot;uuid&quot;:&quot;20183d2e-65c5-48bb-b6df-8816a357a559&quot;}" data-component-name="MentionToDOM"></span> talks about how Eliezer is overconfident in <a href="https://benthams.substack.com/p/eliezer-yudkowsky-is-frequently-confidently">some of his important beliefs</a>, like consciousness (I agree!). Daniel Dennett has some thoughts on <a href="https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/">AI risk</a> that counter the rationalist narratives. And yesterday, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Backcountry Psychology&quot;,&quot;id&quot;:1355230,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/backcountrypsych&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/305c53be-e017-4aa4-91c9-cace91ed9706_1280x1280.png&quot;,&quot;uuid&quot;:&quot;d987bd36-0935-49b9-bc68-6cde3473f8d1&quot;}" data-component-name="MentionToDOM"></span> put out a great essay with multiple interesting points discussing the famous &#8220;paper cuts vs. torture&#8221; debate, titled <a href="https://backcountrypsych.substack.com/">&#8220;Total Suffering isn&#8217;t real&#8221;</a>. I probably disagree (I think the induction argument for the other side is pretty strong) but he actually argues things! Discusses different facets of the problem! What his thoughts are on each side! He also has other good posts discussing the cons of utilitarianism, and what <em>he</em> believes! 
The only thing I don&#8217;t like about it is that he ends the essay by quoting this terrible article.</p><p>Call me crazy, but I&#8217;m on the side of people willing to debate utilitarianism vs. deontology, the different philosophical implications of the trolley problem, and the side that&#8217;s willing to use English to try to find the truth from first principles.</p><p>I&#8217;m on the side of truth. Rationalists seem to care about finding the truth. As far as I can tell, they&#8217;re pretty good at it. It would take enough truly compelling articles&#8212;like Bentham&#8217;s or Backcountry&#8217;s&#8212;to convince me to renounce them. If you don&#8217;t care about being correct, you can write vibe essays about the people you don&#8217;t like. You&#8217;re allowed to be wrong! But I&#8217;m choosing the side that cares enough to argue for substance.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.kylestar.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.kylestar.net/subscribe?"><span>Subscribe now</span></a></p><p>Subscribe for free! The more people who subscribe, the more time I can spend writing. Doesn&#8217;t that button above look fun to press? It&#8217;s so easy, just give it a little <em>tap</em>. And then I think you have to click the &#8220;no pledge&#8221; button to subscribe for free, I don&#8217;t know how I can make it just instantly subscribe for free to be honest, whoops.</p>]]></content:encoded></item></channel></rss>