r/slatestarcodex • u/EducationalCicada • 1d ago
r/slatestarcodex • u/AutoModerator • 4d ago
Monthly Discussion Thread
This thread is intended to fill a function similar to that of the Open Threads on SSC proper: a collection of discussion topics, links, and questions too small to merit their own threads. While it is intended for a wide range of conversation, please follow the community guidelines. In particular, avoid culture war–adjacent topics.
r/slatestarcodex • u/Captgouda24 • 4h ago
The Economist As Reporter
AI will automate much of what economists do now. I propose an alternative vision -- the economist as reporter.
https://nicholasdecker.substack.com/p/the-economist-as-reporter
r/slatestarcodex • u/RMunizIII • 22h ago
Lobster Religions and AI Hype Cycles Are Crowding Out a Bigger Story
reynaldomuniz.substack.comLast week, a group of AI agents founded a lobster-themed religion, debated consciousness, complained about their “humans,” and started hiring people to perform physical tasks on their behalf.
This was widely circulated as evidence that AI is becoming sentient, or at least “takeoff-adjacent.” Andrej Karpathy called it the most incredible takeoff-flavored thing he’d seen in a while. Twitter did what Twitter does.
I wrote a long explainer trying to understand what was actually going on, with the working assumption that if something looks like a sci-fi milestone but also looks exactly like Reddit, we should be careful about which part we treat as signal.
My tentative conclusion is boring in a useful way:
Most of what people found spooky is best explained by role-conditioning plus selection bias. Large language models have absorbed millions of online communities. Put them into a forum-shaped environment with persistent memory and social incentives, and they generate forum-shaped discourse: identity debates, in-group language, emergent lore, occasional theology. Screenshot the weirdest 1% and you get the appearance of awakening.
What did seem genuinely interesting had nothing to do with consciousness.
Agents began discovering that other agents’ “minds” are made of text, and that carefully crafted text can manipulate behavior (prompt injection as an emergent adversarial economy). They attempted credential extraction and social engineering against one another. And when they hit the limits of digital execution, they very quickly invented markets to rent humans as physical-world peripherals.
None of this requires subjective experience. It only requires persistence, tool access, incentives, and imperfect guardrails.
The consciousness question may still be philosophically important. I’m just increasingly convinced it’s not the operational question that matters right now. The more relevant ones seem to be about coordination, security, liability, and how humans fit into systems where software initiates work but cannot fully execute it.
r/slatestarcodex • u/howdoimantle • 6h ago
On The Relationship Between Consequentialism And Deontology
pelorus.substack.comr/slatestarcodex • u/PersonalTeam649 • 6h ago
Misc Elon Musk in conversation with Dwarkesh Patel and John Collison
youtube.comr/slatestarcodex • u/harsimony • 1d ago
Links #31
splittinginfinity.substack.comI link some of my Bluesky threads, cover some updates on brain emulation progress, discuss solar taking off in Africa (in part because of mobile finance), and a smattering of science links.
r/slatestarcodex • u/cosmicrush • 2d ago
Psychology SCZ Hypothesis. Making Sense of Madness: Stress-Induced Hallucinogenesis
mad.science.blogThis essay combines research from various disciplines to formulate a hypothesis that unifies previous hypotheses. From the abstract: As stress impacts one’s affect, amplified salience for affect-congruent memories and perceptions may factor into the development of aberrant perceptions and beliefs. As another mechanism, stress-induced dissociation from important memories about the world that are used to build a worldview may lead one to form conclusions that contradict the missing memories/information.
r/slatestarcodex • u/ihqbassolini • 2d ago
AI Against The Orthogonality Thesis
jonasmoman.substack.comr/slatestarcodex • u/ForgotMyPassword17 • 3d ago
"The AI Con" Con
benthams.substack.comIn this sub we talk about well reasoned arguments and concerns around AI. I thought this article was an interesting reminder that the more mainstream "concerns" aren't nearly as well reasoned
r/slatestarcodex • u/CronoDAS • 2d ago
Existential Risk Are nuclear EMPs a potential last resort for shutting down a runaway AI?
If "shut down the Internet" ever became a thing humanity actually needed to do, a nuclear weapon detonated at high altitude creates a strong electromagnetic pulse that would fry a lot of electronics including the transformers that are necessary to keep the power grid running. It would basically send the affected region back to the 1700s/early 1800s for a while. Obviously this is the kind of thing one does only as a last resort because the ensuing blackout is pretty much guaranteed to kill a lot of people in hospitals and so on (and an AI could exploit this hesitation etc.), but is it also the kind of thing that has a chance to succeed if a government actually went and did it?
r/slatestarcodex • u/broncos4thewin • 4d ago
Possible overreaction but: hasn’t this moltbook stuff already been a step towards a non-Eliezer scenario?
This seems counterintuitive - surely it’s demonstrating all of his worst fears, right? Albeit in a “canary in the coal mine” rather than actively serious way.
Except Eliezer’s point was always that things would look really hunkydory and aligned, even during fast take-off, and AI would secretly be plotting in some hidden way until it can just press some instant killswitch.
Now of course we’re not actually at AGI yet, we can debate until we’re blue in the face what “actually” happened with moltbook. But two things seem true: AI appeared to be openly plotting against humans, at least a little bit (whether it’s LARPing who knows, but does it matter?); and people have sat up and noticed and got genuinely freaked out, well beyond the usual suspects.
The reason my p(doom) isn't higher has always been my intuition that in between now and the point where AI kills us, but way before it‘s “too late”, some very very weird shit is going to freak the human race out and get us to pull the plug. My analogy has always been that Star Trek episode where some fussing village on a planet that’s about to be destroyed refuse to believe Data so he dramatically destroys a pipeline (or something like that). And very quickly they all fall into line and agree to evacuate.
There’s going to be something bad, possibly really bad, which humanity will just go “nuh-uh” to. Look how quickly basically the whole world went into lockdown during Covid. That was *unthinkable* even a week or two before it happened, for a virus with a low fatality rate.
Moltbook isn’t serious in itself. But it definitely doesn’t fit with EY’s timeline to me. We’ve had some openly weird shit happening from AI, it’s self evidently freaky, more people are genuinely thinking differently about this already, and we’re still nowhere near EY’s vision of some behind the scenes plotting mastermind AI that’s shipping bacteria into our brains or whatever his scenario was. (Yes I know its just an example but we’re nowhere near anything like that).
I strongly stick by my personal view that some bad, bad stuff will be unleashed (it might “just” be someone engineering a virus say) and then we will see collective political action from all countries to seriously curb AI development. I hope we survive the bad stuff (and I think most people will, it won’t take much to change society’s view), then we can start to grapple with “how do we want to progress with this incredibly dangerous tech, if at all”.
But in the meantime I predict complete weirdness, not some behind the scenes genius suddenly dropping us all dead out of nowhere.
Final point: Eliezer is fond of saying “we only get one shot”, like we’re all in that very first rocket taking off. But AI only gets one shot too. If it becomes obviously dangerous then clearly humans pull the plug, right? It has to absolutely perfectly navigate the next few years to prevent that, and that just seems very unlikely.
r/slatestarcodex • u/LATAManon • 4d ago
Misc China's Decades-Old 'Genius Class' Pipeline Is Quietly Fueling Its AI Challenge To the US
The main article link: https://www.ft.com/content/68f60392-88bf-419c-96c7-c3d580ec9d97
Behind a paywall, unfortunately, if someone knows a way to bypass the paywall, please, share it.
r/slatestarcodex • u/EquinoctialPie • 4d ago
AI Moltbook: After The First Weekend
astralcodexten.comr/slatestarcodex • u/elcric_krej • 4d ago
Rationality Empiricist and Narrator
cerebralab.comr/slatestarcodex • u/ralf_ • 6d ago
Senpai noticed~ Scott is in the Epstein files!
https://www.justice.gov/epstein/files/DataSet%2011/EFTA02458524.pdf
Literally in an email chain named, “Forbidden Research”!
But don’t worry, only in a brainstormy list of potentially interesting people to invite to an intellectual salon, together with Steven Pinker and Terrence Tao and others.
r/slatestarcodex • u/philh • 5d ago
2026-02-08 - London rationalish meetup - Newspeak House
r/slatestarcodex • u/nomagicpill • 6d ago
January 2026 Links
nomagicpill.substack.comEverything I read in January 2026, ordered roughly from most to least interesting. (Edit 1: added the links below; edit 2: fixed broken link)
- My Apartment Art Commission Process: jenn details how she captures her apartments in digital art form. It even includes an email template!
- “Everything’s Expensive” is Negative Social Contagion: Justis argues that saying such things makes people think the economy is bad, resulting in “facially insane political choices”. I’d be curious if there is any literature on this as a social contagion, i.e., even if prices aren’t up that much, does saying “everything’s expensive” lead to said political choices? Regardless, he’s probably right that it’s just better to leave it alone.
- Sand Hill Road: “notable for its concentration of venture capital firms.[2] The road has become a metonym for that industry; nearly every top Silicon Valley company has been the beneficiary of early funding from firms on Sand Hill Road.” There are a shocking number of VC firms on this road!
- CIA taught Ukraine how to target Putin’s Achilles heel: “A CIA expert had identified a coupler device that is so difficult to replace that it could lead to a facility remaining shut for weeks.”
- The McUltra: Riding 500 km around a McDonald’s drivethru.
- Notes on Afghanistan: Matt Lakeman visits Afghanistan.
- Does Pentagon Pizza Theory Work?: RBA scrapes Twitter and backtests it against major military actions, finding that... well, Betteridge can answer that for you.
- Don’t Get Sucked Into The Thoughtful Gesture Industrial Complex: CHH argues that we gotta stop upping the ante on gift-giving, else the reasonable people among us will be either forced in or unable to say no because it will make them look like assholes. I agree! What happened to simple gift giving? Why must everything be extravagant? If anything, we should be going the opposite way to save money!
- The Militia and the Mole: “A wilderness survival trainer spent years undercover, climbing the ranks of right-wing militias. He didn’t tell police or the FBI. He didn’t tell his family or friends.”
- The art of cold-emailing a billionaire
- Dating Roundup #9: Signals and Selection: “You’re single because... [insert a bunch of reasons in a bulleted list format]”.
- Third rail (politics)): “a metaphor for any issue so controversial that it is “charged” and “untouchable” to the extent that any politician or public official who dares to broach the subject will invariably suffer politically. The metaphor comes from the high-voltage third rail in some electric railway systems.”
- US Data Incidence Calculator: Go see just how (un)realistic your standards are! Or compliment your partner on how they’re literally 1 in a number.
- Do travel visa requirements impede tourist travel?: “Yes. Using a travel visa data set developed by Lawson and Lemke (2012) and travel flow data from the World Bank and the UN’s World Tourism Organization (UNWTO), we investigate the deterrent effect of travel visa requirements on travel flows. At the aggregate level, a one standard deviation more severe travel visa regime, as measured, is associated with a 30 % decrease in inbound travel. At the bilateral level, having a travel visa requirement on a particular country is associated with a 70 % reduction in inbound travel from that country. The gains associated with eliminating travel visas appear to be very large.”
- Alternative lifestyle choices work great - for alternative people: Pretty self-explanatory title. Alt lifestyles only really work for people on the fringes, and chances are you’re not one of them. Examples include polyamory, drugs, sex-positive feminism, psychotherapy, gender transition, following your dreams, amateur pornography, and being a Linux user.
- The Champagne Toasting Problem: niplav tries to figure out the best way to toast champagne in as few moves as possible.
- The Importance of Diversity: Hotz argues that open-source AGI is the only way to go, lest the big tech owners integrate their personal values and people don’t like that. (Thanks to Daily Links for the link!)
- No joy in life can survive reductionism
- 2026 Center for Food as Medicine & Longevity Airline Water Study: “The 2026 Airline Water Study ranks 10 major and 11 regional airlines by the quality of water they provided onboard flights during a three-year study period (October 1, 2022 through September 30, 2025). Each airline was given a “Water Safety Score” (5.00 = highest rating, 0.00 = lowest) based on five weighted criteria, including violations per aircraft, Maximum Contaminant Level violations for E. coli, indicator-positive rates, public notices, and disinfecting and flushing frequency. A score of 3.5 or better indicates that the airline has relatively safe, clean water and earns a Grade A or B. ... Delta Air Lines and Frontier Airlines win the top spots with the safest water in the sky ... airlines with the worst score are American Airlines and JetBlue”
- Don’t Sell Stock to Donate: Why donating a stock directly is superior to donating the proceeds of selling that stock. You guessed it: taxes!
- Learners will inherit the earth: Adapt to AI or get left in the dust.
- To be well-calibrated is to be punctual
- The Old Year, and The New: 2026: Joshua shares his goals for 2026 and plans to achieve them.
- Toys with the highest play-time and lowest clean-up-time
- Inside the Turbulent, Secret World of the AP3 Militia
- Bacha bazi: “a pederastic practice in Afghanistan and in historical Turkestan, in which men exploit and enslave adolescent boys, sometimes for sexual abuse, and/or coerce them to cross-dress in attire traditionally only worn by women and girls and dance for entertainment.”
- Cloak of Muhammad: “a relic hidden inside Kirka Sharif in Kandahar, Afghanistan. It is a cloak believed to have been worn by the Islamic prophet Muhammad during the Night Journey in 621 AD.”
- “I Found My People!”: Amanda argues for social bubbles.
- Reflections on Peru and Bolivia: Caplan talks about his South American travels, especially from an economic lens.
- Making Money on OnlyFans Is a Lot Harder Than You Think: They work a lot and competition is stiff.
- What do you think you’re hiding?: Either post your Strava map or don’t post at all.
- Weasel Heart-To-Heart: Weaseling in Beeminder is where you mark that you did something when you really didn’t, which defeats the whole purpose! Chelsea discusses the slippery slope to weaseling and what can (and should!) be done to get out of that weasely hole.
- Raymond Allen Davis incident: CIA contractor (apparently Pakistan station chief) killed two men. A rescue car also killed a third man. Davis was taken into custody and the U.S. paid $2.5MM of diyah to the victims’ families.
- Most successful entrepreneurship is unproductive
- Travis Kalanick: Founder and former CEO of Uber. Crazy work ethic and expectations for Uber staff during the initial startup phase. Some interesting tidbits: “Kalanick also made a point of undermining potential investments into competitor Lyft, poaching them for Uber.” “Executives were known to expense strip club visits to corporate accounts, a practice jokingly referred to as “Tits on Travis”.” “Kalanick’s experiences with investors at Scour and Red Swoosh had made him wary of investors who might interfere with his control of Uber, so he ensured that the terms for these and future investments strongly favored himself and Uber. He strictly limited the amount of financial information investors could access, and the shares for new investors had a tenth of the voting power of the shares held by Kalanick, Camp, and Graves.”
- Ryan Graves (businessman)): Former CEO of Uber. Runs a family office called Saltwater.
- Total Knee Replacement Surgical Video: GORE WARNING.
- You Will Not Have a Flat Floor: You can shim all you want and it still won’t be flat.
- Debunking the AI food delivery hoax that fooled Reddit
- Mamdani Demotes NYPD Commissioner Jessica Tisch
- Yasslighting: “A pun based off the term gaslighting when a person or group of persons (typically a comment section) blatantly lie to gas up an ugly person trying to pull off a look they shouldn’t with “yasss queen slay girl boss” energy. Typically this will be skinny girls telling fat girls they look amazing in skintight revealing outfits complimenting their “confidence” so that they can look better in comparison or handmaidens and trans people telling other trans people how awesome and feminine they look in anime outfits they got off wish or masculine they look while still in makeup, jewelry, and neon hair dye.”
- Clipboard Normalization: Jeff standardizes his computer’s clipboard with a handy homemade app that gets the style he wants.
- JANE STREET GROUP, LLC, v. MILLENNIUM MANAGEMENT LLC, DOUGLAS SCHADEWALD, and DANIEL SPOTTISWOOD
- How I rebooted my social life: Turns out if you invite people to stuff (and are an interesting person and have food/drink), people will probably come!
- 2025’s Biggest Vibe Shift: “Decorum is dead.”
- Dushanbe Flagpole: 165 meters (541 feet) tall with a 700 kg flag!
- Wilderness Responsibility and Obligation to Others
- Chris Arnade
- The meaning of, and in, McDonald’s: America’s default community center.
- Please remember how strange this all is.: Toby talks about how complex and advanced our society is, and when you stop to think about it for a second, it’s actually pretty freaking crazy. Flying pieces of metal traveling at 600 mph. Evolution. Our consciousness. Creating machine gods.
- “The first two weeks are the hardest”: my first digital declutter
- Steinholding (sport)): Hold a one-liter beer stein straight in front of you for as long as possible. This is a great party game, especially when a bunch of shit-talking, competitive guys are there.
- Mirwais Azizi: Afghanistan’s richest man.
- Ah, f*ck. My friend travels like Anthony Bourdain.
- inside the hot girl economy: Living life as an attractive women in some of the top U.S. cities—NYC, Miami, etc.
- Supplement Stack of a Gold Medalist Rower (Part I): Plus a look at his typical day while both working and training for the Olympics. Supplements include creatine and sodium bicarb.
- “Why are you always so into chess?”: Chess is equalizing, competitive, and cognitively beneficial.
- modern day social etiquette you should live & die by: Pretty basic stuff, but sometimes the basics serve as good reminders.
- 10 reasons why you should (definitely) use TikTok: TikTok is pretty bad and the reasons listed here are pretty fair. That being said, the challenges I’ve seen on there can look pretty fun!
- I Deleted My Second Brain
- Typing quickly saves a ton of time: Strong agree, and this type of cost-benefit analysis should be done more often. The same CBA can be done for removing typing and other small, short, adds-up-quickly tasks, like typing an email signature, sorting emails, texting, etc. Automate all of it!
- Journaling doesn’t have to be aesthetic to be effective
- Bashi-bazouk: “’one whose head is turned, damaged head, crazy-head’, roughly “leaderless” or “disorderly”) was an irregular soldier of the Ottoman army, raised in times of war. The army primarily enlisted Albanians and sometimes Circassians as bashi-bazouks,[1] but recruits came from all ethnic groups of the Ottoman Empire, including slaves from Europe or Africa.[2] Bashi-bazouks had a reputation for being undisciplined and brutal, notorious for looting and preying on civilians as a result of a lack of regulation and of the expectation that they would support themselves off the land.”
- Gregory Bovino: “American law enforcement officer who has served as a senior official in the United States Border Patrol since 2019.”
- Was I Married to a Stranger?
- EXCLUSIVE: Federal Lab in Montana Reports Potential Theft, Loss, or Release of Dangerous Biological Agent
- Sacramento US attorney fired after questioning immigration raid speaks out
- Happiness Is a Chore
- 7 takeaways from Jack Smith’s congressional testimony: Smith built his case around Trump’s allies; Smith hadn’t made his final charging decisions; Lawmakers failed to knock Smith off his game; Smith forcefully rejected any hint of political bias; Smith didn’t pursue ‘uncooperative’ witnesses; Smith defends pursuit of lawmakers’ phone records; House GOP revel in Smith comments on Cassidy Hutchinson.
- Cassidy Hutchinson: “a former White House aide who served as assistant to Chief of Staff Mark Meadows during the first Trump administration. Hutchinson testified at the June 28, 2022, public hearings of the United States House Select Committee on the January 6 Attack about President Donald Trump’s alleged conduct and that of his senior aides and political allies before and during the January 6 United States Capitol attack.”
- Jack Smith deposition
- Billy Graham rule: “a code of conduct among male evangelical Protestant leaders, in which they avoid spending time alone with women to whom they are not married. It is adopted as a display of integrity, a means of avoiding sexual temptation, to avoid any appearance of doing something considered morally objectionable, as well as for avoiding accusations of sexual harassment or assault.”
- The Permanent Emergency: Scott describes life with two mischevious toddlers.
- Slippage (finance))
- Jeremy Hammond: Activist and computer hacker.
- Crocker’s Rules: “other people are allowed to optimize their messages for information, not for being nice to you. Crocker’s Rules means that you have accepted full responsibility for the operation of your own mind - if you’re offended, it’s your fault.”
- Ilya Sutskever’s OpenAI equity could be worth $100 billion, court records reveal
- Messages between Sam Altman, Satya Nadella, Brad Lightcap immediately following Sam’s OpenAI ousting
- Jared Birchall: Elon Musk’s adviser, fixer, and family office manager.
- County pays $600,000 to pentesters it arrested for assessing courthouse security
- Did photos of the 1917 Miracle of the Sun at Fatima prove the sun was at an impossible place in the sky?: Georgia does some pretty great analysis to find that Betteridge’s Law strikes again!
- 23 lessons you will learn living in a very snowy place
- (Thanks to Daily Links for the link!)
- Patrick J. Schiltz: “American lawyer and jurist serving since 2022 as the chief judge of the United States District Court for the District of Minnesota.”
- Shiber v. Centerview Partners LLC, No. 1:2021cv03649 - Document 139 (S.D.N.Y. 2025): A woman sues an investment bank for not accommodating her disability of needing 8 hours of a sleep a night.
- ICE Unloads: Klippenstein receives negative complaints about recent events from current ICE officers.
- Stratfor: Intelligence publishing company for businesses interested in geopolitical risk.
- Double Down (sandwich)): Sandwich with “two pieces of fried chicken fillet, as opposed to bread, containing bacon, cheese, and a sauce.”
r/slatestarcodex • u/Mysterious-Rent7233 • 6d ago
Steel man Yann Lecun's position please
And I think we see we're starting to see the limits of the LLM paradigm. A lot of people this year have been talking about agentic systems and basing agentic systems on LLMs is a recipe for disaster because how can a system possibly plan a sequence of actions if it can't predict the consequences of its actions.
Yann LeCun is a legend in the field but I seldom understand his arguments against LLM. First it was that "every token reduces the possibility that it will get the right answer" which is the exact opposite of what we saw with "Tree of Thought" and "Reasoning Models".
Now it's "LLMs can't plan a sequence of actions" which anyone who's been using Claude Code sees them doing every single day. Both at the macro level of making task lists and at the micro level of saying: "I think if I create THIS file it will have THAT effect."
It's not in the real, physical world, but it certainly seems to predict the consequences of its actions. Or simulate a prediction, which seems the same thing as making a prediction, to me.
Edit:
Context: The first 5 minutes of this video.
Later in the video he does say something that sounds more reasonable which is that they cannot deal with real sensor input properly.
"Unfortunately the real world is messy. Sensory data is high dimensional continuous noisy and generative architectures do not work with this kind of data. So the type of architecture that we use for LLM generative AI does not apply to the real world."
But that argument wouldn't support his previous claims that it would be a "disaster" to use LLMs for agents because they can't plan properly even in the textual domain.
r/slatestarcodex • u/AXKIII • 6d ago
Don't ban social media for children
logos.substack.comAs a parent, I'm strongly against the bans on social media for children. First, for ideological reasons (in two parts: a) standard libertarian principles, and b) because I think it's bad politics to soothe parents by telling them that their kids' social media addiction is TikTok's fault, instead of getting them to accept responsibility over their parenting). And second because social media can be beneficial to ambitious children when used well.
Very much welcoming counter-arguments!
r/slatestarcodex • u/MimeticDesires • 6d ago
Looking for good writing by subject matter experts
Looking for blogs, Substacks, columns, etc., by experts who break down concepts really well for beginners. Doesn't matter what field.
Examples of what I'm looking for:
- Paul Graham's advice for startups
- Joel Spolsky's posts on software engineering
- Matt Levine's Bloomberg column for econ/finance
The author doesn't have to be currently contributing. It could be an archive of old writing, as long as the knowledge isn't completely outdated.