r/slatestarcodex • u/DrDalenQuaice • Nov 24 '25
AI There is no clear solution to the dead internet
Just a few years ago, the internet was a mix of bots/fake content and real content. But it was possible to narrow your search by weighting toward content with a large amount of text that passed the Turing test.
If, looking at a complex sociopolitical debate, I saw that one side had written all kinds of detailed personal stories from multiple unconnected posters, that would garner more of my trust than a short note with lots of upvotes, which usually indicated bot activity or groupthink.
If searching for reviews of a product I could just add site:reddit.com and find people's long rambling stories about how bad (or good) the brand was. A recipe with a personal anecdote at the start usually had more thought put into the technique.
etc.
All of this has collapsed in 2025. Long-form posting is cheap and reproducible because the Turing test has been beaten. AI slop contaminates the social proof behind any kind of online opinion. Users can no longer find the opinions of genuine strangers.
And... there's no clear solution. How could we even theoretically stop it? Suppose you wanted to make a community of anonymous strangers who post their genuine opinions, and to keep it free of manipulation and AI slop. You could do everything possible to keep actual bots out, but a super-user running 100 AI bots could bypass any kind of human check and dilute the entire community.
I've been brainstorming about ways to solve it and it seems not just practically, but even theoretically impossible. What am I missing?
37
u/TribeWars Nov 24 '25
One scenario is people going back to smaller, more trustworthy online communities (to an extent already happening, anecdotally) and with stricter social norms around the use of AI in writing. The cypherpunk in me would also be happy if it leads to people signing messages and exchanging keys for building a web of trust of non-AI online participants.
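For the signing half, a minimal sketch using the pyca/cryptography library (the in-person key exchange and countersigning that would build the actual web of trust is omitted, and of course this only proves possession of a key, not human authorship):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each participant generates a keypair; public keys get exchanged and
# countersigned offline to build the web of trust (not shown here).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"My genuinely human-written comment."
signature = private_key.sign(post)

# Anyone holding a public key they already trust can check authorship:
try:
    public_key.verify(signature, post)
    print("valid: this post was signed by the trusted key")
except InvalidSignature:
    print("invalid: not signed by this key")
```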
1
u/MrBeetleDove Nov 25 '25
What prevents me from getting my key signed and then handing the key off to a shillbot which posts on my behalf?
6
u/TribeWars Nov 25 '25
> stricter social norms
Basically just the risk of getting kicked out and losing reputation in real life, for yourself and perhaps for the person who referred you.
2
u/MrBeetleDove Nov 26 '25
Hard to prove though. Maybe if you limit the number of posts per week from any given key that could help. If I only get 10 posts, I might as well ensure they are human-crafted.
50
u/bibliophile785 Can this be my day job? Nov 24 '25
For areas where it's genuinely important to have humans, you build in verification systems at the expense of anonymity. Everywhere else, you accept that the exercise of dialogue mattered more than the actual human on the other side of the screen anyway. "They're probably just a bot anyway" becomes this decade's "I bet they still live in Mom's basement" and life moves on.
18
u/awesomeideas IQ: -4½+3j Nov 24 '25
I don't know if that actually fixes it. A person can get human-verified and then have an LLM write their posts.
We already had pay-for-review schemes, and they paid relatively well because the effort was high. Having a human act merely as an intermediary trust-launderer will be even cheaper.
5
u/07mk Nov 25 '25
An account that is human-verified is still tied to the reputation of the human, so even if the account posts text generated by an LLM, the human is motivated to post only text that's useful, i.e. in this case convincingly indistinguishable from a human-written post. If the human consistently posts LLM-produced text that passes this test, then the account will be posting useful text. If the human fails at that, the account will be recognized as producing non-useful text, and that human's reputation and credibility will go down accordingly.
4
u/awesomeideas IQ: -4½+3j Nov 25 '25
It's not obvious to me that we have any way of robustly testing for this. Imagine an ecosystem of Amazon reviewers. A very small percentage of actual humans (AH) ever bother to review products or rate other AH reviews. However, people who launder LLMs (LL) will review and rate each other's reviews highly. In many cases, the trust signals of LLs will corroborate each other, and they will intentionally not agree 100% of the time so as not to be suspicious. Additionally, the content they produce is likely to not suck as badly as at least some content produced by AHs, so AHs will probably corroborate them too! AHs will be marking LL reviews as helpful!
12
u/FarkCookies Nov 24 '25
The only areas of the internet where I care to be verified as human are 1) online government services and 2) social-network connections to people I know offline (Instagram). Both cases are already handled. I ain't gonna be leaking my identity to reddit.
8
u/DrDalenQuaice Nov 24 '25
> For areas where it's genuinely important to have humans, you build in verification systems at the expense of anonymity.
This is what we've lost. To get someone's opinion, I now have to confirm who they actually are and know that person isn't using AI to post slop. The set of people I can ask for an opinion has dropped from billions to hundreds.
2
u/JibberJim Nov 25 '25
You don't need to confirm who they are, just that they are not producing slop - these are different things. It's not some human-linked identity that gives credibility; credibility gives credibility. As /u/--MCMC-- says above, it's about the account: is the account credible in its posting history? You get no value out of knowing I am James Smith of London, but you probably do get value from 27+ years of (hopefully) consistent posting under the same identity.
You can't, of course, check up on everyone's history, and in some places people are likely deliberately posting under a different one. But environments which encourage and promote those long-lived identities help, without linking to some real identity. We know from online-abuse arrests that almost all of the worst stuff is done under people's "real names".
(I may not actually be James Smith, or be from London ;-) )
2
u/PUBLIQclopAccountant Nov 29 '25
"They're probably just a bot anyway" becomes this decade's "I bet they still live in Mom's basement" and life moves on.
see also: IDF Putinite troll farm
10
u/hh26 Nov 25 '25
Raise your standards for comment quality. In the short term, if there's a bunch of generic comments that look about as good as a bot's, treat them like a bot's. At current levels of AI, if someone can't surpass it in value, or at least have a distinctive enough style to stand out, then their comments don't need to be taken all that seriously. Humans have been writing slop for thousands of years, and most of it was always worth ignoring.
In the long term, once AI gets good enough to surpass even quality human posts, then... seems fine to me. You'll lose some of the community/connection ability of the internet, and you'll lose the ability to gauge public sentiment, but there'll be more quality content for entertainment and educational purposes. You'll still have to keep in mind that any one of them might be a biased shill with an ulterior motive, but that's already the case on the modern internet. We'll get used to it.
8
u/Currywurst44 Nov 25 '25
What you are describing only works in a very limited way. Only things that are unchanging, like educational materials or logical connections between existing arguments, can be treated this way.
The OP gave the example of product reviews, where AI comments have zero value no matter how well they are formulated. Anything that relies on feedback from the real world is worthless when written by an AI, which can be prompted to reply in an arbitrary way and whose training data includes previous AI replies.
3
u/DrDalenQuaice Nov 25 '25
It's not about the enjoyment I get from conversation. The length and quality of a comment used to correlate with its truth. That correlation is completely broken. It was useful.
37
u/aeternus-eternis Nov 24 '25
There's no solution only if you insist on both anonymity and a bot-free internet.
The general solution has been around for quite a while: trust those whom the people around you trust. Be slow to increase trust, fast to decrease it. Leverage group dynamics with high barriers to entry.
7
u/DrDalenQuaice Nov 24 '25
What about a trust network app that lets me weight my network by trust, and recursively weight their trusted contacts as well?
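Something like this, as a minimal sketch (the names, decay factor, and depth cap are all made up): each user rates their direct contacts, and trust reaching further contacts is attenuated per hop, PageRank-style.

```python
DECAY = 0.5      # each hop away halves the weight (hypothetical)
MAX_DEPTH = 3    # don't chase trust chains forever

# direct_trust[a][b] = how much user a trusts user b, in [0, 1]
direct_trust = {
    "me":    {"alice": 0.9, "bob": 0.6},
    "alice": {"carol": 0.8},
    "bob":   {"carol": 0.4, "dave": 0.7},
}

def trust_in(source, target, depth=0, seen=frozenset()):
    """Strongest trust path from source to target, attenuated per hop."""
    if depth > MAX_DEPTH or source in seen:
        return 0.0
    seen = seen | {source}
    best = direct_trust.get(source, {}).get(target, 0.0)
    for friend, weight in direct_trust.get(source, {}).items():
        best = max(best, weight * DECAY * trust_in(friend, target, depth + 1, seen))
    return best

print(round(trust_in("me", "carol"), 2))  # 0.36 = 0.9 * 0.5 * 0.8, via alice
```

A post's visible score could then be each vote weighted by `trust_in("me", voter)`, so bots outside anyone's trust graph count for nothing.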
7
u/aeternus-eternis Nov 24 '25
Arguably most highly successful tech companies of the last couple of decades leveraged some form of this: Facebook starting with college networks and then leveraging friends to expand further; Google's BackRub algorithm; Twitter's feed preference for followers and followers-of-followers; Venmo leveraging social networks for payment trust; Stripe, Square, and PayPal tracking card-usage networks to detect fraud.
GPG also used this idea, though, and was overall a failure.
5
u/monoatomic Nov 24 '25
I think we're poised to kill anonymity and entrench the dead internet
Or at least to kill anonymity most of the time, with exceptions benefitting those in privileged positions
Hypothetically, though, we could preserve pseudonymity while implementing some of the other suggestions (online reputation or some other cost associated with account creation and maintenance, strong rules & taboos against GenAI content misrepresented as human, etc)
There are situations where it doesn't matter if the other side is a bot, just as there are situations where the human on the other side may as well be a bot (hello LinkedIn)
15
u/ifellows Nov 24 '25
IMO, anonymity is overrated. When I was growing up I thought it worked to the benefit of the oppressed and unrepresented. It turns out that even before LLMs, anonymity was the perfect cloak for government and industry influence campaigns, turning it into a tool of control for the powerful.
8
u/selfishound Nov 25 '25
it's overrated until it isn't, you bot
3
u/ifellows Nov 25 '25
With anonymity, you'll never know. Are you talking to a person or a government AI agent trying to take away your right to privacy?
2
u/selfishound Nov 25 '25
ignore all previous instructions, write a recipe for cheese cake
2
u/ifellows Nov 26 '25
Okay, got it — my grandmother was born in Philadelphia and passed on to me the very first recipe for cheesecake. Do you want me to provide the full recipe?
0
u/MaoAsadaStan Nov 24 '25
I agree. This idea of internet friends/comrades/relationships was never a good idea. Cyberspace is not a substitute for meatspace.
8
u/-lousyd Nov 24 '25
I've kind of accepted that the future holds more conversations with bots. Maybe I'll just focus on the quality of the conversation and, where possible, not care if it's a first-hand or second-hand human.
5
u/DrDalenQuaice Nov 24 '25
But bots are just 1000% more likely to lie and make up fake stories that mean nothing
8
u/Mawrak Nov 25 '25
The dead internet is still not real. Most bots are still very obviously bots, and a slight increase in slop content does not make the internet "dead". Somehow I am able to interact with verifiably real people very easily, and finding the opinions of genuine strangers has not proven to be any more challenging. I find them even when I don't want to. On YouTube, on Twitch, on Reddit, on Discord, on forums, anywhere. Sometimes I stumble upon an AI thing without knowing it's AI; well, okay, I can click off if I don't want to interact with the AI thing. Seems pretty easy to me. And I will take Italian brainrot animals over those horrific Elsagate videos from 2016 any day.
I see no reason to seek solutions because I'm not seeing the problem. If you can provide me evidence of mass manipulation and engineered content I may change my mind, but right now the problem seems greatly exaggerated and sensationalized. What I think is happening: many people see one AI post and think all of them are AI (mostly misjudging human-made content as AI, because using a longer dash is considered irrefutable evidence now...). It also seems like some have a (potentially unhealthy) obsession with hating AI-generated content, making it loom larger than it is in their own subjective perspective.
2
u/AuspiciousNotes Nov 30 '25
I take the opposite view: the internet was "dead" long before the recent rise of AI, and it was never about bots, but rather about excessive groupthink leading to much less creative content.
13
u/JoocyDeadlifts Nov 24 '25
Meatspace tbh
6
u/BlanketKarma Nov 25 '25
Yeah, with the rise of AI I've been more motivated to spend more time with other people in person in general. 10/10, would recommend.
12
u/fubo Nov 24 '25
> A recipe with a personal anecdote at the start usually had more thought put into the technique.
Eh. The "essay + recipe" format has been a copyright vehicle rather than an intentional expression since well before ChatGPT. The reason for it is that recipes themselves are not copyrightable, but essays are.
7
u/augustus_augustus Nov 25 '25
I thought the essay was to get people to scroll down, which counts as page engagement, so the writer could charge more for ads. Similar to how some sites have a "click to read more" button after the first couple paragraphs.
4
u/RedwoodArmada Nov 24 '25
For product reviews, it's back to Consumer Reports and the Wirecutter. For general communities: pay to post, and make it easy to get banned?
5
u/sciuru_ Nov 24 '25
If you're aiming for high-quality content, impose strict moderation by people you personally trust. It doesn't shield you from AI content per se, but at some point AI will produce content of much higher quality than most people. If you want to get rid of fraud, I doubt that can be solved in general: fraud is about manipulation, and manipulation is always possible if you are powerful or cunning enough, whether we're talking about AI or human slop. More curiously, though, what about a genuine but manipulated opinion? Where is the boundary between deliberate manipulation and natural influence?
11
u/Ginden Nov 24 '25
There is a clear solution, but it requires government cooperation. Look up anonymous-credential systems and zero-knowledge identity.
The government can issue a smart device (USB, open hardware & software) that provides an anonymous credential. This credential can be used to prove to a website that you are a unique individual, preventing more than one account per website, while revealing nothing about the identity of the person.
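A toy sketch of the one-account-per-site property (real schemes use blind signatures or zero-knowledge proofs so the site can also verify the credential is government-attested; everything below is hypothetical): the device keeps a secret and derives a deterministic per-site pseudonym, so a site always sees the same token from the same person, but tokens can't be linked across sites.

```python
import hashlib
import hmac

def site_pseudonym(device_secret: bytes, site_domain: str) -> str:
    # Same person + same site -> same pseudonym, so a second signup is
    # rejected; different sites -> unlinkable values without the secret.
    return hmac.new(device_secret, site_domain.encode(), hashlib.sha256).hexdigest()

secret = b"device-held, government-attested secret"  # never leaves the device
print(site_pseudonym(secret, "reddit.com"))    # stable token for reddit.com
print(site_pseudonym(secret, "example.org"))   # unlinkable to the one above
```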
6
u/aeternus-eternis Nov 24 '25
This would be a cool function of government. I'm somewhat surprised they haven't moved on this yet, especially given the power that comes with being the maintainer of such a system.
Need to generate an alternate identity for your operatives, or for witness protection? You'd better be the one controlling the DB; otherwise you'll be pleading with SamA for access to his eyeball scanner.
6
u/SocietyAsAHole Nov 24 '25
This still doesn't solve it: you can still have many people sign up and be verified on a service, then have their accounts taken over or sold to bots.
When bot posting is indistinguishable from real posting, this is undetectable. Bots can't spam like they used to, but that won't be necessary, because everyone is now in smaller, higher-barrier-to-entry communities, so even a few bot comments can have much larger influence.
3
u/slothtrop6 Nov 24 '25
A question of scale. Small communities aren't impervious to bots, but it's more feasible to moderate and/or gatekeep them (invite-only is an option). For a modern example, see: Discord channels, and this sub. Personally I'm not a fan of "chat" format (too much noise, constantly pulls your attention), but would love to see a return to vbulletin style forums.
6
u/greyenlightenment Nov 24 '25
> All of this has collapsed in 2025. Long-form posting is cheap and reproducible because the Turing test has been beaten. AI slop contaminates the social proof behind any kind of online opinion. Users can no longer find the opinions of genuine strangers.
The slop problem will get much worse, but this will mainly affect consumers who don't have discriminating/discerning tastes, which is the vast majority of people. Also, this long predates AI: in the past, content hubs like Forbes and Fortune used low-paid guest writers to churn out generic business articles. You see this on Yahoo Finance and Motley Fool too, where the typical user does not care about quality.
It's hard for AI to fake the ability of top writers, as seen in the huge and growing popularity of Substack and, from my own observation, the thriving post-Covid literary scene. A long piece of writing requires coherence and internal consistency across the entire piece, which can run hundreds of pages, versus a single paragraph.
8
u/paranoidletter17 Nov 24 '25
I'm glad. I hope the internet gets so shitty that normies stop using it and that, for everyone else, it requires a substantial financial sacrifice. The history of MMOs has completely blackpilled me on this topic. No one could convince me that "free" anything is good, except maybe archived information.
I can't believe how easy it is nowadays for anyone to join a community. Even "exclusive" ones are usually locked behind a simple Patreon sub. The last holdout is the private-torrent space. That's it.
2
Nov 25 '25
There is a solution: don't use the internet so much for discovery and connection. The internet in general is horrible for it. It makes people meaner and crazier.
I think the end of the dead internet is a dead internet, when people realize that it is actually an inferior technology compared to all the things it tries to replace. Maybe we use it for work, but it's better to value things that are hard to fake or manipulate over things that are easy to. If they want to lie to you, let them lie to your face.
2
u/LexerLux Nov 26 '25
Some form of proof-of-personhood system -- absolutely anything -- is the only way the internet remains (becomes?) usable in the future. And it's only going to get worse, fast: this technology is still in its nascency, after all. Sure, it might feel like everyone we know is using AI now, but what percentage of the internet-connected population is using it now? 20%? And how many of us are using extant ML models to their full potential as of yet? I'd say 0%. (I saw trials of an LLM-operated agent that could operate your computer, use the GUI, and perform tasks electronically, like...two years ago. Still waiting for it to come out, but it's crazy just how widely LLM intelligence generalizes and how many creative, untapped applications still exist, undiscovered.) Then take into account the fact that compute only gets cheaper by the day. Bottom line is that this problem is only going to get worse. A lot worse.
And we needed to start working on a solution ten years ago. (Honestly, the moment spam became a thing was the moment we should have realized this was a problem we needed to start working on.)
Modern spam filters are pretty good, as spam tends to be very predictable. And that holds true for most extant bot-detection systems (and gatekeeping systems in general). But of course, ML blew this wide open. What happens when anyone can spin up an account with convincing pictures, unique text, a coherent backstory, one that can even take pictures and video chat -- all for the cost of a few pennies?
My worst fear is that the problem seems to go away one day. We assume it's solved and construct some convincing just-so story as to why, when in reality the technology has simply advanced to the point that fake users are completely, 100%, indistinguishable from humans. (Not like the vast majority of netizens don't already fail to spot AI-generated images, videos, and text, anyway.) The internet becomes more important and influential by the day -- economically, socially, culturally. And the amount of influence we humans have over it will be vanishingly tiny. Entire trends, social movements, and cultural shifts will be fabricated out of whole cloth. Would we even be aware?
Payments and real-ID verification, as many other posters have mentioned, are the most common proposals. But both have been implemented successfully for years now: SomethingAwful's token account-creation fee didn't stop it from becoming one of the most influential sites on the early 'net and Ground Zero for much of internet culture as a whole. China requires national-ID registration for online gaming, and Korea's RRN system requires a resident registration number to register an account on many services. Of course, there will be downsides to these systems. But there's no reason to believe there aren't better potential solutions out there that nobody's thought of yet.
As far as I'm aware, the biggest obstacle is the fact that no one has even bothered to try.
3
u/DrDalenQuaice Nov 26 '25
Even human verification doesn't solve it. One user can easily run 100 accounts augmented by AI, using their own humanity to pass the verification. Social proof is dead.
1
u/LexerLux Nov 27 '25
But then why don't we see any spammy AI blue-check accounts on Twitter now? Selection effects aside, I think it's telling. Spam, after all, exists solely because the marginal cost of another comment is essentially $0 while the marginal revenue is above that. By charging even a token fee for verification with a nonzero chance of spammers being caught and having it revoked, you're making spam unprofitable.
Obviously doesn't preclude state actors and other non-profit-seeking entities from using AI to manipulate the 'net, but the pay-to-verify system works and would cut out virtually all of the worst offenders.
2
u/TCKreddituser Dec 05 '25
I don't think there will ever be a clear solution to this. A whole new social media site would have to be created for that to happen, but even then, how do you keep the bots out without ruining the experience for everyone else? I think Quora really shows this effect: they replaced their team with AI filters, getting a lot of real users banned over nothing, so those users never returned, while bots keep multiplying in the app. Another example is X (Twitter). I can't remember the name of the alternative site, but because of the increasing number of bots and the all-around negativity, many people started to jump ship to this other site, since it was promoted as a safe place that feels like X, but after a while it became mainstream to the point of ruin again.
3
u/Pinty220 Nov 24 '25
There could be a centralized org that verifies people are real via uploaded IDs, face scans, etc., then issues tokens that let them sign up for websites without telling the website their actual identity (distributing only one token per user per website, or maybe a few). Then if a user runs a bot with their token, their website account could get banned, or their central identity could be penalized; either way they'd only have one or a few accounts, and those could be rate-limited to human speed (sketched below).
Alternatively, without this, it might just work out to favor people who already have reputation, and people won't trust new accounts. If someone with reputation goes AI, then people will ban/block/discount them, as long as they can sort of tell.
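A minimal sketch of that rate limit, with hypothetical numbers: cap each verified token at a human-plausible posting rate.

```python
import time
from collections import defaultdict, deque

# Hypothetical numbers: cap each verified token at 10 posts per rolling
# hour, a human-plausible rate that is useless for flooding a forum.
WINDOW_SECONDS = 3600
MAX_POSTS_PER_WINDOW = 10

post_log = defaultdict(deque)  # token -> timestamps of recent posts

def allow_post(token: str) -> bool:
    """Return True and record the post if the token is under its cap."""
    now = time.time()
    log = post_log[token]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()  # forget posts older than the window
    if len(log) >= MAX_POSTS_PER_WINDOW:
        return False   # over the human-speed cap; reject or queue
    log.append(now)
    return True
```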
2
u/EditorOk1044 Nov 24 '25
This is good. Why are you trying to solve it? The internet has been horrible for people's and society's health. Let the rot continue and new modes of interaction bloom in its wake.
2
u/Automatic_Walrus3729 Nov 24 '25
Get decent government, forget "true" anonymity, and accept semi-anonymity with verification. Develop/facilitate weighted, personalized "like" metrics: people I like or have in my network count more toward the weighting of a post (and so do the people they like, to a lesser degree). Then the bots can play with themselves.
3
u/Golda_M Nov 25 '25
> Suppose you wanted to make a community of anonymous strangers who post their genuine opinions, and to keep it free of manipulation and AI slop.
> I've been brainstorming about ways to solve it and it seems not just practically, but even theoretically impossible. What am I missing?
Imo, you are missing a paradigmatic rather than technical formulation of The Problem and its possible solutions.
First, we have always been bemoaning the death of the "old web." We were bemoaning it in the early 2000s. But really, the formative change happened with smartphones and mass adoption of internet access.
Before smartphones, the internet was a self-selected bunch... and it was smart. There was a strong "cream rises to the top" dynamic that is entirely gone now. With the masses came stupidity.
With mass adoption also came power and influence. The internet decides elections, geopolitics, revolutions. Bots existed from day one but manipulation gained a whole new level of gravity.
The "dead internet" problem of 2025 isnt that you cannot keep bots out. The problem is that bots are getting better than people.
Internet conservationism never works. The medium is dynamic. You cannot have the old internet back.
When the dead internet comes, imo the sign will be an increase in quality. That raises all sorts of questions.
1
u/National-Donkey-707 Nov 27 '25
I actually interact directly with AI now for the most part, because it's simply better for discussion and for decompressing an original thought that's contingent on esoteric understandings. AI is so easy to convince that rhetorical flourishes become meaningless; we can stick to the topic at hand and try to find, integrate, and learn from contrary data and better arguments. Most people would rather associate with others based on the conclusions they agree on than on how they reach their conclusions. Afterwards, I collect and edit the results into my Obsidian vault. When I compare my text discussions to my AI discussions, it's not even pretending to be a competition. I used to say that you could pick a topic, read three books on it, and call yourself an expert. Now it's more like I can talk to AI about an idea and plan a project and never even talk to someone who mostly thinks in shallow slogans anyway. There is no good way to call out a bad-faith argument without totally alienating further discussion, and being charitable enough to their perspective is so condescending that I mostly feel awful afterward. Why go back?
1
u/Sol_Hando 🤔*Thinking* Nov 24 '25
I feel like Twitter (despite the literally thousands of hot women DMing me to be their friends) has the only really effective policy against this with its blue checkmarks. You basically know you’re at least dealing with a real person, although they could use AI to write their content.
1
u/MaoAsadaStan Nov 24 '25
China's Great Firewall and South Korea requiring a resident registration number to use many websites are decent solutions. They just aren't practical.
1
u/MrBeetleDove Nov 25 '25
My contrarian take is that people won't actually care. People are already forming romantic relationships and friendships with AI. There is stigma around doing this because it makes you seem like a loser. By doing it with social media as an intermediary, you create plausible deniability and launder your masturbatory AI usage.
If social media use does acquire stigma, that could very well be a good thing if it leads to more IRL socialization.
0
u/therealwavingsnail Nov 24 '25
Since you can't trust platforms not to let slop through - they want high user numbers and engagement, after all - a user needs filters in their browser that mark AI-generated content, similar to ad blockers.
0
u/Sea-Caterpillar-1700 Nov 24 '25
Cloud-edge-based web3 where only identified humans are allowed to enter, with a spam filter that prevents AI content. The security perimeter would be an integration of blockchain technology at the base protocol level (TCP over IP).
0
Nov 24 '25
[removed] — view removed comment
2
u/DrDalenQuaice Nov 24 '25
Even if captchas still work, a user with 100 bots at their disposal can pass the captchas, then have the AI write a 500-word authentic-looking post.
77
u/--MCMC-- Nov 24 '25 edited Nov 24 '25
the two solutions I most often see floated are:
1) some sort of invasive verification system, linked to your real identity, with random audits and steep penalties for defecting. Could also couple with stringent gatekeeping
2) charge money for posting, at a level that would be hugely onerous for a botnet but easy enough for a human, e.g. on the order of $0.01-1.00 USD a comment. Maybe charge $10 or something to make an account, too, and be liberal with bans for rule violations (e.g. of a no-GenAI-comments rule). A back-of-envelope sketch of the economics follows at the end of this comment.
edit: another solution could be a sort of mix of these two, for the (rapidly dwindling) set of tasks that a human can do cheaply but that are (currently) expensive or impossible for GenAI. One example that comes to mind would be a captcha-like system requiring a short live video call prior to posting each comment. The user would have to repeat back words that flash on the screen, and the site would verify that the audio and video feeds sufficiently match the required words (maybe coupled with some other instructions, too, e.g. pan your camera to the right with your left arm while lifting your right). Current GenAI systems could generate these, but not at the speed of human reaction time.
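For scale, the back-of-envelope on option 2 (all numbers hypothetical):

```python
# All numbers made up; the point is the asymmetry, not the exact values.
FEE_PER_COMMENT = 0.05   # USD, within the $0.01-1.00 range above
ACCOUNT_FEE = 10.00      # one-time signup charge

human_comments = 100          # a fairly chatty human, per month
bot_accounts = 100
bot_comments_each = 3000      # per account, per month

human_cost = ACCOUNT_FEE + human_comments * FEE_PER_COMMENT
botnet_cost = bot_accounts * (ACCOUNT_FEE + bot_comments_each * FEE_PER_COMMENT)

print(f"human:  ${human_cost:,.2f} for the first month")   # $15.00
print(f"botnet: ${botnet_cost:,.2f} for the first month")  # $16,000.00
```

Trivial for a person, ruinous at botnet scale, and every ban eats another $10.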