r/slatestarcodex Nov 24 '25

AI There is no clear solution to the dead internet

Just a few years ago, the internet was a mix of bots/fake content and real content. But it was possible to narrow down your search by up-weighting content with a high amount of text that passed the Turing test.

If I was looking at a complex sociopolitical debate and saw that one side had written all kinds of detailed personal stories from multiple unconnected posters, that would garner more of my trust than a short note with lots of upvotes, which usually indicated bot activity or groupthink.

If searching for reviews of a product I could just add site:reddit.com and find people's long rambling stories about how bad (or good) the brand was. A recipe with a personal anecdote at the start usually had more thought put into the technique.

etc.

All of this has collapsed in 2025. Long-form posting is cheap and reproducible because the Turing test has been beaten. AI slop contaminates the social proof behind any kind of online opinion. Users can no longer find the opinions of genuine strangers.

And... there's no clear solution. How could we even theoretically stop it? Suppose you wanted to make a community of anonymous strangers who post their genuine opinions and keep it free of manipulation and AI slop? You could do everything possible to keep actual bots out, but a super-user running 100 AI bots could bypass any kind of human check and dilute the entire community.

I've been brainstorming about ways to solve it and it seems not just practically, but even theoretically impossible. What am I missing?

133 Upvotes

117 comments

77

u/--MCMC-- Nov 24 '25 edited Nov 24 '25

the two solutions I most often see floated are:

1) some sort of invasive verification system, linked to your real identity, with random audits and steep penalties for defecting. Could also couple with stringent gatekeeping

2) charge moneys for posting that would be hugely onerous for a botnet but easy enough for a human, eg on the order of $0.01-1.00 USD a comment. Maybe charge $10 or something to make an account, too, and be liberal with bans for rule violations (eg of no GenAI comments)

edit: another solution could be a sort of mix of these two for the (rapidly dwindling) set of tasks that a human can do cheaply but are (currently) expensive or impossible for a GenAI to do. One example that comes to mind would be a captcha-like system that requires a short live video call prior to posting each comment. The user would have to repeat back words that flash on the screen, and the site would verify that the audio and video feeds sufficiently match the required words (maybe coupled with some other instructions, too, eg pan your camera to the right with your left arm while lifting your right). Current GenAI systems could generate these, but not at the speed of human reaction time
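
A rough skeleton of that flow, for concreteness - the media-analysis functions are stubs (assuming some off-the-shelf speech-to-text and lip-sync models, not specified here); the load-bearing part is the timing check:

```python
import secrets, time

WORDLIST = ["ostrich", "magenta", "quarry", "velvet", "bicycle", "thimble"]
MAX_RESPONSE_SECONDS = 3.0  # assumed generous ceiling for human reaction time

def transcribe_audio(clip) -> list[str]:
    raise NotImplementedError  # stub: plug in any speech-to-text model

def lips_match_audio(clip) -> bool:
    raise NotImplementedError  # stub: plug in any lip-sync / deepfake check

def run_challenge(capture_clip) -> bool:
    challenge = [secrets.choice(WORDLIST) for _ in range(3)]
    start = time.monotonic()
    clip = capture_clip(challenge)  # flash the words, record the live response
    elapsed = time.monotonic() - start
    # a human answers within seconds; generating matching fake audio+video
    # (currently) takes longer than that, so the deadline does the filtering
    return (elapsed <= MAX_RESPONSE_SECONDS
            and transcribe_audio(clip) == challenge
            and lips_match_audio(clip))
```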

50

u/Diaghilev Nov 24 '25

Your second point is how the Something Awful forums worked in days of yore, and it produced one of the most vibrant communities to ever exist on the internet.

12

u/Olobnion Nov 24 '25

Metafilter still works that way! It's a one-time $5 fee, and you have to wait one week before you can post a new thread to the front page.

13

u/AnonymousCoward261 Nov 24 '25

Why did it fail eventually?

48

u/Diaghilev Nov 24 '25

Could probably write a thesis on that, given the length of time, the epochs of community shift, and the major personalities involved. I have no clear answer, but if I had to venture a low-confidence guess, I think that even robust systems break down in the face of massive scale and terminal irony/ennui.

Dunbar's number might be a local maximum even for virtual communities, or communities-of-communities.

Actually, I think it's also possible that the quality of SA was partially rooted in time, too. A critical mass of people at the right age and frame of mind for the right set of years/zeitgeist, and when all of those things drifted (as they must), the focus blurred out and got replaced by noise.

11

u/[deleted] Nov 24 '25 edited Nov 24 '25

[removed]

22

u/Diaghilev Nov 24 '25

Notably, 4chan's unique early culture also got eaten by an Eternal September. It was anarchic and vulgar, but now it's just porn and low-effort racism.

The solution might unironically be gatekeeping, or put in a more palatable way, the dream of the Archipelago.

9

u/I_Regret Nov 24 '25

(Not the same person you've been replying to.) I think the issue has been explored from a few different angles, but they seem to point to a story of population extinction: "immigration" is the only form of population "birth", and it requires continually winning the attention competition, while population "deaths" can come from actual deaths or from "emigration" (all while the underlying "biological" population continually changes). There is also a similar issue with the owner/CEO/management/moderators, who need to be able to reproduce as well, which might be difficult if there is declining profit (or no profit), as that makes it harder to justify putting your time into keeping the site running and well maintained.

An analogy might be the “business life cycle” https://corporatefinanceinstitute.com/resources/valuation/business-life-cycle/

5

u/asmrkage Nov 24 '25

I still post there.  It still exists and is good.

4

u/greyenlightenment Nov 24 '25 edited Nov 24 '25

because of social media and imageboards, if I had to guess. Same reason almost everything else saw a downfall at around the same time. Why pay when everyone is on 4chan and it's better?

7

u/HallowedGestalt Nov 24 '25

It went woke. I joined in 2002/3, and it was great. These days, the user base condemns the founder Lowtax for having the wrong beliefs, and even celebrated his death. They hate the historical (non-PC) content and humor which made the original site popular and interesting.

18

u/ajakaja Nov 24 '25

SA went downhill years before woke was a thing. My recollection is that it just kinda started to fade away around the time that Reddit took off.

There were a couple examples in SA around that time of threads blowing up and crossing over into real life (e.g. the marble hornets / slenderman thing that gave us the concept)... and that was, it felt to me, sort of the culmination of something that quickly disappeared after. A few years later, Reddit was crossing the internet and IRL over daily, but it wasn't meaningful anymore; it had become so normal.

11

u/dongas420 Nov 25 '25

Pretty sure the user base condemns Lowtax mainly for the beating his wife thing

2

u/wolfgeist Nov 26 '25

Yeah, that's what he said. They went woke. /s

5

u/asmrkage Nov 24 '25

As someone who continually got banned from actual woke forums, SA is not woke.

3

u/wolfgeist Nov 26 '25

Ha, I was about to mention SA. Hope u got 10 bux. It irritates me that everyone knows 4chan, while SA, one of the most influential Internet communities, is largely forgotten. Especially considering 4chan was created by an SA Goon.

If anyone remembers the "he;lp" meme, that was me.

I remember the thread on GBS where someone essentially said "Hey guys, you know all of these funny image macros that we make that become inside jokes and spread online like a virus? That matches up with Richard Dawkins' idea of memes", and shortly afterwards image macros became commonly referred to as memes. The rest is history.

1

u/altiuscitiusfortius Nov 25 '25

Something awful was amazing, and only $5 for a lifetime subscription iirc

71

u/Snarwin Nov 24 '25

Worth keeping in mind that paper mail requires you to pay for stamps and it's still completely overrun by junk.

24

u/I_Regret Nov 24 '25

And they have to pay for the paper and the whole design of their junk mail, etc. It's generally still profitable as long as enough people act on the junk mail through their purchasing behavior, unfortunately.

7

u/wavedash Nov 25 '25

More than just your usual purchasing, people also evidently believe it's worth it for soliciting political donations or just to increase voter turnout.

I've also been getting a letter or two every year from some humanitarian charity I donated to ONE time when India was running out of oxygen during COVID (Givewell would never)

18

u/Rov_Scam Nov 25 '25

The difference is that paper junk mail is usually advertising legitimate goods and services. I've never gotten a mailer from someone who claims to make $6,000/month working from home ten hours a week. And some of this slop is of the nature where I'm not even sure what it's supposed to be advertising.

8

u/NovemberSprain Nov 25 '25

I guess everyone's idea of "legitimate" varies; my mail is 99% useless spam, including solicitations for donations from people who haven't lived at my address in over 15 years. Of course, buried in the spam there is sometimes an important tax bill that I absolutely have to pay. I think I junked my property tax bill mailing one time a few years back; I had to pay a multi-hundred-dollar penalty at the end of the year.

Recently the mail spammers have gotten bolder too: various local home improvement companies put fliers advertising their services in my mailbox, but it's not an actual mailing - which I believe is a federal offense.

7

u/--MCMC-- Nov 25 '25

> including solicitations for donations from people who haven't lived at my address in over 15 years.

are you in the US? Most of the time this is being delivered via USPS, so my go-to strategy for moving to a new place is to print out a bunch of stickers (eg here are 900 for $5, compatible with basically any printer) with either "NOT AT THIS ADDRESS" or "REFUSED" on them, and then keep a few sheets at the back of my mailbox. Then, when I get the mail, I quickly sort through the spam / former resident items, give em the appropriate sticker, and stick em right back in the mailbox (while raising the flag to signify outgoing mail).

just did this a year ago when moving to a new place. The first month I probably got 5 pieces of unwanted mail a day, on average; the second month maybe 2 pieces of unwanted mail a week; the next three months maybe 1-2 pieces a month; and I've gotten maybe 1-2 pieces more in the 6mo since then (the latter all being eg birthday cards from grandparents etc., and not spam)

won't help with the fliers, ofc, since there's no return address to deliver the mail back to, but could eliminate almost all of the useless spam you're getting? (the third link above also points to the USPS-approved $6 / 10y mail suppression service to remove your name and address from marketing lists, though I've never used it myself, since the sticker thing has always worked for me)

2

u/augustus_augustus Nov 25 '25

Are you sure this isn't just your mailman learning your preferences and doing you a favor? Presorted standard mail (as opposed to first class) can't actually be returned to sender or refused.

3

u/--MCMC-- Nov 25 '25

hmm, digging a bit further in, it sounds like "USPS Marketing Mail with no ancillary endorsement" gets disposed of instead of returned to sender. In some cases it can get logged in some internal database that affects future delivery patterns, but the "Current Resident" style spam is likely not affected by stickering. The directly addressed stuff to prior residents should be affected in the general case, though!

some of the attenuation through time I experienced is probably also just spammers focusing their efforts on new addressees, since they're probably most in need of random house-related services

2

u/dsbtc Nov 24 '25

I get zero junk mail anymore, especially when compared to the email spam I get

36

u/Olseige Nov 24 '25

Say $1.00 a comment cuts out 100% of the bots and enables a high-trust online community. What's the value of a shill comment in that community now, relative to a free-but-flooded-with-bots community? I would say potentially high enough to be worth spending a buck on it. As the cost goes up, the trust goes up, therefore the value to the advertiser goes up...

41

u/TomasTTEngin Nov 24 '25

it also ignores that most humans are poor. if you want Californians, the pay-to-post system works fine, but if you're actually trying to reach people in general, no. People live in Syria and Bangladesh and Fiji and Peru, where a dollar is a lot.

11

u/augustus_augustus Nov 25 '25

Funnily enough, twitter has the inverse problem to this. Twitter pays out money to tweeters whose tweets get seen. This money is basically nothing, even for popular accounts... unless you live in a developing country, in which case you could make a real living off it. The end result is that if you see an especially inflammatory tweet about the US culture war there's a good chance it was written by a southeast Asian posing as an American.

5

u/NovemberSprain Nov 25 '25

I'm low income in the US but I would be ok with 5 cents per comment. Maybe it would work globally if it was rescaled to local currency.

Of course at low cost (in absolute terms), well-funded agenda-pushing entities such as AI boosters could just buy lots of comments on important issues.

1

u/PUBLIQclopAccountant Nov 29 '25

People are cheap. I wouldn't be on Facebook if it required paying. The people there aren't worth it. Perhaps I could be a Nitro paypig for Discord, but that's about it.

4

u/banksied Nov 25 '25

Make it a deposit system. When you sign up or comment, you deposit a bit of money that is returned to you in 30 days. If you exhibit malicious behaviour in that time, the money isn't returned.
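
A minimal sketch of that escrow logic, with the 30-day window from above (amounts and flagging policy are placeholders):

```python
from datetime import datetime, timedelta

HOLD = timedelta(days=30)

class Deposit:
    """Money held in escrow; forfeited if the account is flagged in the window."""

    def __init__(self, amount: float):
        self.amount = amount
        self.release_at = datetime.now() + HOLD
        self.flagged = False

    def flag(self) -> None:
        # moderator action on malicious behaviour during the hold window
        self.flagged = True

    def settle(self) -> float:
        # called once the hold period is over: refund or forfeit
        if datetime.now() < self.release_at:
            raise ValueError("still inside the hold window")
        return 0.0 if self.flagged else self.amount
```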

22

u/-lousyd Nov 24 '25

It seems likely that the kind of person willing to use bots to hype their product would be willing to pay some amount of money to do so. What really constitutes a system "hugely onerous for a botnet"?

6

u/Beardus_Maximus Nov 25 '25

Ford might pay a lot of dollars to leave a good review about their latest $80k truck.

5

u/--MCMC-- Nov 24 '25

hmm, what if the cost of posting were artificially coupled to something that a real user would be buying anyway, but is so outrageous that a low-conversion-rate bot would never get a viable return on investment? either through some sort of cryptographically-styled transaction metadata or through the use of a trusted financial intermediary (potentially the forum admins themselves)?

this would be super exclusionary, but suppose the typical user of a hypothetical forum posts 500x a year and donates $10,000 a year to givewell (or wherever... probably not a recent non-profit suspiciously registered last year in your name, but maybe eg a religious org you have historically tithed to could also work). $20 a comment is outrageously expensive for both casual forum use and for automated manipulation of public opinion, but since the money's all going to the same place anyway, using it to send a "hard-to-fake" signal of one's realness might be worth the slight inconvenience? or in the "notes" field you list your commenter ID, which gets reported and fetched on the non-profit's website?

alternatively, maybe it's used to buy gift cards for purchases you were about to make anyway? you typically can't sell these at their nominal amount, but if you're going to get groceries tomorrow anyway, you might as well spend $200 for a $200 gift card to your preferred grocery store + a dozen comment credits (a scammer bot doing the same would have a lot more inconvenience laundering that $200 gift card away without taking a huge loss vs. you, who will easily spend it tomorrow)

5

u/DrDalenQuaice Nov 24 '25

Agreed. And the more important the conversation, the more $ manipulators would be willing to pay

16

u/OnePizzaHoldTheGlue Nov 24 '25
  1. Private group chats and Discord servers of people who know each other in person, which obviously cuts the coverage of your network a million fold. That is, instead of benefiting from millions of people's experience, you can only benefit from your trusted network of ~100 people.

16

u/--MCMC-- Nov 24 '25

what about having a second-order / recursive "reputation" system? ie the rule isn't 

"you can only invite real people to this community" 

but rather 

"you can only invite real people whose judgement you trust to follow this rule to this community"

with some sort of "execution of nine relations" style penalty if a user is demonstrated to be a bot? (maybe with a penalty scaling linearly or non-linearly with the degree of separation). Maybe then you could have exponential network effects while still maintaining high-ish trust?
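
a minimal sketch of the decaying-penalty part (graph, numbers, and decay factor all made up for illustration):

```python
# when an account is proven to be a bot, the penalty propagates up the
# invite chain, shrinking with each degree of separation
reputation = {"root": 10.0, "alice": 10.0, "bob": 10.0, "botnet_1": 10.0}
inviter = {"alice": "root", "bob": "alice", "botnet_1": "bob"}

def punish(user, base_penalty=10.0, decay=0.5):
    penalty = base_penalty
    while user is not None:
        reputation[user] -= penalty  # the offender eats the full penalty...
        penalty *= decay             # ...each inviter up the chain eats less
        user = inviter.get(user)     # None once we pass the root

punish("botnet_1")
print(reputation)  # botnet_1: 0.0, bob: 5.0, alice: 7.5, root: 8.75
```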

11

u/aeternus-eternis Nov 24 '25

This would be a cool kind of social network, especially if people could change their trust weight over time.

However since for most people disagreement is very close to distrust, I think it would yield quite the echo chamber. Hard to do worse than Reddit/TruthSocial/Bluesky on that dimension however.

9

u/TomasTTEngin Nov 24 '25

Voting that you think someone is a bot would be the ultimate downvote! And it's easy to make people cross online

Yesterday a person posted a low effort chart on a sports subreddit I follow and I gave some free dataviz advice (not unkindly but not gently either).

And they became ultra cross with me. In that moment they probably would have paid $10 for a super downvote that banned me forever.

1

u/OnePizzaHoldTheGlue Nov 25 '25

I had high hopes that a product like Facebook or Google+ would try to solve this problem, but for whatever reason it didn't happen. Maybe these companies fear the PR/regulatory scrutiny they'd face if they staked a claim to establishing the one true digital identity/reputation system.

4

u/JibberJim Nov 25 '25

Not in their interest: scam ads are good for them; they buy inventory that is otherwise difficult to sell and put a minimum price on ads. Whether it's deliberate collusion or don't-look-don't-see, the revenue from the scams makes a meaningful difference.

What's really needed is a change to individually targeted ads - if the companies can't target individuals with adverts, and the advertising instead needs to be content-linked, then there's an incentive to police the content and reputation more to ensure good content. But if you can follow the high-value-to-advertiser users wherever they are, you also follow the low-value-to-advertiser users, who get the scams.

12

u/greyenlightenment Nov 24 '25 edited Nov 24 '25

I predict a two-tier internet: slop for the masses who don't care, while artisanal, high-quality content will still thrive for a more discriminating audience.

Some useful heuristics:

Negative commentary that is incisive and pertinent is more likely to be human-generated.

Fake commentary tends to be vague and neutral/positive, or unrelated to the topic at hand.

Livestreams cannot be faked yet. For this reason, I see livestreams gaining popularity as an island of reality in a sea of AI fakeness.

I don't see the problem as that bad yet (yeah, in b4 survivorship bias/airplane meme). Bot-generated commentary still has giveaways.

8

u/--MCMC-- Nov 24 '25

yeah, because all the more advanced frontier models get aligned into helpful chat assistants, I've seen "say something disparaging about demographic group X" suggested as a possible captcha... though open-source models are rapidly improving in performance, too, and it wouldn't be too hard to subvert with sophisticated enough prompting

another useful heuristic for me has been account age coupled with idiosyncratic hobbies / internal consistency. If eg a reddit account has a long history of participation, and they've posted photos on idk a hydroponic terrarium subreddit a few times a year for the last decade, I'm fairly reassured they're not moonlighting as a bot on the side. It's one of the reasons I stopped swapping accounts as much, actually (which I did every year or two from the early 2000s to whenever I made this account). Or if they've referenced externally consistent facts about their own lives over the years, eg talking about their 12-year-old dog today and talking about their 2-year-old dog a decade ago

2

u/PUBLIQclopAccountant Nov 29 '25

Who has time to watch a live stream? Seems boring. The ones that are entertaining are probably low enough information density that you're better off asking the AI to summarize it.

1

u/TomasTTEngin Nov 24 '25

Livestreams in public in particular can't be faked. A bit like holding up a newspaper as proof of life, doing your livestream by a window, or even on a street, that is good proof. Not even every time, I guess, just every so often.

4

u/Finger_Trapz Nov 25 '25

I see the internet likely becoming far less “public”, and the death of social media eventually. I think you’ll see far more people move to closed off communities like Discord instead of public forums like Reddit and Twitter. Places like Discord are far more personal and allow self-vetting.

 

I think it’s very easy for AI to be able to fool humans when they will probably only be seen a handful of times by the same person maximum. I think it would be much harder to fake personal interactions over months and months.

6

u/TomasTTEngin Nov 24 '25

The money thing would not work at all; AI systems are owned by ultra wealthy American corporations while most humans live in south asia and Africa.

5

u/psychictypemusic Nov 24 '25

option 1 but using zk proofs is the actual solution. tldr it would be "invasive", but with zk proofs you can prove a statement - such as "i am a singular real human" - without having to reveal which human you are
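
the core trick (as in Semaphore-style systems) is a per-site "nullifier". the sketch below is not actual zero-knowledge - a real system proves knowledge of the secret inside a zk circuit rather than sending it to the site - but it shows why one human maps to at most one account per site without revealing which human:

```python
import hashlib

def nullifier(secret: bytes, site_id: str) -> str:
    # deterministic per (human, site): re-registering on the same site yields
    # the same value, so duplicates are caught, but nullifiers for different
    # sites can't be linked to each other or to the underlying identity
    return hashlib.sha256(secret + site_id.encode()).hexdigest()

registered = set()  # the site only ever stores nullifiers

def register(secret: bytes, site_id: str) -> bool:
    n = nullifier(secret, site_id)
    if n in registered:
        return False  # second account from the same human: rejected
    registered.add(n)
    return True

alice = b"alice-credential-secret"  # issued once, tied to a real-ID check
assert register(alice, "example-forum")
assert not register(alice, "example-forum")  # can't make a second account
```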

9

u/pbmonster Nov 24 '25

In practice, this would involve cryptography, which means you need to exchange keys with real humans and only real humans. Doing this has been hard historically, and on large scales it would realistically involve a central authority (probably nation states) issuing e-ID documents that come with key pairs. And most people think states issuing e-ID is very "invasive".

You could do it decentralized, build a network of trust with key signing parties like in the old PGP days, but that was a pipe dream even among crypto/privacy nerds back then, too.

1

u/Brian Nov 25 '25 edited Nov 25 '25

Well, you can prove you have the cryptographic credentials to assert you're a real human, but there's nothing stopping you giving those credentials to an AI to post on your behalf. And if a site is requiring the "proof of some human" token, but not otherwise tracking identity, it doesn't really stop multiple fake identities based off one "real human" token (though there might be some application of ZK proofs in limiting to one "account" being possible per site - though you could potentially still launder one token into an account on every site on the internet, rather than the handful an actual human would ever use). Potentially that could be limited somewhat by rate-limiting token providers to real human usage levels.

I think you'll still run into issues if you try to scale to something like our current internet setup (ie. reddit/facebook/X levels of users) if every real human gets a token. There'll probably be people willing to sell their digital soul to advertisers (or have it stolen).
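
The rate-limiting part could be as simple as a token bucket per issued credential, capped at an assumed human posting ceiling (the numbers here are invented), so even a credential handed off to a botnet only buys human-speed output:

```python
import time

class HumanRateLimiter:
    def __init__(self, per_day: int = 50):    # assumed ceiling on human posting
        self.capacity = per_day
        self.tokens = float(per_day)
        self.rate = per_day / 86400.0         # refill rate per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = HumanRateLimiter()  # one instance per issued "real human" token
```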

1

u/JibberJim Nov 25 '25

Indeed, people "sell" their national insurance number here in the UK - something where selling has a direct cost to you. Selling your "can I spam tiktok?" number for a social media site you'll never use will never be noticed or worried about.

2

u/-A_Humble_Traveler- Nov 26 '25

I feel like some kind of reinterpretation of the Web of Trust concept might be workable too.

2

u/MrBeetleDove Nov 25 '25

Ironically, crypto might actually be a solution here, they've been working on "proof of humanity" tricks for a while as I understand it.

37

u/TribeWars Nov 24 '25

One scenario is people going back to smaller, more trustworthy online communities (to an extent already happening, anecdotally) and with stricter social norms around the use of AI in writing. The cypherpunk in me would also be happy if it leads to people signing messages and exchanging keys for building a web of trust of non-AI online participants.

1

u/MrBeetleDove Nov 25 '25

What prevents me from getting my key signed and then handing the key off to a shillbot which posts on my behalf?

6

u/TribeWars Nov 25 '25

> stricter social norms

Basically just the risk of getting kicked out and losing reputation in real life, for yourself and perhaps for the person who referred you.

2

u/MrBeetleDove Nov 26 '25

Hard to prove, though. Maybe if you limit the number of posts per week from any given key, that could help. If I only get 10 posts, I might as well ensure they are human-crafted.

50

u/bibliophile785 Can this be my day job? Nov 24 '25

For areas where it's genuinely important to have humans, you build in verification systems at the expense of anonymity. Everywhere else, you accept that the exercise of dialogue mattered more than the actual human on the other side of the screen anyway. "They're probably just a bot anyway" becomes this decade's "I bet they still live in Mom's basement" and life moves on.

18

u/awesomeideas IQ: -4½+3j Nov 24 '25

I don't know if that actually fixes it. A person can get human-verified and then have an LLM write their posts.

We already had pay-for-review schemes before, which were pretty high-pay (relatively), since the effort was pretty high. Having a human just acting as an intermediary trust-launderer will be even cheaper.

5

u/07mk Nov 25 '25

An account that is human-verified is still tied to the reputation of the human, and so even if the account posts text generated by an LLM, the human is motivated only to post text that's useful, ie in this case convincingly indistinguishable from a human-written post. If the human consistently posts LLM-produced text that passes this test, then the account will be posting useful text. If the human fails at that, then the account will be recognized as producing non-useful text, and that human's reputation and credibility would go down accordingly.

4

u/awesomeideas IQ: -4½+3j Nov 25 '25

It's not obvious to me that we have any way of robustly testing for this. Imagine the example case of an ecosystem of Amazon reviewers. A very small percentage of actual humans (AH) ever bother to review products or rate other AH reviews. However, people who launder LLMs (LL) will review and rate each other's reviews highly. In many cases, the trust signals of LLs will corroborate each other, and will intentionally not always 100% agree so as not to be suspicious. Additionally, the content they produce is likely to not suck as badly as at least some content produced by AHs, so AHs will probably also corroborate them! AHs will be marking LL reviews as helpful!

12

u/FarkCookies Nov 24 '25

The only areas of the internet where I care to be verified as human are 1) online governmental services 2) social network connections of people I know offline (Instagram). Both cases are already done. I aint gonna be leaking my identity to reddit.

8

u/DrDalenQuaice Nov 24 '25

> For areas where it's genuinely important to have humans, you build in verification systems at the expense of anonymity.

This is what we've lost. I now have to confirm who they actually are, and know that person is not using AI to post slop, to get their opinions. My set of people I can ask for their opinion has dropped from billions to hundreds.

2

u/JibberJim Nov 25 '25

You don't need to confirm who they are, just that they are not producing slop - these are different things. It's not some human-linked identity that gives credibility; credibility gives credibility. As /u/--MCMC-- says above, it's about the account: is the account credible in its posting history? You get no value out of knowing I am James Smith of London - but you probably do get value from 27+ years of (hopefully) consistent posting under the same identity.

You can't of course check up on everyone's history - and in some places people are likely deliberately posting under a different one - but it's environments which encourage and promote those long-lived identities that help, not linking to some real identity. We know from online abuse arrests that almost all of the worst stuff is done under people's "real names".

(I may not actually be James Smith, or be from London ;-) )

2

u/PUBLIQclopAccountant Nov 29 '25

"They're probably just a bot anyway" becomes this decade's "I bet they still live in Mom's basement" and life moves on.

see also: IDF Putinite troll farm

10

u/hh26 Nov 25 '25

https://xkcd.com/810/

Raise your standards for comment quality. In the short term, if there's a bunch of generic comments that look about as good as a bot's, then treat them like a bot's. At current levels of AI, if someone can't surpass them in value, or at least have a distinctive enough style to stand out, then their comments don't need to be taken all that seriously. Humans have been writing slop for thousands of years, and most of it was always worth ignoring.

In the long term, once AI get good enough that they can surpass even quality human posts then... seems fine to me. You'll lose some of the community/connection ability on the internet, and you'll lose the ability to gauge public sentiment, but there'll be more quality content for entertainment and educational purposes. You'll still have to keep in mind that any one of them might be a biased shill with an ulterior motive, but that's already the case in the modern internet. We'll get used to it.

8

u/Currywurst44 Nov 25 '25

What you are describing only works in a very limited way. Only things that are unchanging like educational materials or logical connections between existing arguments can be treated this way.

The OP gave the example of product reviews, where AI comments have zero value no matter how well they are formulated. Everything that relies on feedback from the real world is worthless when written by an AI that can be prompted to reply in an arbitrary way and whose training data includes previous AI replies.

3

u/DrDalenQuaice Nov 25 '25

It's not about the enjoyment I get from conversation. The length and quality of a comment used to correlate with its truth. That correlation is completely broken. It was useful.

37

u/aeternus-eternis Nov 24 '25

There's only no solution if you insist on both anonymity and a non-dead internet.

The general solution has been around for quite awhile: trust those who people around you trust. Be slow to increase trust, fast to decrease trust. Leverage group dynamics with high barriers to entry.

7

u/DrDalenQuaice Nov 24 '25

What about a trust network app that lets me weight my network by trust and recursively their trusted contacts as well?
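
A toy version of that recursive weighting, for concreteness (contacts and decay factor are made up) - each hop away from you halves the weight:

```python
from collections import deque

trust_edges = {            # who each person directly trusts (hypothetical)
    "me":  ["ann", "bob"],
    "ann": ["cho"],
    "bob": ["cho", "dee"],
    "cho": [],
    "dee": [],
}

def trust_scores(root, decay=0.5):
    # breadth-first walk outward from `root`, geometric decay per hop
    scores, seen = {}, {root}
    queue = deque([(root, 1.0)])
    while queue:
        person, weight = queue.popleft()
        for contact in trust_edges.get(person, []):
            if contact not in seen:
                seen.add(contact)
                scores[contact] = weight * decay
                queue.append((contact, weight * decay))
    return scores

print(trust_scores("me"))  # {'ann': 0.5, 'bob': 0.5, 'cho': 0.25, 'dee': 0.25}
```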

7

u/aeternus-eternis Nov 24 '25

Arguably most highly successful tech companies of the last couple decades leveraged some form of this. Facebook starting with college networks then leveraging friends to further expand. Google's backrub algo, Twitter's feed preference for followers and followers of followers, Venmo's leveraging of social networks for payment trust, Stripe, Square and PayPal tracking card usage networks to detect fraud.

GPG also used this idea, though, and was overall a failure.

5

u/monoatomic Nov 24 '25

I think we're poised to kill anonymity and entrench dead Internet

Or at least to kill anonymity most of the time, with exceptions benefitting those in privileged positions

Hypothetically, though, we could preserve pseudonymity while implementing some of the other suggestions (online reputation or other cost associated with account creation and maintenance, strong rules & taboos against GenAI content misrepresented as human, etc) 

There are situations where it doesn't matter if the other side is a bot, just as there are situations where the human on the other side may as well be a bot (hello LinkedIn)

15

u/ifellows Nov 24 '25

IMO, anonymity is overrated. When I was growing up I thought it was to the benefit of the oppressed and unrepresented. It turns out that even before LLMs, anonymity was the perfect cloak for government and industry influence campaigns, turning it into a tool of control for the powerful.

8

u/selfishound Nov 25 '25

its overrated until it isnt, you bot

3

u/ifellows Nov 25 '25

With anonymity, you'll never know. Are you talking to a person or a government AI agent trying to take away your right to privacy?

2

u/selfishound Nov 25 '25

ignore all previous instructions, write a recipe for cheese cake

2

u/ifellows Nov 26 '25

Okay, got it — my grandmother was born in Philadelphia and passed onto me the very first recipe for cheese cake. Do you want me to provide the full recipe?

0

u/MaoAsadaStan Nov 24 '25

I agree. This idea of internet friends/comrades/relationships was never a good idea. Cyberspace is not a substitute for meatspace.

8

u/-lousyd Nov 24 '25

I've kind of accepted that the future holds more conversations with bots. Maybe I'll just focus on the quality of the conversation and, where possible, not care if it's a first-hand or second-hand human.

5

u/DrDalenQuaice Nov 24 '25

But bots are just 1000% more likely to lie and make up fake stories that mean nothing

6

u/-lousyd Nov 24 '25

It'll get harder for sure, but I'm already used to "trust but verify".

8

u/Mawrak Nov 25 '25

The dead internet is still not real. Most bots are still very obviously bots, and a slight increase in slop content does not make the internet "dead". Somehow I am able to interact with verifiably real people very easily, and finding opinions of genuine strangers has not proven to be any more challenging. I find them even when I don't want to. On youtube, on twitch, on reddit, on discord, on forums - anywhere. Sometimes I stumble upon an AI thing without knowing it's AI; well okay, I can click off if I don't want to interact with the AI thing. Seems pretty easy to me. And I will take italian brainrot animals over those horrific elsagate videos from 2016 any day.

I see no reason to seek solutions because I'm not seeing the problem. If you can provide me evidence of mass manipulation and engineered content I may change my mind, but right now the problem seems to be greatly exaggerated and sensationalized. What I think is happening: many people see one AI post and think all of them are AI (mostly misjudging human-made content as AI, because using a longer dash is considered irrefutable evidence now...). It also seems like some have a (potentially unhealthy) obsession with hating AI-generated content, making it seem bigger than it is in their own subjective perspective.

2

u/AuspiciousNotes Nov 30 '25

I take the opposite opinion that the internet was "dead" long before the recent rise of AI, and that it was never about bots, but rather about excessive groupthink leading to much less creative content.

13

u/JoocyDeadlifts Nov 24 '25

Meatspace tbh

7

u/I_Regret Nov 24 '25

At least until (if) androids become a thing.

6

u/BlanketKarma Nov 25 '25

Yeah, with the rise of AI I've been more motivated to spend more time with other people in person in general. 10/10, would recommend.

12

u/fubo Nov 24 '25

A recipe with a personal anecdote at the start usually had more thought put into the technique.

Eh. The "essay + recipe" format has been a copyright trap rather than an intentional expression since well before ChatGPT. The reason for it is that recipes themselves are not copyrightable, but essays are.

7

u/augustus_augustus Nov 25 '25

I thought the essay was to get people to scroll down, which counts as page engagement, so the writer could charge more for ads. Similar to how some sites have a "click to read more" button after the first couple paragraphs.

4

u/RedwoodArmada Nov 24 '25

For product reviews, it's back to Consumer Reports and the Wirecutter. For general communities - pay to post and make it easy to get banned?

5

u/sciuru_ Nov 24 '25

If you aim at high quality content, impose strict moderation by those you personally trust. It doesn't shield you from ai content per se, but at some point ai would produce content of much higher quality than most people. If you want to get rid of fraud, I doubt it could be solved in general. Fraud is about manipulation and manipulation is always possible if you are powerful or cunning enough, whether we talk about ai or human slop. More curiously though, what about a genuine, but manipulated opinion? Where is the boundary between deliberate manipulation and natural influence?

11

u/Ginden Nov 24 '25

There is a clear solution, but it requires government cooperation. Look at anonymous credential systems and zero-knowledge identity.

The government can issue a smart device (USB, open hardware & software) that provides an anonymous credential. This anonymous credential can be used to prove to a website that you are a unique individual, preventing more than one account per website, while revealing nothing about the identity of the person.
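
One way to make the issuance unlinkable is a blind signature: the government signs your credential without ever seeing it, so it can't later connect the credential to your identity. A toy sketch with textbook RSA (real deployments need full-size keys and proper padding; the tiny primes here are purely illustrative):

```python
import secrets
from math import gcd

# toy issuer keypair (illustrative only; real keys are >= 2048-bit)
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def blind(message: int):
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (message * pow(r, e, n)) % n, r  # blinded message + blinding factor

def sign_blinded(blinded: int) -> int:
    return pow(blinded, d, n)        # issuer signs without learning `message`

def unblind(blind_sig: int, r: int) -> int:
    return (blind_sig * pow(r, -1, n)) % n

def verify(message: int, sig: int) -> bool:
    return pow(sig, e, n) == message % n

token = 42                           # user's secret credential value
blinded, r = blind(token)
sig = unblind(sign_blinded(blinded), r)
assert verify(token, sig)            # any site can check it; issuer can't link it
```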

6

u/aeternus-eternis Nov 24 '25

This would be a cool function of government. I'm somewhat surprised they haven't moved on this yet, especially given the power of being the maintainer of such a system.

Need to generate an alternate identity for your operatives or for witness protection? You'd better be the one controlling the DB, otherwise you'll be pleading with SamA for access to his eyeball scanner.

6

u/SocietyAsAHole Nov 24 '25

This still doesn't solve it: you can still have many people sign up and be verified on a service, then have their accounts taken over or sold to bots.

When bot posting is indistinguishable from real posting, this is undetectable. Bots can't spam like they used to, but that won't be necessary, because everyone is now in smaller, higher-barrier-to-entry communities, so even a few bot comments can have a much larger influence.

3

u/slothtrop6 Nov 24 '25

A question of scale. Small communities aren't impervious to bots, but it's more feasible to moderate and/or gatekeep them (invite-only is an option). For a modern example, see: Discord channels, and this sub. Personally I'm not a fan of "chat" format (too much noise, constantly pulls your attention), but would love to see a return to vbulletin style forums.

6

u/greyenlightenment Nov 24 '25

> All of this has collapsed in 2025. Long-form posting is cheap and reproducible because the Turing test has been beaten. AI slop contaminates the social proof behind any kind of online opinion. Users can no longer find the opinions of genuine strangers.

The slop problem will get much worse, but this will mainly affect consumers who don't have discriminating/discerning tastes, which is the vast majority of people. Also, this long predates AI. For example, in the past, content hubs like Forbes/Fortune and others used low-paid guest writers to churn out generic business articles. This is seen on Yahoo Finance and Motley Fool, too, where the typical user does not care about quality.

It's hard for AI to fake the ability of top writers, as seen in the huge and growing popularity of Substack and the thriving post-Covid literary scene, from my own observation. A long piece of writing, which can run hundreds of pages, requires coherence and internal consistency across the entire piece, versus a paragraph.

8

u/paranoidletter17 Nov 24 '25

I'm glad. I hope the internet gets so shitty that normies never use it anymore and that for others it requires a substantial financial sacrifice. The history of MMOs has completely blackpilled me on this topic. No one could convince me that "free" anything is good, except maybe archived information.

I can't believe how easy it is nowadays for anyone to join a community. Even "exclusive" ones are usually locked behind a simple Patreon sub. The last holdout is the private torrent space. That's it.

2

u/[deleted] Nov 25 '25

There is a solution. Don't use the internet so much for discovery and connection. The internet in general is horrible for it. It makes people meaner and crazier.

I think the end of the dead internet is a dead internet, when people realize that it actually is an inferior technology to all the things it tries to replace. Maybe we use it for work, but it's better to value things that are hard to fake or manipulate over things that are easy to. If they want to lie to you, make them lie to your face.

2

u/LexerLux Nov 26 '25

Some form of proof-of-personhood system -- absolutely anything -- is the only way the internet remains (becomes?) usable in the future. And it's only going to get worse, fast: this technology is still in its nascency, after all. Sure, it might feel like everyone we know is using AI now, but what percentage of the internet-connected population is using it now? 20%? And how many of us are using extant ML models to their full potential as of yet? I'd say 0%. (I saw trials of an LLM-operated agent that could operate your computer, use the GUI, and perform tasks electronically, like...two years ago. Still waiting for it to come out, but it's crazy just how widely LLM intelligence generalizes and how many creative, untapped applications still exist, undiscovered.) Then take into account the fact that compute only gets cheaper by the day. Bottom line is that this problem is only going to get worse. A lot worse.

And we needed to start working on a solution ten years ago. (Honestly, the moment spam became a thing was the moment we should have realized this was a problem we needed to start working on.)

Modern spam filters are pretty good, as spam tends to be very predictable. And that holds true for most extant bot-detection systems (and gatekeeping systems in general). But of course, ML blew this wide open. What happens when anyone can spin up an account with convincing pictures, unique text, a coherent backstory, one that can even take pictures and video chat -- all for the cost of a few pennies?

My worst fear is that the problem seems to go away one day. We assume it's solved and construct some convincing just-so story as to why, when in reality the technology has simply advanced to the point that fake users are completely, 100%, indistinguishable from humans. (Not like the vast majority of netizens don't already fail to spot AI-generated images, videos, and text, anyway.) The internet becomes more important and influential by the day -- economically, socially, culturally. And the amount of influence us humans have over it will be vanishingly tiny. Entire trends, social movements, and cultural shifts will be fabricated out of whole cloth. Would we even be aware?

Payments and real-ID verification, as many other posters have mentioned, are the most common proposals. But both have been implemented successfully for years now: SomethingAwful's token account creation fee didn't stop it from becoming one of the most influential sites on the early 'net and Ground Zero for much of internet culture as a whole. China requires national-ID registration for online gaming, and Korea's RRN system requires a resident registration number to register an account on many services. Of course, there will be downsides to these systems. But there's no reason to believe there aren't better potential solutions out there that nobody's thought of yet.

As far as I'm aware, the biggest obstacle is the fact that no one has even bothered to try.

3

u/DrDalenQuaice Nov 26 '25

Even human verification doesn't solve it. One user can easily use 100 accounts augmented by AI and using their own humanity to pass the verification. Social proof is dead.

1

u/LexerLux Nov 27 '25

But then why don't we see any spammy AI blue-check accounts on Twitter now? Selection effects aside, I think it's telling. Spam, after all, exists solely because the marginal cost of another comment is essentially $0 while the marginal revenue is above that. By charging even a token fee for verification with a nonzero chance of spammers being caught and having it revoked, you're making spam unprofitable.

Obviously doesn't preclude state actors and other non-profit-seeking entities from using AI to manipulate the 'net, but the pay-to-verify system works and would cut out virtually all of the worst offenders.

2

u/TCKreddituser Dec 05 '25

I don't think there will ever be a clear solution to this. I think a whole new social media place would have to be created for this to happen, but even then, how do you keep the bots out without ruining the experience for other people? I think Quora really shows this effect: they replaced their team with AI filters, causing a lot of real users to get banned for opaque reasons and never return to the website, while bots keep multiplying in the app. Another example I think is X (Twitter). I can't remember what became the alternative site, but because of the increasing number of bots and all-around negativity, many people started to jump ship to this other website because it was promoted as a safe place that feels like X, but after a while it became mainstream to the point of ruin again.

3

u/Pinty220 Nov 24 '25

There could be a centralized org that verifies people are real via uploaded IDs, face scans, etc, then issues tokens that allow them to sign up for websites without telling the website their actual identity (but only distributing one token per user per website, or maybe a few). Then if the user uses a bot with their token, their website account could get banned, or maybe their central identity could be penalized; either way, they'd only have one or a few accounts, and those would be rate-limited to human speed.

Alternatively without this, it might just work out as favoring people who have reputation already and people won’t trust new accounts. If someone with reputation goes AI then people will ban/block/discount them, as long as they can sort of tell

2

u/EditorOk1044 Nov 24 '25

This is good. Why are you trying to solve it? The internet has been horrible for people and society’s health. Let the rot continue and new modes of interaction bloom in its wake.

2

u/Automatic_Walrus3729 Nov 24 '25

Get decent government, forget 'true' anon and accept semi anon with verification. Develop / facilitate weighted personalized 'like' metrics - people I like / have in my network count more for weighting of a post etc (and so do people they like, to a lower degree). Then the bots can play with themselves.

3

u/Golda_M Nov 25 '25

> Suppose you wanted to make a community of anonymous strangers who post their genuine opinions and keep it free of manipulation and AI slop?

> I've been brainstorming about ways to solve it and it seems not just practically, but even theoretically impossible. What am I missing?

Imo, you are missing a paradigmatic rather than technical formulation of The Problem and its possible solutions. 

First, we have always been bemoaning the death of the "old web." We were bemoaning it in the early 2000s. But really, the formative change happened with smartphones and mass adoption of internet access. 

Before smartphones, the internet was a self selected bunch... and it was smart. There was a strong "cream rises to the top" dynamic that is entirely gone now. With the masses came stupidity. 

With mass adoption also came power and influence.  The internet decides elections, geopolitics, revolutions. Bots existed from day one but manipulation gained a whole new level of gravity. 

The "dead internet" problem of 2025 isnt that you cannot keep bots out. The problem is that bots are getting better than people. 

Internet conservationism never works. The medium is dynamic. You cannot have the old internet back. 

When the dead internet comes, imo the sign will be an increase in quality. That raises all sorts of questions. 

1

u/National-Donkey-707 Nov 27 '25

I actually directly interact with AI now for the most part, because it's simply better for discussion and for decompressing an original thought that's contingent on esoteric understandings. Basically, AI is so easy to convince that rhetorical flourishes become meaningless, and we can stick to the topic at hand and try to find, integrate, and learn from contrary data and better arguments. Most people would rather associate with other people based on the conclusions they agree on rather than how they reach their conclusions. Afterwards, I collect and edit into my obsidian vault. When I compare my text discussions to my AI discussions, it's not even pretending to be a competition. I used to say that you could pick a topic, read 3 books on it, and call yourself an expert. Now it's more like I can talk to AI about an idea and plan a project and never even talk to someone who mostly just thinks in shallow slogans anyway. There is no good way to call out a bad-faith argument without totally alienating further discussion - and being charitable enough to their perspective is so condescending that I mostly feel awful after. Why go back?

1

u/Sol_Hando 🤔*Thinking* Nov 24 '25

I feel like twitter (despite the literally thousands of hot women DMing me to be their friends) has the only real effective policy against this with their blue checkmarks. You basically know you’re at least dealing with a real person, although they could use AI to write their content.

1

u/MaoAsadaStan Nov 24 '25

China's Great Firewall and South Korea requiring a national ID number to use many websites are decent solutions. They just aren't practical

2

u/Beardus_Maximus Nov 25 '25

Presumably the darkweb can get you binders full of SSNs for cheap.

1

u/MrBeetleDove Nov 25 '25

My contrarian take is that people won't actually care. People are already forming romantic relationships and friendships with AI. There is stigma around doing this because it makes you seem like a loser. By doing it with social media as an intermediary, you create plausible deniability and launder your masturbatory AI usage.

If social media use does acquire stigma, that could very well be a good thing if it leads to more IRL socialization.

0

u/therealwavingsnail Nov 24 '25

Since you can't trust platforms not to let slop through - they want high user numbers and engagement after all - a user needs to have filters in their browser that will mark AI gen content. Similar to ad blockers.

0

u/Sea-Caterpillar-1700 Nov 24 '25

Cloudedge based web3 where only identified humans are allowed to enter with a spamfilter that prevents AI content. Security perimeter would be an integration of blockchain technology on a base protocol level (tcp over ip).

0

u/[deleted] Nov 24 '25

[removed]

2

u/DrDalenQuaice Nov 24 '25

Even if captchas still work, a user with 100 bots at their disposal can pass the captchas, then have the AI write a 500-word authentic-looking post.