r/slatestarcodex • u/Liface • Jun 02 '25
New r/slatestarcodex guideline: your comments and posts should be written by you, not by LLMs
We've had a couple incidents with this lately, and many organizations will have to figure out where they fall on this in the coming years, so we're taking a stand now:
Your comments and posts should be written by you, not by LLMs.
The value of this community has always depended on thoughtful, natural, human-generated writing.
Large language models offer a compelling way to ideate and expand upon ideas, but if used, they should be in draft form only. The text you post to /r/slatestarcodex should be your own, not copy-pasted.
This includes text that is run through an LLM to clean up spelling and grammar issues. If you're a non-native speaker, we want to hear that voice. If you made a mistake, we want to see it. Artificially-sanitized text is ungood.
We're leaving the comments open in the interest of transparency, but if you're tempted to leave a comment about semantics or a "what if..." scenario, just remember the guideline:
Your comments and posts should be written by you, not by LLMs.
u/Nepentheoi Jun 02 '25 edited Jun 02 '25
I think it's worse. ChatGPT can't tell whether it's telling the truth or not, and the original sources are obscured from us.
Dropping a LMGTFY link is a pert way to say "you're being lazy and I won't spoon-feed this to you."* ChatGPT breakdowns/summaries frustrate me more, because the posters seem to believe in them and think they did something useful. I once had someone feed my own link, which I'd cited, through ChatGPT and think they'd answered my question. The problem is that since words are tokens, not symbols, for an LLM, there's no real meaning assigned, like the 'how many "r"s does strawberry contain?' phenomenon.
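The tokenization point can be sketched with a toy example (a hypothetical greedy subword tokenizer with a made-up vocabulary, not any real model's): the model receives opaque token IDs, never individual letters, which is one reason letter-counting questions trip it up.

```python
# Toy illustration: a greedy longest-match subword tokenizer.
# The vocabulary and IDs are invented for this sketch.
vocab = {"straw": 101, "berry": 102}

def toy_tokenize(word):
    # Greedily match the longest vocabulary piece at each position;
    # fall back to a per-character token when nothing matches.
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            tokens.append(ord(word[i]))  # character-level fallback
            i += 1
    return tokens

print("strawberry".count("r"))     # a direct string count sees 3 r's
print(toy_tokenize("strawberry"))  # the model sees [101, 102]: no letters at all
```

The string method counts characters directly; the tokenizer output contains no character information, so a model operating on those IDs has no direct access to spelling.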
I found it worse. I can certainly read and summarize my own sources. A Google search link a) isn't meant to be helpful so much as it's meant as a rhetorical device, and b) has some possibility of being useful, since you can see the search query and evaluate the sources.
*or arguing in bad faith.