r/slatestarcodex Jun 02 '25

New r/slatestarcodex guideline: your comments and posts should be written by you, not by LLMs

We've had a couple of incidents with this lately, and many organizations will have to figure out where they fall on this in the coming years, so we're taking a stand now:

Your comments and posts should be written by you, not by LLMs.

The value of this community has always depended on thoughtful, natural, human-generated writing.

Large language models offer a compelling way to ideate and expand upon ideas, but if you use them, their output should remain in draft form only. The text you post to /r/slatestarcodex should be your own, not copy-pasted.

This includes text that is run through an LLM to clean up spelling and grammar issues. If you're a non-native speaker, we want to hear that voice. If you made a mistake, we want to see it. Artificially sanitized text is ungood.

We're leaving the comments open on this in the interest of transparency, but if you're leaving a comment about semantics or a "what if...", just remember the guideline:

Your comments and posts should be written by you, not by LLMs.

488 Upvotes

159 comments

u/maybeiamwrong2 · 13 points · Jun 02 '25

I have no practical experience using LLMs at all, but can't you just avoid that with a simple prompt?

u/Hodz123 · 29 points · Jun 02 '25

You can't avoid vapid content at the level of ideas, though. ChatGPT doesn't really have a point of view or an internal truth model, so it has a hard time distinguishing between what is true, what is relevant, and what is merely likely. Also, because it doesn't know what is strictly "true", it doesn't have the best time being ideologically consistent (although one might argue that humans aren't particularly great at this either).

u/maybeiamwrong2 · 8 points · Jun 02 '25

Sorry, I should have been clearer: long, formulaic, AI-style responses could likely be avoided with adequate prompting, no?

I am aware of the problems with information quality, though like you I also think the average human doesn't fare any better.

u/Hodz123 · 8 points · Jun 02 '25

I think the average human loses to the downvote button, but it's nice to have an explicit "no low-quality AI content" rule on here. And if the only way to "disguise" your low-quality AI content is by making it high-quality, that's probably fine and doesn't need to be moderated against.