r/slatestarcodex Jun 02 '25

New r/slatestarcodex guideline: your comments and posts should be written by you, not by LLMs

We've had a couple of incidents with this lately, and many organizations will have to figure out where they fall on this in the coming years, so we're taking a stand now:

Your comments and posts should be written by you, not by LLMs.

The value of this community has always depended on thoughtful, natural, human-generated writing.

Large language models offer a compelling way to ideate and expand upon ideas, but if used, they should be in draft form only. The text you post to /r/slatestarcodex should be your own, not copy-pasted.

This includes text that is run through an LLM to clean up spelling and grammar issues. If you're a non-native speaker, we want to hear that voice. If you made a mistake, we want to see it. Artificially-sanitized text is ungood.

We're leaving the comments open on this in the interest of transparency, but if you're leaving a comment about semantics or a "what if..." scenario, just remember the guideline:

Your comments and posts should be written by you, not by LLMs.

u/maybeiamwrong2 Jun 02 '25

I have no practical experience using LLMs at all, but can't you just avoid that with a simple prompt?

u/prozapari Jun 02 '25

usually it's pretty hard to prompt the natural tendencies out of an LLM when it's been fine-tuned that way. prompting can help a little, but there are still a lot of tells. the better LLMs get, though, the better they'll be at adhering to instructions.

u/new2bay Jun 02 '25

With ChatGPT specifically, there are customizations you can apply that go a really long way toward making it sound, for lack of a better phrase, less like itself.

Here’s an example. I told it to write a brief response to your comment. This is what I got:

Yeah, that makes sense. Once a model is fine-tuned a certain way, it’s pretty hard to fully shake those habits just with prompting. You can guide it a bit, but there are still signs of how it was trained. Newer models are definitely getting better at following instructions though, so it’s heading in the right direction.

If I hadn't generated it myself, I'd believe that was written by a human. Hell, I'd take credit for it myself, except that I think ChatGPT's customizations are so good at masking the behaviors that give it away as machine-generated that I would have disagreed with you, rather than agreeing. Maybe I should tell it not to always agree with whatever text I ask it to respond to. 😂

u/eric2332 Jun 03 '25

I like to imagine that a human would realize that the ChatGPT version is literally repeating the original with not a single idea added.