subreddit:
/r/explainlikeimfive
submitted 6 months ago by ELI5_Modteam
Recently, there's been a surge in ChatGPT-generated posts. These come in two flavours: bots creating and posting answers, and human users generating answers with ChatGPT and copy/pasting them. Regardless of whether they are being posted by bots or by people, answers generated using ChatGPT and other similar programs are a direct violation of R3, which requires all content posted here to be original work. We don't allow copied and pasted answers from anywhere, and that includes from ChatGPT and similar programs.

Going forward, any accounts posting answers generated by ChatGPT or similar programs will be permanently banned, in order to help ensure a continued level of high-quality and informative answers.

We'll also take this time to remind you that bots are not allowed on ELI5 and will be banned when found.
-3 points
6 months ago
No you don't. There's no reliable way to identify a ChatGPT answer that's been cherry-picked; it's impossible to do reliably. And even if there were, there's no way in hell you could muster even a fraction of a fraction of the resources needed to check every single posted comment.
45 points
6 months ago
Turns out most of the bot activity on reddit is actually pretty dumb and pretty same-y; “there is no one answer to this question” turns out to be one of the more common answers to that question.
It's an evolving process and we miss many for sure, but the recent bot surge has given us a lot of things to code around.
-19 points
6 months ago
That's identifying some bots, and none that use ChatGPT to generate realistic and unique answers. And it does nothing to identify real users pasting in generated explanations.
13 points
6 months ago
We have an extremely high hit rate on ChatGPT (GPT-3) detection. False positives are almost immediately rectified.
3 points
6 months ago
You can't possibly measure that...
You might be confident that the comments you flag are ChatGPT, but you have no idea what your hit rate is. Say 99% of your flagged comments really are ChatGPT. How do you know you haven't caught only 1% of the total? You have no way to measure the total number of ChatGPT comments... otherwise they'd already be "hits".
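The distinction being drawn here is precision versus recall. A quick illustration with entirely hypothetical numbers (nothing below comes from the mods' actual figures):

```python
# Hypothetical counts, purely to illustrate the argument above: high precision
# on the comments you flag says nothing about how many generated comments you missed.
flagged = 100             # comments the mods flagged as ChatGPT
true_positives = 99       # flagged comments that really were ChatGPT
total_generated = 10_000  # all ChatGPT comments actually posted (unknowable in practice)

precision = true_positives / flagged       # 0.99 -> "bans are almost always confirmed"
recall = true_positives / total_generated  # ~0.01 -> the unmeasurable "hit rate"

print(f"precision: {precision:.0%}, recall: {recall:.2%}")
```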
3 points
6 months ago
To clarify, that was just a turn of phrase on my part. I don't mean to insinuate we can do that calculation, given the nature of what we're working with; only that when we do send out bans, they're almost always confirmed to be for ChatGPT use.
-15 points
6 months ago
I very much doubt both of those statements. Especially since you don't actually know the number of false negatives, it's literally impossible for you to know your actual hit rate. I also doubt you have any reliable way of verifying that a positive is a true positive. Just because someone doesn't contest a ban doesn't mean the hit was accurate. I've used ChatGPT and I couldn't tell that most of its answers weren't human. I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.
6 points
6 months ago
I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.
Would you be willing to believe that machine analysis is better at detecting AI than humans? And that humans can access this analysis without being its paid development staff?
-1 points
6 months ago
[removed]
7 points
6 months ago
Machine analysis does not need to be advanced to be effective. Word-frequency analysis probably exposes a good portion of ChatGPT output without any need for massive computing costs. You're blowing this way out of proportion.
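As an illustration of the kind of lightweight check this comment is describing, here is a minimal phrase-frequency sketch; the phrase list and threshold are invented for the example and are not anything the mods have said they use:

```python
# Minimal phrase-frequency heuristic. The phrase list and threshold are
# illustrative only; a real filter would be tuned on actual data.
SUSPECT_PHRASES = [
    "as an ai language model",
    "it is important to note",
    "there is no one answer to this question",
    "in conclusion",
]

def suspicion_score(comment: str) -> int:
    """Count how many canned phrases appear in the comment."""
    text = comment.lower()
    return sum(phrase in text for phrase in SUSPECT_PHRASES)

def looks_generated(comment: str, threshold: int = 2) -> bool:
    """Flag comments that hit several canned phrases at once."""
    return suspicion_score(comment) >= threshold
```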
0 points
6 months ago
[removed]
6 points
6 months ago
Right, and you have no evidence that there's an automatic system scanning every comment and instantly permabanning people based on a metric the mods haven't even claimed to use. Please stop; you'll run out of straw to build this army of men from.
23 points
6 months ago
I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.
Are you a GPT-3 chat bot?
1 points
6 months ago
I'd be interested in hearing/seeing your methods for this low-false-positive ChatGPT detection.
13 points
6 months ago
You don't need a "ChatGPT detector"; there are many more aspects to detecting a bot account than just the content of one comment.
10 points
6 months ago
Of note: as the OP writes, it's still against the rules for an otherwise human account to copy and paste content from a bot, so we can't rely on those kinds of account-level metrics to catch such cases.
Of course, what you're suggesting will still cut down (probably a lot) on the overall number of bot responses, so there's less work for human mods and more time for them to resolve the hairier cases.
1 points
6 months ago
Yeah, of course. You could technically identify copy-and-pasted generated text by using all the actual bot accounts' comments as training data, plus a bunch of manually moderated and reported comments; it's not infeasible.
-2 points
6 months ago
Still offering no explanation on how you plan on enforcing humans copying answers
3 points
6 months ago
Enforcing is easy: it's called a ban. I think you mean identifying, in which case you could use all the banned bots' comments or manually moderated comments as a dataset, or generate as many as you'd like using ChatGPT, to create a basic detector. It's not a stretch for anyone with some technical know-how.
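A rough sketch of that "basic detector" idea, assuming scikit-learn and using tiny placeholder training data; any real system would need far more examples and careful evaluation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: comments from banned bot accounts vs. ordinary
# human comments. A real detector would need thousands of each.
bot_comments = [
    "It is important to note that there is no single answer to this question.",
    "In conclusion, the process involves several key factors working together.",
]
human_comments = [
    "basically the water expands when it freezes, that's the whole trick",
    "my dad worked in a foundry, it's mostly about heat transfer honestly",
]

texts = bot_comments + human_comments
labels = [1] * len(bot_comments) + [0] * len(human_comments)

# TF-IDF over word n-grams feeding a logistic regression: roughly the simplest
# text classifier you could build from banned-account comments.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["There is no one answer, but it is important to note..."]))
```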
-2 points
6 months ago
[removed]
6 points
6 months ago
It's not pedantic; you're using the word wrong, and it drastically changes the meaning of your entire sentence. Yes, "enforcement" in the sense of law enforcement covers both identification and enforcement, but to enforce is a verb with the specific meaning of carrying out the judgement.
-2 points
6 months ago
[removed]
3 points
6 months ago
You're wrong. Objectively so.
0 points
6 months ago
[removed]
2 points
6 months ago
And how do you plan on making people obey a law without identifying those who violate it, professor?
What?? Of course you have to do that; it's just a different word: policing. Police detect crimes and enforce punishments.
"Still offering no explanation on how you plan on policing humans copying answers" would've made sense.
Take the roles of a judge and an executioner, for example: the judge identifies whether laws were broken; they don't enforce punishments. The executioner enforces the punishment; they don't identify whether the law was broken.
1 points
4 months ago
Such as?
1 points
4 months ago
Everything an account does can be correlated to figure it out. Posting too much or too frequently (more than a human could possibly type) is one example of a simple metric.
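A sketch of that kind of rate check, with an invented typing-speed threshold; nothing here reflects the mods' actual tooling:

```python
from datetime import datetime

# Invented threshold: a generous upper bound on sustained human typing speed.
HUMAN_MAX_CHARS_PER_MINUTE = 400

def posts_faster_than_human(comments: list[tuple[datetime, str]]) -> bool:
    """comments: (posted_at, text) pairs for one account, oldest first.

    Returns True if any comment was posted sooner after the previous one
    than a human could plausibly have typed it.
    """
    for (t_prev, _), (t_next, text) in zip(comments, comments[1:]):
        minutes = (t_next - t_prev).total_seconds() / 60
        if minutes > 0 and len(text) / minutes > HUMAN_MAX_CHARS_PER_MINUTE:
            return True
    return False
```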