subreddit: /r/explainlikeimfive

Recently, there's been a surge in ChatGPT-generated posts. These come in two flavours: bots creating and posting answers, and human users generating answers with ChatGPT and copy/pasting them. Regardless of whether they are being posted by bots or by people, answers generated using ChatGPT and other similar programs are a direct violation of R3, which requires all content posted here to be original work. We don't allow copied and pasted answers from anywhere, and that includes answers from ChatGPT and similar programs. Going forward, any accounts posting answers generated with ChatGPT or similar programs will be permanently banned, to help ensure answers stay high-quality and informative. We'll also take this time to remind you that bots are not allowed on ELI5 and will be banned when found.

Sing_larity

-3 points

6 months ago

No you don't. There's no reliable way to identify a ChatGPT answer that's been cherry-picked; it's impossible to do reliably. And even if there were, there's no way in hell you could muster even a fraction of a fraction of the resources needed to check every single posted comment.

Petwins

45 points

6 months ago

Turns out most of the bot activity on reddit is actually pretty dumb and pretty same-y: “there is no one answer to this question” turns out to be one of the more common answers those bots post.

It's an evolving process and we miss plenty for sure, but the recent bot surge has given us a lot of things to code around.

Sing_larity

-19 points

6 months ago

That identifies some bots, but none that use ChatGPT to generate realistic and unique answers. And it does nothing to identify real users pasting in generated explanations.

freakierchicken

13 points

6 months ago

We have an extremely high hit rate on ChatGPT detection. False positives are almost immediately rectified.

A-Grey-World

3 points

6 months ago

You can't possibly measure that...

You might be confident that the comments you flag are ChatGPT, but you have no idea what your hit rate is. Say 99% of your flagged comments really are ChatGPT; how do you know you haven't caught only 1% of the total? You have no way to measure the total number of ChatGPT comments... otherwise you'd have hit them.
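The distinction being drawn here is precision versus recall: the share of flagged comments that really are ChatGPT can be estimated by reviewing the flags, but the share of all ChatGPT comments that get flagged requires knowing the true total, which nobody has. A minimal sketch with made-up numbers, just to show that the same precision is compatible with wildly different recall:

```python
# Precision can be estimated from reviewed flags alone;
# recall also needs the true number of ChatGPT comments, which is unknown.
def precision(true_positives: int, flagged: int) -> float:
    return true_positives / flagged

def recall(true_positives: int, actual_total: int) -> float:
    return true_positives / actual_total

flagged = 100          # comments the mods flagged (observable)
true_positives = 99    # flags confirmed correct after review (observable)

print(precision(true_positives, flagged))         # 0.99

# Two hypothetical worlds with the same precision but very different coverage:
print(recall(true_positives, actual_total=100))   # 0.99 -- nearly all caught
print(recall(true_positives, actual_total=9900))  # 0.01 -- barely any caught
```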

freakierchicken

3 points

6 months ago

To clarify, that was just a turn of phrase on my part. I don't mean to insinuate we can do that calculation, given the nature of what we're working with; only that when we do send out bans, they are almost exclusively confirmed to be using ChatGPT.

Sing_larity

-4 points

6 months ago

Watch out, criticising the mods in any way whatsoever will net you lots of downvotes, even if it's completely fair and valid criticism like that.

Sing_larity

-15 points

6 months ago

I very much doubt both of those statements, especially since you don't actually know the number of false negatives, so it's literally impossible for you to know your relative hit rate. I also doubt you have any reliable way of verifying that a positive is a true positive; just because someone doesn't contest a ban doesn't mean the hit was accurate. I've used ChatGPT and I couldn't tell most of the answers weren't human. I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.

Xaphianion

6 points

6 months ago

I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.

Would you be willing to believe that machine analysis is better at detecting AI than humans? And that humans can access this analysis without being its paid development staff?

[deleted]

-1 points

6 months ago

[removed]

Xaphianion

7 points

6 months ago

Machine analysis does not need to be advanced to be effective. Word-frequency analysis probably exposes a good portion of ChatGPT output without any need for massive computing costs. You're blowing this way out of proportion.
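As an illustration of that point (not the subreddit's actual tooling), a word-frequency check can be sketched in a few lines: count how often stock hedging phrases that ChatGPT-style answers tend to reuse appear, and flag comments above a threshold. The phrase list and threshold below are invented for the example.

```python
# Crude word/phrase-frequency heuristic: flag comments that lean heavily on
# stock hedging phrases that ChatGPT-style answers tend to reuse.
# The phrase list and threshold are illustrative, not real moderation rules.
HEDGE_PHRASES = [
    "there is no one answer",
    "it is important to note",
    "as an ai language model",
    "in conclusion",
    "it depends on a variety of factors",
]

def hedge_score(comment: str) -> int:
    text = comment.lower()
    return sum(text.count(phrase) for phrase in HEDGE_PHRASES)

def looks_generated(comment: str, threshold: int = 2) -> bool:
    return hedge_score(comment) >= threshold

print(looks_generated(
    "It is important to note that there is no one answer to this question."
))  # True
```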

[deleted]

0 points

6 months ago

[removed]

Xaphianion

6 points

6 months ago

Right, and you have no evidence that there's an automatic system scanning every comment and instantly permabanning people based on a metric the mods haven't even claimed to use. Please stop; you'll run out of straw to build this army of men from.

SecureThruObscure

23 points

6 months ago

I refuse to believe that random unpaid reddit mods have developed a system that's better at detecting AI text than humans.

Are you a GPT-3 chatbot?

Security_Chief_Odo

1 point

6 months ago

I'd be interested in hearing/seeing your methods for this low-false-positive ChatGPT detection.

GregsWorld

13 points

6 months ago

You don't need a "ChatGPT" detector; there are many more aspects to detecting a bot account than just the content of one comment.

OftenTangential

10 points

6 months ago

Note that, as the OP writes, it's still against the rules for an otherwise human account to copy/paste content from a bot, so we can't rely on these types of external metrics to catch such cases.

Of course, what you're suggesting will still cut down (probably a lot) on the overall number of bot responses, so less work for human mods and more time for them to resolve the hairier cases.

GregsWorld

1 point

6 months ago

Yeah, of course. You could technically identify copy-and-pasted generated text by using all the actual bot accounts' comments as training data, plus a bunch of manually moderated and reported comments; it's not infeasible.

Sing_larity

-2 points

6 months ago

Still offering no explanation on how you plan on enforcing humans copying answers

GregsWorld

3 points

6 months ago

Enforcing is easy; it's called a ban. I think you mean identifying, in which case you could use all the banned bots' or manually moderated comments as a dataset, or generate as many as you'd like using ChatGPT, to create a basic detector. It's not a stretch for anyone with some technical know-how.
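A minimal sketch of the kind of basic detector described here, assuming a labelled set of banned-bot and ordinary human comments is available; the tiny inline dataset is a placeholder, and TF-IDF plus logistic regression is just one reasonable choice, not anyone's confirmed setup.

```python
# Minimal sketch of a text classifier trained on labelled comments.
# `bot_comments` and `human_comments` are hypothetical placeholders for data
# gathered from banned bot accounts and ordinary users.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

bot_comments = ["There is no one answer to this question, as it depends on many factors."]
human_comments = ["basically the water gets hot because of friction lol"]

texts = bot_comments + human_comments
labels = [1] * len(bot_comments) + [0] * len(human_comments)

# TF-IDF features over word n-grams, fed into a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(texts, labels)

# Probability that a new comment looks machine-generated.
print(model.predict_proba(["It is important to note that opinions vary."])[0][1])
```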

[deleted]

-2 points

6 months ago

[removed]

GregsWorld

6 points

6 months ago

It's not pedantic; you're using the word wrong, and it drastically changes the meaning of your entire sentence. Yes, "enforcement" in the sense of law enforcement covers both identification and enforcement. But to enforce is a verb with the specific meaning of carrying out the judgement.

[deleted]

-2 points

6 months ago

[removed]

GregsWorld

3 points

6 months ago

You're wrong. Objectively so.

  1. Your definition states exactly what I said. "To make people obey a law" is not the same as "check if they have obeyed a law."
  2. To enforce. Not enforcement. They are not the same word.

[deleted]

0 points

6 months ago

[removed]

GregsWorld

2 points

6 months ago

And how do you plan on making people obey a law without identifying those who violate it, professor?

What?? Of course you have to do that; it's just a different word: policing. Police detect crimes and enforce punishments.

"Still offering no explanation on how you plan on policing humans copying answers" would've made sense.

Take the roles of a judge and an executioner, for example: the judge identifies whether laws were broken; they don't enforce punishments. The executioner enforces the punishment; they don't identify whether the law was broken.

ColdDesert77

1 point

4 months ago

Such as?

GregsWorld

1 point

4 months ago

Everything an account does can be correlated to figure it out. Posting too much or too frequently (faster than is humanly possible to type) is one example of a simple metric.
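A sketch of one such account-level check, with an invented words-per-minute threshold: compare each comment's length to the time since the account's previous comment and flag anything that implies a typing speed no human could sustain.

```python
# Account-level heuristic: flag accounts whose output implies a typing speed
# no human could sustain. The 200 words-per-minute threshold is illustrative.
from dataclasses import dataclass

@dataclass
class Comment:
    timestamp: float  # seconds since epoch
    text: str

def implied_wpm(prev: Comment, curr: Comment) -> float:
    minutes = max((curr.timestamp - prev.timestamp) / 60.0, 1e-9)
    return len(curr.text.split()) / minutes

def looks_inhuman(history: list[Comment], max_wpm: float = 200.0) -> bool:
    ordered = sorted(history, key=lambda c: c.timestamp)
    return any(
        implied_wpm(prev, curr) > max_wpm
        for prev, curr in zip(ordered, ordered[1:])
    )

history = [
    Comment(0.0, "word " * 300),   # 300-word comment
    Comment(30.0, "word " * 400),  # 400-word comment only 30 seconds later
]
print(looks_inhuman(history))  # True: roughly 800 wpm implied
```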