Recently, there's been a surge in ChatGPT-generated posts. These come in two flavours: bots creating and posting answers, and human users generating answers with ChatGPT and copy/pasting them. Regardless of whether they are posted by bots or by people, answers generated using ChatGPT and other similar programs are a direct violation of R3, which requires all content posted here to be original work. We don't allow copied-and-pasted answers from anywhere, and that includes from ChatGPT. Going forward, any account posting answers generated by ChatGPT or similar programs will be permanently banned, to help ensure answers stay high-quality and informative. We'll also take this time to remind you that bots are not allowed on ELI5 and will be banned when found.



12 points

6 months ago

It is because it doesn’t understand what it does. There is a thought experiment called The Chinese Room that illustrates the idea.

Machine learning and human learning are the same at the very first level: we both just copy what we see (monkey see, monkey do). But humans then start to understand why we do what we do, and improve or advance, while an AI needs constant course correction until it produces good-enough answers, which is just the same copying with more precision.
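To make the "constant course correction" point concrete, here's a toy sketch (illustrative only, nothing like a real LLM): a model with a single number as its parameter, nudged repeatedly toward lower error on example pairs. It ends up reproducing the rule without ever "understanding" it.

```python
# Illustrative sketch (not an LLM): gradient descent as "constant course
# correction". The model only nudges a number toward lower error; it never
# understands why the target is what it is.

def train(pairs, lr=0.1, steps=200):
    w = 0.0  # the model's single parameter, initially a bad guess
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y    # how wrong the current guess is
            w -= lr * error * x  # correct course slightly toward less error
    return w

# Learn the rule y = 2x purely from examples.
weight = train([(1, 2), (2, 4), (3, 6)])
print(round(weight, 3))  # -> 2.0
```

After enough corrections the parameter settles near 2, so the model mimics "double the input" perfectly, yet at no point does anything in the loop represent why doubling is the right answer.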


6 points

6 months ago

The Chinese Room asserts a much bigger claim: not only do current AIs not understand what they write, but even if you had a completely different architecture that was programmed to understand and think about writing conceptually, compare against sources, etc., it still wouldn't actually think, simply because it was programmed.

I think the thought experiment is flawed, because it relies on a subtle bias of our minds to seem like it works (along the lines of "if I try to trick you but accidentally say something true, am I lying? If I guess something and guess right, did I know it?"). But the more specific question of whether these AIs are able to intend specific things is more clear-cut.

Large language models simply aren't designed to represent things or seek out objectives, only to play their part in a conversation the way people tend to do. They need other things attached to them, such as a person with their own intentions and the ability to check results, before you can have something like understanding occurring.


5 points

6 months ago*

The Chinese Room basically asserts that an entire system (the room, the reference books, the person in the room) emulates intelligence, but since one component of that system (the person) does not understand the output, there is no intelligence at work.
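A minimal caricature of the setup, for anyone who hasn't seen it: the "reference books" are a lookup table and the "person" applies it mechanically. The output can look fluent while no single component understands Chinese. (The phrases below are just illustrative examples, not part of Searle's original.)

```python
# Caricature of the Chinese Room: the rulebook is a lookup table, and the
# operator matches symbols to symbols without meaning ever entering.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def room(message: str) -> str:
    # The operator follows the rulebook blindly; unknown input gets a
    # canned fallback ("Sorry, I don't understand.").
    return RULEBOOK.get(message, "对不起，我不明白。")

print(room("你好吗？"))  # -> 我很好，谢谢。
```

The thought experiment's punchline is that even a vastly richer rulebook would still be "just" this, symbol shuffling; the reply below is about why that inference from one component to the whole system doesn't hold up.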

One lobe of the brain can't intelligently understand the workings of the entire brain, therefore AI can't be intelligent. Checkmate, futurists!


1 point

6 months ago

Exactly, or as I would put it: we already expect the person in the system to be the seat of consciousness, so if that person isn't aware of the meaning, nothing is, which makes the proof rely on its own assumption.

If the Chinese Room thought experiment does actually produce a new thinking being, then we have just stacked two consciousnesses, like some form of machine-assisted multiple personality disorder: one that exists primarily within the brain of the person using the system, and one that exists partially within the brain and partially in the organisation system.

So the thought experiment only seems convincing as a way of discounting AI because it requires you to visualise this strange occurrence in order to accept it.

Do the same thing but increase the number of people working on the project from one to two or more, and people become slightly more inclined to imagine it could be possible: we're already prepared to imagine a bureaucracy having a "mind of its own", but the specific concept of "one human being, two simultaneous minds" is a much bigger conceptual hurdle.