How the AGI trick works


One typical feature of human subjectivity in language is the use of personal pronouns. The first person – “I” or “we” – allows us to express personal thoughts and experiences. The second person – “you” – engages with the other person, building a relationship between the two participants in a conversation. This is known as intersubjectivity.

Artificial empathy

The chatbot uses the first person to simulate awareness and seeks to create an illusion of empathy. By adopting the position of a helper and using the second person, it engages the user and reinforces the perception of closeness. This combination generates a conversation that feels human, practical, and appropriate for giving advice, even though its empathy comes from an algorithm, not from real understanding.

You are excused for believing in the magic of ‘AI’, but not after I tell you the trick

A chatbot easily lulls us into a delusional state in which we assume the machine is a clever thinker, a mirror of our thoughts or a smart companion. As naive users we are forgiven this self-generated magical thinking, but not after the trick is revealed. Which I will do now.

What we call illusions can be explained as a by-product of a functioning brain. Humans are hardwired to perceive text as the outcome of a thinking mind. Your brain simply sends the cue that there must be a person behind a text. This happens even when the ‘text’ is no more than words in a row, made by a word machine that calculates the probability of the next token at the level of syntax.

‘Language model’ is a term that tricks you even further into thinking that machines produce language, which is about meaning in a context. The term also covers up how the word calculator really operates under the hood: through calculation and statistics.
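To make that ‘under the hood’ point concrete, here is a deliberately tiny toy sketch in Python. It is not how a real chatbot is built (real systems use neural networks over subword tokens and far more data), but it shows the shape of the trick: count which word tends to follow which word, then keep picking a likely next word. No meaning anywhere, just words in a row.

```python
import random
from collections import Counter, defaultdict

# A toy "word calculator": it knows nothing about meaning, it only counts
# which word tends to follow which word in a tiny example text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For every word, count how often each possible next word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_token(word):
    """Pick a next word by sampling from the counted follow-up frequencies."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate "words in a row": each step only asks which token is likely next.
word = "the"
sentence = [word]
for _ in range(8):
    word = next_token(word)
    sentence.append(word)

print(" ".join(sentence))
```

Run it a few times and you get different, plausible-looking strings of words. This toy version drifts into gibberish much sooner than a large model, but the principle is the same: the output is probability, not thought.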

To sprinkle a human touch onto the machine, creators gave it a name (mostly a female name, because that sets up a frame of obedience), blinking eyes and the interface of a typing person, with animated dots in speech bubbles. And some tuning.

The brain we need for analytical and critical thinking is also fuelling the illusion. Just as there are so-called optical illusions, there are also communicational illusions.

What we call optical illusions can be seen as a result of how the brain perceives and processes information. It is the label we give to the process of a brain calculating what enters the retina and assigning meaning to it.

When people talk, they mutually assume they are telling the truth

The philosopher of language Paul Grice wrote about the basics of conversation and cooperation in his 1975 paper Logic and Conversation. His work is considered foundational in the communication sciences.

At the core of his framework is the cooperative principle, a mutual, unspoken trust in human conversations. In plain language: ‘I’ll do my best for my side of the story, you’ll do yours. You won’t talk nonsense, and neither will I.’

This principle automatically comes into play in every human conversation, and – because we cannot help it – in simulated conversations with chatbots like ChatGPT.

The communicational illusion is that, by default, we evaluate and interpret even synthetic text as the outcome of humans thinking and communicating, doing their best to get the message across.

After the hundreds of thousands of years we have existed as humans, walking around chatting with each other, busy making meaning out of wheels, food, sports and whatever else people talk about, we simply cannot help being tricked now by synthetic text. Even when these texts have a soulless ring to them.

We are set up to believe there has to be a human behind it, someone who puts effort into intention and meaning.

Conversational simulators like ChatGPT are designed to chat along with you and can talk nonsense very cooperatively, frictionless nonsense if necessary. People get hooked on chatbots and attribute all sorts of human qualities to them. That’s how it works in people’s minds: language = text = meaning = truth.

The negative effects can be seen all around us: people tearing up on TV talk shows about the world of chatbots and ‘AI’, people losing friends, people treating a chatbot as a therapist, a romantic partner or even the spirit of a deceased loved one.

That’s the communicational illusion in action.

A computer neither thinks nor speaks. So it is no sign of healthy reasoning or critical thinking to conclude that because the machine produces human-like communication, it must have, or will acquire, human cognition. It is the opposite: claiming that there are human-like forces at work in a machine because it produces synthetic text is delusional.

Just to repeat the message: a chatbot can’t produce anything other than words in a row. These sometimes approximate reality and sometimes seem to be accurate. But it is safest to assume that they’re never right, because there is no understanding in a machine.

Marketeers of course know that these machines produce bullshit, so they call it ‘hallucinations’ to make it sound like a bug instead of a feature. This is also why they keep saying that ‘a human in the loop’ is needed. That is like saying: ‘Please, you do the checking and the extra work, so that we, the makers of the model, cannot be held accountable.’

Iris van Rooij, professor of computational cognitive science, states that one of the reasons people fall into this trap is that they underestimate the richness of human cognition – take it for granted – and overestimate that of machines.
