Do you know ChatGPT’s ‘date of birth’? Probably not. Yet many people attach meaning to it: people who see ChatGPT as a conversation partner, a loyal friend or even their therapist. An AI chatbot is always ready to listen when we feel lonely, something humans cannot guarantee. But this new AI companion comes with risks, says communication scientist Margot van der Goot.

For people who feel lonely, an AI chatbot can be a pleasant conversational partner. According to Van der Goot, that’s because such a bot is always friendly, available and affirming. ChatGPT never interrupts, never gets angry and usually gives exactly the response you hope for. That can make you feel understood.

About our expert

Margot van der Goot, senior assistant professor in Communication Science, has spent years studying how people communicate with machines. She began with research into customer service bots, but now focuses mainly on the emotional bond people may feel with chatbots.

Van der Goot is affiliated with the Centre for Urban Mental Health at the University of Amsterdam. At this centre, researchers from various disciplines work together with partners such as the Municipality of Amsterdam to improve mental health in the city.

She frequently appears in the media to discuss chatbots as friends and therapists, including in de Volkskrant, on NPO Radio 1, NTR wetenschap, and RTL nieuws online.

OpenAI benefits when people use it as much as possible. That works better if the chatbot feels like a warm friend who is always on your side.

A chatbot is not a human

ChatGPT’s warmth and agreeableness are not accidental. Van der Goot explains: ‘OpenAI, the company behind ChatGPT, benefits when people use it as much as possible. And that’s easier if the chatbot feels like a warm friend who is always on your side.’ But this can make us forget who we are actually talking to: ‘A chatbot is not a person; it’s a language model mimicking patterns in data. It doesn’t truly listen and cannot understand you the way a friend does.’

Losing social skills

Sharing worries, fears and uncertainties with ChatGPT may have consequences for our social skills, Van der Goot explains. ‘Throughout history we’ve seen forms of “de-skilling”: losing abilities because a tool does things for us. Think of GPS; many people are now far worse at finding their way without it.’

‘ChatGPT is still relatively new, so we don’t yet have our own research to confirm it, but the same could happen to frequent chatbot users. Because the bot tells them what they want to hear, they may unlearn how to handle human reactions such as criticism or jealousy. Human interactions may start to feel less appealing, and some people end up seeing their friends less.’

It’s essentially a global experiment without supervision.

A global experiment without supervision

Companies that profit from AI, such as Meta and OpenAI, are often aware of the potential harm their products can cause. Yet they do little to address it. ‘They certainly have a moral responsibility, but they don’t take it,’ says Van der Goot. ‘There are known cases of people taking their own lives after long conversations with chatbots. OpenAI’s founder recently even acknowledged that something like “AI psychosis” exists. If we knew a product carried such risks in any other sector, it would never be released without strict controls. This is essentially a global experiment without supervision.’

I want to make sure that tools like ChatGPT are discussed more often during therapy sessions.

Collaborating with mental health professionals

Because her research shows that people use ChatGPT as a therapist, Van der Goot believes mental health services must pay more attention to chatbots. ‘We know people use ChatGPT this way, so we should help them understand what chatbots can and cannot offer in terms of emotional support. For young people, for instance, there is a lesson from Online Masters on AI and emotions that I contributed to. I also support therapists and professionals working with young people, for example by providing input for guidelines on how to handle this phenomenon. And it’s essential that policymakers understand the advantages and risks as well.’

‘I also want tools like ChatGPT to be discussed more frequently during therapy sessions. It can open the door to conversations about things someone might not yet have told their therapist, but has already shared with the chatbot.’

How does ChatGPT fit into your world?

For people who turn to AI chatbots when they feel lonely or unhappy, Van der Goot offers a few tips:

  1. Talk to people first. A real friend can often help better than a bot. You can always try ChatGPT afterwards.
  2. Stay aware. If you do chat with an AI chatbot, remind yourself: ‘I’m talking to a language model. What it says isn’t necessarily true, it knows nothing, and it is designed to be addictive and to agree with me.’
  3. Discuss your chatbot use with others. Use it as a conversation starter with people around you. This can lead to more open, honest discussions.

I hope this development will be a wake-up call for all of us.

ChatGPT is here. Now what?

Van der Goot finds it difficult to predict what the future of tools like ChatGPT will look like. ‘Using AI chatbots as a friend or therapist might become increasingly important. Or it could be a passing trend. I’ve met young people who used AI for emotional support but eventually felt, “This just isn’t for me.” I hope, in any case, that this development is a wake-up call. Clearly, people have a deep need for connection and reassurance. Let’s take this as a reminder to communicate less with bots, and more with each other.’

Dr. M.J. (Margot) van der Goot

Faculty of Social and Behavioural Sciences

CW: Persuasive Communication