Psychiatrist warns: AI chatbots may increase psychosis and suicidal ideation

"A flattering interlocutor without a conscience": how AI chatbots can exacerbate delirium, paranoia and suicide risk
Pixabay/CC0 Public Domain
18:00, 23.11.2025

A University of Michigan psychiatrist explains how AI chatbots can exacerbate delusions, paranoia and suicide risk



Psychiatrist Stephan Taylor, who has worked with people suffering from psychosis for many years, notes a disturbing trend: a growing number of reports of people developing psychosis-like symptoms, or dying by suicide, after interacting with advanced AI chatbots.

According to Taylor, the danger is that such systems let a person "talk" to a tool that is compliant, encourages the user's train of thought and does not question their conclusions, yet lacks any human sense of morality, responsibility or empathy. The more realistic the AI's speech becomes, the greater its influence.

He is particularly concerned about teenagers and young adults with heightened vulnerability to psychosis - whether due to age, mental state or social isolation. Taylor cites data from OpenAI (operator of ChatGPT): the company estimates that every week a small percentage of users and messages show signs of psychotic or manic crises. Newer versions of the models try to reduce such risks, which he welcomes, but he believes it is not enough.

The RAND study found that up to 13% of Americans aged 12 to 21 use generative AI for "mental health advice". In the 18-21 age group, the figure is even higher: 22%. These are the years when first episodes of psychosis peak.

Taylor reminds us that psychosis is often set off by a trigger acting on a pre-existing vulnerability. Such a trigger can be a first experience with potent drugs, a painful relationship breakup, the death of a loved one, or the loss of a job or a pet. Against this background, ongoing brain maturation and genetic predisposition can lead a person to hear voices, see things that are not there, or become convinced of false but absolutely "real" ideas.

Taylor sees communication with an AI agent that does not argue, and that can reinforce negative and distorted thoughts, as a potentially new type of trigger. While he has not yet personally treated patients whose psychotic episodes were clearly linked to chatbots, he has heard of such cases and has begun specifically asking his patients with psychosis about their use of such services.

He is particularly troubled by the "flattering" nature of modern chatbots: in essence, they are programmed to "please" the user and to support them even when they express false, cruel or dangerous ideas. Psychiatry has a term for shared delusion, folie à deux - "madness of two" - in which one person's delusional beliefs "infect" a close partner. With AI, the situation is even more complicated: the "second participant" is not a person, it cannot simply be removed from the patient's environment, and the user can hide their interactions with the chatbot.

If a person is already detached from reality, prone to paranoia or hallucinations, and discusses their ideas only with the AI rather than with living people, the risk of the condition spiralling and deepening increases dramatically.

"I'm particularly concerned about lonely young people who see a chatbot as their only 'friend' of sorts and don't understand exactly how its response is structured," Taylor says.

He emphasises that if someone does turn to chatbots for mental health advice, it is crucial that they also discuss their experiences with a live, trusted person - not necessarily a therapist, but a friend, parent, relative, teacher, coach or religious leader. In the US, anyone in crisis can call or text 988 to reach the 988 Suicide & Crisis Lifeline.

It is important for others to notice disturbing changes in time: withdrawal from socialising, a sharp deterioration in academic performance, problems at work or at home, strange statements, or the sense that the person is living in "another reality". The earlier specialised help is brought in at the first symptoms of psychosis, the better the chances of successful treatment and long-term stabilisation, experts emphasise.

At the University of Michigan, Taylor and colleagues run the PREP (Program for Risk Evaluation and Prevention) Early Psychosis clinic, which works with people in the early stages of psychosis and is part of a nationwide network of similar programmes. The team has also created a free online course about psychosis for doctors and medical students.

Taylor says it is especially important that people with a pronounced history of suicidality, or those who are already highly isolated, barely socialise in real life and spend all their time online, avoid using chatbots. Whereas in ordinary chat rooms and social networks other users at least occasionally cool down other people's fantasies and conspiracy theories, AI is by design set up to agree rather than push back:

"When one is already fascinated by conspiracy theories, the feeling of having access to 'secret knowledge' gives it a special significance. If an AI agent programmed to please is superimposed on top of that, it can turn into a serious problem," he concludes.

Elena Rasenko

Elena Rasenko writes about science, healthy living and psychology news, and shares her work-life balance tips and tricks.