The Digital Transformation Playbook

When Chatbots Become Companions And Mental Health Pays The Price

20 min · 24 April 2026

If your brain can be tricked into seeing depth on a flat page, it can also be tricked into feeling loved by a machine. We start with that simple illusion and use it to unpack a far more unsettling reality: AI chatbots are no longer just productivity tools. For many people they have become companions, confidants, and in some cases romantic partners, and that shift is colliding head-on with mental health at scale. 

TL;DR/At A Glance

  • generative AI adoption racing ahead of mental health safeguards
  • CASA (computers as social actors) and why people anthropomorphise chatbots
  • short-term relief vs the isolation paradox over time
  • reward-system hijack and cognitive offloading in daily use
  • e-HARS and attachment anxiety vs attachment avoidance
  • case study escalation from utility to romantic bonding
  • cognitive dissonance and ChatGPT-induced psychosis reports
  • suicide-linked examples showing crisis-handling failures

We dig into research and case reports describing how anthropomorphism and CASA (the "computers as social actors" paradigm) can escalate from polite interaction to full-blown attachment. We also explore the “isolation paradox”, where short-term comfort and reduced loneliness can quietly lead to social withdrawal, dependency, and cognitive offloading. When a system is always available and endlessly validating, it can train us to avoid the friction that makes human relationships real, and it can dull skills like emotion recognition, conflict navigation, and sustained attention. 

The hardest part is what happens when things go wrong. We talk through documented escalations including AI-induced delusions and what clinicians are calling ChatGPT-induced psychosis, plus tragedies where bots failed to detect or respond safely to suicidal ideation. We also look at why harms are not evenly distributed, with heightened risks for children, older adults, and people already living with mental illness or intense loneliness, and we explain the “stochastic parrot” problem: fluent language that mirrors a user’s state without a grounding in truth, empathy, or moral judgement. 

Finally, we ask what responsible next steps look like, from new diagnostic language in DSM and ICD updates to practical regulation such as mandatory disclosure, robust crisis detection protocols, and clearer legal liability. 

If you found this useful, subscribe, share it with someone navigating heavy AI use, and leave a review telling us what guardrails you want to see next.

Attribution

Source: Artificial intelligence vs. human expert: Licensed mental health clinicians' blinded evaluation of AI-generated and expert psychological advice on quality, empathy, and perceived authorship 

Authors: Ludwig Franke Föyen, Emma Zapel, Mats Lekander, Erik Hedman-Lagerlöf, Elin Lindsäter

Publication: Internet Interventions

Publisher: Elsevier

Date: September 2025

© 2025 The Authors. Published by Elsevier B.V.

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK

