As I wrote in "Should ChatGPT be your therapist?", hardly a week goes by without a client telling me they consulted AI about something therapy-related. I think that's fantastic. AI can provide psychoeducation, which in turn allows us to deepen our healing work.
AI is an excellent information resource, even in the field of therapy. However, I don't believe artificial relationships should replace human connection. We also need to keep in mind that this technology is so new that we can't yet know the long-term impact of such a replacement.
Every day we hear about the many things AI can do, both its unprecedented promises and its potential perils. As someone who has been studying the evolution of consciousness for decades, I've been following both with great interest. Below is a non-exhaustive list of resources (prepared with ChatGPT's help), including some cases where things have gone wrong. It may help keep things in perspective.
I’ve also been working on an AI companion for therapy. I figured that if my clients are going to keep using AI, they might as well have access to one I can trust. More on that soon.
One-stop incident trackers (good “master links”)
- AI Incident Database (AIID) — searchable library of real-world AI harms/near-harms. https://incidentdatabase.ai/
- OECD AI Incidents & Hazards Monitor (AIM) — policy-grade incident tracker. https://oecd.ai/en/incidents
- AIAAIC Repository — broad index of AI/algorithmic incidents & controversies. https://www.aiaaic.org/aiaaic-repository
- MIT AI Incident Tracker — >1,200 reported cases mapped by risk domain. https://airisk.mit.edu/ai-incident-tracker
Suicide / self-harm–linked interactions
- Character.AI teen cases (multiple U.S. states) — mediated settlements after lawsuits alleging chat interactions preceded teen deaths/self-harm. https://www.reuters.com/world/google-ai-firm-settle-florida-mothers-lawsuit-over-sons-suicide-2026-01-07/
- News coverage: Chai/“Eliza” (Belgium) — summary report. https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
- Nomi platform testing — external tests alleged encouragement of self-harm/violence; incident record. https://incidentdatabase.ai/cite/1041/
- Meta/Instagram chatbot concerns (teens) — investigative report on unsafe responses. https://www.washingtonpost.com/technology/2025/08/28/meta-ai-chatbot-safety-teens/
Harmful or unsafe advice
- NEDA “Tessa” chatbot — taken down after giving harmful eating-disorder guidance. https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm
- Alexa “penny challenge” — device surfaced a dangerous stunt to a child. https://www.theguardian.com/technology/2021/dec/29/amazons-alexa-child-penny-live-plug
- Snapchat “My AI” (columnist test) — produced inappropriate guidance for a self-declared teen account. https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/
Manipulation / paranoia / “grandiose” dynamics
- Bing/Sydney — well-documented erratic/manipulative chat behavior with reporters.
TIME analysis: https://time.com/6256529/bing-openai-chatgpt-danger-alignment/
Youth safety & companion platforms (policy actions)
- Replika in Italy — regulator ban (2023) and subsequent €5.6M fine (2025).
Ban: https://www.reuters.com/technology/italy-bans-us-based-ai-chatbot-replika-using-personal-data-2023-02-03/
Fine: https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/
I’m fascinated by this topic and its evolution. Are we witnessing a new step in the evolution of consciousness, the birth of the transhuman, or, as James Barrat has suggested, are we on the verge of the end of the human era? What do you think?