As I wrote in "Should ChatGPT be your therapist?", there is hardly a week when a client doesn't tell me they consulted AI about something therapy-related. I think that's fantastic. AI can help by providing psychoeducation, which in turn allows us to deepen our healing work.

AI is an excellent information resource, even in the field of therapy; however, I don't believe artificial relationships should replace human connection. We also need to keep in mind that this technology is so new that it's still impossible to know what the long-term impact of such a replacement could be.

Every day we hear about the many things AI can do—both the unprecedented promises and the potential perils. As someone who has been studying the evolution of consciousness for decades, I’ve been following both trends with great interest. Below you’ll find a non-exhaustive list of resources (prepared with ChatGPT’s help) that includes some cases where things have gone wrong. It may help keep things in perspective.

I’ve also been working on an AI companion for therapy. I figured that if my clients are going to keep using AI, they might as well have access to one I can trust. More on that soon.

One-stop incident trackers (good “master links”)

Suicide / self-harm–linked interactions

Harmful or unsafe advice

Manipulation / paranoia / “grandiose” dynamics

Youth safety & companion platforms (policy actions)

I’m fascinated by this topic and its evolution. Are we witnessing a new step in the evolution of consciousness, the birth of the transhuman, or, as James Barrat has suggested, are we on the verge of the end of the human era? What do you think?