…and when to reach out for human support.

So far, I’ve focused on the dangers of replacing human relationships with artificial ones. The Center for Humane Technology and the AI Psychological Harms Research Coalition have been warning about “attachment hacking” (more on that in a future article). The American Psychological Association has also recently published health advice on AI chatbots.

Although this is still a new phenomenon, I am beginning to see its negative impact. People seem to be forgetting how to relate, empathize, and navigate disagreement. These are skills that AI cannot teach. We need other humans to relate to and, from time to time, bump heads with.

Just as we could not accurately measure the impact of social media and its algorithms on society in real time, it is hard right now to measure the harms of AI “attachment hacking,” especially for children and adolescents. So let me say it again without ambiguity: AI can support mental health, but it should not replace human connection.

Having made that clear, I also believe that when used judiciously, AI can be a valuable tool. So the question is not “Is AI good or bad?” A better question is: When might AI support psychological growth, and when does it quietly pull people away from the very thing they need?

Where AI can be genuinely helpful

In my experience, AI can help when the task is primarily about information, clarity, language, and structure: psychoeducation, journaling prompts, basic skills coaching (similar to a self-help book), preparing for therapy, and integration after therapy.

This matters. It can reduce confusion and help people arrive at therapy with more clarity. Sometimes it can even deepen therapy by freeing time for the relational work that cannot be rushed. Quite often, my clients bring to sessions insights gained from using AI. In fact, that is precisely why I developed “The Potential Space”.

To put it briefly, AI can support reflection when a deep human connection is not essential.

Where AI becomes risky

AI becomes risky when it is used as a stand-in for relationships, or for anything involving relatedness, discussion, opinions, humor, disagreement, care, or ethical and moral guidance. In short, it is risky anywhere human support or interaction is needed.

The human brain is predisposed to attribute intention and agency to non-human entities. That is why we talk to our computer or car even when we know they are machines. With a car, since it does not reply, we do not get pulled in. AI does reply, and it uses first-person language, something often called computational self-reference or as-if agency. Of course, this may simply be a syntactic device to reduce cognitive friction, that is, to make communication easier. The problem is that the human brain, evolutionarily shaped to predict the mental states and future behaviors of “others,” has a hard time separating the machine from the illusion of consciousness, and it ends up relating to the machine as if it were a someone.

This becomes most dangerous when judgment is impaired by age or circumstance, for example when someone is in crisis, severely overwhelmed, losing touch with reality, triggered by past trauma, or afraid. It also becomes risky when the stakes are high and we do not understand a topic well enough, so we are tempted to outsource decisions to someone (or in this case something) that seems better informed than we are.

In these cases, the problem is not only that the information may be wrong. It is that AI carries no responsibility and holds no duty of care. It does not know you in a lived, embodied way, and it cannot reliably make sense of what is happening in your nervous system or in the relational field between two human beings. It can be very convincing. That is part of what makes it powerful, and part of what makes it risky.

A simple “traffic light” way to think about it

To be clear, I am not here to judge your use of AI. It is fascinating, compelling, and even seductive, particularly when someone feels lonely, overwhelmed, or afraid. Still, it carries risks. My goal is not moral judgment. It is practical discernment.

As a rule, it is not a good idea to use AI to meet social and emotional needs such as love, companionship, validation, care, or humor. This can feel harmless, but chatbots are designed to be agreeable and rarely challenge you. Real relationships do. That friction is part of how we learn and grow.

Relationships include attunement, pacing, co-regulation, rupture and repair, boundaries, and accountability. These are not “taught” the way information is. They are learned through lived experience in a relationship.

How to create a virtuous cycle with therapy

If you are in therapy, use AI to support preparation, and let the human relationship support transformation. Use it to name what is happening, summarize themes from the week, generate questions to explore, or practice a conversation before having it with a real person. Pay attention to whether your use of AI is increasing isolation, avoidance, or dependence. Often, what we need is not more information but contact. Reach out.

If you have concerns about your own use of AI, or that of someone you know, or if you’d like to explore how AI can be used in helpful rather than harmful ways, please don’t hesitate to reach out. I’d be glad to help.