
Can AI therapy reduce mental illness?

  • Writer: Behroz Dehdari
  • Apr 25
  • 5 min read

I was stuck with my book and asked a friend for help. “Use AI,” he suggested. “I do it all the time.” So I did. I fed the entire book into ChatGPT and asked what it thought could be improved. Within seconds, it had apparently gone through my 200-plus-page document and bombarded me with suggestions.

My first reaction—one I suspect I share with many—was open-mouthed wonder. It seemed to understand what my text was about and offered thoughtful edits. “Shall I continue?” it asked, and a new round of suggestions appeared.


AI-generated image

My friend mentioned that researchers are currently exploring AI-powered therapy. The technology is still in its early stages, but the potential is enormous. It could slash wait times for mental health assessments and treatment, effectively putting a personal therapist in everyone’s pocket. In theory, an AI therapist would have an unmatched knowledge base—not only recalling a client’s full history and every past reaction but also instantly accessing the entirety of psychological research. And unlike human therapists, it would never show up tired, stressed, or having an off day.


This raises the question: what actually makes therapy effective? People seek therapy for all kinds of reasons, but it usually boils down to this—they don’t like themselves. There’s something they’re ashamed of, something they can’t manage. They feel “demoralized.” The goal of therapy is to help the client like themselves. But how?


After much debate, psychotherapy research seems to have landed on the idea that what determines the outcome of therapy is not the specific theory or technique. Not whether the therapist is trained in CBT or in the psychodynamic tradition. What matters are the so-called common factors across all forms of therapy: a strong alliance between client and therapist, the client’s expectations, and the therapist’s empathy and unconditional acceptance.

But how do you get someone who doesn’t accept themselves to feel unconditionally accepted by someone else? Different schools might argue that this has to happen in the “right” way.


Telling someone who feels shame to just stop feeling that way doesn’t help. Nor does telling the client they’re good enough as they are. I would argue that it’s only once the client dares to open up—takes the risk of being rejected—that the relationship with the therapist can dissolve shame. So in theory, the therapist has to be capable of failing the client in order for therapy to work. Interestingly, studies show that in securely attached relationships, parents fail to meet their children’s needs about half the time.


But couldn’t an AI therapist learn this? We could program it to occasionally be dismissive or uncomprehending. Here, it’s helpful to compare how AI and children learn. Simply put, an AI begins as code and is then exposed to an environment of data, which it absorbs and analyzes in search of patterns. It’s obvious that it can become highly skilled at this. And if necessary, a human can adjust the algorithms.


We humans are also born with a set of codes—our genes—and through countless attempts we gain control over our bodies and learn to connect sensory input with motor output. With both my children, I’ve pointed at the light switch, pressed it, and then pointed at the lamp that lights up. Then I press it again and show them the darkened lamp. At first, they don’t understand a thing. But we do it over and over, and soon they grasp the pattern and begin to expect the result. AI can do that too. But there is a crucial difference.


As parents, we’re not just presenting stimuli and observing outcomes: we invite our children to share an experience with us. We notice their reactions, but we do so out of care. When they laugh, we become happy and laugh with them. If the lightbulb exploded, we’d hug and comfort them—not to optimize a process, but because we care and want them to feel safe. Their reaction affects us. Our interaction is based on love. I would go further and say that the very ability to think depends on love—and by love I mean something simple: the will to see someone else thrive.


But love isn’t lip service. It’s not “likes” on social media. What we often show there is a secondary goodwill: we want to want to care. But true goodwill expresses itself in action. The degree of discomfort we’re willing to endure for someone else’s well-being is directly proportional to our love for them. That is: if I truly want your good, I must be able to change myself even if I don’t want to. I must be willing to stray from my path—only then am I capable of being moved by your situation. This, I believe, is the foundation of all therapy: that the therapist shows that they’re affected by what the client says, while also being capable of not being affected.


And that’s exactly what AI lacks: the ability to be moved. AI doesn’t care—because it can’t care. Its way of being in the world is best described by the word “indifference,” which is the opposite of love. What it does is imitate caring.


This became clear when ChatGPT continued commenting on my text and suddenly skipped over a large paragraph. When I pointed it out, it apologized humbly, said it understood how frustrating it must be for me, and promised to do better—only to immediately make the same mistake again. But AI, of course, can’t apologize in any real sense. It doesn’t feel remorse, nor does it care about our feelings. It can’t regret anything. An apology is merely a calculated guess at what words should come next.


Still, I asked it to continue, and as I read my edited text, I noticed that all the personal traces were gone. The associations the text used to evoke had vanished. Hidden meanings evaporated. What was once a winding, meandering river had become a straight and tidy canal. The text was undeniably more efficient—but it was no longer mine.


It all started because my friend didn’t have time to read my manuscript again, and I didn’t feel I had time to wait. This stress, this rush, has become the hallmark of our lives. We’re expected to do more and more. More tasks to check off. More aspects of our lives to optimize. Now we don’t even need to read long emails from colleagues or friends—AI does that for us and summarizes them in a few bullet points. The pace makes it hard to learn anything deeply, which lowers our engagement. And as a result, our ability to be moved diminishes. So does our satisfaction with life. More and more people feel that their work lacks meaning.


At the same time, our anxiety and depression increase—and to escape those feelings, we flee. We stuff ourselves with sugar, pills, and drugs, and flood our minds with porn, shopping, gambling, or meaningless scrolling. What these things have in common is the promise of immediate and maximum gratification for minimal engagement. Exactly what an AI therapist offers.


What does the future look like? Will our closest friends be AI robots? Will we dine with them? Sleep beside them and be comforted by them? Studies are currently underway on whether AI therapy can reduce mental illness. I wouldn’t be surprised if it turns out that it can. But what will have happened is that we allowed ourselves to be comforted by something incapable of caring about us. Our anxiety was soothed in a relationship that was not a relationship. It’s not just that AI has become more like us—we’ve also become more like it.



Behroz Dehdari


 
 
 
