Feeling Good Podcast | TEAM-CBT - The New Mood Therapy
366: AI and Psychotherapy: Doomsday or Revolution?
Featuring Drs. Jason Pyle and Matthew May
Today we feature Jason Pyle, MD, PhD and our beloved Matthew May, MD on a controversial, exciting, and possibly anxiety-provoking podcast on the future of AI in psychotherapy and mental health. Will AI shrinks replace human therapists in a doomsday scenario? Or will AI serve shrinks and patients alike in a revolutionary way that sees the dawning of a new age of psychotherapy?
You are all familiar with Matt due to his frequent and highly praised appearances on our Ask David segments, but Jason Pyle, MD, PhD, will probably be new to you. Jason joined the Evolve Foundation as Managing Director in 2022 to focus his work on the mass mental health crisis and the rampant diseases of despair, which afflict tens of millions of Americans. The Evolve Foundation is a private foundation dedicated to the advancement of human consciousness, active in both philanthropy and venture investments in the mental health field.
Jason is an accomplished biotechnology executive with over twenty years of executive management and technology development experience. He is committed to developing healthcare technologies and bringing science-backed healing to the most important problems of our generation.
Jason is a veteran who served as a US Army Ranger and earned an engineering degree from the University of Arizona. He received both his MD and PhD in Neurosciences from the Stanford University School of Medicine, where he met Matt May and the two became close friends. At the start of today’s podcast, Matt and Jason reflected on their long friendship, which began when they were classmates at Stanford Medical School 20 years ago.
The following questions were submitted by Jason, Matt, and David prior to the start of today’s podcast.
Jason’s Questions:
Matt’s Questions about AI:
David’s question about AI:
Jason kicked off the discussion with a brief description of AI and machine learning, and outlined four potential roles for AI in psychiatry and psychology.
The ensuing dialogue was illuminating and exciting. In fact, I got so engrossed that I stopped taking notes, so you’ll have to give it a listen to find out. One interesting and unexpected part of the discussion, however, highlighted the strengths and weaknesses of AI. For example, a patient with social anxiety might benefit greatly from armchair work focusing on ways to combat distorted negative thoughts, but will still have to interact with strangers in social situations to conquer this type of fear.
David and Matt nearly always go with the patient out into the world for interpersonal exposure exercises, and find that the presence, trust, and “push” from the human therapist can be invaluable, even necessary. It is not at all clear that an AI therapist working via a smartphone could have the same effect, but an experiment might be needed to find out.
Jumping to conclusions without data is rarely safe or accurate! Maybe an AI “helper” could be very helpful to individuals with social anxiety!
Jason raised the question of whether AI could replicate the trust, warmth, and rapport of a human therapist, and whether the warmth and rapport of the therapeutic relationship are necessary for a good therapeutic outcome. I (David) summarized some of the findings with our Feeling Good App showing that app users actually rated the “Digital David” in the app substantially higher on warmth and understanding than the people in their lives. And now that we are incorporating AI into the Feeling Good App, the quality of the empathy and rapport from our app may be even higher than in our prior beta tests.
We have not done a direct comparison between the rapport of human therapists and the rapport experienced by our Feeling Good App users. Many people might jump to the conclusion that human shrinks have better rapport than would be possible from a cell phone app, but this might be the opposite of the truth! In my research (David), I’ve seen that most human shrinks believe their empathy and rapport skills are high, when in fact their patients do not agree!
In my research on the causal effects of empathy on recovery from depression in hundreds of patients at my clinic in Philadelphia, and also in more than 1300 patients treated at the Feeling Good Institute in Mountain View, California, it did not appear that therapist empathy had substantial causal effects on changes in depression.
The late and famous Carl Rogers believed that therapist empathy is the “necessary and sufficient” condition for personality change, but most subsequent research has failed to support this popular belief.
I (David) believe that AI therapists are likely to outperform human shrinks in rapport, warmth, trust, and understanding, but it remains to be seen whether this will be sufficient to make much of a dent in the patient’s symptoms of depression, anxiety, marital conflict, or habits and addictions. Other techniques are likely to be required.
However, we may have new data on this question shortly, as we will be directly studying the effect of AI empathy on the reduction of negative feelings. We might be surprised, as our research nearly always gives us some unexpected results!
Rhonda gave a strong and appreciated pitch for the idea that there is something about a person-to-person interaction, like a hug, that will never be duplicated by an app. If this is true, or even believed to be true, then there will likely never be a complete replacement of human shrinks by AI apps.
But once again, you can believe this on a religious, or a priori, basis, or you can take it as a hypothesis that can easily be tested in an experiment. We do have very sensitive and accurate tests of therapists’ warmth and empathy, so “rapport” can now be measured with short, reliable scales, making head-to-head comparisons of apps and humans possible for the first time. At one time, it was thought that AI would never be able to beat human chess champions, but that belief turned out to be false.
The podcast group also discussed some of the potential shortcomings of an AI shrink. For example, the AI does not yet have the insight to “see through” what patients are saying, and takes the patient’s words at face value. A human therapist, in contrast, is often thinking on multiple levels, asking what’s “really” going on with the patient, including things that the patient might be intentionally or unintentionally hiding, like feelings of anger or antisocial behaviors.
At the end, all four participants shared their vision, or dream, of the positive impact AI might have on the world of mental illness and mental health. Rhonda had tears in her eyes, I think, over the suggestion that an effective and totally automated AI therapist would be scalable, and might have the potential to bring ultra-low-cost relief of suffering to millions or even hundreds of millions of people around the world who do not currently have access to effective mental health care.
And I would add that individuals who now have access to mental health care often cannot find effective treatment, due to severe limitations in therapists as well as in all current schools of therapy.
Jason described his vision for an AI shrink as a helper for human therapists, extending their impact and enhancing their effectiveness. Jason is super-smart and wise, and I found his vision very inspiring! More than 50,000 therapists have attended my training programs over the past 35 years, and one thing I have learned is that most shrinks, including David, have tons of room for improvement.
And if a brilliant and compassionate AI helper can enhance our impact? Hey, I’m all for that!
Thanks for listening today! Let us know what you thought about our show!
Jason, Matt, Rhonda, and David