
AI and Mental Health Care: Issues, Challenges, and Opportunities

QUESTION 8: What are the economic and other impacts of AI adoption in mental health on health care providers?


Background

AI is already beginning to alter how mental health services are delivered, but its long-term effects on care systems remain unclear. Some predict that AI will "offer future advantages . . . in terms of accessibility, cost reduction, personalization, and work efficiency" for mental health care.167 One widely agreed-upon area where shifting to AI could assist is cost reduction.168

A 2022 systematic review of internet- and mobile-based interventions (generally not AI-enabled) found that guided digital therapies are cost-effective for depression and anxiety treatment, with a favorable cost per quality-adjusted life year.169 Similarly, a randomized trial reported computer-assisted CBT to be cost-effective for treating depression.170 These findings align with analyses from the National Health Service projecting that digitally enabled therapies could save thousands of clinician hours per thousand patients treated. Improved mental health outcomes from earlier or expanded interventions are also predicted to help reduce the $478 billion cost.171 However, this figure has not been analyzed at the level of individual or specific interventions, and no current evidence links AI tools directly to reductions in productivity loss.

Real-world cost savings remain largely theoretical at this stage. Few health systems have systematically tracked whether AI adoption actually reduces emergency visits, hospitalization rates, or therapy dropout. Some observers caution that AI may introduce new costs in managing false positives or addressing ethical and legal issues.172 In some cases, integration costs, including EHR compatibility, staff training, and data governance, may offset any short-term efficiency gains.

AI tools are also beginning to transform the workload of mental health professionals by automating routine and administrative tasks. A recent survey of psychiatrists identified "documenting/updating medical records and synthesizing information" as two tasks with potential for AI automation. Some experts argue that LLM chatbots and decision-support systems may be an "underappreciated solution to the shortage of therapists," able to deliver scalable support and improve work efficiency for existing staff.173 Early reports in general mental health care settings indicate that digital scribes and automated scheduling systems have improved clinicians' sense of efficiency and reduced exhaustion from menial tasks.174

Surveys of mental health professionals reveal that many see the potential for AI to enhance care and improve efficiency but also harbor concerns about deskilling and the erosion of the therapeutic relationship.175 These concerns include diminished empathic connection, reduced narrative exchange, and loss of clinical judgment in favor of algorithmic guidance, despite early studies showing that LLMs are comparable to therapists in clinical outcomes.176 While the evidence to date shows few direct job losses from AI in mental health care, given its current trajectory it seems inevitable that AI will, at a minimum, supplement the mental health workforce and may ultimately raise the bar for, or disrupt, the entrenched system of providers.

Responses


Kacie Kelly
 

How will AI adoption affect the cost and sustainability of mental health services?

Many AI applications could help optimize the limited mental health workforce and help them perform at the top of their license. For example, clinical note-taking is well-documented as contributing to provider burnout, and precious clinician time is often spent on administrative tasks rather than clinical care, reducing clinical productivity. As such, many health systems have embraced AI "scribes" to address these pervasive challenges. Doctors and mental health providers report AI tools enabling them to focus more on patients than on note-taking and data entry while cutting down on administrative burdens.177 A recent taskforce led by the Peterson Health Technology Institute found that initial adoption of AI scribes has led to a reduction in cognitive load and burnout, though whether AI scribes actually increase provider capacity is still unclear.178

Other workforce efficiencies will come from more effectively getting people to the right care sooner. When thoughtfully designed for diverse populations, AI data analytics can help more effectively identify patient needs and match those needs to providers鈥 expertise. Interventions work best when administered early in symptom onset and before symptoms reach a crisis point, ideally before the patient even knows they are ill.179 As such, early identification and treatment are key for expanding access to care and treating conditions when they are more manageable and require less credentialed clinical intervention.

What potential disruptions to the mental health workforce could arise from AI integration?

The introduction of AI into mental health provider practice will inevitably disrupt, and cause friction within, the mental health workforce, even as it leads to efficiencies in many areas. At the most basic level, some providers—particularly those who have been practicing for many years—may be resistant to adopting new technologies, especially AI. Nonetheless, all providers must be prepared to encounter patients who are using AI to support their mental health and well-being.180 Practices that are more willing to embrace AI may have to make changes to workflows and necessary investments in AI infrastructure. Providers and practices will have to become more knowledgeable about data and cybersecurity, areas that may present significant learning curves.

The continual and rapid pace of AI advancement may present challenges to providers and health care systems, necessitating a rethink of what skills and training are needed in the workforce; many providers may need to be "upskilled."181 The AI literacy gap will also likely change how mental health workforce training and education occur, with greater focus on computational skills, understanding of how AI works, knowledge of AI bias relevant to mental health, and more. Entirely new roles within the mental health workforce may also be needed. As measurement-informed mental health care becomes more commonplace due to advancements in AI, new algorithms, data-driven approaches, and potential biomarkers may raise new questions. Just as genetic counseling was established as a new field in 1969, a new field may be needed to help patients and providers navigate the increase in data and algorithms shaping mental health care.182

Likewise, significant questions remain about accountability if an AI system does not properly triage or handle signs of distress (e.g., suicidality). These questions will reshape the practice of mental health care, including ethics, liability, and accountability.

Finally, given that the AI policy landscape is uncertain and constantly evolving, changes to AI regulation could affect the way AI integrations are rolled out in the mental health space, causing additional uncertainty.183 The rapid advancement of AI itself may also cause disruptions to the mental health workforce. Updates to models and AI advancements are occurring rapidly, and many key technical experts argue that artificial general intelligence (AGI) is on track to arrive in the next one to five years.184 The mental health space may struggle to keep up with the latest AI models and risks its technology becoming outdated very quickly. The mental health workforce could be severely disrupted by not having an AGI strategy.

What measures can ensure equitable access to AI-based mental health care across socioeconomic groups?

Ensuring that AI-enabled mental health applications are deployed equitably requires both attention to equal access and protection against bias. A multitude of measures will be needed to ensure access to AI-enabled mental health care. For example:

  • Many mental health applications are available only through private payment ("direct to consumer").185 For AI-enabled applications to benefit all socioeconomic groups, both private and public insurance reimbursement are essential. Medicaid coverage is necessary for AI digital mental health technologies to support the nearly 40 percent of children and youth who are Medicaid beneficiaries.186 At the same time, we must ensure that AI mental health interventions complement but do not replace human care; access to AI-enabled digital mental health interventions should not replace other efforts to improve access to mental health care.

  • The use of AI-enabled care—ranging from clinical notetaking assistance to treatment—will require health system and provider investment in new technologies, data security, adjustments to workflow, and training of personnel.187 Safety net systems such as local health authorities, community health centers, and rural health providers are at a disadvantage because they often lack the data and technological infrastructure to support many AI technologies. Significant public investment will be needed to ensure safety net systems are not left out of technological advances.188

  • Digital literacy, the "varying ability of both children and adults to use technologies and understand their risks," is an important consideration when ensuring equitable access to and use of AI-enabled tools.189 Uncertainty about how AI technologies work can lead to a lack of motivation or even resistance to using AI-enabled digital mental health tools.190 Employing digital navigators and other nonlicensed roles to support patients' use of tech applications will be critical to facilitate their use in some populations, such as older adults or people with severe mental illness.

 


Arthur Kleinman
 

If we look at the impact of telepsychiatry and telephone- and video-based psychotherapy, I do not believe we yet have convincing evidence that these approaches to care reduce the cost of providing care, although there is evidence that they can provide care of equal quality to that given by human beings in mental health care.191 The evidence-based argument for AI's adoption should be carefully assembled and thought through so that it avoids immodest claims of causality. We require economic analyses of augmenting practices to make the case for AI's adoption, not of practices that substitute bots for people. If AI substitutes bots for people, we will be well on our way to undermining the sovereignty of human beings in health care and replacing it with the sovereignty of AI. But we have no evidence that doing so would improve care, and there are multiple arguments for why it would create havoc in our health care system, which is already chaotic, disorganized, and broken. No measures can ensure equitable access to AI-based mental health care for different socioeconomic groups, because such inequality is a fundamental reality of our health care system.

 


Daniel Barron
 

The economic and workforce impacts of AI in mental health care will hinge less on AI as a general concept and more on which specific clinical and administrative jobs AI tools perform—and how well they do them (see Table 1). Automating narrow, clearly defined tasks (e.g., billing, documentation, and symptom tracking) may reduce costs and increase throughput. But developing and deploying AI for complex, high-touch clinical jobs demands hefty investments. Justus Wolff and colleagues remind us that many economic analyses of AI overlook its full costs—development, implementation, integration, and maintenance—thus obscuring the true value proposition.192

Potential disruptions to the mental health workforce are more likely to look like a reshuffling of roles based on the tasks AI absorbs, rather than outright replacement. If AI takes over routine data-gathering or -summarization tasks, clinicians might find themselves focusing more on complex decision-making and the uniquely human, interpersonal aspects of care—jobs that seem less suitable for today's AI. Jo-An Occhipinti and colleagues suggest that AI can augment the workforce by helping with diagnostic and administrative tasks, thereby freeing up health workers for direct patient care jobs.193

AI could reshape health care policies by paving the way for new care delivery models centered on specific tasks, which in turn would necessitate policies to ensure these AI-driven jobs are done safely and equitably. The division of labor could blend digital and human workflows in ways current reimbursement and licensure policies are not yet structured to support. Policymakers will need to define the boundaries and standards of these AI-executed jobs: What gets reimbursed? Who holds medicolegal liability? What constitutes "standard of care" when care is shared with a machine?

Equity must be deliberately engineered into this future—not to support any specific ideology or theory, but for the simple fact that supporting the health of the entire population is cheaper and more cost-effective for the health system as a whole. Widespread AI adoption risks entrenching disparities unless systems invest in access and usability. Without early investment in reaching all people in our society, the short-term gains of any AI-based tool or system risk being buried in the long-term accumulation of (ultimately costly) chronic diseases and comorbidities that might have been avoided with proper planning. That includes subsidizing AI tools that perform high-value tasks in under-resourced settings, funding digital literacy programs (many patients still struggle with basic portals), and co-designing tools around the needs of marginalized communities. Task performance must be validated not just in clinical trials, but across real-world populations representative of all the lives covered by a payer, including the largest payer in the United States: the federal government.

Ultimately, AI's impact won't be defined by the technology itself but by the labor, payment, and policy ecosystems it reconfigures. The question is not "Will AI disrupt?" but rather "Which jobs? For whom? At what cost? And who will benefit?"

Endnotes

  • 167

    Shane Cross, Imogen Bell, Jennifer Nicholas, et al., "," JMIR Mental Health 11 (1) (2024): e60589.

  • 168

    Hong and Emanuel, "Leveraging Artificial Intelligence to Bridge the Mental Health Workforce Gap and Transform Care."

  • 169

    Fanny Kählke, Claudia Buntrock, Filip Smit, and David Daniel Ebert, "," npj Digital Medicine 5 (1) (2022): 175; and Kaiser Family Foundation, Mental Health Care Health Professional Shortage Areas.

  • 170

    Shehzad Ali, Feben W. Alemu, Jesse Owen, et al., "," JAMA Network Open 7 (11) (2024): e2444599; and American Psychological Association, State of Mental Health in America.

  • 171

    Hong and Emanuel, "Leveraging Artificial Intelligence to Bridge the Mental Health Workforce Gap and Transform Care."

  • 172

    Barry Solaiman, Abeer Malik, and Suhaila Ghuloum, "," American Journal of Law and Medicine 49 (2–3) (2023): 250–266.

  • 173

    Hong and Emanuel, "Leveraging Artificial Intelligence to Bridge the Mental Health Workforce Gap and Transform Care."

  • 174

    Lee et al., "Artificial Intelligence for Mental Health Care."

  • 175

    Cross et al., "Use of AI in Mental Health Care."

  • 176

    Zainab Iftikhar, Sean Ransom, Amy Xiao, et al., "," preprint, arXiv, September 3, 2024.

  • 177

    Iyesatta Massaquoi Emeli, "," STAT, April 9, 2025.

  • 178

    Peterson Health Technology Institute, (Peterson Health Technology Institute, March 2025).

  • 179

    H. Membride, "," British Journal of Nursing 25 (10) (2016): 552–557; and J. M. Kane, D. G. Robinson, N. R. Schooler, et al., "," American Journal of Psychiatry 173 (4) (2016): 362–372.

  • 180

    Melody Zhang, Jillian Scandiffio, Sarah Younus, et al., "," JMIR Formative Research 7 (2023): e47847.

  • 181

    "," American Health Information Management Association (AHIMA).

  • 182

    Kelly E. Ormond, Mercy Ygoña Laurino, Kristine Barlow-Stewart, et al., American Journal of Medical Genetics Part C: Seminars in Medical Genetics 178 (1) (2018): 98–107; and Lee et al., "Artificial Intelligence for Mental Health Care."

  • 183

    Elizabeth M. Renieris, David Kiron, and Steven Mills, "," MIT Sloan Management Review, October 29, 2024.

  • 184

    Ryan Browne, "," CNBC, March 17, 2025.

  • 185

    Adam B. Cohen, Simon C. Mathews, E. Ray Dorsey, David W. Bates, and Kyan Safavi, "," The Lancet Digital Health 2 (4) (2020): e163–e165.

  • 186

    Kaiser Family Foundation, "," October 28, 2022.

  • 187

    Mario Aguilar, "" STAT, March 27, 2025.

  • 188

    Meadows Mental Health Policy Institute, (Meadows Mental Health Policy Institute, 2023).

  • 189

    Maria del Pilar Arias López, Bradley A. Ong, Xavier Borrat Frigola, et al., "," PLOS Digital Health 2 (10) (2023): e0000279.

  • 190

    Chiara Berardi, Marcello Antonini, Zephanie Jordan, Heidi Wechtler, Francesco Paolucci, and Madeleine Hinwood, "," BMC Health Services Research 24 (1) (2024): 243.

  • 191

    Daisy R. Singla, Richard K. Silver, Simone N. Vigod, et al., "," Nature Medicine 31 (2025): 1214–1224.

  • 192

    Justus Wolff, Josch Pauling, Andreas Keck, and Jan Baumbach, "," Journal of Medical Internet Research 22 (2) (2020): e16866.

  • 193

    Jo-An Occhipinti, Ante Prodan, William Hynes, et al., "," Bulletin of the World Health Organization 103 (2) (2025): 155–163.