Artificial Intelligence in Behavioral Health and Suicide Prevention: Opportunities and Challenges

Kyle Thompson
Policy Associate
May 22, 2023

Over the past year, interest in artificial intelligence (AI) has risen steadily. AI technology has produced remarkable innovations in computation, completing a wide variety of tasks at speeds once thought impossible. With AI revolutionizing so many industries at once, many people are taken aback by the technology's implications. Amid the skepticism, one thing is certain: the technology is not going anywhere. The potential for harm is always a factor in new technology, and it deserves particular attention when serving vulnerable populations; deploying AI without significant oversight could raise tremendous ethical issues. In behavioral health treatment, we should consider how these technologies may shape service delivery, and weigh the opportunities and limitations of AI as a resource for treatment.

What is AI?

Essentially, AI is the practice of building intelligent machines that think like humans. The overall goal of AI is to perform learning, perception, and reasoning at a level that can provide insights a typical human cannot. The term artificial intelligence was coined in 1955 by Stanford professor John McCarthy, who defined AI as the science and engineering of building intelligent machines. Bill Gates' definition is similar to McCarthy's but expands on it: Gates defines AI as a model used to provide a specific service or solve a particular problem. This is distinct from machine learning, which is the training of devices or software to perform a task and improve as they are given more data over time.
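To make the distinction concrete, here is a minimal sketch, in Python, of the machine-learning idea in that last sentence: a simple model whose estimate improves as it is given more data over time. The quantity being estimated and the update rule are illustrative placeholders, not any particular product's method.

```python
# Minimal sketch of machine learning as described above: a model's
# predictions improve as it is given more data over time. The "model"
# (a running average) and all numbers are illustrative placeholders.
import random

random.seed(42)
TRUE_VALUE = 10.0  # hypothetical quantity the model is learning to estimate

def noisy_observation():
    """Simulate one noisy training example."""
    return TRUE_VALUE + random.gauss(0, 3)

estimate, n = 0.0, 0
for batch in [10, 100, 1000]:
    for _ in range(batch):
        n += 1
        # Incrementally update the estimate: a classic running-mean update.
        estimate += (noisy_observation() - estimate) / n
    print(f"after {n:>4} examples, estimate = {estimate:.2f} "
          f"(error = {abs(estimate - TRUE_VALUE):.2f})")
```

Run it and the error tends to shrink as more examples arrive, which is the core of what "improving with data over time" means.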

Opportunities for AI in behavioral health

Many possibilities for AI-driven behavioral health exist, in both institutional and interpersonal contexts. Individuals trying to prioritize self-care could benefit from the resources AI chatbots provide. AI could also streamline the processes healthcare professionals use to make decisions that improve patient outcomes. Therapist chatbots could be an effective tool in life coaching and self-care, encouraging mindfulness practices; one life coach documents their experience in detail, showing how a chatbot helped them work through basic issues. In a sense, AI chatbots could become a modern tool for self-preservation and resiliency against burnout, which could be especially impactful for advocates and activists.

Outside of behavioral health, other government agencies have found uses for AI in streamlining service delivery. A collaboration focused on benefits programs in Ohio created AI tools to complete low-complexity tasks such as data management, data analytics, and robotic process automation. These tools operated behind the scenes in the administration of programs rather than interacting with service recipients.
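As a rough illustration of the kind of low-complexity, behind-the-scenes task described above, the sketch below deduplicates and flags hypothetical program records before staff review. The record fields and rules are invented for the example and do not reflect Ohio's actual systems.

```python
# Hedged sketch of a "behind the scenes" data-management task: cleaning
# program records before staff review. Fields and rules are hypothetical.
records = [
    {"id": "A1", "name": "Doe, J.", "zip": "44114"},
    {"id": "A1", "name": "Doe, J.", "zip": "44114"},   # exact duplicate
    {"id": "B2", "name": "Roe, P.", "zip": ""},        # incomplete record
]

seen, clean, flagged = set(), [], []
for rec in records:
    if rec["id"] in seen:
        continue                      # drop duplicates by record id
    seen.add(rec["id"])
    # Route incomplete records to a human-review queue instead of dropping.
    (flagged if not all(rec.values()) else clean).append(rec)

print(f"{len(clean)} clean record(s), {len(flagged)} flagged for human review")
```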

How is AI being used in health and human services?

AI is already being used in many health and human service industries. In healthcare, for example, AI is used to identify anomalies and suggest diagnoses informed by patient medical records. AI's computational ability to read and process vast amounts of patient data allows healthcare professionals to make decisions more quickly. AI is also revolutionizing treatment techniques by analyzing medical data, helping researchers make new discoveries. The University of St. Augustine wrote that AI could improve healthcare in four ways:

  • Improve diagnostics: AI can assist healthcare professionals in the decision-making process. AI analyzes symptoms, recommends treatment services, and provides insight into health risks for individuals or populations.
  • Advance treatment: AI is used to accelerate medical innovation, for example by reducing the time needed to develop a new drug. This is also evident in AI's emerging use as a therapeutic tool.
  • Boost patient engagement: This is achieved through personalized treatment plans that encourage interaction with the services prescribed to patients. Reminders, notifications, and other forms of digital communication help patients stay on target with their treatment plans (a minimal sketch of this idea follows the list).
  • Support administrative tasks: Administrative tasks that typically take up a large portion of the daily workflow can be streamlined into automated processes. In healthcare, reviewing medical records takes 34% to 55% of a physician's time[1]. AI can reduce this time tremendously, freeing clinicians to focus on delivering care.

In each of these areas, AI is changing the way healthcare is delivered.
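As a minimal sketch of the patient-engagement idea noted in the list above, the snippet below generates reminder dates for a hypothetical multi-week treatment plan. The schedule and message text are assumptions for illustration, not any vendor's product.

```python
# Illustrative sketch of "boost patient engagement": generating reminder
# dates for a treatment plan. The cadence and message are hypothetical.
from datetime import date, timedelta

def reminder_schedule(start: date, weeks: int, per_week: int = 2):
    """Yield (date, message) reminders across a multi-week plan."""
    step = 7 // per_week
    for week in range(weeks):
        for session in range(per_week):
            day = start + timedelta(days=week * 7 + session * step)
            yield day, f"Week {week + 1}: time for your check-in exercise"

for when, msg in reminder_schedule(date(2023, 6, 1), weeks=2):
    print(when.isoformat(), "-", msg)
```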

AI in Suicide Prevention and Therapy

CareSource, a nonprofit organization, is one of Ohio's Medicaid managed care providers. In 2022, CareSource announced a collaboration with Clarigent Health to integrate AI into behavioral health services in Ohio. The software used for the partnership, Clarity, analyzes speech and patient-reported symptoms. Clarity also tracks vocal indicators over time to detect depression, anxiety, suicide risk, and other mental health issues, coupled with clinical assessments by care providers. Clarity is trained on a variety of voice samples used to analyze speech. Based on the level of risk a patient exhibits, Clarity alerts the physician, who can then make informed decisions to improve the patient's health outcomes.
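The general pattern described here, scoring indicators over time and alerting a clinician when risk crosses a threshold, can be sketched as follows. To be clear, this is not Clarigent's Clarity algorithm; the indicators, scoring, and threshold are hypothetical placeholders.

```python
# Illustrative sketch of the general pattern above: track a risk score from
# per-session indicators and alert the clinician past a threshold. This is
# NOT Clarigent's Clarity algorithm; all values here are placeholders.
ALERT_THRESHOLD = 0.7

# Hypothetical per-session vocal/self-report indicators, each scaled 0-1.
sessions = [
    {"vocal_flatness": 0.2, "speech_rate_drop": 0.1, "reported_mood": 0.3},
    {"vocal_flatness": 0.5, "speech_rate_drop": 0.4, "reported_mood": 0.6},
    {"vocal_flatness": 0.8, "speech_rate_drop": 0.7, "reported_mood": 0.9},
]

def risk_score(indicators: dict) -> float:
    """Average the indicators into a single 0-1 score (placeholder model)."""
    return sum(indicators.values()) / len(indicators)

history = []
for i, s in enumerate(sessions, start=1):
    score = risk_score(s)
    history.append(score)
    trend = score - history[0]        # change relative to baseline session
    print(f"session {i}: score={score:.2f}, change from baseline={trend:+.2f}")
    if score >= ALERT_THRESHOLD:
        print("  -> alert clinician for follow-up assessment")
```

Note that the tool only surfaces a signal; as the article describes, the clinical judgment stays with the provider.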

AI has also been used to enhance the well-being of patients dealing with trauma, stress, addiction, or other psychiatric disorders. Like suicide prevention chatbots, therapeutic chatbots respond to speech or text to assess a person's wellbeing and provide strategies for coping with day-to-day problems. One example is Woebot, a mental health and self-care tool developed at Stanford University. The tool was intended to address the difficulty of finding and connecting with therapists to access mental healthcare. The result is an application that provides mental health care using a mixture of Interpersonal Psychotherapy (IPT), Dialectical Behavioral Therapy (DBT), and Cognitive Behavioral Therapy (CBT), coupled with AI and natural language processing (NLP) to provide evidence-based mental health support.
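A toy sketch of how a text-based support tool can pair simple natural language processing (here, bare keyword matching) with CBT-style reframing prompts appears below. This illustrates the general approach only; it is not Woebot's actual model or clinical content.

```python
# Toy sketch of a text-based support chatbot pairing simple NLP (keyword
# matching) with CBT-style reframing prompts. Not Woebot's actual system;
# the keywords and prompts are invented for illustration.
RESPONSES = {
    "overwhelmed": "It sounds like a lot is on your plate. What is one small "
                   "task you could set aside for now?",
    "always": "You said 'always'. That can be an all-or-nothing thought. "
              "Can you recall one time it went differently?",
    "anxious": "Thanks for sharing that. Try naming the specific worry: "
               "what exactly feels uncertain right now?",
}

def reply(message: str) -> str:
    """Return the prompt for the first matching keyword, else a fallback."""
    text = message.lower()
    for keyword, prompt in RESPONSES.items():
        if keyword in text:
            return prompt
    return "Tell me more about how that felt."

print(reply("I always mess things up and I feel overwhelmed"))
```

Production systems use statistical language models rather than keyword rules, but the loop is the same: classify what the user expressed, then return a therapeutic technique matched to it.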

Digital Therapeutics in the State Budget

The Ohio Department of Mental Health and Addiction Services (OhioMHAS) has a line item in its proposed budget dedicated to digital therapeutics. The funding would require OhioMHAS to design a pilot program to evaluate the effectiveness of prescription digital therapeutics. Digital therapeutics are software-driven medical interventions that diagnose and treat patients through digital means. Their major advantage is that they can cover gaps in behavioral health services where there are provider shortages. Recent research from Community Solutions highlights the difficulties Ohioans face in finding behavioral health providers while covered by private insurance.

In the state-funded pilot, the populations treated would be individuals suffering from addiction or other behavioral health disorders. The current version of the state budget identifies two software companies to be utilized in the pilot. Below is a closer look at both.

Pear Therapeutics: Pear Therapeutics was the first company to receive FDA authorization for software designed to treat human disease. Such products are called prescription digital therapeutics (PDTs). PDTs are regulated by the FDA and provide evidence-based, software-driven treatment. Pear Therapeutics has two FDA-authorized PDTs:

  1. reSET: reSET treats substance use disorder (SUD) using CBT. reSET is used on a 12-week (90-day), prescription-only treatment plan. The plan is for individuals who abuse multiple substances, who do not abuse alcohol exclusively, who are not on opioid replacement therapy, and whose predominant substance of abuse is not opioids. reSET is used only under prescription from a patient's medical provider.
  2. reSET-O: reSET-O is software that treats opioid use disorder (OUD) in outpatient treatment using CBT. reSET-O combines contingency management with the medication buprenorphine for patients 18 and older who are supervised by a clinician. reSET-O is used only under prescription from a patient's medical provider.

Pear Therapeutics has licensed an artificial-intelligence keystroke detection algorithm from a company called KeyWise. KeyWise produces digital biomarkers from smartphone keyboard interactions, which are used alongside natural language processing to track individual health measures in the application of PDTs. These tools are used to treat a variety of behavioral health disorders, including depression, substance use disorder, and opioid use disorder.
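The keystroke-dynamics idea behind such digital biomarkers can be sketched simply: derive timing features from key press and release events. The events below are hypothetical and this is not KeyWise's actual algorithm; dwell and flight times are, however, standard features in keystroke-dynamics research.

```python
# Hedged sketch of keystroke dynamics: deriving simple timing features
# ("digital biomarkers") from key press/release events. The events are
# hypothetical; this is not KeyWise's actual algorithm.
# Each event: (key, press_time_ms, release_time_ms)
events = [("h", 0, 95), ("e", 180, 260), ("l", 390, 470), ("p", 640, 730)]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print(f"mean dwell:  {sum(dwell) / len(dwell):.1f} ms")
print(f"mean flight: {sum(flight) / len(flight):.1f} ms")
# A downstream model might track shifts in these averages over time as one
# signal among many; slowed typing has been studied as a possible marker
# of mood change.
```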

Orexo Digital Therapeutics: Orexo is a company that provides digital therapies and pharmaceutical treatments to fill gaps in service for individuals dealing with substance use disorder and mental illness. Orexo also provides services that address opioid addiction using PDTs. Orexo has two PDTs authorized by the FDA:

  1. MODIA: MODIA is a PDT used to treat OUD. MODIA can be used via browser on a smartphone, computer, or tablet. MODIA is for adults receiving medication-assisted treatment for OUD and is meant to be used in combination with the medication and counseling a patient receives. Based on the answers a patient gives, the software recognizes patterns over time and shapes the treatment process around how the patient responds to queries (an illustrative sketch of this adaptive pattern follows the list). MODIA has an 80-day use cycle.
  2. deprexis: deprexis is a 12-week online program used to treat depression. Like MODIA, the service is accessed via the web. The deprexis treatment process centers on counseling techniques personalized to the patient, and it is used in combination with other prescription treatments to reduce depression in patients.

Orexo's web-based services provide unique advantages for behavioral health treatment because they offer 24/7 access from any internet-connected device. Orexo uses artificial intelligence to tailor service delivery to each patient's specific therapy needs.
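The adaptive pattern noted in the MODIA entry above can be illustrated with a small sketch that picks the next therapy module based on a patient's recent responses. The modules, ratings, and selection rule are invented for the example; this is not MODIA's actual logic.

```python
# Illustrative sketch of adaptive content selection: choose the next therapy
# module from a patient's recent responses. Modules, scoring, and the rule
# are hypothetical placeholders, not MODIA's actual logic.
MODULES = {"craving_skills": 0, "relapse_planning": 0, "motivation": 0}

def next_module(responses: dict) -> str:
    """Pick the module matching the patient's weakest self-rated area (1-5)."""
    weakest = min(responses, key=responses.get)
    MODULES[weakest] += 1             # record how often each area is targeted
    return weakest

# Hypothetical weekly check-in answers (1 = struggling, 5 = confident).
week1 = {"craving_skills": 2, "relapse_planning": 4, "motivation": 3}
week2 = {"craving_skills": 4, "relapse_planning": 2, "motivation": 3}

print("week 1 ->", next_module(week1))   # targets craving skills
print("week 2 ->", next_module(week2))   # adapts: targets relapse planning
```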

AI criticisms and challenges

Naturally, the innovations provided by AI do not come without pitfalls. It is important to recognize that, even though this technology is new and innovative, its effectiveness may vary based on the individual and the type of application used. There is a significant difference between the commercial use of AI to complete general tasks and its application in providing clinical treatment.

Criticisms of artificial intelligence are plentiful, and people across disciplines and professional circles have voiced opinions on the uses of this technology. In the context of non-prescriptive behavioral health services, critics say that AI is limited, amounting to no more than an interactive tool for guided care. Hannah Zeavin, a professor at the Luddy School of Informatics at Indiana University, stated that “We know we can feel better from writing in a diary or talking aloud to ourselves or texting with a machine. That is not therapy…not all help is help”. Rosalind Picard, director of MIT's Affective Computing Research Group, also notes that many people could find longer conversations empty and superficial given how limited AI systems are in responding to a series of inputs. The engagement and connection could be missing for some individuals who seek the kind of aid only a clinician can provide.

Cultural biases: AI is still built by humans

Other criticisms address the cultural biases of the chatbots. Aniket Bera, a professor of computer science at Purdue University, noted that much of the data used in chatbot therapy is heavily weighted toward white males. Cultural bias within AI is not surprising, especially because bias is already pervasive in behavioral health systems. Community Solutions has published research on biases in mental health systems, highlighting the ways systemic inequalities hinder the efficacy of behavioral health treatment. These biases can be especially detrimental to multi-system youth who struggle with undiagnosed mental, social, or emotional disabilities; they also lead to inadequate treatment for youth of color and create disparities in the mental health pipeline, hindering service and prevention. AI could become part of this detrimental cycle if these tools are not created and deployed with a culturally competent design for the communities that need them most.

Regulations and monitoring

Some are also concerned about the rapid scaling of the technology, arguing that it poses a significant threat to humanity if it is not ethically monitored. Recently, an open letter petition began circulating online calling for an immediate pause on AI experiments. The petition has collected over 22,000 signatures, some of the most notable being Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp, and Elon Musk. The letter's authors argue that AI developers must work with policymakers to accelerate governance systems that regulate and oversee AI projects.

Broadly speaking, AI risks perpetuating misinformation and socioeconomic inequality. A 2018 report on artificial intelligence offers a comprehensive assessment of AI's potential for malicious use, and a significant portion of it highlights how implicit bias affects the ways algorithms are trained and developed. In 2015, a Carnegie Mellon University study found that Google's ad-serving system showed advertisements for higher-paying jobs to men more often than to women. Olga Akselrod of the ACLU has also written about how bias in AI tools could perpetuate discrimination in housing, hiring, and financial services.

What can be done to improve AI?

AI has seen significant use in mental and behavioral health. As these technologies become more sophisticated, ethical safeguards should ensure that the wellbeing of marginalized populations is protected when they receive services. Algorithms created for behavioral health purposes should be revised and tested regularly to ensure their impact is equitable and beneficial to patients; a minimal sketch of one such check appears below. The entire process of creating an AI model should take equity into account. Doing so will create better safeguards for health and human services and improve behavioral health treatment for marginalized populations.
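One concrete form such regular testing could take is an audit comparing a model's alert rates across demographic groups. The data, groups, and 20-point threshold below are hypothetical assumptions; a real audit would also use validated clinical outcomes.

```python
# Hedged sketch of a recurring equity check: compare a model's alert rates
# across demographic groups. The audit log and threshold are hypothetical.
from collections import defaultdict

# Hypothetical audit log: (group, model_flagged_patient)
audit_log = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, flags = defaultdict(int), defaultdict(int)
for group, flagged in audit_log:
    totals[group] += 1
    flags[group] += flagged

rates = {g: flags[g] / totals[g] for g in totals}
for g, r in rates.items():
    print(f"{g}: flag rate {r:.0%}")

# Simple disparate-impact style check: flag the model for review if one
# group's rate differs sharply from another's.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("disparity exceeds 20 points -> review model and training data")
```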

[1] Lin et al. (2018). Mayo Clinic Proceedings. Retrieved from https://www.mayoclinicproceedings.org/article/S0025-6196%2818%2930142-3/pdf
