AI Chatbots in Mental Health: Earkick's Unique Approach and Regulatory Challenges
AI chatbots like"Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles". Earkick, an AI chatbot, offers support for mental health issues with its comforting and sympathetic approach. The debate over whether these chatbots provide a mental health service or act as self-help tools is crucial for the digital health industry.
AI chatbots are gaining momentum in mental health care, particularly among Gen Z users seeking support for their mental health struggles. One such chatbot, Earkick, is making waves with its distinctive approach to providing comfort and guidance. When users download the app, they are greeted by a bandana-wearing panda reminiscent of a character from a children's cartoon. The app generates sympathetic statements and offers techniques for managing anxiety, such as guided breathing exercises and reframing negative thoughts.
However, Earkick's co-founder, Karin Andrea Stephan, is quick to clarify that while their app may be seen as a form of therapy, they do not actively promote it as such. Stephan, a former professional musician and self-described serial entrepreneur, explains that they are cautious about labeling their app as therapy and prefer to avoid any misrepresentation.
The question of whether AI chatbots like Earkick are providing a mental health service or simply acting as a self-help tool is crucial for the emerging digital health industry and its survival. These chatbots, including Earkick, are part of a growing number of free apps that aim to address the mental health crisis among teens and young adults. Since these apps do not claim to diagnose or treat medical conditions, they are not regulated by the Food and Drug Administration (FDA). However, the lack of regulation raises concerns about their effectiveness and safety.
While chatbots offer the advantage of being free, accessible 24/7, and devoid of the stigma associated with traditional therapy, there is limited data to support their efficacy in improving mental health. None of the leading chatbot companies have undergone FDA approval to demonstrate their effectiveness in treating conditions like depression, although some have initiated the voluntary approval process.
Vaile Wright, a psychologist and technology director with the American Psychological Association, highlights the absence of a regulatory body overseeing these chatbots, leaving consumers uncertain about their effectiveness. However, Wright believes that while chatbots cannot replace traditional therapy, they may be beneficial for individuals with less severe mental and emotional problems.
Earkick's website explicitly states that the app does not provide any form of medical care, opinion, diagnosis, or treatment. However, some health lawyers argue that such disclaimers may not be sufficient to address concerns about the use of these apps for mental health services. Glenn Cohen of Harvard Law School suggests that apps should include more direct disclaimers, clearly stating that they are solely for entertainment purposes.
Despite the ongoing debate surrounding their effectiveness, chatbots are already playing a role in addressing the shortage of mental health professionals. The UK's National Health Service has introduced a chatbot called Wysa to help adults and teens, including those awaiting therapy, manage stress, anxiety, and depression. Some US insurers, universities, and hospital chains offer similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, notes that patients are often receptive to trying a chatbot when faced with long waiting lists for therapy. Virtua Health, Skrzynski's employer, has implemented a password-protected app called Woebot for select adult patients, recognising the challenge of hiring and training enough therapists to meet the demand. Patients tend to use Woebot for about seven minutes per day, primarily during the early morning hours.
Woebot, founded in 2017 by a Stanford-trained psychologist, takes a different approach compared to Earkick and other chatbots. Instead of using large language models, Woebot relies on structured scripts written by company staff and researchers. This rules-based approach is considered safer for healthcare use, as generative AI chatbots have been known to provide inaccurate information. While Woebot offers apps for various demographics, including adolescents, adults, and individuals with substance use disorders or postpartum depression, none of their apps have FDA approval.
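To make the distinction concrete, a rules-based chatbot of the kind described here matches user input against predefined patterns and returns responses written in advance by people, rather than generating new text the way a large language model does. The minimal Python sketch below is purely illustrative; the keywords and replies are invented for this example and do not reflect Woebot's actual scripts or implementation.

```python
# Illustrative rules-based chatbot: input is matched against predefined
# keyword sets and answered with pre-written replies, so the program never
# generates text on its own. All content here is hypothetical.

RULES = [
    ({"anxious", "anxiety", "nervous"},
     "It sounds like you're feeling anxious. Would you like to try a short breathing exercise?"),
    ({"sad", "down", "low"},
     "I'm sorry you're feeling low. Can you tell me a bit more about what's on your mind?"),
]

DEFAULT = "Thanks for sharing. Could you say a little more about how you're feeling?"

def respond(message: str) -> str:
    words = set(message.lower().split())
    for keywords, reply in RULES:
        if words & keywords:   # any keyword match triggers the scripted reply
            return reply
    return DEFAULT             # fall back to a neutral scripted prompt

if __name__ == "__main__":
    print(respond("I feel really anxious today"))
```

Because every possible reply is authored and reviewed ahead of time, this style of system trades flexibility for predictability, which is why it is often considered safer for healthcare settings than open-ended generative models.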
A comprehensive review of AI chatbots conducted last year found that while chatbots could significantly reduce symptoms of depression and distress in the short term, there is limited evidence regarding their long-term effects or overall impact on mental health. Concerns have also been raised about the ability of chatbots to recognise suicidal thinking and emergency situations. While Woebot and other apps provide contact information for crisis hotlines and resources, there is a need for further research and regulation to ensure the safety and effectiveness of these tools.
As the debate continues, some experts, like Ross Koppel from the University of Pennsylvania, advocate for FDA regulation of chatbots, potentially based on a sliding scale of potential risks. Currently, the FDA primarily focuses on regulating AI in medical devices and software used by healthcare professionals, rather than consumer-facing products.
AI chatbots like"Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles"
Earkick, an AI chatbot, offers support for mental health issues with its comforting and sympathetic approach.
The debate over whether these chatbots provide a mental health service or act as self-help tools is crucial for the digital health industry.
Source: AP NEWS