The Dark Side of AI in Mental Health

— High demand for AI training data may increase unethical practices in collecting patient data

by Michael DePeau-Wilson, Enterprise & Investigative Writer, MedPage Today, April 11, 2024

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company’s advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions “to better understand the format, topics, and treatment associated with modern mental healthcare.”

Their stated goal was “to ultimately provide mental healthcare to more people at a lower cost,” according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.

The website stated that the company was committed to providing “top-quality therapy services” for individuals, and that the recordings would be used by its research team “to learn more about approaches to mental healthcare.”

There were no further details about how the company planned to use those recordings, and it did not respond to requests from MedPage Today to clarify its business model.

However, experts suggested this is just one example of an unexpected incentive created by the growth of AI in mental healthcare.

John Torous, MD, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston, told MedPage Today that misuse of patient data related to AI models is an “extremely legitimate concern,” because large language models are only as good as their training data.

“Their chief weakness is they need vast amounts of data to truly be good,” Torous said. “Whoever has the best data will likely have the most practical or — dare I say — the best model.”

He added that high-quality patient data is likely going to be the limiting resource for developing AI-powered tools related to mental healthcare, which will increase the demand and, therefore, the value of this kind of data.

“This is the oil that’s going to power healthcare AI,” Torous added.

“They need to have millions, if not billions, of examples to train on,” he added. “This is gonna become a bigger and bigger trend.”

Torous highlighted that mental healthcare technology companies have already been caught crossing this line with unethical use of patient-facing AI tools.

For example, in early 2023, a nonprofit mental health platform announced that it used OpenAI’s GPT-3 to experiment with online mental health counseling for roughly 4,000 people without their informed consent. The announcement, which came from CEO Rob Morris’ X (formerly Twitter) account, highlighted the lack of understanding around ethical concerns related to patient consent from these companies, Torous said.

Another example, he noted, came when users of the text message-based mental health support tool Crisis Text Line learned that the company was sharing data with a former AI sister company called Loris.ai. Eventually, the company ended the relationship after substantial backlash from its users.

While concerns around patient data persist, there are also notable clinical implications for patient care and safety, according to Jacob Ballon, MD, MPH, of Stanford University in California.

“I would not want someone to do AI therapy on its own,” he told MedPage Today, adding that people seek out psychotherapy to help with complex, sometimes life-threatening, mental health conditions. “These are serious things that people are dealing with and to leave that to an unregulated, unmonitored chatbot is irresponsible and ultimately dangerous.”

Ballon added that he doesn’t think AI models are capable of producing the nuanced expertise needed to help individual patients address their unique mental health concerns. Even if a company could train its AI chatbot on enough high-quality patient data, it would not be able to appreciate the complexity of each patient, he noted.

Despite those concerns, Torous thinks there will be growth in companies attempting to train AI models on patient data, whether it is collected ethically or not.

“There’s probably going to be this whole world where I wonder if patients are going to be pressured or cajoled or convinced to give up their [personal health data],” he said, predicting that the market for patient mental health data will only continue to grow in the coming years.

Michael DePeau-Wilson is a reporter on MedPage Today’s enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.