Mental health care can be difficult to access in the U.S. Insurance coverage is spotty and there aren’t enough mental health professionals to cover the nation’s need, leading to long waits and costly care.
Enter artificial intelligence (AI).
AI mental health apps, ranging from mood trackers to chatbots that mimic human therapists, are proliferating on the market. While they may offer a cheap and accessible way to fill the gaps in our system, there are ethical concerns about overreliance on AI for mental health care — especially for children.
Most AI mental health apps are unregulated and designed for adults, but there’s a growing conversation about using them with children. Bryanna Moore, PhD, assistant professor of Health Humanities and Bioethics at the University of Rochester Medical Center (URMC), wants to make sure these conversations include ethical considerations.
“No one is talking about what is different about kids — how their minds work, how they’re embedded within their family unit, how their decision making is different,” says Moore, who shared these concerns in a recent commentary in the Journal of Pediatrics. “Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults.”
In fact, AI mental health chatbots could impair children’s social development. Evidence shows that children believe robots have “moral standing and mental life,” which raises concerns that children — especially young ones — could become attached to chatbots at the expense of building healthy relationships with people.
A child’s social context — their relationships with family and peers — is integral to their mental health. That’s why pediatric therapists don’t treat children in isolation. They observe a child’s family and social relationships to ensure the child’s safety and to include family members in the therapeutic process. AI chatbots don’t have access to this important contextual information and can miss opportunities to intervene when a child is in danger.
AI chatbots — and AI systems in general — also tend to worsen existing health inequities.
“AI is only as good as the data it’s trained on. To build a system that works for everyone, you need to use data that represents everyone,” said commentary coauthor Jonathan Herington, PhD, assistant professor in the departments of Philosophy and of Health Humanities and Bioethics. “Unfortunately, without really careful efforts to build representative datasets, these AI chatbots won’t be able to serve everyone.”
A child’s gender, race, ethnicity, where they live, and their family’s relative wealth all impact their risk of experiencing adverse childhood events, like abuse, neglect, incarceration of a loved one, or witnessing violence, substance abuse, or mental illness in the home or community. Children who experience these events are more likely to need intensive mental health care and are less likely to be able to access it.
“Children of lesser means may be unable to afford human-to-human therapy and thus come to rely on these AI chatbots in place of human-to-human therapy,” said Herington. “AI chatbots may become valuable tools but should never replace human therapy.”
Most AI therapy chatbots are not currently regulated. The U.S. Food and Drug Administration has only approved one AI-based mental health app to treat major depression in adults. Without regulation, there is no mechanism to safeguard against misuse, ensure that harms are reported, or address inequities in training data or user access.
“There are so many open questions that haven’t been answered or clearly articulated,” said Moore. “We’re not advocating for this technology to be nixed. We’re not saying get rid of AI or therapy bots. We’re saying we need to be thoughtful in how we use them, particularly when it comes to a population like children and their mental health care.”
Moore and Herington partnered with Serife Tekin, PhD, associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University, on this commentary. Tekin studies the philosophy of psychiatry and cognitive science and the bioethics of using AI in medicine.
Going forward, the team hopes to partner with developers to better understand how they develop AI-based therapy chatbots. Particularly, they want to know whether and how developers incorporate ethical or safety considerations into the development process and to what extent their AI models are informed by research and engagement with children, adolescents, parents, pediatricians, or therapists.