Using AI to improve psychiatric care

A new paper in Schizophrenia Bulletin suggests that artificial intelligence (AI) can help doctors assess patients' mental health quickly and accurately. The researchers have developed a machine-learning-based mobile app designed to improve patient monitoring using cues picked up from patients' speech.

Image Credit: Ktsdesign / Shutterstock

The tool could be especially useful for the nearly 20% of Americans with mental illness who live in remote locations, for whom traveling to see a psychiatrist or psychologist is a major challenge. It could also be a boon for those who cannot get in to see a doctor because they are too poor, too busy, or face other barriers to care.

And it’s not just distance or poverty. Most therapists base their management of individual patients on what they hear their patients say, yet many studies have shown that human listening is both highly subjective and unreliable. According to researcher Brita Elvevåg, a specialist in cognitive neuroscience (the study of the brain's role in thinking and related processes), “Humans are not perfect. They can get distracted and sometimes miss out on subtle speech cues and warning signs. Unfortunately, there is no objective blood test for mental health.”

Computers are different: they do exactly what they are programmed to do, come hell or high water. While this makes for boring people, it makes for consistent, accurate measurement. In search of a possible solution, the researchers turned to machine learning to develop a program that could ‘learn’ how healthy people talk and, from that foundation, pick up small clues that reveal how a patient is doing on any given day. Language, they say, is a critical tool for detecting the mental state of patients, and day-to-day changes in language can be picked up with this simple tool to monitor patients remotely.
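As a rough illustration of what "tracking day-to-day language change" could mean in practice, the sketch below compares a day's speech transcript against a patient's own earlier sample using cosine similarity over word counts. This is a simplified, hypothetical stand-in for the study's actual models, which the article does not describe at this level of detail; the transcripts and the 0.3 threshold are invented for illustration.

    from collections import Counter
    import math

    def word_vector(transcript: str) -> Counter:
        """Crude bag-of-words representation of a speech transcript."""
        return Counter(transcript.lower().split())

    def cosine_similarity(a: Counter, b: Counter) -> float:
        """Cosine similarity of two word-count vectors (0 = disjoint, 1 = identical)."""
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Hypothetical daily samples from one patient.
    baseline = word_vector("I walked to the shop and chatted with my neighbour about her garden")
    today = word_vector("the radio the voices in the radio keep telling telling me things")

    # A sharp drop in similarity to the patient's own baseline could flag
    # a change worth a clinician's attention (the threshold is arbitrary).
    if cosine_similarity(baseline, today) < 0.3:
        print("Language pattern has shifted; consider clinical follow-up.")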

Some such hints could come from the disordered sentence patterns typical of schizophrenia. If a patient with bipolar disorder talks too loudly or too fast, it could signal mania; the opposite in the same patient could mean that depression is setting in.

And if a patient rambles disconnectedly and has trouble remembering things, the doctor would consider dementia as well as other mental illness possibilities.
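To make such cues concrete, here is a minimal sketch of how measured speech rate and loudness might be turned into screening flags against a patient's own baseline. Every threshold and feature name here is an invented placeholder, not a clinical value or anything taken from the paper.

    def screen_speech_cues(words_per_minute: float, loudness_db: float,
                           baseline_wpm: float, baseline_db: float) -> list[str]:
        """Toy rules comparing today's speech with a patient's own baseline.

        All thresholds are arbitrary placeholders, not clinical cut-offs.
        """
        flags = []
        if words_per_minute > 1.5 * baseline_wpm or loudness_db > baseline_db + 10:
            flags.append("unusually fast/loud speech: possible mania")
        if words_per_minute < 0.6 * baseline_wpm and loudness_db < baseline_db - 10:
            flags.append("unusually slow/quiet speech: possible depression")
        return flags

    print(screen_speech_cues(words_per_minute=210, loudness_db=78,
                             baseline_wpm=130, baseline_db=65))
    # ['unusually fast/loud speech: possible mania']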

The app

The new AI-based app asks patients a short series of questions, which they answer by talking into their phone. A session takes just 5-10 minutes but gives the software crucial information. The tasks include questions about how the patient feels, a request to tell a short story, a story to listen to and then repeat back, and finally some touch-and-swipe tasks that test motor skills.
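The article describes the test battery only at a high level. Purely as an illustration, such a session could be represented as an ordered task list like the one below; the task names, fields, and time limits are assumptions, not the app's actual design.

    from dataclasses import dataclass

    @dataclass
    class Task:
        kind: str         # "speech", "listen_repeat", or "motor"
        prompt: str
        max_seconds: int  # recording/interaction time limit

    # Hypothetical session mirroring the tasks described in the article.
    SESSION = [
        Task("speech", "How are you feeling today?", 60),
        Task("speech", "Please tell me a short story.", 120),
        Task("listen_repeat", "Listen to this story, then repeat it back.", 120),
        Task("motor", "Follow the on-screen touch-and-swipe pattern.", 90),
    ]

    total = sum(t.max_seconds for t in SESSION)
    print(f"{len(SESSION)} tasks, about {total / 60:.1f} minutes of interaction at most.")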

The AI system evaluates the speech samples, instantly compares them with each patient's previous samples and with the population at large, and arrives at a rating that indicates the patient's mental health.
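One plausible way to combine the two comparisons, against the patient's own history and against the population, is to standardize each speech feature against both references and fold the deviations into a single score. The sketch below does this with simple z-scores; the features, the averaging, and the 0-100 scale are all assumptions for illustration, not the study's actual method.

    import statistics

    def health_rating(today: dict[str, float],
                      patient_history: dict[str, list[float]],
                      population: dict[str, tuple[float, float]]) -> float:
        """Toy composite rating: 100 = typical, lower = larger deviation.

        `population` maps each feature to a (mean, standard deviation) pair.
        Purely illustrative; not the study's model.
        """
        deviations = []
        for feature, value in today.items():
            history = patient_history[feature]
            z_self = abs(value - statistics.mean(history)) / (statistics.stdev(history) or 1.0)
            pop_mean, pop_std = population[feature]
            z_pop = abs(value - pop_mean) / (pop_std or 1.0)
            deviations.append((z_self + z_pop) / 2)
        return max(0.0, 100.0 - 20.0 * statistics.mean(deviations))  # arbitrary scaling

    rating = health_rating(
        today={"words_per_minute": 160.0, "coherence": 0.65},
        patient_history={"words_per_minute": [130.0, 125.0, 140.0],
                         "coherence": [0.80, 0.75, 0.82]},
        population={"words_per_minute": (140.0, 25.0), "coherence": (0.78, 0.08)},
    )
    print(f"Mental-health rating: {rating:.0f}/100")  # a mid-range score flags a review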

The app was recently tested with 225 participants, who were simultaneously assessed by clinicians. Half had severe psychiatric illness; the other half were healthy volunteers. Participants came from two locations, rural Louisiana and northern Norway, and the results from both arms were compared. The app's assessments proved as accurate as those made by the human clinicians.

Fellow researcher Peter Foltz says, “We are not in any way trying to replace clinicians. But we do believe we can create tools that will allow them to better monitor their patients.” The aim is to allow remote monitoring of very ill patients, and to provide a backup and an additional diagnostic resource for patients who are seen face-to-face by therapists.

In the first case, the app could alert the patient's doctor to an abnormal change, cutting down on unnecessary clinic visits without letting a serious deterioration slip by unnoticed. Such acute deteriorations not only put the patient and others in danger but also cost the health system dearly, and there is a shortage of trained professionals to conduct interviews as often as would be needed to prevent them. The app is designed to meet this dilemma head-on by providing a reliable alternative.

Foltz already has experience with AI: he was part of the commercial development of a widely used AI-based essay-grading system.

The researchers say larger studies must be done to prove that the new system is effective and trustworthy. Most importantly, public trust in the mobile app is critical before it can be introduced into medical practice.

AI is mysterious to most laypeople, and even to most doctors, who are trained in a different field and find it hard to understand how an AI-based technology actually works. Doctors therefore need to work with AI experts to set up a framework under which AI-based tools in psychiatry can be reviewed rigorously but fairly. The researchers identify three issues as crucial to this discussion:

  • Explainability of the process
  • Transparency of its functioning
  • Generalizability of its findings

Such a framework would help doctors discuss the benefits of this technology and how it can help achieve even better patient care, free of human fallibility. The whole point, according to the researchers, is this: “Rather than looking for machine learning models to become the ultimate decision-maker in medicine, we should leverage the things that machines do well that are distinct from what humans do well.” In the end, humans and machines can team up to offer patients unique, high-quality care.

Journal reference:

Chandler, C., Foltz, P. W., & Elvevåg, B. (2019). Using Machine Learning in Psychiatry: The Need to Establish a Framework That Nurtures Trustworthiness. Schizophrenia Bulletin, sbz105. https://doi.org/10.1093/schbul/sbz105

Written by

Dr. Liji Thomas

Dr. Liji Thomas is an OB-GYN who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. She practiced as a full-time consultant in obstetrics/gynecology in a private hospital for several years following her graduation. She has counseled hundreds of patients facing issues ranging from pregnancy-related problems to infertility, and has been in charge of over 2,000 deliveries, always striving for a normal delivery rather than an operative one.

