AI Takes on the Challenges of Deep Learning and Psychiatric Diagnosis

Deep learning uncovers what the patient said, not what the clinician heard

It is equally dangerous at either extreme — to have either an expanding concept of mental disorder that eliminates normal or to have an expanding concept of normal that eliminates mental disorder. ― Allen Frances, MD

Psychiatry is a relatively new medical specialty, recognized alongside other specialties only in the mid-19th century, and for decades it was practiced almost exclusively in psychiatric hospitals.

As was true of much of medicine at the time, treatment was rudimentary, often harsh, and generally ineffective. Psychiatrists did not treat outpatients, i.e., anyone who functioned even minimally in everyday society. Instead, neurologists treated “nervous” conditions, named for their presumed origin in disordered nerves.

Gradually, with Freud’s promotion of psychoanalysis as a fashionable medical intervention for the ills of wealthy patients, the treatments changed. Less-well-to-do patients were still subjected to caging, tubbing, bloodletting, extreme confinement, and other questionable interventions, both new and old.

Image: Wikipedia.org

From Bloodletting to the Couch to What?

What could possibly motivate so many physicians to declare such benefits of bleeding if it was a useless procedure? Does not this tried and tested practice, persisting doggedly for most of two millennia, actually comprise the longest clinical trial in medical history, involving thousands of physicians and millions of patients?

The same questions would seem appropriate today regarding therapeutic techniques and their efficacy.

One comic take on analysis is in Woody Allen’s film, Annie Hall, where he says, “I was in analysis. I was suicidal. As a matter of fact, I would have killed myself, but I was in analysis with a strict Freudian, and if you kill yourself, they make you pay for the sessions you miss.”

Truly psychotic patients were still chained to hospital walls and shuttered in back wards, but the couch and conversation soon became the medical “instrument” for treatment.

The diagnostic method, in an age of advancing medical instrumentation, remained conversation with the patient, with questionable results. Inaccuracy in diagnosis arose not merely from patients' inability to communicate adequately, but also from inferences made by the clinician. Poor interviewing skills were another sore point in psychiatric diagnosis.

The need for more accurate diagnostic assessment was evident, as was the need for objective screening measures. But a diagnosis, as determined by the DSM (Diagnostic and Statistical Manual of Mental Disorders), could still be a hit-or-miss proposition: despite the manual's guidelines, conversation remained the tool that could prove its undoing.

The situation was summed up by Tom Insel, MD, co-founder of Mindstrong Health and director of the NIMH from 2002 to 2015: “The way we do diagnosis today is really pretty limited. It’s a little bit like trying to diagnose heart disease without using any of the modern instruments, like an EKG, cardiac scans, blood lipids, and everything else.”

Now the advent of artificial intelligence may provide better guidelines and more accurate diagnoses through its ability to mine verbal data from psychiatric consultations as well as streaming data from smartphones.

Photo by James Harrison on Unsplash

AI’s Deep Learning Meets Psychiatric Diagnosis

Computational psychiatry may prove to be the bridge between conversation and accurate diagnostic tools for psychiatric diagnosis. The problem is that there are currently no biomarkers that would provide data for psychiatric disorders with neurologic underpinnings. Apps are filling the void.

As has previously been noted, mental health disorders are not simple; symptoms usually overlap across several disorders in a spectrum presentation. The challenge, therefore, is to discover how these groups of disorders present across a diverse population so that clinicians can make a diagnostic determination.

Perhaps what will be found is an incredibly fractured series of disorder models, each fitting only a few individuals. That would present an additional challenge: planning effective, valid treatment for those individuals. What looked like simplicity to prior generations has now been revealed to be complexity, and complexity is best handled by AI.

Psychiatry’s dependence on language has been found wanting, and new AI methods have begun to emerge to meet a serious need.

Evaluating patients’ verbal fluency by counting the number of unique words (e.g., animals) produced in a short period (e.g., 1–3 min) is one of the most widely employed cognitive tests in psychiatric research.

New technology, specifically automatic speech recognition and natural language processing, can derive new metrics on temporal dynamics and semantic relationships in verbal fluency responses.
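None of the cited studies publish their scoring code, but the idea is straightforward. The sketch below is a minimal illustration, assuming a word-level timestamped transcript from an ASR system and a caller-supplied word-embedding function (both hypothetical inputs, not part of any study's pipeline): it computes the classic unique-word count, the gaps between successive responses, and the semantic similarity of consecutive words.

```python
# Sketch: deriving fluency metrics from a timestamped ASR transcript.
# Assumes `words` is a list of (word, onset_seconds) pairs produced by a
# speech recognizer, and `embed` maps a word to a numpy vector (e.g., from
# a pre-trained word-embedding model). Neither comes from the cited studies.
import numpy as np

def fluency_metrics(words, embed):
    tokens = [w.lower() for w, _ in words]
    onsets = np.array([t for _, t in words])

    # Classic score: number of unique words produced in the window.
    unique_count = len(set(tokens))

    # Temporal dynamics: gaps between successive responses.
    gaps = np.diff(onsets) if len(onsets) > 1 else np.array([])

    # Semantic relatedness: cosine similarity between consecutive words.
    sims = []
    for a, b in zip(tokens, tokens[1:]):
        va, vb = embed(a), embed(b)
        sims.append(float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))))

    return {
        "unique_words": unique_count,
        "mean_gap_s": float(gaps.mean()) if gaps.size else None,
        "mean_semantic_similarity": float(np.mean(sims)) if sims else None,
    }
```

The same transcript that a clinician would score by hand thus yields timing and semantic measures that are hard to judge by ear.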

Seeking such information is the path researchers have embarked upon, and a few deep learning and artificial intelligence methods may prove useful.

Photo by Youssef Sarhan on Unsplash

The Current Programs Being Tested

One system in a small study of community-based psychiatric patients used an interactive voice app called MyCoachConnect to sample patient activities. Its primary purpose was to allow patients to check in with their therapists and provide a sense of support.

Two other apps, mindLAMP and BiAffect, were used either as-is or with minor customization. The latter is a phone app that monitors mood and how it impacts cognition; the former is open source and stands for “Learn, Assess, Manage, Present.”

MindLAMP runs on a smartphone and can stream data from surveys, cognitive tests, GPS, exercise, medication side effects, and mood, and it is customizable by the psychiatrist and patient working together. Fifteen institutions around the world are using this open-source app in their research.
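mindLAMP's own API is not reproduced here; purely as a generic illustration of what "streaming data" becomes on the analysis side, the sketch below uses pandas to collapse hypothetical timestamped records into a per-day feature table. The column names are invented stand-ins for the kinds of streams described above.

```python
# Sketch: collapsing streamed smartphone records into a per-day feature table.
# Generic pandas code, not the mindLAMP API; streams and values are hypothetical.
import pandas as pd

records = pd.DataFrame(
    [
        {"timestamp": "2024-03-01 09:10", "stream": "mood_survey", "value": 6},
        {"timestamp": "2024-03-01 14:02", "stream": "steps", "value": 3400},
        {"timestamp": "2024-03-02 08:55", "stream": "mood_survey", "value": 4},
        {"timestamp": "2024-03-02 19:30", "stream": "steps", "value": 1200},
    ]
)
records["date"] = pd.to_datetime(records["timestamp"]).dt.date

# One row per day, one column per stream, averaged within the day.
daily = records.pivot_table(index="date", columns="stream", values="value", aggfunc="mean")
print(daily)
```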

BiAffect was used in a study to assess patients’ fluctuations in speech. The researchers believed that a clinician would not be sensitive enough to detect voice fluctuations, nor could they be quantified accurately by ear, so another approach had to be found.

A number of clinical observations suggest that reduced speech activity and changes in voice features such as pitch may be sensitive and valid measures of prodromal symptoms of depression and effect of treatment.
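The studies do not publish their feature-extraction code. Purely as an illustration, the sketch below shows how pitch and speech-activity features of this kind might be pulled from a recording with the open-source librosa library, which is an assumption here and not the toolchain named in the research.

```python
# Sketch: extracting pitch and speech-activity features from a voice recording.
# librosa is used for illustration only; it is not the pipeline described
# in the BiAffect or bipolar-disorder studies.
import numpy as np
import librosa

def voice_features(path):
    y, sr = librosa.load(path, sr=None, mono=True)

    # Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Proportion of voiced frames: a crude measure of speech activity.
    voiced_fraction = float(np.mean(voiced_flag))

    voiced_f0 = f0[~np.isnan(f0)]
    return {
        "mean_pitch_hz": float(voiced_f0.mean()) if voiced_f0.size else None,
        "pitch_variability_hz": float(voiced_f0.std()) if voiced_f0.size else None,
        "voiced_fraction": voiced_fraction,
    }
```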

In testing, these AI applications proved as accurate as clinicians at distinguishing people with schizophrenia from healthy individuals. In other studies, researchers used tools such as the Natural Language Toolkit and the Kaldi speech recognition toolkit to transcribe and score speech; a rough sketch of that kind of pipeline follows.
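The example below is illustrative only: it tokenizes placeholder transcripts with NLTK's regex-based tokenizer, converts them to TF-IDF features, and fits an ordinary scikit-learn classifier. The Kaldi transcription step is omitted, and the transcripts and labels are invented stand-ins, not data from any of the studies.

```python
# Sketch: classifying interview transcripts with NLTK tokenization and a
# simple scikit-learn model. Transcripts and labels are placeholders.
from nltk.tokenize import wordpunct_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "placeholder transcript from one interview",
    "placeholder transcript from another interview",
]
labels = [0, 1]  # e.g., 0 = healthy comparison group, 1 = patient group

model = make_pipeline(
    TfidfVectorizer(tokenizer=wordpunct_tokenize, token_pattern=None),
    LogisticRegression(max_iter=1000),
)
model.fit(transcripts, labels)
print(model.predict(["a new, unseen transcript"]))
```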

How can these new AI tools help? A prominent researcher in the field of neurocognitive disorders, Murali Doraiswamy, MD, believes AI will fill an unmet need: the shortage of psychiatrists and therapists in the US and, even more so, in poorer countries of the world.

“You can use, for example, a multiclustering algorithm if you have 45 different types of information on a person — it could be genomic, could be biomarkers, could be brain scans, could be metabolomics, the microbiome. No individual human can actually go through all this to try to identify patterns, it would be too complicated.”
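Doraiswamy does not name a specific algorithm. As one illustration of the idea, the sketch below uses scikit-learn with entirely hypothetical feature blocks: each data modality is standardized, the blocks are concatenated, and patients are clustered with k-means to surface putative subgroups.

```python
# Sketch: clustering patients on concatenated multimodal features.
# The feature blocks are random placeholders for the kinds of data named in
# the quote (genomics, biomarkers, imaging, metabolomics, microbiome).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 200
blocks = {
    "genomics": rng.normal(size=(n_patients, 20)),
    "biomarkers": rng.normal(size=(n_patients, 10)),
    "imaging": rng.normal(size=(n_patients, 15)),
}

# Standardize each block so no single modality dominates, then concatenate.
X = np.hstack([StandardScaler().fit_transform(b) for b in blocks.values()])

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(clusters))  # how many patients fall into each putative subgroup
```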

Language may still be the tool most useful in psychiatric evaluations. But it will now be assessed by algorithms written specifically to detect mental disorders as opposed to spotting typical concerns of living.

Further references:

Voice analysis as an objective state marker in bipolar disorder

Furthering the reliable and valid measurement of mental health screening, diagnoses, treatment, and outcomes through health information technology

Clinical state tracking in serious mental illness through computational analysis of speech

App uses voice analysis, AI to track wellness of people with mental illness

Deep learning in mental health outcome research: a scoping review
