“And how does that make you feel?”
Empathy is the foundation of therapeutic intervention. But how can you know if your therapist is or will be empathetic? Technology developed by researchers from USC, the University of Washington and the University of Utah can tell you.
Leveraging developments in automatic speech recognition, natural language processing and machine learning, researchers developed software to detect “high-empathy” or “low-empathy” speech by analyzing more than 1,000 therapist-patient sessions. The researchers designed a machine-learning algorithm that takes speech as its input to automatically generate an empathy score for each session.
The methodology is documented in an article titled “Rate My Therapist: Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing,” which appears in PLoS ONE. According to the authors, it is the first study of its kind to record therapy sessions and automatically rate their quality based on a single characteristic.
Some things don’t change
Currently, there are very few ways to assess the quality of a therapy session. In fact, according to the researchers, the methods for evaluating therapy have remained essentially unchanged for 70 years. Approaches that rely on third-party human evaluators are time-consuming and compromise the privacy of each session.
Instead, imagine a natural language processing app like Siri listening in for the right phrases and vocal qualities. The researchers are building on an emerging field in engineering and computer science called behavioral signal processing, which “utilizes computational methods to assist in human decision-making about behavioral phenomena.”
The authors taught their algorithm to recognize empathy via data from training sessions for therapists, specifically looking at therapeutic interactions with individuals coping with addiction and alcoholism. Using automatic speech recognition and machine learning-based models, the algorithm then automatically identified select phrases that would indicate whether a therapist demonstrated high or low empathy.
Key phrases such as “it sounds like,” “do you think” and “what I’m hearing” indicated high empathy, while phrases such as “next question,” “you need to” and “during the past” were flagged as low empathy by the computational model.
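The idea of scoring a transcript by its indicator phrases can be sketched in a few lines. The phrase lists below come from the article, but the scoring scheme, function name and threshold behavior are purely illustrative; the authors' actual system uses automatic speech recognition and trained machine-learning models, not simple phrase counting.

```python
# Toy sketch of phrase-based empathy scoring. The phrase lists are from the
# article; the counting scheme is an illustrative stand-in for the authors'
# machine-learning model, not their actual method.

HIGH_EMPATHY = ["it sounds like", "do you think", "what i'm hearing"]
LOW_EMPATHY = ["next question", "you need to", "during the past"]

def empathy_score(transcript: str) -> float:
    """Return a score in [-1, 1]: positive leans high-empathy, negative low."""
    text = transcript.lower()
    high = sum(text.count(p) for p in HIGH_EMPATHY)
    low = sum(text.count(p) for p in LOW_EMPATHY)
    total = high + low
    if total == 0:
        return 0.0  # no indicator phrases found
    return (high - low) / total

if __name__ == "__main__":
    session = ("It sounds like this week was hard. What I'm hearing is "
               "that you want to change. Do you think that's realistic?")
    print(empathy_score(session))  # three high-empathy hits, zero low -> 1.0
```

A real system would weight many more lexical features, learned from the 1,000-plus annotated sessions, rather than relying on a hand-picked list.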
“Technological advances in human behavioral signal processing and informatics promise to not only scale up and provide cost savings through automation of processes that are typically manual, but enable new insights by offering tools for discovery,” said Shrikanth Narayanan, the study’s senior author and professor at the USC Viterbi School of Engineering. “This particular study gets at a hidden mental state and what this shows is that computers can be trained to detect constructs like empathy using observational data.”
Narayanan’s team in the Signal Analysis and Interpretation Lab at USC Viterbi continues to develop more advanced models, giving the algorithm the capacity to analyze diction, tone of voice and the musicality of speech (prosody), as well as how one speaker’s cadence in conversation is echoed by the other’s (for example, when a person talks fast and the listener’s response mirrors that rhythm with quick speech).
In the near term, the researchers are hoping to use this tool to train aspiring therapists.
“Being able to assess the quality of psychotherapy is critical to ensuring that patients receive quality treatment,” said David Atkins, a University of Washington research professor of psychiatry.
“The sort of technology our team of engineers and psychologists is developing may offer one way to help providers get immediate feedback on what they are doing — and ultimately improve the effectiveness of mental health care,” said Zac Imel, a University of Utah professor of educational psychology and the study’s corresponding author.
In the long run, the team hopes to create software that provides real-time feedback or rates a therapy session on the spot. In addition, the researchers want to incorporate additional elements into their algorithm, including acoustic channels and the frequency with which a therapist or patient speaks.