If you’re looking for it, there is plenty of bad news in the tech world. From concerns about hacking and identity theft to a 2017 survey out of England that ranked Instagram as “worst for young people’s mental health” compared to four other social platforms, it can be enough to make you want to become a Luddite.

But the other side of the issue might be able to put a smile on your face: Tech companies and researchers are turning to AI and other software to try to solve just about any problem you can think of, from identifying fake news, to noticing if someone falls, to looking for ways to shorten the time an MRI scan takes.

Some companies are building software to help you change your thoughts for the better or even analyze a voice for signs of depression.

For example, Woebot is a cute chatbot app designed to be an on-call emotional helper.

It’s free, and looks like a texting app such as iMessage. The chipper ’bot asks questions like “Got a second to reflect on what you’re grateful for today?” It can also graph your moods over time, or teach you about “all-or-nothing thinking” statements like “Nobody likes me ever.”

Launched in 2017, the chatbot is clearly no human therapist; its responses are scripted, and it’s not going to fool anyone into thinking it’s a real person. But it isn’t designed to replace one. Instead, the point is to help people who may not need a mental health clinician, or those who don’t have access to one at the moment. It’s a resource for someone having a panic attack in the middle of the night, for example.
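
To get a feel for what “scripted” means here, consider a toy sketch of a rule-based bot. It is purely illustrative (Woebot’s actual code and scripts are its own): the bot matches keywords against a fixed list and replies with a canned line.

```python
# A toy illustration of a scripted chatbot: keyword rules mapped to
# canned responses. Illustrative only; not Woebot's actual code or content.
import re

SCRIPT = [
    (r"\b(nobody|no one|everyone|always|never)\b",
     "That sounds like all-or-nothing thinking. Is it really true 100 percent of the time?"),
    (r"\b(panic|anxious|scared)\b",
     "Let's slow down together. Can you name three things you can see right now?"),
    (r".*",  # fallback when nothing else matches
     "Got a second to reflect on what you're grateful for today?"),
]

def reply(message: str) -> str:
    # Walk the script in order and return the first canned response that matches.
    for pattern, response in SCRIPT:
        if re.search(pattern, message.lower()):
            return response

print(reply("Nobody likes me ever."))  # -> the all-or-nothing reframing line
```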

“It’s designed to be mental health for everyone,” says its creator, Alison Darcy, a psychologist and the founder of Woebot Labs. “We’re really trying to encourage a cultural shift towards this acknowledgment that everybody has mental health and everybody needs to look after it every day.” It is not intended to diagnose a mood disorder—that’s the purview of a clinician.

“Mental health is one of the best use cases for AI because we don’t have enough clinicians,” Darcy says. It’s a way of “task-shifting,” or moving some of the burden of work from humans to an app like Woebot that doesn’t ever sleep—and doesn’t send a bill.

Another app in the same general category is Talkspace, though it connects people with real human therapists rather than a chatbot. Patients communicate with professionals via long text messages, audio notes, video messages, or live video conversations. Talkspace also uses its own de-identified, aggregated data to give therapists recommendations they can choose to accept or ignore, like suggesting that they send a recorded video message at a certain time, according to Neil Leibowitz, the company’s chief medical officer.

Detecting depression

Then there’s Cogito, a company born out of the MIT Media Lab. It makes AI-powered software designed for call centers—more on that in a bit. While that’s its main focus, the company also does work involving veterans.

For that, it partners with the U.S. Department of Veterans Affairs at an office in Colorado focused on preventing suicide. In a program involving “several hundred” veterans, according to Skyler Place, the chief behavioral science officer at Cogito, vets have access to an app, the Cogito Companion, into which they can speak and record a message to themselves. “Our technology then analyzes the mood of the veteran, as it relates to depression, from the intonation and the energy and the pauses in that monologue,” he says. Clinicians receive a score, from 1 to 100, reflecting the vet’s mood.
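
Cogito hasn’t published how that analysis works, but a rough sketch of the general idea is easy to write down. The snippet below, which assumes the librosa audio library and a hypothetical recording called checkin.wav, derives two of the crude signals Place mentions, energy and pausing, and maps them to a 1-to-100 score with made-up weights.

```python
# A rough sketch of scoring a voice memo from simple acoustic features.
# Illustrative only, not Cogito's model: the weights are invented, and
# "checkin.wav" is a hypothetical recording.
import numpy as np
import librosa

def mood_score(path: str) -> int:
    y, sr = librosa.load(path, sr=16000)       # load mono audio at 16 kHz
    rms = librosa.feature.rms(y=y)[0]          # frame-by-frame vocal energy
    energy = float(np.mean(rms))               # overall energy of the message
    pauses = float(np.mean(rms < 0.1 * rms.max()))  # share of near-silent frames

    # Toy heuristic: higher energy raises the score, more pausing lowers it.
    # A real system would learn these relationships from labeled speech.
    raw = 60.0 + 400.0 * energy - 50.0 * pauses
    return int(np.clip(raw, 1, 100))

print(mood_score("checkin.wav"))  # prints a 1-100 mood score for the clip
```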

Other research at MIT has a similar focus. Tuka Al Hanai, a doctoral candidate in the electrical engineering and computer science department, works on using AI to detect conditions like depression by analyzing audio recordings, or transcripts, of a person’s speech.

She uses neural networks—a common artificial intelligence technique—to figure out whether a speaker is depressed. The best-performing network, which took in combined data from both text and audio, is 77 percent accurate, she says.
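
The specifics of her model aren’t described here, but the core idea, encoding each modality and fusing them into a single prediction, can be sketched in a few lines. The framework (PyTorch), layer sizes, and feature dimensions below are all assumptions for illustration.

```python
# A minimal sketch of multimodal fusion for a depressed / not-depressed
# classifier. The feature sizes (40 audio features, 300-dimensional text
# embeddings) are hypothetical; this shows the general fusion idea, not
# Al Hanai's published architecture.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, audio_dim=40, text_dim=300, hidden=64):
        super().__init__()
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_net = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)  # two classes: not depressed / depressed

    def forward(self, audio_feats, text_feats):
        # Encode each modality separately, then fuse by concatenation.
        fused = torch.cat([self.audio_net(audio_feats),
                           self.text_net(text_feats)], dim=-1)
        return self.head(fused)  # raw logits for the two classes

model = MultimodalClassifier()
logits = model(torch.randn(8, 40), torch.randn(8, 300))  # dummy batch of 8 speakers
print(logits.argmax(dim=-1))  # predicted class per speaker (toy, untrained)
```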

“Mainly, the way we think about this [kind of tech], is that it potentially could be used as a screening aid for medical professionals,” she says. “It probably wouldn’t replace them.”

“The big vision is that you have a system that can digest organic, natural conversations and interactions, and be able to make some conclusion about a person’s well-being,” she adds. She says that people could even learn the signs of depression that the algorithms themselves have learned. “You as a human can learn to internalize whatever it is that the algorithm has learned, so you can go around and be the best friend or neighbor someone has.”

The listener

While Cogito is working with Veterans Affairs on mental health, its main focus is actually a much different arena: call centers.

Its AI-powered software listens in on conversations between call center agents (at companies like MetLife and Humana) and the regular humans calling in, then provides guidance to help the worker do a better job. “We listen for pauses, interruption, tension, energy, enthusiasm, boredom,” Place says. The system analyzes the audio in real time, every 16 milliseconds.
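
As a rough sketch of what frame-by-frame monitoring could look like, the toy loop below chops audio into 16-millisecond frames and raises a cue when the recent stretch stays low-energy. Only the 16-millisecond window comes from the article; the thresholds, the two-second lookback, and the synthetic audio are invented, and a real system would also track signals like pitch variation and interruptions.

```python
# A toy sketch of real-time cueing over 16 ms audio frames, in plain NumPy.
# Everything except the 16 ms window is invented for illustration.
import numpy as np

SR = 16000                     # assume 16 kHz mono call audio
FRAME = int(0.016 * SR)        # 16 ms of samples per analysis frame

def monitor(call_audio, low_energy=0.02, window_frames=125):
    """Yield an energy cue when ~2 s of frames stay low-energy."""
    energies = []
    for start in range(0, len(call_audio) - FRAME, FRAME):
        frame = call_audio[start:start + FRAME]
        energies.append(np.sqrt(np.mean(frame ** 2)))  # RMS energy of this frame
        recent = energies[-window_frames:]
        if len(recent) == window_frames and max(recent) < low_energy:
            yield start / SR, "show the coffee cup"    # time (in seconds) to cue

# Two seconds of lively "speech" followed by three seconds of near-silence
# triggers the cue once the quiet stretch fills the lookback window.
audio = np.concatenate([0.2 * np.random.randn(2 * SR),
                        0.005 * np.random.randn(3 * SR)])
for t, cue in monitor(audio):
    print(f"{t:.2f}s: {cue}")
    break
```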

This analysis shows up as visual cues on the call center worker’s screen. Since those employees frequently deal with the same issue again and again (and again), they can “go into autopilot and sound robotic and start speaking in a monotonic low-energy voice,” Place says. When that happens, the worker is greeted by a coffee cup notification telling them to bring their energy back up. “It’s a really interesting example of AI helping humans sound more human.”

Besides that coffee cup, agents might see a pink heart—an “empathy cue,” Place says, a reminder for the worker to “pause and acknowledge the emotional state of the customer.” Is it sad that people may need AI-driven reminders to be empathetic? That’s another story.