AI can detect COVID-19 from the sound of your cough
The algorithm listens to subtle differences in coughs between healthy people and infected people.
People with COVID-19 who are asymptomatic can spread the disease without any outward signs that they're sick. But a newly developed AI, with a keen algorithmic ear, might be able to detect asymptomatic cases from the sounds of people's coughs, according to a new study.
A group of researchers at MIT recently developed an artificial intelligence model that can detect asymptomatic COVID-19 cases by listening to subtle differences in coughs between healthy people and infected people. The researchers are now testing their AI in clinical trials and have already started the process of seeking approval from the Food and Drug Administration (FDA) for it to be used as a screening tool.
The algorithm is based on previous models the team developed to detect conditions such as pneumonia, asthma and even Alzheimer's disease, a memory-loss condition that can also cause other kinds of decline in the body, such as weakened vocal cords and diminished respiratory performance.
Related: Coronavirus live updates
Indeed, it is the Alzheimer's model that the researchers adapted in an effort to detect COVID-19. "The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs," co-author Brian Subirana, a research scientist in MIT's Auto-ID Laboratory, said in a statement. "Things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person's gender, mother tongue or even emotional state. There's in fact sentiment embedded in how you cough."
First, they created a website where volunteers — both healthy and those with COVID-19 — could record coughs using their cellphones or computers; they also filled out a survey with questions about their diagnosis and any symptoms they were experiencing. People were asked to record "forced coughs," such as the cough you let out when your doctor tells you to cough while listening to your chest with a stethoscope.
Through this website, the researchers gathered more than 70,000 individual recordings of forced-cough samples, according to the statement. Of those, 2,660 were from patients who had COVID-19, with or without symptoms. They then used 4,256 of the samples to train their AI model and 1,064 to test whether it could tell the difference between the coughs of COVID-19 patients and those of healthy people.
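The article reports only those sample counts; the team's actual audio features and model architecture are laid out in the paper, not here. Purely as a hypothetical illustration of what a train/test split with those counts looks like in practice, using random placeholder "features" and an off-the-shelf classifier rather than the MIT pipeline, a minimal sketch in Python might run along these lines:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Stand-in data: 5,320 cough recordings reduced to fixed-length feature vectors.
# (4,256 + 1,064 matches the counts reported in the article; the features and
# labels here are random placeholders, not real cough audio.)
X = rng.random((5320, 128))
y = rng.integers(0, 2, size=5320)   # 1 = COVID-19, 0 = healthy

# Hold out 1,064 samples for testing, train on the remaining 4,256.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1064, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print("sensitivity:", recall_score(y_test, pred))               # infected correctly flagged
print("specificity:", recall_score(y_test, pred, pos_label=0))  # healthy correctly cleared
```

With real cough data in place of the random arrays, the held-out 1,064 samples are what the two percentages reported below would be measured on.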
They found that their AI was able to pick up differences in the coughs related to four features specific to COVID-19 (which were also used in their Alzheimer's algorithm) — muscular degradation, vocal cord strength, sentiment such as doubt and frustration, and respiratory and lung performance.
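The article doesn't spell out how those four biomarkers are combined inside the model. Purely as a hypothetical sketch of the general idea, with made-up names and weights rather than anything taken from the study, each biomarker could be scored from the cough audio and then weighed into a single risk estimate:

```python
# Hypothetical sketch: the real MIT system is a trained deep-learning model;
# the scores, weights and combination rule below are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class BiomarkerScores:
    muscular_degradation: float      # 0 (healthy) .. 1 (strong signs)
    vocal_cord_strength: float
    sentiment: float                 # e.g. doubt or frustration inferred from the cough
    respiratory_performance: float

def covid_risk(scores: BiomarkerScores) -> float:
    """Combine the four cough biomarkers into one risk score.
    Weights are invented for illustration; a trained model would learn them."""
    weights = (0.3, 0.2, 0.2, 0.3)
    values = (scores.muscular_degradation, scores.vocal_cord_strength,
              scores.sentiment, scores.respiratory_performance)
    return sum(w * v for w, v in zip(weights, values))

print(covid_risk(BiomarkerScores(0.8, 0.6, 0.7, 0.9)))  # higher = more COVID-like
```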
The sound of a cough
The AI model correctly identified 98.5% of people with COVID-19, and correctly ruled out COVID-19 in 94.2% of people without the disease. For asymptomatic people, the model correctly identified 100% of people with COVID-19, and correctly ruled out COVID-19 in 83.2% of people without the disease.
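To see what those rates would mean in practice, here is a small back-of-the-envelope calculation; the screening group of 1,000 infected and 1,000 healthy people is hypothetical, and only the reported percentages come from the study:

```python
# Apply the reported detection rates to a hypothetical screened population.
# Only the 98.5% / 94.2% (overall) and 100% / 83.2% (asymptomatic) figures
# come from the study; the group sizes are made up for illustration.
def screening_outcomes(n_infected, n_healthy, sensitivity, specificity):
    true_positives = sensitivity * n_infected       # infected, correctly flagged
    false_negatives = n_infected - true_positives   # infected, missed
    true_negatives = specificity * n_healthy        # healthy, correctly cleared
    false_positives = n_healthy - true_negatives    # healthy, wrongly flagged
    return true_positives, false_negatives, true_negatives, false_positives

print(screening_outcomes(1_000, 1_000, 0.985, 0.942))  # overall performance
print(screening_outcomes(1_000, 1_000, 1.0, 0.832))    # asymptomatic subgroup
```

Under those assumptions, the overall rates would miss about 15 of every 1,000 infected people while wrongly flagging about 58 of every 1,000 healthy people.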
These are "a pretty encouraging set of numbers," and the results are "very interesting," said Dr. Anthony Lubinsky, the medical director of respiratory care at NYU Langone Tisch Hospital who was not a part of the study.
But "whether or not this performs well enough in a real-world setting to recommend its use as a screening tool would need further study," Lubinsky told Live Science. What's more, further research is needed to ensure the AI would accurately evaluate coughs from people of all ages and ethnicities, he said (The authors also mention this limitation in their paper).
Related: Most promising COVID-19 vaccine candidates
If a doctor were to listen to the forced cough of a person with asymptomatic COVID-19, they likely wouldn't be able to hear anything out of the ordinary. It's "not a thing that a human ear would be easily able to do," Lubinsky said. Though follow-up studies are definitely needed, if the software proves effective, this AI — which will have a linked app if approved — could be "very useful" for finding asymptomatic cases of COVID-19, especially if the tool is cheap and easy to use, he added.
The AI can "absolutely" help curb the spread of the pandemic by helping to detect people with asymptomatic disease, Subirana told Live Science in an email. The AI can also detect the difference between people who have other illnesses such as the flu and those who have COVID-19, but it's much better at distinguishing COVID-19 cases from healthy cases, he said.
The team is now seeking regulatory approval for the app that incorporates the AI model, which may come within the next month, he said. They are also testing their AI in clinical trials in a number of hospitals around the world, according to the paper.
And they aren't the only team working on detecting COVID-19 through sound. Similar projects are underway at Cambridge University, Carnegie Mellon University and the U.K. start-up Novoic, according to the BBC.
"Pandemics could be a thing of the past if pre-screening tools are always-on in the background and constantly improved," the authors wrote in the paper. Those always-listening tools could be smart speakers or smart phones, they wrote.
The study, partly supported by the drug company Takeda Pharmaceutical Company Limited, was published Sept. 30 in the IEEE Open Journal of Engineering in Medicine and Biology.
Originally published on Live Science.
Yasemin is a staff writer at Live Science, covering health, neuroscience and biology. Her work has appeared in Scientific American, Science and the San Jose Mercury News. She has a bachelor's degree in biomedical engineering from the University of Connecticut and a graduate certificate in science communication from the University of California, Santa Cruz.