Voices contain a great deal of information, and researchers are developing an app to use them to help identify illness. The project will collect a large set of human voice recordings to train AI-based technologies that might detect major diseases ranging from Alzheimer's to cancer. The National Institutes of Health-funded initiative, launched on Tuesday, Sept. 13, 2022, aims to turn the human voice into a measurement that can serve as an indicator of illness, much like blood tests or body temperature.
Anything from vocal cord vibrations to breathing patterns when someone talks can provide information about their overall health, according to laryngologist Dr. Yael Bensoussan, director of the University of South Florida's Health Voice Center and one of the project's leaders.
"What's great about voice data is that it's probably one of the cheapest kinds of data that you can collect from individuals," says Olivier Elemento, a professor at Weill Cornell Medicine's Institute for Computational Biomedicine and one of the project's principal investigators. "It's a fairly accessible sort of information that any patient may provide," added Dr. Bensoussan.
The Bridge2AI Program
The study is supported by the National Institutes of Health's Bridge2AI program, which funds research that produces ethical, rigorous, and accessible datasets that can be used to build AI capabilities. It will run for four years and may receive up to $14 million in funding, as stated in the study's announcement.
The study's authors will begin by developing an app to gather speech data from people with conditions such as vocal fold paralysis, Alzheimer's disease, Parkinson's disease, anxiety, pneumonia, and autism. A clinician will oversee all voice recordings. "For instance, someone with Parkinson's disease may have a lower voice and speak slowly," Bensoussan observes. Using the app, participants may be asked to vocalize sounds, read sentences, and read longer passages.
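To illustrate the kind of acoustic marker such an app could measure, here is a minimal sketch (not the study's actual pipeline) that estimates a voice's fundamental frequency, or pitch, via autocorrelation. The signal here is a synthetic tone standing in for a voice recording; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (pitch) in Hz via autocorrelation.

    Searches for the strongest self-similarity lag between the periods
    corresponding to fmax and fmin (the typical range of human speech).
    """
    sig = signal - signal.mean()
    # Autocorrelation at non-negative lags only
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthetic one-second "voice" at 120 Hz, sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 120 * t)
print(f"estimated pitch: {estimate_f0(tone, sr):.1f} Hz")
```

A real system would extract many such features (pitch, loudness, speaking rate, pauses) from recorded speech rather than a pure tone, but the principle is the same: turn the waveform into numbers an AI model can learn from.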
This is not the first time researchers have used AI to analyze people's voices, but it is the first time data will be collected at this scale - the study is a collaboration involving USF, Cornell, and ten other institutions.
"We observed that everyone was doing very similar work, but on a smaller scale," Bensoussan remarks. "We needed to come together and create a network," she said, as noted in a report from The Verge.
From Database to Mobile Phone App
The ultimate objective is to create an app that could help general practitioners refer patients to specialists, bridging gaps in access for remote or underserved communities. In the long run, your iPhone or Alexa may detect changes in your voice, such as a cough, and recommend that you seek medical assistance.
To do that, scientists must first gather data, because AI will only be as good as the database it learns from. Researchers aim to collect roughly 30,000 voices by the end of the four years, along with data on other indicators such as clinical records and genetic material.
"We have to build something scalable," Bensoussan continues, "because if we can only collect data in our acoustic laboratory and everyone has to travel to an accredited university to do it, that kind of defeats the purpose," she said in a report from NPR.
There are numerous obstacles. HIPAA, the statute that governs medical privacy, is ambiguous about whether researchers can share voice recordings.
Even though other health data can be separated from a patient's identity and used for research, voices are frequently identifiable. Each institution also has different rules about what can and cannot be shared, raising a slew of legal and ethical questions that a team of bioethicists will examine.
RELATED ARTICLE: AI Trained With More Than 10,000 Brain Scans Can Identify Patterns Linked to Autism, Alzheimer's, Schizophrenia, and More Before Symptoms Appear
Check out more news and information on Tech & Innovation in Science Times.