Voices offer a lot of information. It turns out they can even help diagnose disease, and researchers are working on an app for that.
The National Institutes of Health is funding a massive research project to collect voice data and develop an AI that could diagnose people based on their speech.
Everything from vibrations in your vocal cords to breathing patterns when you speak offers potential insights into your health, says laryngologist Dr. Yael Bensoussan, director of the University of South Florida Health Voice Center and leader of the study.
“We asked the experts: Well, if you close your eyes when a patient walks in, just listening to their voice, can you get a sense of what diagnosis they have?” said Bensoussan. “And that’s where we got all our information.”
Someone who speaks softly and slowly could have Parkinson’s disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will begin by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders, and pediatric disorders such as autism and speech delays.
The project is part of the NIH’s Bridge2AI program, launched more than a year ago with more than $100 million in federal funding, with the goal of building large-scale health care databases for precision medicine.
“We were really missing what we call open-source databases,” says Bensoussan. “Each institution kind of has its own database. But creating these networks and infrastructures was really important so that future researchers could then use this data.”
This isn’t the first time researchers have used AI to study human voices, but it’s the first time data will be collected on this scale: the project is a collaboration between USF, Cornell, and 10 other institutions.
“We saw that everyone was doing very similar work, but on a smaller scale,” says Bensoussan. “We needed to do something as a team and build a network.”
The ultimate goal is an app that could help bridge access to care in rural or underserved communities by helping general practitioners refer patients to specialists. In the long run, devices like iPhones or Alexa might detect changes in your voice, such as a cough, and advise you to seek medical attention.
To get there, researchers must start by amassing data, because AI can only be as good as the database it learns from. By the end of the four-year project, they hope to collect around 30,000 voices, with matching data on other biomarkers, like clinical data and genetic information.
“We really want to build something that’s scalable,” says Bensoussan, “because if we can only collect data in our acoustic labs and people have to go to an academic institution to do it, that kind of defeats the purpose.”
There are a few roadblocks. HIPAA, the law that regulates medical privacy, isn’t entirely clear on whether researchers can share voice recordings.
“Let’s say you donate your voice to our project,” says Yael Bensoussan. “Who owns the voice? What are we allowed to do with it? Can it be commercialized?”
While other health data can be separated from a patient’s identity and used for research, voices are often identifiable. Each institution has different rules about what can be shared, and that opens up all sorts of ethical and legal questions that a team of bioethicists will explore.
In the meantime, here are three voice samples that can be shared:
Credit to SpeechVive, via YouTube.
These last two clips come from the Perceptual Voice Qualities Database (PVQD), whose license can be found here. No changes were made to the audio.
Copyright 2022 NPR. To learn more, visit https://www.npr.org.