Video and Slides
Outline: Voice technology for healthcare
Dr. Shona D’Arcy, CEO Kids Speech Labs
Speech recognition is often seen as the holy grail for healthcare: imagine removing all the form filling that currently takes up clinicians' valuable time. In this talk I will try to answer a few pertinent questions:
- Why aren’t all hospitals completely voice enabled?
- What are examples of interactive voice applications that are working in healthcare and delivering value?
- What does the future hold for interactive voice applications in healthcare?
Review of Presentation
You can ask Shona questions in the comments section of this weblog, or contact Shona directly on LinkedIn.
This is an excellent introduction to the many uses of speech recognition in the healthcare industry, which accounts for about half of all voice technology investment.
Because voice requires neurological, cognitive, and physical capabilities, it's a powerful tool beyond the automation of note-taking: it can improve accessibility to health services, and it can even be used diagnostically.
Shona provides some useful background on her projects in speech recognition and the importance of good training data, a point David Curran has made in several of his TADSummit presentations over the years, e.g. How to improve Natural Language Datasets.
Shona highlights the 20 years of work Nuance has done on Electronic Health Records, a major time sink for doctors. Training data with verified medical terminology is critical to achieving a lower word error rate.
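As a quick illustration of the metric mentioned above: word error rate (WER) is the minimum number of word substitutions, deletions, and insertions needed to turn the recognised transcript into the reference, divided by the reference length. The sketch below is a generic textbook implementation, not code from Nuance or the presentation; the function name `wer` and the example sentences are my own.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edit distance between the first
    # i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five gives a WER of 0.2 — and shows why a dosage
# mis-recognition ("50" vs "15") is exactly the kind of error that still
# needs human verification in a clinical setting.
print(wer("administer 50 mg twice daily", "administer 15 mg twice daily"))  # 0.2
```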
For example, the medical transcription company SOPRIS Health has been able to use its verified data from over 10 years of transcription to build a competitive automated tool. This highlights why Google is so keen to offer its tools to build up verified training data for its algorithms.
But Shona highlights a critical point: parts of the transcription still need human verification, e.g. dosage, as people's lives are at stake. While transcription will never be 100% automated, it can still lessen the workload.
On patient engagement using voice, two issues that have slowed adoption are privacy and security. Your voice can be considered a personal identifier, and using smart speakers is not necessarily private. There are also important design issues in building voice interfaces for the needs of patients who are elderly or disabled.
The final point, on the role voice can play diagnostically, is very interesting: across cognitive decline, Parkinson's, cardiac arrest, depression, schizophrenia, and even COVID. I have noticed that people going into a depressive episode talk and interact differently; their face can even look different. It's still early days for diagnostic applications of speech technology, which are only now moving out of the lab and into trials. There are so many exciting applications of speech recognition!
Thank you Shona for an inspirational presentation 🙂