New AI system successfully identifies Alzheimer’s disease using speech analysis

July 4, 2024

  • Boston University built an AI system capable of detecting Alzheimer's from speech
  • It's 78.5% accurate in predicting the progression of the disease across six years
  • The model unlocks the door to earlier diagnosis, and in turn, treatment

By analyzing speech patterns, researchers at Boston University have developed an AI system that can predict with nearly 80% accuracy whether someone with mild cognitive impairment will develop Alzheimer’s disease within six years.

The study, published in the journal Alzheimer’s & Dementia, uses AI to extract valuable diagnostic information from cognitive assessments, accelerating Alzheimer’s diagnosis and, in turn, treatment. 

The team’s AI model achieved an accuracy of 78.5% and a sensitivity of 81.1% in predicting progression from mild cognitive impairment (MCI) to Alzheimer’s disease within a six-year timeframe. This outperforms other traditional, non-invasive tests.

Crucially, though, the system relies solely on easily obtainable data: speech transcribed from cognitive assessments and basic demographic information like age, sex, and education level.

Cognitive assessments like the Boston Naming Test involve a clinician talking to the patient. The audio from these tests is often recorded for further analysis. 

“We wanted to predict what would happen in the next six years—and we found we can reasonably make that prediction with relatively good confidence and accuracy,” said Ioannis (Yannis) Paschalidis, director of the BU Rafik B. Hariri Institute for Computing and Computational Science & Engineering and one of the study’s lead researchers.

“If you can predict what will happen, you have more of an opportunity and time window to intervene with drugs, and at least try to maintain the stability of the condition and prevent the transition to more severe forms of dementia.”

More about the study

Here’s a breakdown of how the study worked:

  1. The research team began by collecting audio recordings of cognitive assessments from 166 participants diagnosed with mild cognitive impairment (MCI). They then tracked these individuals over a six-year period to determine who progressed to Alzheimer’s disease and who remained stable.
  2. The team used advanced speech recognition technology to transcribe the audio recordings and prepare the data for analysis. 
  3. Next, the researchers applied sophisticated natural language processing techniques to extract a wide array of linguistic features and patterns that they believed could potentially serve as indicators of Alzheimer’s risk.
  4. They then used the speech features and demographic information to develop multiple machine learning models (a simplified sketch of this kind of pipeline appears after this list).
  5. These AI models were designed to predict the likelihood that a given individual would progress from mild cognitive impairment to Alzheimer’s disease based on their unique speech patterns and personal characteristics.
  6. The models achieved an accuracy of 78.5% and a sensitivity of 81.1% in predicting which participants would develop Alzheimer’s within the six-year study period.
  7. In a final analysis, the research team identified cognitive tests with the most predictive power for Alzheimer’s risk, such as the Boston Naming Test, similarity tests, and the Wechsler Adult Intelligence Scale.
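
To make steps 2 through 5 concrete, here is a minimal, hypothetical sketch in Python. It uses toy stand-in data and off-the-shelf scikit-learn components; the study’s actual transcription tools, linguistic features, and models are not detailed in the article, so everything named below is an illustrative assumption rather than the team’s method.

```python
# Hypothetical sketch: combine text features from transcribed assessments with
# demographics (age, sex, education) to predict MCI-to-Alzheimer's progression.
# The data below is synthetic; the real study used transcripts from 166 participants.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "transcript": [
        "the picture shows a boy on a stool reaching for a cookie jar",
        "um the the woman is is washing dishes and the water runs over",
        "a boy a girl and their mother are in the kitchen",
        "the boy takes cookies while the stool tips over",
    ] * 10,
    "age": rng.integers(60, 90, size=40),
    "sex": ["F", "F", "M", "M"] * 10,
    "education_years": rng.integers(8, 20, size=40),
    "progressed_to_ad": [0, 1] * 20,  # label: progressed to Alzheimer's within six years?
})

# Bag-of-words text features are a crude proxy for the richer linguistic
# markers described in the study; demographics are scaled/encoded alongside them.
features = ColumnTransformer([
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "transcript"),
    ("num", StandardScaler(), ["age", "education_years"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])

X = data.drop(columns="progressed_to_ad")
y = data["progressed_to_ad"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity (recall):", recall_score(y_test, pred))
```

In a real setting, the bag-of-words features would be replaced by the richer linguistic markers the researchers describe, and performance would be validated on held-out participants rather than a toy dataset.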

“Digital is the new blood,” said Rhoda Au, a professor at BU’s Chobanian & Avedisian School of Medicine and co-author of the study. 

“You can collect it, analyze it for what is known today, store it, and reanalyze it for whatever new emerges tomorrow.”

One of the most interesting findings of the study was that certain parts of the cognitive assessments were especially predictive of future Alzheimer’s risk. 

“Our analysis revealed that subtests related to demographic questions, the Boston Naming Test, similarity tests, and the Wechsler Adult Intelligence Scale emerged as the top features driving the performance of our model,” the researchers note. 

This could inform the development of more targeted cognitive assessments, further streamlining the screening process.

While the results are promising, the researchers acknowledge the need for further validation in larger, more diverse populations. 

Speech recognition can open the door to early diagnosis

Speech analysis has proven a valuable technique for predicting Alzheimer’s and other diseases.

In a similar 2020 study, University of Sheffield researchers demonstrated that their AI could distinguish participants with Alzheimer’s disease or mild cognitive impairment from those with functional cognitive disorder and from healthy controls, with an accuracy of 86.7%. 

Researchers at Klick Labs also developed an AI model that can detect type 2 diabetes using brief voice recordings of just 6 to 10 seconds. Advanced diabetes can impact the voice through nerve damage, impaired blood flow, and dry mouth, resulting in detectable changes. 

The study analyzed 18,000 recordings to identify subtle acoustic differences between diabetic and non-diabetic individuals.

When combined with factors like age and BMI, the model achieved a maximum test accuracy of 89% for women and 86% for men.
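
To illustrate the general idea (not Klick Labs’ actual method, which isn’t detailed here), the toy Python sketch below derives a couple of crude acoustic features from synthetic clips and combines them with age and BMI in a simple classifier. The feature choices and the pitch assumption are illustrative placeholders only.

```python
# Hypothetical sketch: Klick Labs' actual acoustic features and model are not
# described in the article, so everything below is an illustrative stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
SAMPLE_RATE = 16_000  # Hz

def crude_acoustic_features(waveform: np.ndarray) -> np.ndarray:
    """Rough proxies for voice markers: loudness (RMS) and a zero-crossing
    rate that tracks pitch. Real systems would use far richer features."""
    rms = np.sqrt(np.mean(waveform ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(waveform)))) / 2
    return np.array([rms, zcr])

# Synthetic 6-second "voice clips" plus age and BMI for 200 fake speakers.
X, y = [], []
for _ in range(200):
    has_diabetes = int(rng.integers(0, 2))
    pitch = 170 - 15 * has_diabetes + rng.normal(0, 5)  # toy assumption: slightly lower pitch
    t = np.linspace(0, 6, 6 * SAMPLE_RATE)
    clip = np.sin(2 * np.pi * pitch * t) + rng.normal(0, 0.1, t.size)
    age = rng.normal(55 + 5 * has_diabetes, 10)
    bmi = rng.normal(26 + 3 * has_diabetes, 4)
    X.append(np.concatenate([crude_acoustic_features(clip), [age, bmi]]))
    y.append(has_diabetes)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(np.array(X), np.array(y))
print("training accuracy:", model.score(np.array(X), np.array(y)))
```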

Together, these studies show that AI-supported, non-invasive tests and diagnostic methods could lead to quicker, more effective treatment, even where specialist doctors and equipment are unavailable.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
