Ann Johnson, now 48, suffered a devastating stroke in 2005 that robbed her of speech and paralyzed her body.
Now, a pioneering blend of neuroscience and AI converts Johnson’s brain signals into written text and audible speech.
An on-screen avatar, chosen by Johnson to resemble her own face, even mimics facial expressions like smiles and pursed lips.
Just two years prior, the same research team had enabled a paralyzed man known as Pancho to generate basic words like “hello” and “hungry” at a rate of 15 to 18 words per minute.
Johnson’s more advanced implant can produce sentences at a speed of 78 words per minute, close to half the rate of typical conversational speech, which is around 160 words per minute.
The algorithm behind Johnson’s implant focuses on phonemes – the individual units of sound that make up words – rather than just the words themselves, according to David Moses, the project manager.
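To see why phonemes matter, consider that English can be spelled from roughly 39 phoneme classes, while a word-level decoder would need thousands of output classes. The toy sketch below is purely illustrative – the actual UCSF decoder is a deep neural network trained on cortical recordings, and the phoneme labels and tiny vocabulary here are invented for the example – but it shows the basic idea of assembling decoded phonemes into words.

```python
# Illustrative only: a hypothetical phoneme-to-word lookup for a tiny
# vocabulary. A real decoder classifies phonemes from neural signals;
# here we start from an already-decoded phoneme stream.
PHONEME_SPELLINGS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AH", "NG", "G", "R", "IY"): "hungry",
}

def decode_words(phoneme_stream):
    """Greedily group a stream of phonemes into known words."""
    words, buffer = [], []
    for ph in phoneme_stream:
        buffer.append(ph)
        word = PHONEME_SPELLINGS.get(tuple(buffer))
        if word:
            words.append(word)
            buffer = []  # start collecting the next word
    return words

decoded = decode_words(
    ["HH", "AH", "L", "OW", "HH", "AH", "NG", "G", "R", "IY"]
)
print(decoded)  # ['hello', 'hungry']
```

Because the phoneme inventory is small and fixed, the same decoder can in principle produce any word a speaker intends, not just words it was explicitly trained on.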
When Johnson was accepted into the six-year trial run by the University of California, San Francisco and UC Berkeley, she said, “This is a research study. Its purpose is not to improve my life but to possibly help others like me in the future. I’m a guinea pig.”
“Some family members think it’s too risky (it involves brain surgery). It lets me feel like I’m contributing to society.”
Kaylo Littlejohn, a graduate student involved in the research, emphasized the importance of adding voice to the avatar.
He said, “Speech has a lot of information that is not well preserved by just text, like intonation, pitch, expression.”
Combining facial expressions and vocalizations into a single technology
Published in the journal Nature, this marks the first time that both vocalizations and facial expressions have been synthesized directly from brain signals.
“We’re just trying to restore who people are,” stated Dr. Edward Chang, head of the University of California, San Francisco research team.
The technology, which currently relies on a wired connection between Johnson’s implant and a computer, could revolutionize the lives of those living with speech-limiting conditions like cerebral palsy and amyotrophic lateral sclerosis (ALS).
The technology uses a brain-computer interface (BCI) to relay messages between the brain and a computer system. BCIs have been successfully used to restore speech and movement in those with spinal injuries or paralyzing diseases.
Such technology has rapidly evolved, and experts predict that wireless versions could receive federal approval in the US within a decade.
Johnson’s determination to overcome her life-altering condition has been inspiring, and she hopes to work as a trauma counselor, using her restored ability to communicate to help others.
“I feel like I have a job again,” she said. Her story is a poignant example of AI’s transformative potential and the latest in a remarkable run of advances in medical technology.