
(Image credit: Noah Berger/Scientific American)
Medical advances previously thought impossible have come to fruition due to artificial intelligence.
Two weeks ago, newly published studies detailed how AI-assisted brain implants enabled two paralyzed individuals to communicate, in one case through an animated avatar. The brain-computer interfaces (BCIs) translated the participants' brain signals into speech and, in the avatar's case, facial movements.
The breakthrough was achieved in two separate studies, one at Stanford University and one at the University of California, San Francisco. The new implants decoded attempted speech at 62 and 78 words per minute, respectively.
The Stanford study notes that this is still slower than the roughly 160 words per minute of natural conversation, but it is 3.4 times faster than the previous record. Until now, individuals with severe speech and motor impairment, or no ability to speak at all, relied on technology based on hand movement, which allowed typing speeds of between 8 and 18 words per minute.
The two studies relied on different technologies.
The Stanford study developed a recurrent neural network (RNN) that emits, at each point in time, the probability of each phoneme, the distinct sound units of a language, being spoken. These probabilities are then combined with a large-vocabulary language model to predict the most likely sentence.
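In rough terms, the decoding works like probabilistic spell-checking: the network's guesses about sounds are weighed against how plausible each candidate word is. The sketch below is only an illustration of that idea, not the study's actual decoder; the phoneme inventory, lexicon, probabilities, and language-model priors are all invented for demonstration.

```python
import numpy as np

# Tiny hypothetical phoneme inventory (ARPAbet-style symbols).
PHONEMES = ["HH", "EH", "L", "OW", "W", "ER", "D"]

# Hypothetical per-timestep phoneme probabilities a neural network might emit
# (rows = timesteps, columns = phonemes), hard-coded here for clarity.
rnn_probs = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.02, 0.02, 0.01],  # mostly "HH"
    [0.10, 0.60, 0.10, 0.10, 0.05, 0.03, 0.02],  # mostly "EH"
    [0.05, 0.10, 0.60, 0.15, 0.05, 0.03, 0.02],  # mostly "L"
    [0.02, 0.05, 0.20, 0.60, 0.05, 0.05, 0.03],  # mostly "OW"
])

# Invented lexicon: each word maps to its phoneme sequence plus a made-up
# language-model prior P(word).
LEXICON = {
    "hello":  (["HH", "EH", "L", "OW"], 0.6),
    "hollow": (["HH", "OW", "L", "OW"], 0.3),
    "word":   (["W", "ER", "D"], 0.1),
}

def score_word(word):
    """Log-score of a word: summed log phoneme probabilities plus the
    log language-model prior."""
    phones, lm_prior = LEXICON[word]
    if len(phones) != len(rnn_probs):
        return -np.inf  # word can't explain every timestep
    acoustic = sum(
        np.log(rnn_probs[t, PHONEMES.index(p)]) for t, p in enumerate(phones)
    )
    return acoustic + np.log(lm_prior)

best = max(LEXICON, key=score_word)
print(best)  # -> "hello"
```

Real decoders search over whole sentences rather than single words, but the principle is the same: the language model steers ambiguous neural evidence toward plausible text.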
Scientists trained the software by asking Pat Bennett, a 67-year-old ALS patient, to try to say sentences aloud. The BCI learned to recognize the neural signals associated with the orofacial movements used in speech.
"Speech BCIs have the potential to restore natural communication at a much faster rate but have not yet achieved high accuracies on large vocabularies (that is, unconstrained communication of any sentence the user may want to say),” the study reported. Bennett, for instance, could communicate with a 23.8% word error rate, indicating this technology still needs improvement to be ready for everyday use.
However, the findings indicate strong promise that the error rate will eventually fall low enough for everyday use, as "word error rate decreases as more channels are added, suggesting that intracortical technologies that record more channels may enable lower word error rates in the future."
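For context, word error rate is the standard accuracy metric in speech recognition: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the intended sentence's length. A minimal sketch of the calculation (the example sentences are made up):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / len(reference),
    computed via word-level edit (Levenshtein) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i          # deleting i reference words
    for j in range(len(hyp) + 1):
        dist[0][j] = j          # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution/match
    return dist[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four gives 25% WER, roughly the error
# rates reported in both studies.
print(word_error_rate("i want some water", "i want some waiter"))  # 0.25
```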
In the study conducted by the University of California, San Francisco, researchers employed a different device, one that sits on the brain's surface rather than being inserted inside it. This electrocorticography (ECoG) array carried 253 electrodes and was used with a 47-year-old woman named Ann, who had lost her ability to speak after a brainstem stroke 18 years earlier.
The researchers trained AI algorithms to decode Ann's attempted speech, allowing her to communicate at 78 words per minute with a 25.5% word error rate.
"Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis," the study concluded.
Despite much of the fear surrounding artificial intelligence, breakthroughs like these make clear that AI-assisted medical advances can substantially raise the standard of living for many people worldwide.
Pat Bennett, the Stanford study's participant, said: "For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships."