Researchers at Stanford University have developed a brain-computer interface (BCI) that converts neural signals into spoken words.
This BCI, detailed in a study published in Cell, uses sensors implanted in the motor cortex to detect brain activity linked to inner speech.
Machine-learning models interpret these signals to predict intended words in real time.
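The study's actual decoder is not reproduced here, but the general pattern such systems follow, windowed neural features fed to a model that emits label probabilities in real time, can be sketched briefly. Everything in this sketch (the 64-dimensional features, the toy phoneme set, the stand-in linear model) is a hypothetical placeholder, not the study's implementation.

```python
# Illustrative sketch only: not the study's decoder. Models a common BCI
# decoding pattern: windowed neural features -> per-window phoneme
# probabilities -> greedy label selection. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["HH", "EH", "L", "OW", "_"]   # toy label set; "_" = silence

# Hypothetical stand-in for a trained model: one weight vector per phoneme.
W = rng.normal(size=(len(PHONEMES), 64))  # 64 = assumed electrode feature dim

def decode_window(features: np.ndarray) -> str:
    """Map one short window of neural features to the most likely phoneme."""
    logits = W @ features
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    return PHONEMES[int(np.argmax(probs))]

# Simulated stream of feature windows standing in for implant recordings.
stream = rng.normal(size=(10, 64))
decoded = [decode_window(w) for w in stream]
print("decoded phoneme sequence:", decoded)
```

In a real system, a language model would then assemble these per-window predictions into words from the vocabulary rather than taking each window's top guess in isolation.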
Until now, individuals with severe paralysis have relied largely on devices that track eye movements or subtle muscle twitches to let them choose words on screens. This advance gives them another option: communicating directly through inner speech.
The study involved four participants: three with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke. All had pre-existing brain sensors.
Participants imagined saying sentences, which the system decoded from a 125,000-word vocabulary and displayed on screens.
The system achieved decoding speeds of 120 to 150 words per minute, matching natural conversation rates, and interpreted imagined sentences with about 74% accuracy.
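Sentence-decoding accuracy is conventionally scored by word error rate (WER): the word-level edit distance between the decoded sentence and the intended one, divided by the intended sentence's length. Whether the Cell study used exactly this metric is an assumption; the sketch below shows the standard computation with made-up sentences.

```python
# Illustrative: a standard word-error-rate (WER) computation. The example
# sentences are invented; they are not from the study.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("i would like some water", "i would like sum water")
print(f"WER: {wer:.0%}, accuracy: {1 - wer:.0%}")  # WER: 20%, accuracy: 80%
```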
Unlike prior inner speech decoders limited to small word sets, this version handles expansive language.
Compared with attempted-speech BCIs, the inner-speech method avoids physical effort such as inhaling or making sounds.
Users reported less fatigue and no unwanted noises or expressions. Participants expressed excitement at regaining quick communication, including the ability to interrupt conversations.
The technology works only when users can still form speech plans mentally, making it suited to conditions such as dysarthria, in which planning remains intact but execution fails.
It does not assist those unable to convert ideas into articulatory plans. Experts note it targets specific impairments in the speech process.
To protect privacy, the device includes a mental code phrase, “chitty chitty bang bang,” to activate or deactivate transcription. This prevents unintended decoding of thoughts.
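One plausible way to implement such a gate, shown purely as an illustration, is to keep transcription off until the decoded word stream ends with the unlock phrase, and to toggle it off again the same way. The class and method names below are hypothetical; only the phrase itself comes from the study.

```python
# Illustrative sketch of the unlock-phrase idea: decoded words are withheld
# until the imagined code phrase toggles transcription on. Hypothetical logic.
UNLOCK_PHRASE = "chitty chitty bang bang"

class GatedTranscriber:
    def __init__(self) -> None:
        self.active = False
        self.buffer: list[str] = []

    def on_decoded_word(self, word: str) -> str | None:
        """Feed each decoded word; emit text only while unlocked."""
        self.buffer.append(word)
        recent = " ".join(self.buffer[-4:])
        if recent == UNLOCK_PHRASE:
            self.active = not self.active  # toggle transcription on/off
            self.buffer.clear()
            return None                    # never transcribe the phrase itself
        return word if self.active else None

t = GatedTranscriber()
for w in "chitty chitty bang bang hello world".split():
    out = t.on_decoded_word(w)
    if out:
        print(out)  # prints: hello, world
```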
Researchers emphasize patient-focused development to address real needs.
Brain-reading implants raise concerns about mental privacy, but the researchers say current work prioritizes ethical practices.
Teams led by doctors aim to solve patient problems, such as restoring natural voices. Ongoing research explores safeguards against misuse.
Lead researcher Erin Kunz entered the field after her father lost speech to ALS, becoming his translator. She highlights the technology’s potential impact on daily life. Kunz credits participants for volunteering to advance solutions for others with paralysis.
This breakthrough in inner speech decoding marks a significant leap in assistive technologies, offering new hope for individuals facing severe paralysis and communication barriers.
Patient volunteers play a pivotal role, selflessly contributing to developments that may benefit future generations. As these neural prosthetics evolve, they promise to restore natural, effortless interaction, ultimately enhancing quality of life.