
In a first-of-its-kind breakthrough, researchers at Stanford University have developed a brain-computer interface (BCI) capable of decoding inner speech, the silent monologue inside a person's head, with up to 74% accuracy in real time.
This advance offers fresh hope for people with severe paralysis caused by conditions such as ALS or brainstem stroke, allowing them to communicate using thought alone, without any physical attempt at speech.
How the Brain-Computer Interface Works
Electrode arrays implanted in the motor cortex recorded neural signals from four participants who were asked to either imagine words silently or attempt to speak them aloud.
Inner speech activated brain regions similar to those engaged by attempted speech, but the signals were weaker, yet still distinguishable. These signals were used to train AI models to decode imagined sentences drawn from a vocabulary of 125,000 words, reaching up to 74% accuracy.
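To make the decoding step concrete, here is a minimal, purely illustrative sketch in Python. It simulates the qualitative finding that inner speech evokes patterns similar to attempted speech at lower amplitude, and that a classifier trained on neural features can still decode it. The labels, signal model, and classifier are assumptions for the demo, not the study's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of the decoding idea (not the study's actual pipeline):
# neural features recorded from electrode arrays are mapped to word
# labels by a trained classifier. The real system decodes sentences
# from a 125,000-word vocabulary; here we use four toy labels and
# simulated data to show why weaker inner-speech signals can remain
# distinguishable.

rng = np.random.default_rng(0)
n_trials, n_channels = 400, 64
words = ["yes", "no", "water", "help"]                 # illustrative labels only
patterns = rng.normal(size=(len(words), n_channels))   # per-word neural pattern

def make_trials(signal_strength):
    """Simulate trials: each word evokes its pattern plus noise."""
    labels = rng.integers(len(words), size=n_trials)
    X = signal_strength * np.eye(len(words))[labels] @ patterns
    X += rng.normal(size=(n_trials, n_channels))
    return X, labels

X_train, y_train = make_trials(signal_strength=1.0)    # attempted speech
X_inner, y_inner = make_trials(signal_strength=0.4)    # weaker inner speech

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("inner-speech decoding accuracy:", clf.score(X_inner, y_inner))
```

Even with the signal scaled down, the spatial pattern survives the noise, which is the intuition behind decoding imagined rather than attempted speech.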
A ‘Mental Password’ for Privacy
Critically, the system includes a “mental password” feature to protect privacy. Users must think of a preselected phrase, such as “chitty chitty bang bang,” before decoding activates. In testing, this safeguard blocked unintended readings of inner speech more than 98% of the time.
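Here is a minimal sketch of how such a gate might work, assuming a decoder that emits (text, confidence) pairs. The class, method names, and unlock threshold are hypothetical illustrations, not the interface reported by the researchers.

```python
from dataclasses import dataclass
from typing import Optional

PASSWORD = "chitty chitty bang bang"  # preselected unlock phrase from the study
UNLOCK_CONFIDENCE = 0.98              # illustrative threshold, not a published value

@dataclass
class MentalPasswordGate:
    """Hypothetical privacy gate: decoded inner speech is suppressed
    until the user deliberately imagines the preselected passphrase."""
    unlocked: bool = False

    def process(self, decoded_text: str, confidence: float) -> Optional[str]:
        if not self.unlocked:
            # While locked, nothing is emitted; only a confident match
            # against the password flips the gate open.
            if decoded_text == PASSWORD and confidence >= UNLOCK_CONFIDENCE:
                self.unlocked = True
            return None
        return decoded_text  # once unlocked, decoded speech passes through

gate = MentalPasswordGate()
print(gate.process("what's for dinner", 0.91))        # None: stray thought stays private
print(gate.process("chitty chitty bang bang", 0.99))  # None: unlocks, emits nothing yet
print(gate.process("hello world", 0.85))              # "hello world"
```

The key design choice is that the gate defaults to silence: stray inner speech is discarded entirely unless the user has deliberately opted in.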
Traditional assistive speech technology requires effortful attempts to speak, which can be exhausting for users. By bypassing physical speech attempts entirely, the new inner-speech BCI makes communication more fluid and less fatiguing.
Despite its early-stage status and technical limitations, the study marks a significant step toward restoring natural, effortless communication to people who have lost the ability to speak.
The researchers aim to improve decoding accuracy with higher-resolution implants and smarter AI. The hope is that future BCIs will handle free-form conversation while further refining privacy protections for users' most intimate thoughts.