
Developed at UC Berkeley and UC San Francisco, the technology combines brain-computer interfaces (BCI) with advanced AI to decode neural activity into audible speech.
Researchers in California have achieved a significant breakthrough: an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices. The system was demonstrated in a clinical trial participant who is severely paralyzed and cannot speak.
Compared with other recent attempts to create speech from brain signals, the new system is a major advance: it works in real time and reproduces the patient's own voice.
The system is flexible about hardware. It works with high-density electrode arrays that record neural activity directly from the brain's surface, with microelectrodes that penetrate the brain, and with non-invasive surface electromyography (sEMG) sensors placed on the face to measure muscle activity. Whichever sensors are used, the AI learns to transform the measured signals into the sounds of the patient's voice.
The neuroprosthesis samples neural data from the brain's motor cortex, the area that controls speech production, and the AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts the signal at the point where thought is translated into articulation, in the midst of motor control.
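The pipeline described above, sampling motor-cortex activity and decoding it into audio in near real time, can be sketched as a streaming loop. Everything in this sketch is an assumption for illustration: the channel count, window size, sampling rate and the toy linear "decoder" are stand-ins, not the study's actual neural-network architecture.

```python
import numpy as np

# Illustrative streaming decoder: neural samples arrive continuously, and
# every short window is decoded into a chunk of audio. All sizes below are
# assumed for the sketch, not taken from the study.
N_CHANNELS = 253        # assumed electrode-grid size
WINDOW = 20             # neural samples per decoding step (assumed ~80 ms)
AUDIO_PER_STEP = 1280   # audio samples emitted per step (assumed)

rng = np.random.default_rng(0)
# Stand-in for a trained mapping from a neural window to audio samples.
decoder_weights = rng.standard_normal((N_CHANNELS * WINDOW, AUDIO_PER_STEP)) * 0.01

def decode_step(neural_window: np.ndarray) -> np.ndarray:
    """Map one (WINDOW, N_CHANNELS) block of neural activity to audio."""
    return neural_window.reshape(-1) @ decoder_weights  # toy linear "vocoder"

def stream_decode(neural_stream: np.ndarray) -> np.ndarray:
    """Decode a recording window-by-window, as a real-time loop would."""
    chunks = []
    for start in range(0, len(neural_stream) - WINDOW + 1, WINDOW):
        chunks.append(decode_step(neural_stream[start:start + WINDOW]))
    return np.concatenate(chunks)

# Simulated 2 seconds of neural data at an assumed 250 Hz sampling rate.
neural = rng.standard_normal((500, N_CHANNELS))
audio = stream_decode(neural)
print(audio.shape)  # one audio chunk emitted per 20-sample neural window
```

The point of the sketch is the shape of the loop: because each window is decoded as soon as it arrives, speech comes out continuously rather than after the whole sentence is finished, which is what makes the system feel like real-time conversation.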
One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient’s pre-injury voice to fill in the missing details.
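Conceptually, that workaround can be sketched as follows: for each sentence the participant attempts to speak, a pre-trained text-to-speech model (conditioned on recordings of the patient's pre-injury voice) supplies the target audio that the neural decoder learns to reproduce. The `tts_in_patients_voice` function, the linear decoder and the update rule below are all hypothetical placeholders for illustration, not the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

def tts_in_patients_voice(text: str, n_samples: int = 800) -> np.ndarray:
    """Placeholder for a pre-trained TTS model conditioned on the patient's
    pre-injury voice; here it just returns deterministic pseudo-audio."""
    seed = sum(map(ord, text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(n_samples)

def train_decoder(neural_windows, texts, lr=0.5, epochs=50):
    """Fit a toy linear decoder so that decoded audio matches the TTS target.
    Because the patient cannot vocalize, the TTS output stands in for the
    missing ground-truth speech recordings."""
    n_features = neural_windows[0].size
    W = np.zeros((n_features, 800))
    for _ in range(epochs):
        for x, text in zip(neural_windows, texts):
            xf = x.reshape(-1)
            residual = xf @ W - tts_in_patients_voice(text)
            # Normalized least-mean-squares step toward the surrogate target.
            W -= lr * np.outer(xf, residual) / (xf @ xf)
    return W

# Toy data: three attempted sentences and simulated neural recordings.
texts = ["hello", "thank you", "water please"]
neural_windows = [rng.standard_normal((20, 64)) for _ in texts]
W = train_decoder(neural_windows, texts)

# After training, decoded output is far closer to the TTS target than silence.
x, text = neural_windows[0], texts[0]
err = np.mean((x.reshape(-1) @ W - tts_in_patients_voice(text)) ** 2)
baseline = np.mean(tts_in_patients_voice(text) ** 2)
```

The design point the sketch captures is the substitution itself: with no residual vocalization to record, the TTS model's rendering of the intended text, in the patient's own voice, becomes the training signal that "fills in the missing details."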
This technology has the potential to significantly improve the quality of life for people with paralysis and conditions like ALS. It allows them to communicate their needs, express complex thoughts and connect with loved ones more naturally.
“It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future,” UCSF neurosurgeon Edward Chang said.
The next steps include speeding up the AI's processing and making the output voice more expressive; the researchers aim to decode paralinguistic features from brain activity so the synthesized speech can reflect changes in tone, pitch and loudness.
What’s truly amazing about this AI is that it doesn’t just translate brain signals into any kind of speech. It’s aiming for natural speech, using the patient’s own voice. It’s like giving them their voice back, which is a game changer. It gives new hope for effective communication and renewed connections for many individuals.
What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces? Let us know by writing us at Cyberguy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.