
Scientists develop a brain implant that translates brain signals into speech

By Yasin Polat


In a notable advance, scientists have packed a multitude of miniature sensors into a device roughly the size of a postage stamp that can interpret the electrical signals associated with speech muscle movements. This ‘speech prosthetic’ is designed to predict the sounds a person is trying to articulate, pointing toward a future in which people who have lost the ability to speak because of neurological conditions could communicate through brain signals alone.

This is not mind-reading: the sensors pick up the signals that drive movements of the lips, tongue, jaw, and larynx. Neuroscientist Gregory Cogan of Duke University emphasizes the potential impact for people with motor disorders such as ALS or locked-in syndrome, for whom current communication tools are often slow and cumbersome.

Existing technology decodes speech at only about half the rate of natural speaking, whereas the new approach, which packs a far larger number of electrodes onto a compact platform, promises to narrow that gap. Although much work remains before the speech prosthetic becomes widely available, the researchers are optimistic about its potential.


New technology could restore speech using brain activity

The electrode array, built on ultra-thin, flexible plastic, features closely spaced electrodes that can distinguish signals from neurons lying very close to one another. In preliminary tests, the device was temporarily implanted in four patients without speech impairments during surgeries for movement disorders or tumor removal.

Brain activity recorded from the speech motor cortex while the patients repeated various non-words revealed distinct firing patterns corresponding to different phonemes. The researchers also observed dynamic adjustments in these speech patterns, akin to musicians blending their notes in an orchestra.
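To make that kind of analysis concrete, here is a minimal sketch of how a multichannel cortical recording might be cut into short windows aligned to phoneme onsets, producing the phoneme-labeled examples a decoder is trained on. The sampling rate, channel count, window length, and simulated data below are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch (not the authors' code): aligning a multichannel
# cortical recording with phoneme labels from a spoken non-word task.
# Sampling rate, channel count, and window length are hypothetical.
import numpy as np

FS = 1000            # assumed sampling rate in Hz
N_CHANNELS = 128     # assumed number of electrodes
WINDOW_MS = 200      # analysis window centered on each phoneme onset

def extract_phoneme_windows(recording, onsets_s, labels):
    """Cut a fixed window of neural activity around each phoneme onset.

    recording : (n_channels, n_samples) array of cortical signals
    onsets_s  : phoneme onset times in seconds
    labels    : phoneme label for each onset (e.g. 'p', 'a', 'k')
    """
    half = int(WINDOW_MS / 1000 * FS) // 2
    windows, kept_labels = [], []
    for onset, label in zip(onsets_s, labels):
        center = int(onset * FS)
        if center - half < 0 or center + half > recording.shape[1]:
            continue                      # skip windows that run off the edges
        windows.append(recording[:, center - half:center + half])
        kept_labels.append(label)
    return np.stack(windows), kept_labels

# Toy usage with simulated data standing in for a ~90-second recording
rng = np.random.default_rng(0)
fake_recording = rng.standard_normal((N_CHANNELS, 90 * FS))
fake_onsets = np.arange(0.5, 89.5, 0.5)                 # fake phoneme onsets
fake_labels = rng.choice(list("pbtdkg"), size=len(fake_onsets))
X, y = extract_phoneme_windows(fake_recording, fake_onsets, fake_labels)
print(X.shape)   # (n_phonemes, n_channels, window_samples)
```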

Using a machine learning algorithm, the team then tested how well the recorded data could predict upcoming speech. Some sounds, particularly those at the start of a non-word, were predicted with 84 percent accuracy. Accuracy was lower in more complex cases, but the decoder still averaged 40 percent overall from just a 90-second data sample per participant, a notable improvement over existing technologies that typically need hours of data to decode speech.
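As a rough illustration of that evaluation step, the sketch below trains a simple phoneme classifier on simulated feature windows and estimates accuracy with cross-validation. It is not the study's decoder; the classifier choice, feature dimensions, and data are hypothetical stand-ins, and on random data the score sits near chance rather than the reported 40 percent.

```python
# Illustrative sketch (not the study's decoder): cross-validated accuracy of
# a simple phoneme classifier on simulated neural-feature windows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 180, 128, 200   # hypothetical dimensions
phonemes = list("pbtdkg")

# Simulated windows of cortical activity, one per spoken phoneme
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.choice(phonemes, size=n_trials)

# Reduce each window to one amplitude feature per channel
X_feat = np.abs(X).mean(axis=2)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated accuracy; with random data this hovers near
# chance (about 1 in 6), whereas the reported decoder averaged ~40 percent
# on real recordings.
scores = cross_val_score(decoder, X_feat, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```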

The project has received substantial funding from the National Institutes of Health, enabling further research and refinement of the technology. The team envisions developing wireless recording devices, allowing users greater mobility without the need for a constant electrical connection.

Reference: Duraivel S, Rahimpour S, Chiang C-H, Trumpis M, Wang C, Barth K, Harward SC, Lad SP, Friedman AH, Southwell DG, Sinha SR, Viventi J, Cogan GB. High-resolution neural recordings improve the accuracy of speech decoding. Nature Communications. 2023. https://www.nature.com/articles/s41467-023-42555-1

Author and editor

  • Yasin Polat

    Hi, I’m Yasin Polat, founder of UNILAB and manager of LifeWare, Postozen, MyUNILAB, Legend Science, Dark Science, and a number of other UNILAB projects. On this journey, which began with the Legend Science and Dark Science projects, I enjoy stretching myself by diving into new areas of knowledge every day, even where I lack experience. I am currently studying Bioengineering at Istanbul Medeniyet University.
