In the new study, the Stanford team wanted to find out whether neurons in the motor cortex also contain useful information about speech movements. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords in an attempt to speak?

These are small, subtle movements, and one big discovery, according to Sabes, is that just a few neurons contain enough information for a computer program to predict with good accuracy what words the patient was trying to say. Shenoy’s team relayed that information to a computer screen, where the patient’s words appeared as they were spoken aloud by the computer.

The new result builds on previous work by Edward Chang of the University of California, San Francisco, who wrote that speech involves the most complex movements that humans make. We squeeze out air, add vibrations that make it audible, and shape it into words with our mouths, lips, and tongue. To pronounce the sound “f”, you need to put your upper teeth on your lower lip and blow out air – this is one of dozens of mouth movements necessary for conversation.

The way forward

Chang previously used electrodes placed on top of the brain to allow a volunteer to talk through a computer, but in a preprint, the Stanford researchers say their system is more accurate and three to four times faster.

“Our results point to a possible way forward for restoring communication to people with paralysis at conversational speed,” wrote the researchers, who include Shenoy and neurosurgeon Jaimie Henderson.

David Moses, who works with Chang’s team at UCSF, says the current work achieves “impressive new performance measures.” Still, even as records continue to be broken, he says, “it will become increasingly important to demonstrate consistent and reliable performance over many years.” Any commercial brain implant may have a difficult time getting past regulators, especially if it degrades over time or if recording accuracy drops.

The way forward will likely involve both more advanced implants and closer integration with artificial intelligence.

The current system already uses several types of machine learning programs. To improve its accuracy, the Stanford team used software that predicts which word usually comes next in a sentence. “I” is more often followed by “am” than “ham,” even though the words sound similar and can trigger similar patterns in someone’s brain.
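To make the idea concrete, here is a minimal sketch, not the Stanford team’s actual code, of how a decoder’s word guesses can be reweighted by a simple next-word language model; all names and numbers below are hypothetical and purely illustrative.

```python
# Hypothetical example: combine a neural decoder's word scores with a toy
# bigram language model so that likely continuations win out over
# similar-sounding but implausible ones.

# Suppose the decoder finds "am" and "ham" nearly indistinguishable
# from the neural activity recorded after the word "I".
decoder_scores = {"am": 0.48, "ham": 0.52}

# Toy bigram probabilities: how often each word follows "I" in ordinary text.
bigram_prob = {("I", "am"): 0.20, ("I", "ham"): 0.0001}

def rescore(prev_word, decoder_scores, bigram_prob):
    """Weight each candidate word by how plausible it is after prev_word."""
    combined = {
        word: score * bigram_prob.get((prev_word, word), 1e-6)
        for word, score in decoder_scores.items()
    }
    return max(combined, key=combined.get)

print(rescore("I", decoder_scores, bigram_prob))  # prints "am"
```

In this sketch the language model overrides the decoder’s slight preference for “ham,” which is the same kind of correction the researchers describe using to boost accuracy.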
