Updated April 25, 2019, 19:15
A research breakthrough gives new hope to people with speech impairments: scientists have used artificial intelligence to translate brain signals into complete, comprehensible sentences. One day, this technology may give a voice back to people who are paralyzed or otherwise unable to speak.
Language is essential to our everyday communication – all the more devastating when people lose the ability to speak through illness. This happens frequently after strokes, but also in neurodegenerative diseases such as Alzheimer's or Parkinson's, in which nerve cells in the brain are progressively lost. Accidents involving brain damage can likewise impair or completely destroy control of speech.
Such patients then depend on technical communication aids. But the systems available so far, such as computer-assisted spelling devices, work only to a limited extent – and very slowly.
For some time, neuroscientists have been trying to develop a direct interface between imagined words and spoken language. A recent study has now achieved a first major success: for the first time, researchers at the University of California, San Francisco used a new brain-computer interface to translate test subjects' brain signals into audible spoken sentences.
From brain signals to acoustic sentences
Neurosurgeon Edward Chang and his team had five epilepsy patients speak 100 sentences while recording their brain signals with an electrode array placed on the surface of the brain. The patients did not suffer from speech loss; the electrodes had been implanted for a different reason – to monitor their epileptic seizures.
In preparation, the participants had already read short sentences and stories aloud while the electrodes recorded the resulting brain activity. The researchers fed this data into a neural network – an artificial intelligence modeled on the human brain. Through this training, the network learned entirely on its own to assign brain signals to specific speech sounds.
The brain signals recorded while the five epilepsy patients spoke the 100 sentences were then analyzed by the AI using a computer model of the human vocal tract. A second algorithm finally translated the computer's simulated "speech movements" into audible speech. Particularly impressive: the sentences generated this way were largely intelligible – on average, listeners could recognize 50 to 70 percent of the words.
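The pipeline described here has two stages: brain signals are first mapped to estimated vocal-tract movements, which are then mapped to acoustic output. The sketch below illustrates that two-stage structure only; the real study used deep recurrent networks, while the linear maps, channel counts, and feature dimensions here are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes below are illustrative assumptions, not the study's real values.
N_CHANNELS = 256   # electrodes on the brain surface
N_ARTIC = 33       # vocal-tract movement features
N_AUDIO = 80       # acoustic (spectral) features per time frame

# Stage 1: brain activity -> vocal-tract kinematics (stand-in for an RNN)
W_brain_to_artic = rng.standard_normal((N_CHANNELS, N_ARTIC)) * 0.01

# Stage 2: kinematics -> acoustic features (stand-in for a second RNN)
W_artic_to_audio = rng.standard_normal((N_ARTIC, N_AUDIO)) * 0.01

def decode(ecog_frames: np.ndarray) -> np.ndarray:
    """Translate a (time, channels) block of brain recordings into
    (time, audio-features) frames via the intermediate articulatory stage."""
    artic = np.tanh(ecog_frames @ W_brain_to_artic)  # stage 1: "speech movements"
    audio = artic @ W_artic_to_audio                 # stage 2: acoustic frames
    return audio

# Fake recording: 100 time frames of simulated electrode data
ecog = rng.standard_normal((100, N_CHANNELS))
audio = decode(ecog)
print(audio.shape)  # (100, 80)
```

The key design point is the intermediate articulatory stage: decoding movements first and sounds second is what distinguishes this approach from mapping brain signals directly to audio.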
Hope for people who cannot speak?
But the new technology is not a mind reader: it is not yet clear how well it would work for patients who cannot move their mouths at all. When test subjects merely mouthed the sentences silently, the results were considerably worse.
The reason: the technology mainly translates the fine motor signals that the brain sends to the speech organs during speaking. However, better results could be expected if the electrodes were not merely placed on the surface of the brain but implanted directly into the brain tissue, Andrew Schwartz, a scientist at the University of Pittsburgh, told Technology Review.
Study leader Edward Chang is also confident: as a next step, he and his team want to optimize the method to read out the neuronal signals of merely imagined words and reproduce them acoustically. If they succeed, the technology may one day give anyone who has lost the ability to speak a new way to communicate quickly and clearly with the world around them. (KAD)
- Nature.com: Speech synthesis from neural decoding of spoken sentences
- Technology Review: Scientists have found a way to decode brain signals into speech
- Wissenschaft.de: Brain signals become spoken language