Researchers Came Up With a Brain Interface That Can Sing What a Bird’s Thinking
Does the title of the blog amuse you? Well, it amused me for sure!
Recently, Timothy Gentner and his students at the University of California San Diego developed a machine learning system that predicts what a bird is about to sing while it is singing.
In a report published on the preprint server bioRxiv, the scientists announced,
“We decode realistic synthetic birdsong directly from neural activity.”
The team, which also includes Argentinian birdsong expert Ezequiel Arneodo, describes the system as the prototype of
“a decoder of complex, natural communication signals from neural activity.”
The researchers say that the same approach could fuel progress towards a human thought-to-text interface.
Why Did They Use Birdsong to Understand Human Speech?
Are you wondering why scientists are using birdsong to understand human speech? As psychology professor Timothy Gentner of the University of California, San Diego (UCSD) explained in a press release, bird brains are quite similar to human ones. The research could also help people with conditions that impair their ability to communicate.
The study authors Timothy Gentner, Daril Brown II, Shukai Chen, Vikash Gilja, and Ezequiel Arneodo said,
“Our approach also provides a proving ground for vocal prosthetic strategies,”
Further, they said,
“While birdsong differs in important ways from human speech, the two vocal systems have many similarities, including features of the sequential organization and strategies for their acquisition, analogies in neuronal organization and function, genetic bases, and physical mechanisms of sound production. The experimental accessibility, relatively advanced understanding of the neural and peripheral systems, and status as a well-developed model for vocal production and learning make songbirds an attractive animal model to advance speech BMI, much like the nonhuman primate model for motor BMI.”
Biomechanical Models and AI Machine Learning
Rather than mapping neural activity directly onto the songs themselves, the researchers reduced the dimensionality of the problem: they trained machine learning algorithms to map brain activity onto the parameters of mathematical equations that model the dynamic physical changes occurring in a finch’s syrinx (vocal organ) as it sings.
The researchers reported,
“We employ a biomechanical model of the vocal organ that captures much of the spectro-temporal complexity of song in a low-dimensional parameter space.”
“This dimensionality reduction, compared to the full time-frequency representation of song, enables training of a shallow feedforward neural network (FFNN) that maps neural activity onto the model parameters.”
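The decoding step the quotes describe can be sketched roughly as follows. This is a minimal illustration, not the authors’ code: the layer sizes, parameter names, and weights are all hypothetical, and a real decoder would be trained on recorded neural activity paired with song.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's actual sizes):
N_NEURAL = 1024   # binned neural-activity features per time step
N_HIDDEN = 64     # a single hidden layer -> a "shallow" FFNN
N_PARAMS = 2      # low-dimensional syrinx-model parameters,
                  # e.g. air-sac pressure and labial tension

# Randomly initialized weights stand in for trained ones.
W1 = rng.normal(0, 0.01, (N_NEURAL, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.01, (N_HIDDEN, N_PARAMS))
b2 = np.zeros(N_PARAMS)

def decode_step(neural_features: np.ndarray) -> np.ndarray:
    """Map one time step of neural activity to biomechanical
    model parameters via a shallow feedforward network."""
    hidden = np.tanh(neural_features @ W1 + b1)
    return hidden @ W2 + b2

# One simulated time step of neural activity:
x = rng.normal(size=N_NEURAL)
params = decode_step(x)
print(params.shape)  # (2,)
```

The point of the design is visible in the shapes: predicting two model parameters per time step is a far easier learning problem than predicting a full time-frequency slice of the song, which is why a shallow network suffices. The biomechanical model then turns those parameters back into sound.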
The team then used AI to generate synthetic vocalizations that sound like a real finch’s song.
The scientists stated that the research has “yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill” and that the biomechanics of song production in birds “bear similarity to those of humans and some nonhuman primates.”
This new study demonstrates that complex, high-dimensional behavior can be synthesized directly from ongoing neural activity using brain-computer interfaces and artificial intelligence.
The researchers reported,
“We have demonstrated a BMI for a complex communication signal, using computation blocks that are implementable in real-time in an established animal model for production and learning of complex vocal behavior.”
How is the Research Going to Help?
One day, a brain-machine interface used as a vocal prosthesis may help people who cannot communicate due to brain conditions. Aphasia is a disorder that impairs the expression and comprehension of language, resulting from brain injury due to stroke, aneurysm, head trauma, brain tumor, neurological conditions such as Alzheimer’s disease and dementia, and other causes. According to the National Aphasia Association (NAA), more than two million Americans and 250,000 people in Great Britain are living with aphasia.
According to U.S.-based Grand View Research, the brain-computer interface market is projected to reach USD 3.7 billion by 2027, growing at a compound annual growth rate (CAGR) of 15.5 percent over 2020-2027.
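As a back-of-the-envelope check on those figures (an illustration, not a number from the report), a 15.5 percent CAGR over the seven years from 2020 to 2027 implies a 2020 market size of roughly USD 1.35 billion:

```python
# Sanity-check the projection: value_2027 = value_2020 * (1 + cagr) ** years
value_2027 = 3.7          # USD billion, projected
cagr = 0.155              # 15.5 percent compound annual growth rate
years = 2027 - 2020       # 7 compounding periods

implied_2020 = value_2027 / (1 + cagr) ** years
print(round(implied_2020, 2))  # 1.35 (USD billion)
```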
Takeaway Thoughts
Sure, it sounds like science fiction, but the researchers have made it possible.
Earlier research has shown that if you delay the sound of a person’s own voice, playing it back just a fraction of a second later, the speaker begins to stutter. The same happens with birds when they sing.
The research discussed above builds on this same principle.
We hope you got the answers you were looking for.
Stay tuned to Technology & Trends for more such updates!