While the average hearing aid works well for most people in a one-on-one setting, and some even hold up in a small group, it's often difficult for someone with hearing loss to keep up with a conversation where there is excessive background noise.
Restaurants, clubs, and parties are among the settings that draw the most complaints. Trying to pick out a particular speaker in a sea of other voices, music, or even traffic is an exercise in futility for many. This is the result of a low signal-to-noise ratio. Hearing devices work to correct the loss of sensitivity to certain frequencies by amplifying those specific frequencies.
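To make the signal-to-noise idea concrete, here is a toy sketch in Python (with made-up stand-in data, not anything from a real hearing aid) that computes the SNR in decibels for a quiet voice buried in louder background noise:

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(signal ** 2)  # average power of the voice
    p_noise = np.mean(noise ** 2)    # average power of everything else
    return 10 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)                 # one second at 16 kHz
voice = 0.1 * np.sin(2 * np.pi * 220 * t)    # stand-in for a talker
babble = 0.3 * rng.standard_normal(t.size)   # stand-in for restaurant noise

print(f"SNR: {snr_db(voice, babble):.1f} dB")
```

A negative result, as here, means the noise carries more power than the voice, which is exactly the situation a crowded restaurant creates.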
Many device users are frustrated that they cannot pick out a specific speaker when surrounded by many others, because the device amplifies the sound as a whole, not one individual voice. Some background noise can be suppressed, but when the noise resembles the speaker's voice, the device often fails to differentiate between the two.
For the average hearing person, it's not too difficult to identify the positions of multiple sound sources around them. Unfortunately for those with hearing loss, even if they use hearing aids, it's not that simple. Until now, without prior knowledge of which speaker the listener wants to emphasize, it has been impossible to pick that speaker out specifically.
A scientific breakthrough has shown that the human auditory cortex can pick out the voice of a specific speaker from among others speaking in the room, opening the door to brain-controlled hearing aids. The hope is that these devices can constantly monitor the listener's brainwaves and cross-check them against the sounds around them to identify whom the listener is attempting to hear.
After identifying the chosen speaker, the hearing device then boosts that specific voice so that it's easier for the listener to hear. Research into this process, known as auditory attention decoding (AAD), has grown rapidly; however, scientists still have a long way to go.
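A minimal sketch of the AAD matching step, under heavy simplifying assumptions: suppose an "attention envelope" has already been decoded from the listener's brainwaves, and a loudness envelope is available for each candidate speaker. The attended speaker is then simply the one whose envelope correlates most strongly with the neural signal. The names and toy data below are hypothetical illustrations, not the researchers' actual pipeline:

```python
import numpy as np

def decode_attended_speaker(neural_envelope: np.ndarray,
                            speaker_envelopes: list) -> int:
    """Return the index of the speaker whose loudness envelope best
    matches the envelope reconstructed from the listener's brain waves."""
    scores = [np.corrcoef(neural_envelope, env)[0, 1]
              for env in speaker_envelopes]
    return int(np.argmax(scores))

# Toy data: the "neural" envelope is a noisy copy of speaker 1's envelope.
rng = np.random.default_rng(1)
spk0 = np.abs(rng.standard_normal(500))
spk1 = np.abs(rng.standard_normal(500))
neural = spk1 + 0.5 * rng.standard_normal(500)

print(decode_attended_speaker(neural, [spk0, spk1]))  # -> 1
```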
Major hurdles include how to noninvasively measure the neural signals that give the device this information, and how to decode that information in a way that is both quick and accurate. On top of this, a large stumbling block is that the device never has clean audio of a single speaker to work with; it only ever hears the mixture.
Automatically separating speakers from mixed audio therefore has to be the first challenge tackled. Although there has been some progress toward speaker-independent speech separation, in which the system has no prior knowledge of any particular speaker, the problem has only recently begun receiving attention.
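The Columbia team's separation stage relies on deep neural networks; as a far simpler stand-in, the sketch below uses classic independent component analysis (FastICA from scikit-learn), which can unmix sources when recordings from more than one microphone are available. It illustrates the general separation step only, not the paper's speaker-independent method:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 2000
source_a = np.sin(np.linspace(0, 40, n))           # stand-in for talker A
source_b = np.sign(np.sin(np.linspace(0, 25, n)))  # stand-in for talker B
sources = np.c_[source_a, source_b]

# Two "microphones" each hear a different blend of the two talkers.
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
mics = sources @ mixing.T

# ICA recovers the talkers (up to ordering and scale) from the mixtures.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
separated = ica.fit_transform(mics)   # shape (n, 2): one column per talker
```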
An initial version of this idea was published in 2017, but it had a flaw: it had to be trained to recognize specific speakers ahead of time. This presented a problem whenever a new person, such as a waiter or a nurse, entered the conversation.
As explained in an article on ScienceDaily, these advances are a major step toward improving the quality of sound available to hearing aid wearers and their ability to communicate smoothly and effectively.
“The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,” said Nima Mesgarani, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author. “By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.”
“In crowded places, like parties, hearing aids tend to amplify all speakers at once,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia Engineering. “This severely hinders a wearer’s ability to converse effectively, essentially isolating them from the people around them.”
The hearing technology designed by the team at Columbia is more advanced. Rather than relying only on amplified sound, such as that from an external microphone, it also monitors the brain waves of the person wearing the device. It separates the individual speakers in a group, then compares each voice to the listener’s brain waves. Whichever speaker is the closest match to the listener’s brain waves is then amplified over all the others.
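Combining the steps sketched so far, one hedged illustration of this final stage: once the attended speaker has been chosen (for example, by the correlation test shown earlier), the separated voices can be remixed with a gain boost on the target before playback. The boost and attenuation values below are arbitrary placeholders, not figures from the research:

```python
import numpy as np

def remix(separated_voices: list, attended: int,
          boost: float = 4.0, duck: float = 0.25) -> np.ndarray:
    """Boost the attended speaker, attenuate the rest, then sum the
    voices back into a single output stream for the hearing aid."""
    out = np.zeros_like(separated_voices[0])
    for i, voice in enumerate(separated_voices):
        out += voice * (boost if i == attended else duck)
    return out

# Hypothetical usage with the ICA output from the earlier sketch:
# stream = remix([separated[:, 0], separated[:, 1]], attended=1)
```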
“Previously, we had discovered that when two people talk to each other, the brain waves of the speaker begin to resemble the brain waves of the listener,” said Dr. Mesgarani.
With these advancements in technology, it’s only a matter of time before people with hearing loss can attend parties, dinners, or meetings with multiple speakers and hear exactly whom they came to hear.