Hearing aid development has moved forward by leaps and bounds in the past 100 years. The bulky handheld ear trumpets from the beginning of the 1900s were replaced by eyeglass hearing aids in the middle of the century. Shortly after that came the hearing aid worn behind the ear, which is still in use, albeit in a somewhat smaller version.
Technically speaking, hearing aids cannot be made much smaller than the tiny in-the-ear models currently on the market. DTU researchers are therefore focusing on a completely different area, namely gaining greater insight into the brain's perception of sound.
"The dream is to develop a hearing aid that can reproduce sound in the same way that people with normal hearing perceive sounds like speech, music, etc. However, this requires that we first gain greater insight into the audiological area and understand how the brain perceives sound," explains Professor Torsten Dau, one of the researchers at DTU Electrical Engineering working on hearing aid development.
Torsten Dau is primarily concerned with research into the audiological perception of sound and objective measurements of audiological function. He also draws on knowledge from other fields of research, among them speech understanding, linguistics, and acoustics, to gain insight into how a hearing aid needs to process sound in order to reproduce the complex and multi-faceted soundscape that a person with normal hearing experiences.
Decoding the listener’s brain
A core problem in hearing technology is getting the hearing aid to focus on the sound the wearer wants to hear. Hearing-impaired people often experience difficulties when many people talk at once, a phenomenon known as the ‘cocktail party problem’.
“We’ve all tried sitting in the canteen and focusing on hearing what was being said at the neighbouring table while trying to filter out the sound from those sitting next to us. However, a hearing aid doesn’t know which sounds the user wants to focus on—and which he wants to filter out,” explains Torsten Dau.
“My vision is therefore to develop a hearing aid that can decode the information from the listener’s brain—e.g. that he or she is interested in hearing what’s being said at a neighbouring table—and then boost the relevant sounds. This requires that we, as engineers, gain extensive knowledge about the nature of the processes in the brain so that we can build models and replicate this in the hearing aid.”
According to Torsten Dau, engineers have an important role to play at the intersection of artificial intelligence and neuroscience. Engineers are not traditional brain researchers, but they incorporate knowledge of the brain in their work on hearing aids.
“This knowledge makes it possible for us to build and simulate advanced mathematical models of how the brain perceives sound. Armed with this insight, it is possible to understand how a hearing aid needs to process sound in order to reproduce it in the same way as a person with normal hearing.”
Linking to the Internet of Things
Concurrently with the development of better hearing aids, we are also seeing hearing aids adapt to their surroundings, explains Finn Möhring, Vice President, R&D at Oticon—one of Denmark’s and the world’s leading hearing aid manufacturers.
“We are already familiar with Siri from our phones. Similar personal assistants could be incorporated into hearing aids, helping the user, for example, to find directions to unknown places and look up phone numbers on the internet. We will also see hearing aids that know when a person needs to make an adjustment in order to be able to follow a lecture or watch TV,” says Finn Möhring.
Both Finn Möhring and Torsten Dau expect to see a fusion of the hearing aid and the mobile phone headset. And classic hearing aid problems such as ringing caused by background interference and wind noise will be eliminated through intensive research.