Tuning in to a new hearing mechanism

Findings from MIT scientists could lead to hearing aids that mimic the ear’s ability to focus on particular frequencies.


Press Contact

Jen Hirsch
Email: newsoffice@mit.edu
Phone: 617-253-2700
MIT News Office

More than 30 million Americans suffer from hearing loss, and about 6 million wear hearing aids. While those devices can boost the intensity of sounds coming into the ear, they are often ineffective in loud environments such as restaurants, where you need to pick out the voice of your dining companion from background noise.

To do that, you need to be able to distinguish sounds with subtle differences. The human ear is exquisitely adapted for that task, but the underlying mechanism responsible for this selectivity has remained unclear. Now, new findings from MIT researchers reveal an entirely new mechanism by which the human ear sorts sounds, a discovery that could lead to improved, next-generation assistive hearing devices.

“We’ve incorporated into hearing aids everything we know about how sounds are sorted, but they’re still not very effective in problematic environments such as restaurants, or anywhere there are competing speakers,” says Dennis Freeman, MIT professor of electrical engineering, who is leading the research team. “If we knew how the ear sorts sounds, we could build an apparatus that sorts them the same way.”

In a 2007 Proceedings of the National Academy of Sciences paper, Freeman and his associates A.J. Aranyosi and lead author Roozbeh Ghaffari showed that the tiny, gel-like tectorial membrane, located in the inner ear, coordinates with the basilar membrane to fine-tune the ear’s ability to distinguish sounds. Last month, they reported in Nature Communications that a mutation in one of the proteins of the tectorial membrane interferes with that process.

Sound waves

It has been known for more than 50 years that sound waves entering the ear travel along the spiral-shaped, fluid-filled cochlea in the inner ear. Hair cells lining the ribbon-like basilar membrane in the cochlea translate those sound waves into electrical impulses that are sent to the brain. As sound waves travel along the basilar membrane, they “break” at different points, much as ocean waves break on the beach. The break location helps the ear to sort sounds of different frequencies.
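A rough way to picture this place-based sorting in code is as a bank of bandpass filters, each standing in for a region of the basilar membrane; this is a common textbook analogy, not the MIT team's model. In the minimal Python sketch below, the function name cochlear_filterbank, the channel count, and the band edges are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def cochlear_filterbank(signal, fs, n_channels=8, f_lo=100.0, f_hi=6000.0):
    """Split a sound into log-spaced frequency bands, one per 'place'.

    Each band stands in for a region of the basilar membrane where waves
    of that frequency 'break'; the band with the most energy plays the
    role of the break location that sorts sounds by frequency.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfilt(sos, signal))
    return np.array(bands)

fs = 16_000
t = np.arange(fs // 4) / fs
tone = np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone
bands = cochlear_filterbank(tone, fs)
print(np.argmax(np.sum(bands**2, axis=1)))     # index of the responding band
```

For the 1 kHz test tone, nearly all the output energy lands in the one channel whose band contains 1 kHz, just as a pure tone excites hair cells mainly near one spot on the basilar membrane.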

Until recently, the role of the tectorial membrane in this process was not well understood.

In their 2007 paper, Freeman and Ghaffari showed that the tectorial membrane carries waves that move from side to side, while up-and-down waves travel along the basilar membrane. Together, the two membranes can work to activate enough hair cells so that individual sounds are detected, but not so many that sounds can’t be distinguished from each other.

Made of a special gel-like material not found elsewhere in the body, the entire tectorial membrane could fit inside a one-inch segment of human hair. The tectorial membrane consists of three specialized proteins, making those proteins ideal targets for genetic studies of hearing.

One of those proteins is called beta-tectorin (encoded by the TectB gene), which was the focus of Ghaffari, Aranyosi and Freeman’s recent Nature Communications paper. Collaborating with biologist Guy Richardson of the University of Sussex, the researchers found that in mice lacking the TectB gene, sound waves did not travel as fast or as far along the tectorial membrane as they do in normal tectorial membranes. When the tectorial membrane is not functioning properly in these mice, sounds stimulate a smaller number of hair cells, making the ear less sensitive and overly selective.

Until the recent MIT studies on the tectorial membrane, researchers trying to come up with a model to explain the membrane’s role didn’t have a good way to test their theories, says Karl Grosh, professor of mechanical and biomedical engineering at the University of Michigan. “This is a very nice piece of work that starts to bring together the modeling and experimental results in a way that is very satisfying,” he says.

Mammalian hearing systems are extremely similar across species, which leads the MIT researchers to believe that their findings in mice are applicable to human hearing as well.

New designs

Most hearing aids consist of a microphone that picks up sound waves from the environment, and an amplifier and loudspeaker that boost those sounds and send them into the middle and inner ear. Over the decades, refinements have been made to the basic design, but no one has been able to overcome a fundamental problem: instead of selectively amplifying one person’s voice, the devices amplify all sounds, including background noise.

Freeman believes that a new model incorporating the interactions between traveling waves on the tectorial and basilar membranes could improve our understanding of hearing mechanisms and lead to hearing aids with enhanced signal processing. Such a device could tune in to a specific range of frequencies, such as those of the voice you want to listen to, and amplify only those sounds.
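To make the idea concrete, here is a minimal, hypothetical sketch of frequency-selective amplification in Python: a single fixed bandpass filter isolates a target band, and only that band is boosted. The function name selective_amplify, the band edges, and the gain are assumptions for illustration; an actual device based on this research would adapt the band to the talker and do far more.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def selective_amplify(signal, fs, lo_hz, hi_hz, gain_db=12.0):
    """Boost only the lo_hz..hi_hz band, leaving other frequencies unchanged.

    A crude stand-in for the selective amplification described above;
    a real hearing aid would track and adapt to the voice of interest.
    """
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, signal)             # isolate the target band
    gain = 10.0 ** (gain_db / 20.0)         # convert dB to a linear factor
    return signal + (gain - 1.0) * band     # add extra energy only in-band

# Example: emphasize 100-300 Hz (near a typical male speaking fundamental)
fs = 16_000
t = np.arange(fs) / fs                      # one second of audio
voice = np.sin(2 * np.pi * 200 * t)         # stand-in for the target voice
noise = 0.5 * np.sin(2 * np.pi * 3000 * t)  # stand-in for background noise
out = selective_amplify(voice + noise, fs, 100.0, 300.0)
```

In this toy mix, the 200 Hz "voice" comes out of the filter boosted while the 3 kHz "noise" passes through at its original level, the opposite of a conventional aid that amplifies both.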

Freeman, who has hearing loss from working in a noisy factory as a teenager and from side effects of a medicine he was given for rheumatic fever, worked on hearing-aid designs 25 years ago. However, he was discouraged by the fact that most new ideas for hearing-aid design did not offer significant improvements. He decided to conduct basic research in this area, hoping that a better understanding of the ear would naturally lead to new approaches to hearing-aid design.

“We’re really trying to figure out the algorithm by which sounds are sorted, because if we could figure that out, we could put it into a machine,” says Freeman, who is a member of MIT’s Research Laboratory of Electronics and the Harvard-MIT Division of Health Sciences and Technology. His group’s recent tectorial membrane research was funded by the National Institutes of Health.

Next, the researchers are continuing their studies of tectorial membrane protein mutations to see whether the membrane’s traveling waves play similar roles in other genetic disorders of hearing.


Topics: Research Laboratory of Electronics, Biology, Electrical engineering and electronics, Health sciences and technology, Hearing, National Institutes of Health (NIH)

Comments

As a person with hearing loss, I am delighted to see more research regarding hearing aids. Perhaps more emphasis should be placed on directional microphone technology, which contributes to positive experiences with hearing aids. It was developed in response to dissatisfaction with hearing aids’ poor performance at understanding speech in noise. Directional microphones have been around since the 1970s, but many hearing health providers fail to explain this feature, and many hearing aids lack it. If you ever need someone to try out this new research, please let me know.
Having dealt with hearing loss for over 30 years, with surgery and ineffective aids, all to no avail, I am delighted to hear about your research. I would be happy to be a test subject if you need one.
Beyond the mechanics of sound reception and transmission to the brain, there is the main purpose of this event, which is to inform the hearer. For example, if this article on hearing research were a TV news item on the 6:00 PM newscast, I would be very interested and focused on every word. If my wife were then to approach me with news of the mail she had picked up that day, or some fact I was totally unfamiliar with, like her co-worker’s actions that day, I would not understand what she had to say. I could tell there was another source of audio in my presence, but if you pressed me to identify what was said, I could not. Part of this is the volume level. If my wife had matched or exceeded the volume of my preferred audio source of the moment, I would have understood more, if not all, of what she said, because the stronger signal would have captured my attention, forcing the other source lower in priority. In a non-committal hearing situation, the loudest audio source, or the one you most want to concentrate on, or the one you can understand (in cases of mixed languages), wins out. It is easy to pick out the language you understand from a group of people speaking, because the other languages fall into the category of noise: they convey no meaning from your position of experience. There is also the Charlie Brown phenomenon, where, if you recall, the teacher in the seasonal TV cartoon feature is heard as a muted, monotone trumpet. I thought this was genius! To a child with a limited understanding of the language, the complex nuances of meaning turn words into mere sounds. Very young children at parties fall asleep partly out of boredom when people are mainly talking, because much of what is said is unintelligible and sounds like muted trumpet monotones.
I also have severe hearing loss, primarily, I think, from long-term exposure to factory work when young and to aircraft engines. While this article describes research that is very valuable, I do not see the very important aspect of background noise reduction addressed. The topic is mentioned in the article, but specific sound signal processing would not seem to solve that issue. I am not an expert in that field, so hopefully they know a lot that I do not. Background noise was addressed by responder GaryV as a situation where the loudest sounds get the attention. In a restaurant, that would become self-defeating as everyone starts to yell. I think the mind has some built-in directionality under normal hearing conditions, but with aids, that function seems to be overridden by the aid’s amplification. How would the signal processing help with that?