• Graduate student Tyler Perrachione, left, and Professor John Gabrieli found that people with dyslexia have a much harder time identifying voices of different speakers than non-dyslexics.

    Photo: Patrick Gillooly


Recognizing voices depends on language ability

Study finds that for people with dyslexia, it’s much harder to identify who is speaking.


Distinguishing between other people's voices may seem like a trivial task. However, if those people are speaking a language you don't understand, it becomes much harder. That's because you rely on individuals' differences in pronunciation to help identify them. If you don't understand the words they are saying, you don't pick up on those differences.

That ability to process the relationship between sounds and their meanings, also known as phonology, is believed to be impaired in people with dyslexia. Therefore, neuroscientists at MIT theorized that people with dyslexia would find it much more difficult to identify speakers of their native language than non-dyslexic people.

In a study appearing in Science on July 29, the researchers found just that. People with dyslexia had a much harder time recognizing voices than non-dyslexics. In fact, they fared just as poorly as they (and non-dyslexics) did when listening to speakers of a foreign language.

The finding bolsters the theory that impaired phonology processing is a critical aspect of dyslexia, and sheds light on how human voice recognition differs from that of other animals, says John Gabrieli, MIT's Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience and senior author of the Science paper.

"Recognizing one person from another, in humans, seems to be very dependent on human language capability," says Gabrieli, who is part of MIT's Department of Brain and Cognitive Sciences and also a principal investigator at the McGovern Institute for Brain Research.

Verbal cues

The lead author of the study, MIT graduate student Tyler Perrachione, earned his undergraduate and master's degrees at Northwestern University, where he was involved in studies showing that it is easier to recognize voices of people speaking your own language.

"Everybody's speech is a little bit different, and that's a big cue to who you are," he says. "When you're listening to somebody talk, it's not just properties of their vocal cords or how sound resonates in their oral cavity that distinguishes them, but also the way they pronounce the words."

After Perrachione arrived at MIT, he and Gabrieli decided to try to link this research with evidence showing that phonological processing is impaired in people with dyslexia. They tested how well subjects could identify speakers of their native language (English) and of an unfamiliar one (Chinese).

When listening to English, the non-dyslexic subjects were correct nearly 70 percent of the time, but performed at only 50 percent when trying to distinguish Chinese speakers. Dyslexic individuals performed at 50 percent for both English and Chinese speakers.

"It's a beautiful study, in the sense that it's so simple," says Shirley Fecteau, a visiting assistant professor at Harvard Medical School and research chair in cognitive neuroplasticity at Laval University in Quebec. "It really seems like a very clear effect on voice recognition in people with dyslexia."

The finding suggests that people with dyslexia may have even more trouble following a speaker than they realize, Gabrieli says. This adds to the growing evidence that dyslexia is not simply a visual disorder.

"There was a big shift in the 1980s from understanding dyslexia as a visual problem to understanding it as a language problem," Gabrieli says. "Dyslexia may not be one thing. It may be a variety of ways in which you end up struggling to learn to read. But the single best understood one is a weakness in the processing of language sounds."

Friend versus foe

Recognizing other members of one's species by their voices is critical for humans and other social animals. "You want to know who is a friend and who is a foe, you want to know who your partner is," Perrachione says. "If you're cooperating with someone for food, you want to know who that person is."

However, it appears that humans and animals perform that task in different ways. Animals can identify other members of their own species by the sounds they make, but that ability is innate and based on the sounds themselves, rather than the meaning of those sounds.

"We notice individual differences in this learned feature of our communication, which is the words that we use, and that's what really distinguishes human communication from animal communication," Perrachione says.

The researchers believe their work may also offer insight into the performance of computerized voice-recognition systems. Voice-recognition programs with access to dictionary meanings of words might do a better job of understanding different speakers than systems that only identify sounds, Perrachione says.
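One way to picture that idea is a toy simulation (purely illustrative; not the study's method or any real system's design). Below, two simulated speakers have nearly identical "voices" but distinctive pronunciation habits for a couple of words. A nearest-profile classifier that uses only acoustics is close to chance, while one that also knows which word was said, and so can compare pronunciations, does far better. All speakers, words, and numbers here are invented.

```python
import random

random.seed(0)

# Each simulated speaker has a baseline "voice" quality (think pitch or
# resonance) plus idiosyncratic pronunciation offsets for common words.
# The voices are nearly identical; the pronunciations are not.
SPEAKERS = {
    "A": {"voice": 1.0, "pron": {"water": 0.8, "often": -0.5}},
    "B": {"voice": 1.1, "pron": {"water": -0.6, "often": 0.7}},
}

def utterance(speaker, word):
    """Simulate one noisy utterance of `word` by `speaker`."""
    s = SPEAKERS[speaker]
    return {
        "word": word,
        "acoustic": s["voice"] + random.gauss(0, 0.3),
        "pron": s["pron"][word] + random.gauss(0, 0.3),
    }

def classify(sample, use_pronunciation):
    """Guess the speaker whose stored profile is closest to the sample."""
    def dist(name):
        s = SPEAKERS[name]
        d = abs(sample["acoustic"] - s["voice"])
        if use_pronunciation:
            # Only possible if the system understands which word was said.
            d += abs(sample["pron"] - s["pron"][sample["word"]])
        return d
    return min(SPEAKERS, key=dist)

def accuracy(use_pronunciation, trials=2000):
    correct = 0
    for _ in range(trials):
        speaker = random.choice(list(SPEAKERS))
        word = random.choice(["water", "often"])
        if classify(utterance(speaker, word), use_pronunciation) == speaker:
            correct += 1
    return correct / trials

print("acoustics only:", accuracy(False))
print("acoustics + pronunciation:", accuracy(True))
```

With these made-up numbers, the acoustics-only classifier hovers near chance because the two voices overlap, while adding word-level pronunciation cues pushes accuracy close to perfect, loosely mirroring the advantage listeners gain when they understand the language being spoken.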

The researchers are now using functional magnetic resonance imaging (fMRI) to determine which parts of the brain are most active in dyslexics and non-dyslexics as they try to identify voices.



Comments

My wife discovered that I was dyslexic when I was 21 and a junior in college. Like other people with handicaps, I learned many compensatory skills by necessity. I did, and do, have a hard time with word pronunciation. In 27 years of marriage, my wife has worked to minimize this. To compensate, I rely on sentence structure and phrases to distinguish people and words. I can usually tell who sent me an email by the end of the first sentence. I also notice confidence (or insecurity) levels in a person's first few words. Much research has been done on dyslexic children. I encourage the researchers to investigate cognitive differences in dyslexic vs. non-dyslexic people over 40. As the blind develop an acute sense of hearing, the dyslexic develops attention to other elements to interpret the world around them. Andrew
Thanks for the really good article. As a dyslexic, I also often find that I struggle to estimate the age of the people I am talking to on the phone. I am not really good at estimating age in my mother tongue, but as soon as I have to listen to another language, I am absolutely lost telling the age of the other person. Whether the person is a teenager or a pensioner, I cannot distinguish. I would be curious to know whether that is also related to my being dyslexic.
What about bilingual dyslexics? Will any studies be conducted with bilingual speakers?
I have a multilingual friend, English (1st), Japanese (2nd), on whom I tested the McGurk effect. He was not affected by it. However, this friend also happens to be a communications major, so using his ears is a highly trained skill. Just wondering how much better multilingual people are at voice recognition than monolingual people? Also, given brain plasticity, could a young person identified as dyslexic begin auditory training exercises at an early age to rewire their brain?
@evolutionschildren in particular: as a speech-language pathologist, it's my raison d'etre to believe that good auditory training can and will affect the course and severity of dyslexia and language disorders in a broader sense. It's a great field of interest in research and applied sciences, i.e. therapies, diagnostics, etc. I'm always on the lookout for ways in which to improve communication skills and the prognosis for disabilities once they've been identified, and as yet I haven't found anything close to the ideal treatments for phonological disorders. Brain (or neuro-)plasticity is exactly the right target (or do I mean mechanism?) via which we hope to change the course of these pathologies, but as a clinician how do I best change the brain to eliminate or mitigate the problem? Any help or info on this from anyone? Seen anything going from research to practice? I don't want to be always the student, never the master on issues like this:)
There may not be any definitive masters among those who look for an ideal answer in the interrelated disorders that impact speech, voice recognition and dyslexia. There is help and info that may take us all a step closer to some mastery in preventing/overcoming dyslexia. A training exercise is being made available online to parents, parent-teacher organizations and schools. It has been used with great success in leading reading remediation/learning skill development clinics. The primary exercise set, Sound Analysis (SA), has 41 levels and can be completed in ten minutes each school day for four to six weeks. It manipulates 17 core sounds to build innate auditory processing skills. Also coming on the scene is a normed evaluation, the PASS (Phonemic Awareness Skill Set) Assessment (PA), being introduced this week as a screening tool for core auditory processing skills. Now a parent or educator can easily examine a child's skill base for reading readiness. It also includes measurement for a learned skill, word attack. Both PA and SA are currently available at no cost from the nonprofit Cognitive First as part of the CogRead Literacy Campaign. Building a stronger phonemic awareness skill set in K-1 children is a key element in equipping more children to succeed in phonics and meet 3rd grade reading proficiency standards. Supportive research and application information from Dr. Joseph Torgesen, the community of Kennewick, Washington, and from the National Science Foundation and Virginia State University, as well as various background studies from the US Department of Health, can be found at CogRead.org.