Touching the Listener
By Fenella Saunders
Stretching people's facial skin can bias what words they hear
DOI: 10.1511/2008.70.111
It comes as no surprise that jaw position affects the sound of the words people produce. The jaw moves during speech, so the sounds we make clearly aren't shaped by the throat alone. But might facial position affect how people hear as well? A study led by Takayuki Ito of Haskins Laboratories at Yale University, reported at the November meeting of the Acoustical Society of America, found that manipulating the skin around the mouth does indeed alter how people hear words. The explanation for this phenomenon may contribute not only to the understanding of how we speak and hear, but also of how we learn verbal information.
In their experiment, Ito and his Haskins colleagues Mark Tiede, also of Massachusetts Institute of Technology, and David Ostry, also of McGill University, used recordings of the words "head" and "had." They electronically manipulated the two words to create 10 intermediate steps through which one word gradually morphed into the other. Listeners heard one of these steps at a time, in random order, and were asked to decide whether the word sounded more like "head" or "had."
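The report does not spell out the signal processing behind the morph, but one standard way to build such a continuum is to interpolate the vowel's acoustics in equal steps between the two endpoint words. The Python sketch below illustrates the idea with hypothetical formant values for the vowels of "head" and "had" and a deliberately crude synthesizer; the numbers and the synthesis shortcut are illustrative assumptions, not the researchers' actual stimuli.

```python
import numpy as np

# Hypothetical endpoint formants (Hz) for the vowels in "head" and "had";
# the real stimuli were electronically morphed recordings of whole words.
F_HEAD = {"F1": 550.0, "F2": 1750.0}
F_HAD = {"F1": 750.0, "F2": 1650.0}

N_STEPS = 10   # the study used 10 intermediate steps
SR = 16000     # sample rate (Hz)
DUR = 0.25     # vowel duration (s)
F0 = 110.0     # fundamental (voicing) frequency (Hz)

def formants_at(step):
    """Linearly interpolate each formant between the two endpoints.
    step runs 0..N_STEPS-1, from most 'head'-like to most 'had'-like."""
    a = step / (N_STEPS - 1)
    return {k: (1 - a) * F_HEAD[k] + a * F_HAD[k] for k in F_HEAD}

def crude_vowel(f1, f2):
    """Toy vowel: harmonics of F0 weighted by proximity to the two
    formant peaks. A crude stand-in for proper formant synthesis."""
    t = np.arange(int(SR * DUR)) / SR
    wave = np.zeros_like(t)
    for h in range(1, int(SR / 2 / F0)):  # harmonics below Nyquist
        f = h * F0
        amp = np.exp(-((f - f1) / 120.0) ** 2) + np.exp(-((f - f2) / 180.0) ** 2)
        wave += amp * np.sin(2 * np.pi * f * t)
    return wave / np.max(np.abs(wave))

continuum = []
for step in range(N_STEPS):
    f = formants_at(step)
    continuum.append(crude_vowel(f["F1"], f["F2"]))
    print(f"step {step}: F1 = {f['F1']:.0f} Hz, F2 = {f['F2']:.0f} Hz")
```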
Once this baseline was established, small plastic tabs were affixed to either side of each listener's mouth and attached by wires to an automated device that tugged the facial skin upward, backward, or downward at the same time as a word was played.
Although the magnitude of the effect was subtle, the investigators found that the skin stretching significantly shifted which steps listeners identified as "head" and which as "had." The direction of stretching was crucial. When the skin was stretched upward, the words sounded more like "head," whereas downward stretching made them sound more like "had." Backward stretching had no effect.
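A conventional way to quantify such a boundary shift (the report does not detail the analysis) is to fit a logistic psychometric function to the proportion of "had" responses at each continuum step under each condition, then compare the 50 percent crossover points. Here is a minimal sketch with invented response data, for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of a 'had' response."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(10)

# Invented identification rates (proportion 'had'), not the study's data.
baseline = np.array([.02, .05, .10, .22, .40, .62, .80, .90, .96, .99])
stretch_up = np.array([.01, .03, .07, .15, .30, .50, .72, .86, .94, .98])

(b0, _), _ = curve_fit(logistic, steps, baseline, p0=[4.5, 1.0])
(u0, _), _ = curve_fit(logistic, steps, stretch_up, p0=[4.5, 1.0])

print(f"baseline boundary: step {b0:.2f}")
print(f"upward-stretch boundary: step {u0:.2f}")
# The boundary moving toward the 'had' end means more of the
# continuum was heard as 'head' under upward stretch.
print(f"shift: {u0 - b0:+.2f} steps")
```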
"The direction of skin stretching that affected hearing corresponded to the position the jaw would take when the person produced these words," explains Ostry. The jaw is in a higher position when a person says "head," and a lower one during "had." Ostry and Ito speculate that the brain may be taking nerve cues from the face that normally occur with jaw movement during speech production and combining them with the auditory information to give a perceived sound that blends the two kinds of sensory information.
"The pattern of deformation of the facial skin also has to increase and then decrease as would happen when you're talking, or else the effects aren't present," Ostry says. "The nervous system is able to use this cutaneous deformation only when it has the same pattern that would ordinarily accompany speech."
Fooling the auditory centers of the brain with other sensory information is not without precedent. In the classic demonstration of the McGurk effect, listeners who are played the syllable "ba" while watching a face mouth "ga" perceive "da," a sound somewhere in between what they heard and what they saw.
To understand why jaw movement might affect hearing, consider that when people learn a new word, they often repeat it, or mouth a word silently when they are trying to remember it. "When children learn to talk, they develop some expectations about what they want the sounds to be, and what the movement should be as well," says Ostry. This could help explain why people who go deaf in adulthood can speak intelligibly long after they can't hear themselves.
Ostry has recently studied adults with cochlear implants. With their devices turned off, the patients were asked to repeat a word while a robotic device applied a small force to their jaws. "With practice they adapted and corrected for the forces that were making their speech movements somewhat unnatural, even though they couldn't hear their speech to begin with," says Ostry. "To us it underscores the fact that the nervous system, in producing speech, is apparently just as concerned about getting the movements right as it is with generating the appropriate sounds."
Increasingly, Ostry says, researchers recognize a link between action and perception. "We're applying a pattern of skin deformation that would normally accompany production, yet this is something that affects the way in which people hear speech sounds. We're realizing, not just in speech but in work on limb movement as well, that the distinction between sensory and motor areas of the brain is blurred."
Ostry and Ito hope that this work may be beneficial to speech therapy. Movement training may be particularly useful in helping those who stutter, Ostry suspects. "Given that our manipulation produces perceptual effects," he says, "I think there is the potential to use it as a kind of augmentative strategy for dealing with perceptual disorders."
Click "American Scientist" to access home page
American Scientist Comments and Discussion
To discuss our articles or comment on them, please share them and tag American Scientist on social media platforms. Here are links to our profiles on Twitter, Facebook, and LinkedIn.
If we re-share your post, we will moderate comments/discussion following our comments policy.