While analyzing and untangling multiple environmental sounds is an important social tool for humans, for animals that analysis is a critical survival skill. Yet humans and animals use similar cues to make sense of their acoustic worlds, according to new research from the University at Buffalo.
The study, published in the Journal of the Acoustical Society of America, fills an important gap in the literature on how animals group sounds into auditory objects.
When several sounds occur simultaneously, like music, a ticking clock and the buzz of fluorescent lighting, humans have no difficulty identifying them as separate auditory objects. This ability is called auditory stream segregation.
“There have been many studies like this in humans, but there has been a lot less work done to figure out how animals parse auditory objects,” says Micheal Dent, an associate professor in UB’s Department of Psychology in the College of Arts and Sciences.
“But animals can decipher the auditory world in a similar way as humans,” she says.
Dent’s study used budgerigars (parakeets) and zebra finches (songbirds), both vocal learners, to investigate which cues the birds use to segregate streams of zebra finch song.
People use cues like intensity (volume), frequency (pitch), location and time to segregate sounds. This capacity can facilitate conversation in a noisy room, but for animals, segregating sounds in the environment can mean the difference between distinguishing a suitable mate from a potential predator.
Determining whether stream segregation happens in many species has been limited by a lack of understanding about how it’s accomplished, according to Dent. But this new study provides important insights and suggests that stream segregation is not a uniquely human ability.
“Finding something like this in an animal that is not evolutionarily related to humans suggests that stream segregation is something that happens across the animal kingdom,” says Dent, who last year was named a fellow of the Acoustical Society of America for her contributions to the study of spatial hearing in animals.
In the study, birds were trained to peck one key when they heard a whole zebra finch song and another key when they heard a song with a deleted syllable, a broken song.
This identification task demonstrated the birds’ ability to differentiate between a natural whole song and an unnatural broken song.
The researchers then replaced the missing syllable with another sound, altering its intensity, frequency, location and timing.
Using ecologically relevant stimuli for the study is a departure from previous research, which used either pure tones or white noise.
“Those sounds aren’t important to animals,” says Dent. “The songs we used are presumably very important to the animals.”
The intensity of that replacement syllable was significant. When it was played softly, the birds heard a broken song, but increasing its intensity caused them to hear a whole song. Songs whose syllables came from different locations, like hearing Do-Re-Mi from three different places, were also heard as broken.
“The birds are using spatial cues and intensity cues to distinguish whole songs from broken songs,” she says.
To determine the relevance of pitch, the researchers played the replacement syllable with half of its frequency content removed. Deleting the high end didn’t matter, but deleting the bottom half changed the percept to a broken song.
“This suggests they’re following the lowest contour of the frequency when they’re listening to song,” says Dent.
While intensity, location and frequency affect stream segregation, time appeared to be the least important cue for the birds: changing the amount of time between syllables had little effect on what they heard.
Although these laboratory observations do not necessarily equate to the natural environment, the research is an important foundation for future study of sound segregation in animals, says Dent.