May 5, 2017
Vocalization plays a significant role in social communication across species, from speech in humans to song in birds. Male mice produce ultrasonic vocalizations in the presence of females, and both sexes sing during friendly social encounters. Mice are genetically well characterized and used extensively in research on autism and other areas, but until now there have been limitations to studying their ultrasonic vocalizations. A team of investigators, led by Pat Levitt, PhD, of The Saban Research Institute of Children's Hospital Los Angeles, has developed and demonstrated a novel signal-processing tool that enables unbiased, data-driven analysis of these sounds. The study was published in the journal Neuron on May 3.
Research into the underlying neurobiological basis and heritable nature of vocalizations in humans and animals has identified promising genes and neural networks involved in vocal production, auditory processing and social communication. "Understanding the complicated vocalizations of mice -- and how they relate to their social behavior -- will be crucial to advancing vocal and social communication research, including understanding how genes that affect vocal communication relate to children with developmental disorders including autism," said Levitt, who is also the W.M. Keck Provost Professor in Neurogenetics at the Keck School of Medicine at USC.
The tool provides rapid, automated, unsupervised, time- and date-stamped analysis of mouse ultrasonic vocalizations. Because each vocalization carries a time and date stamp, the investigators expect the tool to be useful for correlating vocalizations with video-recorded behavioral interactions, allowing additional information to be mined from mouse models relevant to the social deficits experienced by persons with autism.
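As a rough illustration of how time-stamped vocalization events could be aligned with other data streams such as video, the sketch below detects energy bursts in the ultrasonic band of a recording and prints onset/offset times. The file name, frequency band, and threshold are assumptions chosen for illustration; this is not MUPET's actual detection pipeline.

```python
# Hypothetical sketch: detect ultrasonic syllables in a recording and emit
# time-stamped events that could later be aligned with video. The file name,
# band limits, and threshold are illustrative assumptions, not values from
# the study or from MUPET.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("session.wav")  # hypothetical recording
if audio.ndim > 1:
    audio = audio[:, 0]  # keep a single channel if the file is multichannel

freqs, times, power = spectrogram(audio, fs=rate, nperseg=512, noverlap=256)

# Sum power inside a typical mouse USV band (roughly 35-110 kHz; assumption).
band = (freqs >= 35_000) & (freqs <= 110_000)
band_energy = power[band].sum(axis=0)

# Mark frames whose band energy exceeds a simple adaptive threshold.
threshold = band_energy.mean() + 3 * band_energy.std()
active = band_energy > threshold

# Convert contiguous active frames into (onset, offset) timestamps in seconds.
events = []
start = None
for i, on in enumerate(active):
    if on and start is None:
        start = times[i]
    elif not on and start is not None:
        events.append((start, times[i]))
        start = None
if start is not None:
    events.append((start, times[-1]))

for onset, offset in events:
    print(f"syllable: {onset:.3f}s - {offset:.3f}s")
```

Timestamps like these, recorded alongside the session date and time, are what would let vocalization events be matched against video-recorded behavior.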
According to Allison Knoll, PhD, of CHLA, first co-author on the study, researchers in the field have long worked to interpret the meaning of mouse vocalizations by categorizing the sounds with a syllable classification system, in which discrete sounds are defined as syllables. Because mice produce such a wide variety of ultrasonic vocalizations, researchers have had to develop manual or semi-automated ways of categorizing and combining sounds they perceived to be similar in order to analyze the data.
"This tool removes bias by fully automating the processing of vocalizations using signal-processing methods employed in human speech and language analysis," said Knoll. The signal-processing tool, called Mouse Ultrasonic Profile ExTraction (MUPET), is available through open-access software.