Many of us turn our heads when we hear voices, but what happens when the whole body responds to a voice? That question motivated a new study by neuroscientists at the Stanford University Graduate School of Engineering's Translational Science Institute.

The study, published in the journal Frontiers in Neuroscience, provides the first evidence that the brain responds to vocal sounds with an evolved system that drives facial and head orientation.

"Our goal was to see how the brain processes emotionally resonant sounds that come from behind or from the side, which may point to a system that is acoustically organized. It would make sense that this system evolved to include the ability to sense where a vocal sound is coming from, but there has been no data to support that," said Dr. Christopher Edwards, a postdoctoral scholar at the Stanford graduate school and lead author of the paper.

Using fMRI, the team recorded and reconstructed brain images of a male and a female listener as a voice grew increasingly loud. In a second session, listeners automatically turned their heads toward the locations where they had previously heard a tone, and the lab's computer-generated voice instructed them to turn their faces toward the microphone.

In another session, the researchers recorded vocalizations, digitized them with a computer-controlled instrument, and played them back through a speaker. Further analysis revealed a distinctive temporal signature in these sounds compared with the plain voice recordings.

Interconnectedness between rich and sparse voices

Dr. Thomas Fleming, the senior author of the paper, said, "The difference between a heard voice and an imagined one is that if you imagine a voice coming from a certain location, that voice belongs to a person you know, perhaps through interacting with them or hearing them speak, especially if the voice is addressed to the right person."

Understanding what happens when the entire body turns away from a voice to let the person in front of them talk suggests that the voice system connecting different sounds relies on different neural systems. Fleming and colleagues have previously shown that the temporal regions of the brain represent voices, and that the mechanisms pairing speech sounds with different lip movements and timing are found in the temporal cortex, in brain areas that process language, social interaction, and emotion.

The study's results show that temporal loops in the brain recruit a profile of neural regions that activate when a voice comes from the surrounding environment, building up the voice's position in space and time from the listener's intention to turn the head, the distance to the voice, and the volume of the sound. The investigators also examined whether the brain areas that process speech sounds near this voice signature responded selectively to directed speech as opposed to other kinds of vocal sound.

Specifically, the results also revealed that these temporal loops in the brain respond not to vocal sounds coming from a person's mouth in general, but to voices directed straight toward the listener's gaze.