The Neuroscience of: Music
Laura Mtewele explores the neurological processes that underpin our love for music.
For someone with absolutely no musical talent, I'm quite the audiophile. It has always been a mystery to me how a cacophony of sound, arranged just right, can be so pleasing to the human ear. Our appreciation for music is universal, shared regardless of language, culture or profession, and perhaps as old as humanity itself. But where exactly does it come from? The right sequence of sounds will trigger sadness or joy in the British banker and the Australian Aborigine alike. This universality led even Darwin to believe that music must be an evolutionary adaptation, perhaps even the forefather of language. Drawing an analogy with the rich courtship songs of songbirds, he argued that music is part of our sexual toolkit. But analogy isn't enough; scientists must seek out proof.
Music is clearly useful for a variety of reasons. As an artistic medium able to convey a wide range of emotional signals, it is well suited to reducing anxiety and enhancing general wellbeing, as well as to communicating our inner state to others. To give an illustration, children as young as four become more cooperative when a task involves listening to music. Not only does music foster social cohesion, it can even enhance memory: a long-forgotten song has the power to remind us of the past, a perfect example of music acting as a contextual cue strongly associated with memory. Another reason to appreciate the adaptive value of music is the collective emotional experience it offers a community. Simply gathering and playing music together reinforces shared responses to the fundamental life events and emotions every individual is bound to experience: the death of a loved one, conflict, love.
But in what way did our ancestors enjoy music? Why did it develop in the first place? There is evidence that music might be a byproduct of another uniquely human feature: language. Our ability to acquire and speak a language is almost a miracle considering the engineering problems evolution had to overcome to get there. The fact that both music and language manipulate and process qualities of sound, namely rhythm, tone and frequency, suggests a link in their origins. According to cognitive scientist Steven Pinker, music plays only a supporting role to language. On the other hand, compelling evidence also suggests that early communication systems served as a common basis for both language and music, which then developed in tandem.
Looking at areas shared by music and language processing in the left and right hemispheres proves useful in assessing how closely the two are related. Correlational evidence that musical training predicts language-learning ability suggests a close relationship between the two modalities, especially since the effect is not limited to predominantly musical qualities like pitch. Yet there is mounting evidence challenging the idea that music and language rely on the same, or even interdependent, systems. The mere fact that aphasic patients can often sing but not speak suggests that the overlap may be limited. All things considered, the jury is still out on the nature of the language-music relationship, with the narrative shifting towards the view that the two are independent but closely related.
Having said that, we can glean a great deal of insight from cases of brain damage and developmental disorder, using neuroimaging to investigate how the brain processes complex phenomena like music. Because conditions such as semantic dementia are often accompanied by musical symptoms, including musical hallucinations, a selective loss of emotional response to music or an enhanced craving for music, they inform us about the systems that process music. Recent research on impairments like amusia, a disorder affecting both musical abilities and socio-emotional cognition, suggests a strong link between emotion processing and communication. Patients with amusia often show difficulties in both fields, which points towards a completely new model for music. The findings support the pre-existing theory that music evolved from the call signals of our ancestors as a low-cost code for turning emotional mental states into social signals, in scenarios such as predator threats, infant bonding or mate selection. If music and language developed from the same communication system, it is likely that they came to serve slightly different evolutionary purposes. Language is traditionally viewed as the system better suited to conveying meaning, while music communicates emotions. Neither holds a monopoly over emotional or referential information; the evidence merely shows that each excels in one arena.
At this point the question naturally arises: is there any practical use to understanding why music is such a crucial part of human existence? Music therapy, a discipline focused on patients with movement disorders, dementia or autism, demonstrates the significance of the field most effectively. The use of rhythm to improve fine motor movements, the application of music to revive old memory traces, and its power to capture the interest of children struggling with language are only a few examples of the immense success of music therapy. Beyond the obvious clinical gains, these findings also encourage a more holistic approach to music perception research, one that combines its playful, social and developmental aspects. Not only is the new model consistent with previous research, it brings us closer to a satisfying answer to the chicken-and-egg problem of language and music, suggesting that the two are closely interlinked but separate systems.