It seems that, as the evenings are for socialising, the time for these daily posts is the morning. This hopefully means that what I write is better for having been filtered through a night of sleep, though it could of course also mean that these posts are already out of date.
The second day started with two keynote addresses. The first was a shared presentation by Lars Ole Bonde and Tony Wigram, two music therapists who have been central in establishing music therapy as an evidence-based discipline. They have been involved in a number of studies that adhere to the strict standards of medical research and review. Their talk was on Music Dynamics and Emotion in Therapy: Theory and Applied Research.
For a morning session starting at 8.30, their topic was a dangerous one. They played a lot of examples of relaxing music and even asked people to sit back and close their eyes while listening… I didn’t dare, because I think I would have fallen asleep, having slept so little the night before.
The second keynote was by Dr. Aniruddh Patel, and his topic was Music, Neuroscience and Evolution. There is a lot of debate on the evolutionary origins of music, mainly on whether music is an evolutionary adaptation or not. Like most debates, this one has become polarised into two opposing views. According to the first, music is biologically important and has been selected for in evolution. According to the second, perhaps epitomised in Steven Pinker’s formulation of music as “auditory cheesecake”, it is biologically unimportant: just something we do, purely for pleasure.
Ani’s approach was to step back and look at this debate and this phenomenon from a neural (and perhaps neutral) point of view. I liked his suggestion that we should start with the null hypothesis that music is NOT an evolutionary adaptation. He said that for language we can reject such a hypothesis, but that he is not prepared to do so for music.
As evidence, he offered findings from neuroscience suggesting that core functions, such as syntax processing, are shared between music and language, even though there are also indications that the two are in some ways independent in the brain (there are stroke victims, for instance, who have lost one ability while preserving the other).
He discussed tonality and entrainment as examples of processes that underlie music. As I said in yesterday’s tweets, I don’t quite agree with his reasoning. Or rather, I agree with the logic but not the conclusions, because I have a somewhat different interpretation of the research results he presented. There are a number of problems, but mainly I think that in these neuroscientific studies, massively complex phenomena like “music” are reduced to impoverished, one-dimensional, artificial projections in order to fit the tight constraints of neuroimaging, for instance.
So, the evidence applies to certain aspects of musical abilities, but while looking at these parts, I think he was missing the whole.
Also, what I missed in the talk was a more profound analysis of what music is for, what its functions are. This is of course a key issue when discussing whether music has a biological purpose. Currently, there is a gap between what music does in the world and what it is studied as in the laboratory. Music is a social activity, a means of communication, a vehicle for emotions, a playground, a source of solace, an art form, a pleasure technology and a number of other things. I agree that in order to study it, we need to break it into parts, look for brain correlates of those parts, and make reductions and projections in order to make sense of the complex phenomenon. I identify as a scientist, and I am willing to contort music to fit my experimental designs. However, one has to be careful when drawing conclusions about things such as universality (see David Huron’s talk yesterday), especially when all the evidence comes from within one musical culture.
I did like the way in which he demolished the false dichotomy whereby cognitive faculties are either innate and biologically important or learned and biologically insignificant. He argued that music is a transformational tool that is invented and learned, just like fire use, but because it shapes the brain it is also biologically important, though only ontogenetically, not phylogenetically. While it is good to remember that the question of nature or nurture is not dichotomous, I’m not sure the notion of transformational tools fits music (which, of course, depends on how you define music). Moreover, saying that “music” is invented conflicts with evidence from developmental studies, for example the finding that the first interactions between newborn babies and their caregivers are “musical” in nature.
But, for all this criticism, I must say that Ani Patel has a talent for clear argumentation. He can explain things and bring clarity to issues that are otherwise murky and difficult to understand. The experiments themselves are of very high quality, and I think Patel’s SSIRH model has a lot of attractive qualities; it could well be that the syntax engine in the brain is shared by language and music. The problem, of course, is what syntax is: this is not as clear for music as it might be for language.
Patel’s keynote did the job of a keynote very well: it started a discussion, and I’m sure his way of structuring the argument will be influential. I hope he continues to work on these questions.
(Pic: Conference venue Agora, © University of Jyväskylä)