Neuroscience of music
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It is also increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of brain imaging techniques like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).
Elements of music
Pitch
Sounds consist of waves of air molecules that vibrate at different frequencies. These waves travel to the basilar membrane in the cochlea of the inner ear. Different frequencies of sound cause vibrations at different locations along the basilar membrane. We are able to hear different pitches because each sound wave with a unique frequency corresponds to a different location along the basilar membrane. This spatial mapping of frequencies onto positions along the basilar membrane is known as tonotopy.
When the hair cells on the basilar membrane move back and forth in response to the vibrating sound waves, they release neurotransmitters and trigger action potentials that travel down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous clusters of neurons, or nuclei, in the auditory brainstem. These nuclei are also tonotopically organized, and the process by which this tonotopy is established beyond the cochlea is not well understood.[1] This tonotopy is in general maintained up to the primary auditory cortex in mammals.[2]
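The cochlear frequency-to-place map is often approximated with the Greenwood function. The following is a minimal sketch of that mapping; the constants are commonly quoted values for the human cochlea and are not taken from the sources cited above.

```python
import numpy as np

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional distance x from the cochlear apex.

    Uses the Greenwood function F = A * (10**(a*x) - k) with commonly quoted
    human constants; x runs from 0 (apex, low frequencies) to 1 (base, high frequencies).
    """
    return A * (10.0 ** (a * x) - k)

# Tonotopy as a frequency-to-place map: each position along the basilar membrane
# responds best to a different frequency.
for x in np.linspace(0.0, 1.0, 6):
    print(f"x = {x:.1f} -> ~{greenwood_frequency(x):7.0f} Hz")
```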
A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve,[3][4] the cochlear nucleus,[3][5] the inferior colliculus,[6] and the auditory thalamus.[7] By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident from measurements of the auditory brainstem response using EEG.[8] This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.
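As a toy illustration of the temporal-theory argument, the sketch below simulates a spike train phase-locked to a 220 Hz tone and recovers the stimulus period from the distribution of inter-spike intervals. The firing probability and timing jitter are invented values for the example and are not drawn from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

f0 = 220.0                      # stimulus frequency (Hz)
period = 1.0 / f0
duration = 1.0                  # seconds of simulated firing

# A phase-locked fiber tends to fire near the same phase of each stimulus cycle,
# but skips many cycles and shows some timing jitter (parameter values invented).
cycle_times = np.arange(period, duration, period)
fired = rng.random(cycle_times.size) < 0.7               # fire on ~70% of cycles
spike_times = np.sort(cycle_times[fired] + rng.normal(0.0, 1e-4, fired.sum()))

# First-order inter-spike intervals cluster at integer multiples of the stimulus
# period; the first (and largest) mode recovers the period, i.e. the pitch.
isis = np.diff(spike_times)
bin_width = 0.0005                                        # 0.5 ms histogram bins
counts, edges = np.histogram(isis, bins=np.arange(0.0, 0.02, bin_width))
mode_center = (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1]) / 2
near_mode = isis[np.abs(isis - mode_center) < 0.25 * mode_center]
print(f"estimated pitch: {1.0 / near_mode.mean():.1f} Hz (stimulus was {f0:.1f} Hz)")
```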
Music production and performance
Motor control functions
Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats.[21] Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.
These functions and their neural mechanisms have been investigated separately in many studies, but little is known about how they interact in producing a complex musical performance.[21] The study of musical performance therefore requires examining them together.
Gender differences
Minor neurological differences in hemispheric processing exist between the brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences.[66] Findings showed that females process musical information bilaterally, whereas males process music with a right-hemispheric predominance. However, the early negativity in males was also present over the left hemisphere, indicating that males do not exclusively use the right hemisphere for processing musical information. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity to the left hemisphere, whereas girls show a bilateral effect.[67] This suggests a developmental effect, since the early negativity is lateralized to the right hemisphere in men but to the left hemisphere in boys.
Handedness differences
It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch.[68][69] It was hypothesized that this handedness advantage arises because left-handers have more duplication of storage across the two hemispheres than right-handers. Other work has shown pronounced statistical differences between right-handers and left-handers in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion[70][71] and the scale illusion.[72][73]
Attention
Treder et al.[86] identified neural correlates of attention when listening to simplified polyphonic music patterns. In a musical oddball experiment, they had participants shift selective attention to one of three instruments in music audio clips, with each instrument occasionally playing one or several notes that deviated from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis showed subject- and instrument-specific responses, including the P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for building more ergonomic music-listening based brain-computer interfaces.[86]
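The sketch below illustrates the general idea of single-trial ERP classification on simulated data: epochs for deviants in the attended instrument carry an added P300-like component, windowed mean amplitudes serve as features, and a linear discriminant is evaluated with cross-validation. This is a generic, hypothetical pipeline with invented signal and noise levels, not the analysis used by Treder et al.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

fs = 100                                         # sampling rate (Hz)
n_channels, n_epochs = 8, 200
times = np.arange(0.0, 0.8, 1.0 / fs)            # 0-800 ms post-stimulus epochs

# Simulated single-trial epochs: deviant notes from the attended instrument carry
# a P300-like positive deflection around 400 ms; unattended deviants do not.
p300 = np.exp(-0.5 * ((times - 0.4) / 0.08) ** 2)
labels = rng.integers(0, 2, n_epochs)            # 1 = deviant in the attended instrument
epochs = rng.normal(0.0, 1.0, (n_epochs, n_channels, times.size))
epochs[labels == 1] += 0.8 * p300

# Features: mean amplitude per channel in consecutive 100 ms windows, a common
# choice for single-trial ERP classification; classify with a linear discriminant.
X = epochs.reshape(n_epochs, n_channels, 8, -1).mean(axis=3).reshape(n_epochs, -1)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```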
Development
Musically trained four-year-olds have been found to have greater left-hemisphere intrahemispheric coherence.[84] Musicians were found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995, which found that classical musicians between the ages of 21 and 36 have significantly larger anterior corpora callosa than non-musician controls. Schlaug also found a strong association between musical exposure before the age of seven and a large increase in the size of the corpus callosum.[84] These fibers connect the left and right hemispheres, indicating increased relaying between the two sides of the brain. This suggests a merging of the spatial, emotional, and tonal processing of the right hemisphere with the linguistic processing of the left hemisphere. Such extensive relaying across many different areas of the brain might contribute to music's ability to aid memory function.