Extracting the Neural Representation of Tone Onsets for Separate Voices of Ensemble Music Using Multivariate EEG Analysis

Abstract

When listening to ensemble music, even nonmusicians can follow single instruments effortlessly. Electrophysiological indices of the neural sensory encoding of separate streams have been described using oddball paradigms, which exploit brain reactions to sound events that deviate from a repeating standard pattern. Such paradigms, however, put constraints on the compositional complexity of the musical stimulus. Here, we apply a regression-based method of multivariate electroencephalogram (EEG) analysis in order to reveal the neural encoding of separate voices of naturalistic ensemble music, based on cortical responses to tone onsets such as the N1/P2 event-related potential components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding 3 solo versions. For each instrument, we train a spatiotemporal regression filter that optimizes the correlation between the EEG and a target function representing the sequence of note onsets in the audio signal of the respective solo voice. This filter extracts an EEG projection that reflects the brain's reaction to note onsets with enhanced sensitivity. We apply these instrument-specific filters to 61-channel EEG recorded during presentations of the ensemble version and assess, by means of correlation measures, how strongly the voice of each solo instrument is reflected in the EEG. Our results show that the reflection of the melody instrument (keyboard) in the EEG exceeds that of the other instruments by far, suggesting a high-voice superiority effect in the neural representation of note onsets. Moreover, the results indicate that focusing attention on a particular instrument can enhance this reflection. We conclude that the voice-discriminating neural representation of tone onsets at the level of early auditory event-related potentials parallels the perceptual segregation of multivoiced music.
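The abstract describes training an instrument-specific spatiotemporal filter on solo presentations and then applying it to ensemble EEG. The exact estimation procedure is not given here, so the following is only a minimal sketch of one plausible realization: a backward (stimulus-reconstruction) model fit by ridge regression on time-lagged EEG channels, with the note-onset sequence of one solo voice as the target. All function names, the lag range, and the regularization value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_lagged_design(eeg, lags):
    """Stack time-lagged copies of all EEG channels into one design matrix.

    eeg  : array of shape (n_samples, n_channels)
    lags : iterable of non-negative integer sample lags
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * n_channels:(i + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_onset_filter(eeg_solo, onset_target, lags, ridge=1e3):
    """Fit a spatiotemporal backward model (ridge regression) mapping
    lagged solo-condition EEG to that instrument's note-onset function."""
    X = build_lagged_design(eeg_solo, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ onset_target)

def apply_filter(eeg_ensemble, w, lags):
    """Project ensemble-condition EEG through an instrument-specific
    filter to obtain a reconstructed onset time course."""
    return build_lagged_design(eeg_ensemble, lags) @ w

# Hypothetical usage: correlate the reconstruction from ensemble EEG with
# each solo voice's onset function to gauge how strongly that voice is
# reflected in the recording.
# lags = range(0, 50)  # e.g., 0-500 ms at 100 Hz (assumed sampling rate)
# w_keys = train_onset_filter(eeg_solo_keys, onsets_keys, lags)
# r = np.corrcoef(apply_filter(eeg_ensemble, w_keys, lags), onsets_keys)[0, 1]
```

In a backward model of this kind, the learned weights jointly span channels and time lags, which is one common way to obtain the "spatiotemporal" filtering described in the abstract; the correlation between the filter output and the onset target then serves as the comparison measure across instruments.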
