In everyday life, we process mixtures of many different sounds. This processing involves segregating the auditory input and attentively selecting the stream most relevant to current goals. For natural scenes with multiple irrelevant sounds, however, it is unclear how the human auditory system represents all the unattended sounds. In particular, it remains elusive whether the sensory input of unattended sounds to the human auditory cortex biases the cortical integration/segregation of these sounds in a similar way as it does for attended sounds.
In this study, we tested this by asking participants to selectively listen to one of two speakers or to music in an ongoing 1-min sound mixture while their cortical neural activity was measured with EEG. Using a stimulus reconstruction approach, we find better reconstruction of mixed unattended sounds than of individual unattended sounds at two early cortical stages (70 ms and 150 ms) of the auditory processing hierarchy. Crucially, at the earlier processing stage (70 ms), this cortical bias toward representing unattended sounds as integrated rather than segregated increases with increasing similarity of the unattended sounds.
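To make the stimulus reconstruction approach concrete: such analyses are commonly implemented as a linear backward model, in which time-lagged EEG is mapped onto the stimulus envelope via ridge regression, and reconstruction accuracy is quantified as the correlation between the reconstructed and actual envelopes. The sketch below illustrates this generic technique only; the function name, lag handling, and regularization parameter are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def reconstruct_envelope(eeg, envelope, lags, alpha=1.0):
    """Linear backward model: ridge-regress time-lagged EEG onto a stimulus envelope.

    eeg      : array (n_samples, n_channels)
    envelope : array (n_samples,) target stimulus envelope
    lags     : list of integer sample lags applied to the EEG
    alpha    : ridge regularization strength (illustrative default)
    """
    n, c = eeg.shape
    # Build the lagged design matrix: samples x (channels * lags)
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        # Zero out samples wrapped around by np.roll
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * c:(i + 1) * c] = shifted
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    recon = X @ w
    # Reconstruction accuracy as Pearson correlation
    r = np.corrcoef(recon, envelope)[0, 1]
    return recon, r
```

In practice, decoder weights are trained and evaluated on separate data folds; the single-pass fit above is kept minimal for illustration.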
Our results reveal an important role of acoustic properties in the cortical segregation of unattended auditory streams in natural listening situations. They further corroborate the notion that selective attention contributes functionally to cortical stream segregation. These findings highlight that a common, acoustics-based grouping principle governs the cortical representation of auditory streams not only inside but also outside the listener's focus of attention.