Mechanisms for detecting auditory temporal and spectral deviations operate over similar time windows but are divided differently between the two hemispheres

To keep track of potentially relevant information in the acoustic environment, the human brain processes sounds extensively even when they are not attended: it extracts basic features, encodes regularities, and detects deviances. Here, we provide evidence that the initial 300 ms of a sound contribute more to this preattentive processing than the sound's later parts. We directly compared how temporal distance from sound onset influences the processing of a sound's duration and frequency information. The mismatch negativity (MMN), an event-related potential index of preattentive feature encoding and deviance detection, was measured for infrequent duration deviants and frequency-modulation deviants. The onset of either deviance was at 100, 200, 300, or 400 ms after sound onset. For both types of deviants, MMN was elicited only for deviations occurring within the first 300 ms after sound onset. Its neural sources were localized in supratemporal cortices using source current density (SCD) analyses and variable resolution electromagnetic tomography (VARETA), revealing a right-hemispheric preponderance for frequency modulations but not for duration shortenings. This suggests that preattentive deviance detection relies on partly diverging functional memory registers for temporal and dynamic spectral information. The influence of temporal distance on MMN in both conditions supports the view that temporal and spectral sound properties are integrated into an auditory object representation before preattentive deviance detection. Importantly, the decline of MMN to unattended sounds with increasing temporal distance suggests that sound parts beyond 300 ms are less important for the preattentive auditory object representation.
