Timbre, or sound quality, is a crucial but poorly understood dimension of auditory perception, important for describing speech, music, and environmental sounds. The present study investigates the cortical representation of different timbral dimensions. Encoding models have typically incorporated the physical characteristics of sounds as features when attempting to understand their neural representation with functional MRI. Here we test an encoding model based on five subjectively derived dimensions of timbre to predict cortical responses to natural orchestral sounds. Results show that this timbre model can outperform other models based on spectral characteristics and can perform as well as a complex joint spectrotemporal modulation model. In cortical regions at the medial border of Heschl's gyrus, bilaterally, and in regions posteriorly adjacent to it in the right hemisphere, the timbre model outperforms even the complex joint spectrotemporal modulation model. These findings suggest that the responses of cortical neuronal populations in auditory cortex may reflect the encoding of perceptual timbre dimensions.

Highlights
- MRI encoding is used to investigate the cortical representation of sound timbre.
- We compare a subjective timbre model to spectral/spectrotemporal modulation models.
- The timbre model outperforms spectral, but not spectrotemporal, modulation models.
- The timbre model outperforms all other models in parts of early auditory cortex.
- Results support a distributed encoding of timbre dimensions in auditory cortex.
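The encoding-model approach described above can be illustrated with a minimal sketch. This is not the study's actual pipeline; it assumes a simple linear (ridge) encoding model in which each sound is described by per-sound feature ratings (here, a hypothetical 5-dimensional timbre space) and model performance is scored as the per-voxel correlation between predicted and measured responses:

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge-regress stimulus features onto voxel responses.

    features:  (n_sounds, n_features) stimulus descriptors
    responses: (n_sounds, n_voxels) measured fMRI responses
    Returns weights W such that responses ~ features @ W.
    """
    n_feat = features.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha I)^-1 X^T Y
    gram = features.T @ features + alpha * np.eye(n_feat)
    return np.linalg.solve(gram, features.T @ responses)

def prediction_accuracy(features, responses, weights):
    """Per-voxel Pearson correlation between predicted and measured responses."""
    pred = features @ weights
    pred_c = pred - pred.mean(axis=0)
    resp_c = responses - responses.mean(axis=0)
    num = (pred_c * resp_c).sum(axis=0)
    denom = np.sqrt((pred_c ** 2).sum(axis=0) * (resp_c ** 2).sum(axis=0))
    return num / denom

# Toy example: 40 sounds, 5 hypothetical timbre dimensions, 3 voxels
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))                      # per-sound timbre ratings
true_w = rng.standard_normal((5, 3))                  # simulated voxel tuning
Y = X @ true_w + 0.1 * rng.standard_normal((40, 3))   # noisy voxel responses

W = fit_encoding_model(X, Y, alpha=0.5)
r = prediction_accuracy(X, Y, W)                      # one correlation per voxel
```

Comparing models (e.g., a timbre model versus a spectral model) then amounts to fitting each feature set separately and comparing the resulting per-voxel accuracies, typically on held-out sounds.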