Highlights

- Affective sounds are decoded in distributed cortical and subcortical networks.
- Psychiatric disorders predict decoding impairments at specific processing levels.
- Different brain lesions lead to impairments, with a primary role for the basal ganglia.
- Some functional activity and connectivity patterns are common to several disorders.

Decoding affective meaning from sensory information is central to accurate and adaptive behavior in many natural and social contexts. Human vocalizations (speech and non-speech), environmental sounds (e.g. thunder, noise, or animal sounds), and human-produced sounds (e.g. technical sounds or music) can carry a wealth of important aversive, threatening, appealing, or pleasurable affective information that sometimes implicitly influences and guides our behavior. A deficit in processing such affective information is detrimental to adaptive environmental behavior, psychological well-being, and social interactive abilities. These deficits can originate from a diversity of psychiatric and neurological disorders, and are associated with neural dysfunctions across widely distributed brain networks. Recent neuroimaging studies in psychiatric and neurological patients outline the cortical and subcortical neurocircuitry of the complementary and differential functional roles for affective sound processing. This points to, and confirms, a recently proposed distributed network rather than a single brain region underlying affective sound processing, and highlights the notion of a multi-functional process that can be differentially impaired in clinical disorders.