Decoding Near-Threshold Perception of Fear from Distributed Single-Trial Brain Activation

Abstract

Instead of contrasting functional magnetic resonance imaging (fMRI) signals associated with 2 conditions, as customarily done in neuroimaging, we reversed the direction of analysis and probed whether brain signals could be used to “predict” perceptual states. We probed the neural correlates of perceptual decisions by “decoding” brain states during near-threshold fear detection. Decoding was performed with support vector machines and other related techniques. Whereas previous decoding studies have employed relatively “blocked” data, our objective was to probe how “moment-to-moment” fluctuations in fMRI signals across a population of voxels reflected the participant's perceptual decision. Accuracy increased from ∼64% when 1 region was considered to ∼78% when 10 regions were used. When the best classifications per subject were averaged, accuracy levels ranged between 74% and 86% correct. An information theoretic analysis revealed that the information carried by pairs of regions reliably exceeded the sum of the information carried by the individual regions, suggesting that information was combined “synergistically” across regions. Our results indicate that the representation of behavioral choice is “distributed” across several brain regions. Such distributed encoding may help prepare the organism to handle emotional stimuli appropriately and to regulate the associated emotional response upon the conscious decision that a fearful face is present. In addition, the results show that challenging brain states can be decoded with high accuracy even when “single-trial” data are employed, and they suggest that multivariate analysis strategies have considerable potential for elucidating the neural correlates of visual awareness and the encoding of perceptual decisions.
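
The abstract describes two analyses: single-trial decoding of a binary perceptual decision from region-based fMRI signals, and an information theoretic comparison of pairs of regions against the sum of single-region contributions. The sketch below is not the authors' code; it is a minimal Python illustration of those two ideas under hypothetical data (array names, shapes, bin counts, and the plug-in mutual information estimator are all assumptions), using a linear support vector machine with cross-validation and a simple binned estimate of synergy.

```python
"""Minimal sketch (hypothetical data, not the authors' pipeline):
single-trial decoding of a binary perceptual decision from region-averaged
fMRI responses, plus a toy pairwise-synergy estimate."""
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical data: one scalar response per trial for each of 10 regions,
# and the participant's decision on each trial (1 = "fear seen", 0 = "not seen").
n_trials, n_regions = 200, 10
X = rng.standard_normal((n_trials, n_regions))   # single-trial region signals
y = rng.integers(0, 2, n_trials)                 # perceptual decision per trial

# --- Decoding: cross-validated accuracy as regions are added ---
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
for k in (1, 5, 10):
    acc = cross_val_score(clf, X[:, :k], y, cv=10).mean()
    print(f"{k:2d} region(s): cross-validated accuracy = {acc:.2f}")

# --- Toy synergy estimate: compare I(decision; region_i, region_j) with the
# --- sum of the two single-region informations, via binned plug-in entropies.
def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_info(features, labels, bins=4):
    """Plug-in mutual information between binned features and binary labels."""
    binned = np.stack(
        [np.digitize(f, np.quantile(f, np.linspace(0, 1, bins + 1)[1:-1]))
         for f in features.T],
        axis=1)
    _, joint_idx = np.unique(binned, axis=0, return_inverse=True)
    joint_idx = joint_idx.ravel()
    h_x = entropy(np.bincount(joint_idx))
    h_y = entropy(np.bincount(labels))
    h_xy = entropy(np.bincount(joint_idx * 2 + labels))
    return h_x + h_y - h_xy

i, j = 0, 1
mi_pair = mutual_info(X[:, [i, j]], y)
mi_sum = mutual_info(X[:, [i]], y) + mutual_info(X[:, [j]], y)
print(f"synergy (pair MI minus summed single-region MI): {mi_pair - mi_sum:.3f}")
```

With the random data above the accuracies hover near chance and the synergy term is near zero; the point of the sketch is only the structure of the analysis, in which a positive difference between the pairwise and summed single-region information would indicate synergistic coding across regions, as reported in the abstract.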
