Probabilistic language models in cognitive neuroscience: Promises and pitfalls



Abstract

Highlights

- Aspects of incremental language comprehension can be modeled with probabilistic language models.
- Formal linguistic information content is used to build predictors of cognitive processing.
- We review fMRI and M/EEG studies that analyzed comprehension data with language models.
- We discuss advantages, potential pitfalls, and future challenges of the approach.

Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive side of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far been evaluated predominantly against behavioral data, only recently have they been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and of example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
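To make the idea of a "computationally explicit account of language complexity" concrete, the sketch below estimates word-by-word surprisal (negative log probability of a word given its context) from a toy bigram model. This is a minimal illustration, not a model from any reviewed study: real analyses train much richer models on large corpora and regress the resulting per-word values against fMRI or M/EEG signals. The corpus, sentence, and function names are all illustrative assumptions.

```python
import math
from collections import Counter

# Toy corpus; the reviewed studies train on large corpora instead.
corpus = "the dog chased the cat and the cat chased the dog".split()

# Maximum-likelihood bigram estimates: P(w_t | w_{t-1}).
bigram_counts = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def bigram_prob(prev, word):
    """MLE estimate of P(word | prev); 0.0 for unseen contexts/bigrams."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / context_counts[prev]

def surprisal(prev, word):
    """Word surprisal in bits: -log2 P(word | prev)."""
    p = bigram_prob(prev, word)
    return float("inf") if p == 0.0 else -math.log2(p)

# Per-word surprisal over a test sentence; in neuroimaging analyses,
# such values serve as a word-by-word predictor of processing load.
sentence = "the cat chased the dog".split()
values = [surprisal(prev, w) for prev, w in zip(sentence, sentence[1:])]
```

In an actual study, the resulting surprisal time series would be convolved with a hemodynamic response (for fMRI) or aligned to word onsets (for M/EEG) before entering the regression model.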
