Model-free and model-based reward prediction errors in EEG



Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about the structure and contingencies of the world to assign each candidate action an expected value. Model-free learning is ignorant of the world's structure; instead, each action holds a value based on prior reinforcement, and this value is updated by expectancy violation in the form of a reward prediction error. Because the two systems use such different learning mechanisms, it has previously been assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors against both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events that includes state prediction errors and action value updates. This demonstration of model-based prediction errors calls into question the long-held assumption that model-free and model-based learning are dissociated in the brain.

Highlights

- A reinforcement learning task was employed that permitted both model-free and model-based learning.
- A computational model was used to generate prediction error estimates for the two learning variants.
- Regression of model estimates against scalp voltage revealed both model-free and model-based prediction error activity.
- Traditional formal models of reinforcement learning may not accurately describe activity in the brain.
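To make the model-free mechanism concrete, the sketch below implements a standard temporal-difference (Q-learning) update, in which the reward prediction error is the gap between the reward received and the value currently held for the chosen action. This is a generic illustration with assumed parameter values and state/action names; it is not the computational model fitted in the study.

```python
# Minimal model-free TD update: delta is the reward prediction error (RPE).
# alpha, gamma, and the state/action labels are illustrative assumptions.
alpha = 0.1   # learning rate (assumed)
gamma = 0.9   # discount factor (assumed)

# Action values from prior reinforcement, all initialised to zero.
Q = {("s0", "a0"): 0.0, ("s1", "a0"): 0.0, ("s1", "a1"): 0.0}

def model_free_update(Q, s, a, r, s_next, actions_next):
    """Perform one TD step; return the reward prediction error."""
    best_next = max(Q[(s_next, a2)] for a2 in actions_next) if actions_next else 0.0
    delta = r + gamma * best_next - Q[(s, a)]   # expectancy violation (RPE)
    Q[(s, a)] += alpha * delta                  # value updated by the RPE
    return delta

# A surprising reward (r = 1 where 0 was expected) yields a positive RPE.
delta = model_free_update(Q, "s0", "a0", r=1.0,
                          s_next="s1", actions_next=["a0", "a1"])
```

A model-based learner would instead compute expected values from an explicit model of state transitions and contingencies; the paper's point is that the brain appears to generate prediction errors against both kinds of value estimate.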
