We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network (DCGAN) capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging (fMRI) experiments. Using a linear model, we learned to predict the generative model's latent space from measured brain activity; the objective was to recreate an image similar to the presented stimulus by passing the predicted latent code through the previously trained generator. With this approach we were able to reconstruct structural and some semantic features for a proportion of the natural image sets. A behavioural test showed that subjects were capable of identifying a reconstruction of the original stimulus in 67.2% and 66.4% of cases in a pairwise comparison for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data. Rapid advances in generative modeling promise further improvements in reconstruction performance.

Highlights:
- A generative adversarial network (DCGAN) is used for reconstructing visual percepts.
- Minimizing image loss, a linear model learns to predict the latent space from BOLD.
- With a GAN limited to 6 handwritten characters, detailed features can be retrieved.
- Reconstructions of arbitrary natural images are identifiable by human raters.
- The specific GAN is a component and is replaceable by advanced deterministic generators.
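The core pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a regularized (ridge) linear map from voxel responses to the generator's latent space, and the dimensions, the simulated data, and the `lam` parameter are all hypothetical. The pre-trained generator itself is left as a placeholder, since it is a separately trained frozen network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not taken from the paper).
n_trials, n_voxels, latent_dim = 200, 500, 64

# Simulated training data: z_train are the generator latent codes of the
# training stimuli; X_train are the corresponding BOLD response patterns,
# modeled here as a noisy linear mixture of the latents.
z_train = rng.standard_normal((n_trials, latent_dim))
mixing = rng.standard_normal((latent_dim, n_voxels))
X_train = z_train @ mixing + 0.1 * rng.standard_normal((n_trials, n_voxels))

# Closed-form ridge regression: a linear map W from voxel space to the
# generator's latent space (lam is an assumed regularization strength).
lam = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_voxels),
    X_train.T @ z_train,
)

# Predict a latent code from a brain response, then feed it to the frozen,
# pre-trained generator to obtain the reconstructed image.
z_hat = X_train[:1] @ W  # stand-in for a held-out test-trial prediction
# reconstruction = generator(z_hat)  # generator trained separately on images
```

Because the generative model is trained on plain image databases and only the linear readout touches the neuroimaging data, the small fMRI dataset never has to support end-to-end training of the generator.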