Generative adversarial networks for reconstructing natural images from brain activity



doi: https://doi.org/10.1101/226688


Abstract

We explore a straightforward method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network (GAN) capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging (fMRI) experiments. Using a linear model, we learned to predict the generative model’s latent random vector z from measured brain activity. The objective was to recreate an image similar to the presented stimulus by passing the predicted latent vector through the previously trained generator. With this approach we were able to reconstruct natural images, although reconstruction quality varied across images under the same model. In a behavioral pairwise test, subjects identified the reconstruction of the original stimulus in 67.6% and 64.4% of cases for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data, and because the particular GAN model can be replaced with a more powerful variant, current advances in generative modeling promise further improvements in reconstruction performance.
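The decoding step described above, fitting a linear map from measured brain activity to the generator's latent vector z, can be sketched as follows. This is a minimal illustration with synthetic data; the variable names, dimensions, and the use of ridge regularization are assumptions for the sketch, not details taken from the paper, and the generator call is left as a placeholder for the pretrained GAN.

```python
import numpy as np

# Hypothetical sketch: a ridge-regularized linear map from fMRI voxel
# responses B to the GAN's latent vectors Z. All shapes and the penalty
# strength are illustrative assumptions.
rng = np.random.default_rng(0)
n_train, n_voxels, z_dim = 200, 500, 100

B = rng.standard_normal((n_train, n_voxels))  # brain responses per stimulus
Z = rng.standard_normal((n_train, z_dim))     # latent codes of training stimuli

lam = 10.0                                    # ridge penalty (assumed value)
# Closed-form ridge solution: W = (B^T B + lam * I)^{-1} B^T Z
W = np.linalg.solve(B.T @ B + lam * np.eye(n_voxels), B.T @ Z)

b_test = rng.standard_normal((1, n_voxels))   # a held-out brain measurement
z_hat = b_test @ W                            # predicted latent vector
# reconstruction = generator(z_hat)           # placeholder: trained DCGAN generator
print(W.shape, z_hat.shape)
```

Because only this linear mapping is fit on neuroimaging data, the approach sidesteps training the generative model itself on a small fMRI dataset, which is the key practical point of the paper.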

Copyright 

The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.