Researchers at ATR and Kyoto University in Japan say they have created an AI that can read human brain activity. They built a neural network that can not only read, but reconstruct, what a person is seeing or imagining.

Specifically, according to ZME Science, “the team created an algorithm that can interpret and accurately reproduce the image a person has seen or imagined.”



Structure diagram of deep image reconstruction. The pixel values of the input image are optimized so that the DNN features of the image become similar to those decoded from fMRI activity. A deep generator network (DGN) can optionally be combined with the DNN to produce natural-looking images, in which case optimization is performed in the input space of the DGN. Credit: bioRxiv (2017). DOI: 10.1101/240317

The research paper, titled “Deep Image Reconstruction from Human Brain Activity,” reports that the researchers were able to reproduce an image based on the scene a person was observing. The images created by the AI don’t look exactly like what people actually see; they are more like vague impressions. Still, the AI was able to reconstruct them from brain activity alone.

Although it may be decades away from practical use, the technology brings us closer to creating systems that can read and understand the human mind.




Deep image reconstruction: Natural image

Trying to train a computer to decode mental images is not a new idea. Research in this area has been going on for years: since 2011, researchers have been trying to reconstruct images from brain activity, including movie clips, photos, and even dreams. However, all previous systems were limited in scope and capability. Some could only handle narrow domains such as the shape of a face; others could only reconstruct images drawn from predefined sets or categories (for example, “birds,” “cakes,” “people”). All of these approaches required pre-recorded data: they worked by matching a subject’s brain activity to brain activity previously recorded while the person looked at a known image.

But the researchers say their new algorithm can generate new, recognizable images from scratch, even shapes that are simply imagined by the human brain.

It all started with functional magnetic resonance imaging (fMRI), a technique that measures blood flow in the brain and uses the results to infer neural activity. The team scanned the visual processing areas of three subjects at a resolution of 2 mm. The scans were repeated several times; in each session, the subjects were asked to look at more than 1,000 images, including a fish, an airplane, and simple colored shapes.



A new algorithm uses brain activity to reconstruct (bottom two rows) the images viewed (top row). Image credit: Kamitani Lab

The research team’s aim is to understand how the brain responds to images and, eventually, to use a computer program to generate images that produce a similar response in the brain.

The team has recently begun to produce results. Instead of showing subjects images one by one until the computer got it right, the researchers used a deep neural network (DNN) built from several layers of simple processing elements.

Yukiyasu Kamitani, lead author of the study, said: “We believe that deep neural networks are a good representation of hierarchical processing in the brain.”

“Using DNN, we can extract information from different levels of the brain’s visual system, from simple light contrast to more meaningful content, such as faces.”

Using a decoder, the researchers translated the brain’s responses to the images into DNN features. Once the decoder was trained, they no longer needed fresh fMRI measurements; the decoded DNN features could serve as a template.
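The decoding step can be sketched as a regression from fMRI voxel patterns to DNN feature values. The paper used a trained linear decoder; the ridge regression below is a stand-in, and every name, array size, and the simulated data are illustrative assumptions, not the study’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 500 voxels, 100 DNN features, 1000 training images.
n_voxels, n_features, n_images = 500, 100, 1000

# Simulated training data standing in for real recordings: fMRI voxel
# patterns and the DNN features of the images the subject viewed.
true_weights = rng.normal(size=(n_voxels, n_features))
fmri = rng.normal(size=(n_images, n_voxels))
dnn_features = fmri @ true_weights + 0.1 * rng.normal(size=(n_images, n_features))

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels), fmri.T @ dnn_features)

# Decode DNN features from a new fMRI pattern; no new image is needed.
new_pattern = rng.normal(size=(1, n_voxels))
decoded = new_pattern @ W  # these decoded features serve as the template
```

After training, only the weight matrix is needed: any new brain scan can be mapped straight to a DNN-feature template.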

Next comes an iterative process in which the system creates images, attempting to make the DNN respond to them the way it responds to the desired template, whether that template is an animal or a stained-glass window. It is a trial-and-error process: the program starts from a neutral image and gradually improves it over 200 iterations. To judge how close it is to the desired image, the system compares the template with the DNN’s response to the generated image. This comparison lets it improve the picture, pixel by pixel, toward the desired image.
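The loop above can be illustrated with a toy stand-in for the DNN: here a fixed random linear map plays the role of the feature extractor, and plain gradient descent adjusts the pixels over 200 iterations so the features approach the decoded template. The linear “DNN”, step size, and image size are all simplifying assumptions for the sketch, not the paper’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_features = 64, 32
# Toy stand-in for the DNN feature extractor: a fixed linear map.
feat_map = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)

# Decoded feature template (in the real system, predicted from fMRI).
template = rng.normal(size=n_features)

# Start from a neutral (all-gray) image and refine it pixel by pixel.
image = np.zeros(n_pixels)
lr = 0.05
losses = []
for _ in range(200):
    features = image @ feat_map
    error = features - template            # gap between DNN response and template
    losses.append(float(np.sum(error ** 2)))
    image -= lr * 2 * (feat_map @ error)   # gradient step on the pixel values
```

Each pass nudges every pixel in the direction that shrinks the feature mismatch, so the loss falls steadily over the 200 iterations.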

To improve the accuracy of the final image, the team added a “deep generator network” (DGN), a pre-trained algorithm that creates realistic images from raw input. In essence, the DGN puts the finishing touches on the image to make it look more natural.
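With a DGN in the loop, the optimization moves from pixel space to the generator’s input space, as the figure caption notes. The toy sketch below swaps in a linear “generator” and linear “DNN” so the change of variable is easy to see; both maps and all sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

n_latent, n_pixels, n_features = 16, 64, 32
# Toy stand-ins: a "generator" (latent code -> image) and a
# "DNN" feature extractor (image -> features), both linear here.
generator = rng.normal(size=(n_latent, n_pixels)) / np.sqrt(n_latent)
feat_map = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)
template = rng.normal(size=n_features)

# Optimize the generator's input (the latent code), not the pixels,
# so every intermediate image stays on the generator's "natural" manifold.
z = np.zeros(n_latent)
lr = 0.05
for _ in range(200):
    image = z @ generator
    error = image @ feat_map - template
    grad_z = 2 * generator @ (feat_map @ error)  # chain rule through the generator
    z -= lr * grad_z

final_loss = float(np.sum((z @ generator @ feat_map - template) ** 2))
```

Because only the latent code changes, the generator constrains the search to plausible-looking images while the feature error is driven down.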

After the DGN had finished tinkering with the pictures, neutral human observers were asked to evaluate the work. Shown two images at a time, they had to judge which one had been reconstructed by the algorithm. The observers were able to correctly select the system-generated images 99% of the time, the authors write in their paper.
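The paper’s 99% figure comes from human judgments, but a common machine analog of this two-alternative test can be sketched as follows: a reconstruction counts as correct when it correlates more strongly with its true source image than with a randomly paired alternative. The data here are made up, and this scoring scheme is an assumption for illustration, not the study’s evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

n_images, n_pixels = 50, 64
# Made-up "original" images and noisy "reconstructions" of them.
originals = rng.normal(size=(n_images, n_pixels))
reconstructions = originals + 0.5 * rng.normal(size=(n_images, n_pixels))

correct, trials = 0, 0
for i in range(n_images):
    for j in range(n_images):
        if i == j:
            continue
        # Does reconstruction i match its own original better than another?
        r_true = np.corrcoef(reconstructions[i], originals[i])[0, 1]
        r_other = np.corrcoef(reconstructions[i], originals[j])[0, 1]
        correct += int(r_true > r_other)
        trials += 1

accuracy = correct / trials  # chance level is 0.5
```

Chance performance on such a pairwise test is 50%, which is why accuracies near 99% indicate that the reconstructions carry substantial image-specific information.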

The next step was to combine all of this with actual “mind reading”. The researchers asked the three subjects to recall images they had been shown and scanned their brains as they did so. The process was a bit tricky, but the results were still exciting: the method did not work well for photos, but for shapes, the generator was able to create recognizable images 83% of the time.

Notably, this work looks very clean and tidy. The system performs well, and the bottleneck may now be not the software but our ability to measure brain activity. We may have to wait for better fMRI and other brain-imaging techniques to come along.


The original post was published on January 18, 2018

By Marvin

This article is from Xinzhiyuan, a partner of the cloud community. For related information, you can follow the WeChat public account “AI_era”.
