
AI robot can draw what you’re thinking by reading your brain impulses

A team of scientists at the Research Center for Brain-Inspired Intelligence in Beijing, China, recently developed a technique to decode perceived images by analyzing data from functional magnetic resonance imaging (fMRI) scans. To carry out the research, the scientists monitored activity in the visual cortex. One challenge was to process fMRI data efficiently: the task involved mapping activity in three-dimensional voxels inside the brain onto two-dimensional pixels in order to reconstruct an image. The process proved difficult because brain scans are notoriously “noisy” and the activities of neighboring voxels influence one another.

Dealing with these factors was previously regarded as computationally expensive. According to the research team, most mind-reading algorithms do not take these correlations between voxels into account, which significantly diminishes the quality of the reconstructed images. To process fMRI data more efficiently, the researchers adapted deep-learning techniques better able to handle nonlinear correlations between voxels. According to the team, using these techniques resulted in better brain-image reconstruction.

How the brain scans were processed

As part of the study, researcher Changde Du and colleagues examined several sets of fMRI scans of the visual cortex. Each scan was taken while a human subject looked at a simple image, such as a single digit or letter. The data sets comprised more than 1,800 fMRI scans paired with the original images. The research team used 90 percent of the data to train the new network, called the Deep Generative Multiview Model (DGMM), to determine the correlation between the scans and the original images. The experts then fed the DGMM the remaining 10 percent of the scans and prompted it to draw what it thought the subjects saw.
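The 90/10 training-and-evaluation split described above can be sketched as follows. This is a minimal illustration, not the team's actual pipeline: the voxel count, image size, and random data are all assumptions standing in for the real fMRI scans and stimulus images.

```python
import numpy as np

# Hypothetical stand-ins for the study's data: ~1,800 fMRI scans,
# each paired with the simple image (digit or letter) the subject viewed.
rng = np.random.default_rng(0)
n_samples = 1800
fmri_scans = rng.normal(size=(n_samples, 3000))          # voxel activity vectors (size assumed)
images = rng.integers(0, 2, size=(n_samples, 28 * 28))   # binarized stimulus pixels (size assumed)

# Shuffle once, then hold out 10 percent for evaluation, as the article describes.
order = rng.permutation(n_samples)
split = int(0.9 * n_samples)
train_idx, test_idx = order[:split], order[split:]

X_train, Y_train = fmri_scans[train_idx], images[train_idx]
X_test, Y_test = fmri_scans[test_idx], images[test_idx]

print(len(train_idx), len(test_idx))  # 1620 180
```

The network would be trained only on the 90 percent partition; the held-out scans are what it is later asked to "draw" from.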

According to the research team, a big advantage of the DGMM is that the network learned to determine which voxels to use in the reconstruction process, as well as how those voxels were correlated. The researchers then compared the efficacy of the DGMM against a number of other brain-image reconstruction techniques through standard image comparison methods, and noted that the DGMM was able to accurately draw the images perceived by humans.
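The article does not say which "standard image comparison methods" were used, but two common choices for scoring a reconstruction against the original stimulus are pixel-wise mean squared error and Pearson correlation. The sketch below is illustrative only; the function name and metrics are assumptions, not the study's actual evaluation code.

```python
import numpy as np

def reconstruction_scores(original, reconstructed):
    """Score a reconstructed image against the original stimulus using
    two common metrics: pixel-wise mean squared error (lower is better)
    and Pearson correlation (higher is better)."""
    o = np.asarray(original, dtype=float).ravel()
    r = np.asarray(reconstructed, dtype=float).ravel()
    mse = np.mean((o - r) ** 2)
    corr = np.corrcoef(o, r)[0, 1]
    return mse, corr

# Toy check: a perfect reconstruction scores MSE 0.0 and correlation 1.0.
stimulus = np.array([[0, 1, 1], [1, 0, 0]])
mse, corr = reconstruction_scores(stimulus, stimulus)
print(mse, corr)  # 0.0 1.0
```

Scores like these allow different reconstruction techniques to be ranked on the same held-out scans.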

“Overall, the images reconstructed by DGMM captured the essential features of the presented images. In particular, they showed fine reconstructions for handwritten digits and characters,” the researchers said, as reported by DailyMail.co.uk.

So far, the new network can only decipher simple symbols. However, the researchers claimed that the network could one day decipher videos or even dreams.

Potential implications for interrogation

The latest development in mind-reading algorithms may have potential uses in interrogation. In fact, an archived article discussed another brain-scan method that allowed researchers to look into a person’s brain and identify their intentions. To carry out the process, a team of researchers from the Max Planck Institute for Human Cognitive and Brain Sciences in Germany, University College London, and Oxford University used high-resolution brain scans to identify patterns of brain activity. The experts then translated these scans into meaningful thoughts, revealing a person’s intentions for the near future.

“Using the scanner, we could look around the brain for this information and read out something that from the outside there’s no way you could possibly tell is in there. It’s like shining a torch around, looking for writing on a wall,” lead researcher John-Dylan Haynes told TheGuardian.com.