Damn. This new Japanese AI can read your mind and illustrate what you’re thinking about
The Artificial Intelligence system developed by Japanese computer scientists scans blood flow and brain waves and decodes the collected data into images that resemble what the person has been thinking. https://t.co/Cy8AzSi4oY
— Interesting Engineering (@IntEngineering) February 26, 2018
On 25 February, the website Interesting Engineering published an article on what it claims is a
“new AI system that can visualise human thoughts. Yes! We are talking about a technology that can see the human thought or convert them into pictures. It’s always frightening to know that another person can read your thoughts. Imagine the case where technology can see what you are thinking!”
It links to a paper (PDF) called Deep image reconstruction from human brain activity, written by four Japanese scientists: Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani. In short, the paper claims that its authors have found a new reconstruction method for machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns, enabling the visualisation of perceptual content. It works by optimising the pixel values of an image to make its deep neural network (DNN) features similar to those decoded from human brain activity at multiple layers.
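The core idea — adjust an image's pixels until its network features match the features decoded from brain activity — can be sketched in a few lines. The toy below is an illustrative assumption, not the authors' model: it swaps the paper's deep convolutional network for random linear "layers" and its decoded fMRI features for features computed from a known target, then recovers the image by gradient descent on the pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained DNN: one linear feature map per "layer".
# (The paper optimises against features of a deep convolutional network;
# these random matrices are purely illustrative.)
n_pixels = 16
layers = [rng.standard_normal((100, n_pixels)),
          rng.standard_normal((50, n_pixels))]

# Pretend these per-layer feature vectors were decoded from fMRI activity.
true_image = rng.standard_normal(n_pixels)
decoded = [W @ true_image for W in layers]

def loss_and_grad(x):
    """Summed squared feature error over all layers, and its gradient."""
    loss, grad = 0.0, np.zeros_like(x)
    for W, target in zip(layers, decoded):
        err = W @ x - target
        loss += err @ err
        grad += 2.0 * W.T @ err
    return loss, grad

# A step size guaranteed stable for this quadratic objective.
lr = 1.0 / (2.0 * sum(np.linalg.norm(W, 2) ** 2 for W in layers))

# Reconstruct by gradient descent on the pixel values themselves.
x = np.zeros(n_pixels)
for _ in range(500):
    loss, grad = loss_and_grad(x)
    x -= lr * grad

# Once the features match across layers, the pixels match the target image.
print(np.allclose(x, true_image, atol=1e-3))
```

The paper's "natural image prior" plays no role here; in the real method a second network constrains the optimised pixels to look like a natural image rather than noise that happens to match the features.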
“We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images.”
The researchers claim that human judgement of the results conclusively shows that combining multiple DNN layers enhances the visual quality of the generated images. The results further suggest that hierarchical visual information in the brain can be combined to reconstruct both perceptual and subjective images. The researchers claim their approach could
“provide a unique window into our internal world by translating brain activity into images via hierarchical visual features.”
Well, ain’t that something.