Google’s artificial-intelligence software takes ordinary photographs and turns them into hallucinatory images. Because its engineers compare the imagery to dreams, they dubbed Google’s image-making technique “Inceptionism” and called the code that generates the pictures “Deep Dream.”

But many people come away from these images with the same feeling: they look less like a dream world than like a drug reaction.

The computer-generated images are filled with shifting colors, twisted lines, elongated faces, floating eyes, and disturbing wavy shapes made of shadow and light. The computer seems to be hallucinating, and in that respect it seems strangely human. It is unsettling.




The deep dream

The idea behind the project is to test how well a computer neural network has learned images of different animals and landscapes by asking the machine to show what it sees. Instead of showing the computer a picture of a tree and asking it to “tell me what this is,” the engineers told the machine to “zoom in on the image elements you see.”
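The instruction to “zoom in on what you see” is usually implemented as gradient ascent: nudge the pixels so that whatever the network already responds to becomes stronger. The sketch below is a toy illustration of that loop, assuming a single hand-written “layer” (a horizontal-edge detector) in place of a trained network, with gradients taken by finite differences rather than backpropagation.

```python
import numpy as np

# Toy stand-in for one network "layer": responds to horizontal
# differences between neighbouring pixels (a crude edge detector).
def layer_activation(image):
    return np.sum((image[:, 1:] - image[:, :-1]) ** 2)

def deep_dream_step(image, step_size=0.1, eps=1e-4):
    """One gradient-ascent step: nudge each pixel so the layer's
    activation grows, i.e. amplify whatever the layer "sees"."""
    grad = np.zeros_like(image)
    base = layer_activation(image)
    # Finite-difference gradient (fine for a toy 8x8 image).
    for idx in np.ndindex(image.shape):
        bumped = image.copy()
        bumped[idx] += eps
        grad[idx] = (layer_activation(bumped) - base) / eps
    # Normalise the gradient so the step size is predictable.
    return image + step_size * grad / (np.abs(grad).max() + 1e-8)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
before = layer_activation(img)
for _ in range(20):
    img = deep_dream_step(img)
after = layer_activation(img)
# Repeated steps amplify the pattern the "layer" responds to.
assert after > before
```

In a real Deep Dream run the same loop is applied to the activations of a trained convolutional network, with gradients computed by backpropagation instead of finite differences.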

This is the original image given to the computer:

The output of the machine looks like this:

Google engineers say the effect is no different from what people see in clouds of varying shapes: given ambiguous imagery, humans and computers alike recognise and “over-interpret” the outlines of familiar things.

“The neural network [used by Google] was trained mostly on images of animals, so it naturally interprets images as animals. But because the data is stored at a high level of abstraction, the results are these interesting combinations of learned features,” Google engineers wrote on the company’s official blog. “The results also vary a great deal from image to image, because the features of the input image bias the neural network toward certain interpretations. For example, horizontal lines tend to get filled in with pagoda shapes, rocks and trees turn into buildings, and birds and insects appear in images of leaves.”

Because neural networks analyze images as layers, described by colors, line types, shapes, and so on, the complexity of the result depends on which layer the engineers ask the computer to zoom in on. The lowest layers capture contours, lines and shadows, while the highest layers capture more complex structure. “For example, lower layers tend to produce stroke lines or simple decorative patterns, because they are more sensitive to basic features such as edges and their orientations,” the Google engineers wrote.
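The choice of layer is the main creative knob. A rough way to see why is to amplify two toy “layers” of different depth from the same starting image, one responding to simple edges and one to a composition of features. Both layer functions below are illustrative assumptions, not anything from Google’s actual network; gradients are again taken by finite differences.

```python
import numpy as np

# Toy stand-ins for layers at different depths.
def low_layer(img):
    # Sensitive to simple horizontal edges only.
    return np.sum(np.diff(img, axis=1) ** 2)

def high_layer(img):
    # A "composed" feature: responds to edge energy in both axes.
    return np.sum(np.diff(img, axis=1) ** 2) * np.sum(np.diff(img, axis=0) ** 2)

def amplify(img, layer, steps=15, step_size=0.05, eps=1e-4):
    """Gradient-ascent loop that magnifies whatever `layer` responds to."""
    img = img.copy()
    for _ in range(steps):
        grad = np.zeros_like(img)
        base = layer(img)
        for idx in np.ndindex(img.shape):
            bumped = img.copy()
            bumped[idx] += eps
            grad[idx] = (layer(bumped) - base) / eps
        img += step_size * grad / (np.abs(grad).max() + 1e-8)
    return img

rng = np.random.default_rng(1)
start = rng.random((6, 6))
dream_low = amplify(start, low_layer)
dream_high = amplify(start, high_layer)
# The same input, "dreamed" through different layers, diverges:
assert not np.allclose(dream_low, dream_high)
```

Amplifying the shallow layer exaggerates simple stripes, while the deeper, composed layer pushes the image toward richer two-dimensional structure, which mirrors the engineers’ observation that lower layers yield strokes and ornament while higher layers yield pagodas and animals.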

When these simple patterns are amplified by Deep Dream, the results look distorted and otherworldly. But that still leaves a question unanswered: why do the “dream” images of a computer neural network resemble drug-induced hallucinations?



Visual experience

“It’s important to remember that all normal human sensory experience is an illusion limited by sensory input,” says Lucas Sjulson, a research assistant professor at New York University’s Langone Neuroscience Institute. “Our hallucinations somehow mirror what’s really going on in the outside world. But perception is all formed inside.”

In other words, all human perception is formed in the brain, not in the real world, even if what you perceive is real. “People think of eyes as cameras, but they’re not,” explains Lucas Sjulson. Your eyes allow you to see, but your brain is the organ that actually interprets what you see — whether it’s the coffee cup sitting on the table or the kaleidoscope of fractal images projected into your mind.

When people take a hallucinogenic drug such as LSD, it stimulates a part of the brain’s cortex “to produce these kinds of patterns,” Sjulson says. So it is not surprising that a computer accustomed to working with many layers of an image produces similar visual effects when ordered to zoom in on one layer. “I think this is probably an example of a similar phenomenon. If you look at the workings of the mind, it engages in long-term problem solving, and it does so in a highly optimized way. Humans have evolved to learn by visual experience as well.”



Similarities between artificial intelligence and the human brain

Visual experience is also how humans train computer vision. How neural networks perceive images in general may do more to answer the question above than how a computer “sees” any specific image, and that is exactly what Google’s engineers originally set out to explore.

“We actually ‘see’ things that aren’t there all the time,” says Jeffrey Guss, who studies at New York University how the hallucinogenic substance found in certain mushrooms can help cancer patients. “Our visual cortex — not our eyes — is dedicated to picking out recognisable patterns in the information our eyes provide. Many psychological experiments show that we often see what we expect to see and what we are told we will see, rather than what is actually there.”



Although hallucinogenic experiences are often associated with drug culture, people regularly have exotic visual experiences even when they are not under the influence of anything. In his book Hallucinations, the late neurologist Oliver Sacks argues that the experience is far more common than many realise. “Whereas in some other cultures hallucinations are seen as gifts from the gods or the Muses, in contemporary society they carry an ominous meaning in the public mind, and in medicine as well, and are often considered signs of serious mental or neurological disorders,” he wrote in The New York Times in 2012. “For many people, having hallucinations is a horrible secret — and there are millions of such people — that they never talk about and find hard to admit, yet there is nothing unusual about it.”

In humans, strange visual perceptions have been linked to conditions of the eye, the head and beyond: migraines, fevers and seizures, for example. That similar patterns emerge in computer “brains” suggests that artificial intelligence is more human-like than it appears. And the fact that Google’s Deep Dream images strike humans as drug-induced hallucinations suggests that, at some deep level, our brains work in ways similar to computer neural networks.