Wheeled polychromatic chickens hover beside a dog-headed purple serpent presiding over a jumble of picturesque foreign rooftops. No, you’re not looking at a Dali painting or a rendering of a particularly memorable acid trip. You’re in fact seeing the imagination of a computer program.
To a computer, an image is just a big list of numbers indicating the position and color of its component pixels. But for a long time, getting a computer to interpret these numbers as an abstract version of something in the world—like a dog or a stove—seemed like a pipe dream. For example, verification techniques like CAPTCHA ask you to extract meaning from images (something that robots historically couldn’t do) in order to prove that you’re human. But advances in computer science are making that dream come true—and sometimes, the emergent reality is stranger than anything its creators dreamed of.
One way to get a computer to extract meaning from a mess of pixels is to have it learn from experience. A team at Google is working on classifying images using a tool called an “artificial neural network.” A neural network works kind of like your own brain, in which interconnected neurons use a complicated recipe to turn input into answers. For example, your eyes take in a bunch of green, stationary blobs, and your brain concludes that you’re looking at a forest. The artificial network’s “neurons” are organized into “layers,” which take some input (a bunch of pixels) and pass it along according to some recipe to get an answer (an English-language description of the picture). The lower layers deal with basic concepts like lines or shapes, and the higher layers deal with more abstract ideas like animals. Computer scientists feed the artificial network millions of examples so it can master these recipes.
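The layered pass-it-along idea can be sketched in a few lines of code. This is a hypothetical toy, not Google’s actual network: the layer sizes, random weights, and class count are all invented for illustration, and a real network would learn its weights from those millions of examples rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # Each layer mixes its inputs and applies a simple nonlinearity,
    # passing the result upward: lower layers respond to simple
    # patterns, higher layers to combinations of those patterns.
    return np.maximum(0.0, weights @ x)

pixels = rng.random(16)            # stand-in for a tiny image
w1 = rng.standard_normal((8, 16))  # lower layer: lines and shapes
w2 = rng.standard_normal((3, 8))   # higher layer: abstract ideas
scores = w2 @ layer(pixels, w1)    # one pass through both layers
answer = int(np.argmax(scores))    # index of the network's best guess
```

The final `argmax` is the “answer” step: whichever of the three made-up categories scores highest is the network’s guess for the picture.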
But the inner workings of neural networks are often opaque. The Google researchers wondered what their neural network would come up with if they asked it to generate an image of a specific object, like a banana. What would the neural network think a banana looks like? As the researchers investigated this, they tried asking an individual layer to enhance whatever it recognized. They then instructed the neural network to repeat this process in a kind of computer introspection.
As the Google research blog describes it:
This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
Starting with just random pixels, the researchers set up the feedback loop, which generated animals, architectural elements, and more, to produce this startling Inceptionism Gallery. (“Inceptionism” is a neural network approach named after the “we need to go deeper” meme inspired by the film Inception.) The technique doesn’t just make psychedelic images—it has already been incorporated into Google’s photo search tool. The creators suggest it could even help artists discover new ideas.
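The feedback loop the quote describes can be sketched as repeated amplification: nudge the image so that a layer responds more strongly, then do it again. This is a heavily simplified stand-in, assuming a single made-up linear “layer” in place of a trained network; the filter, step size, and iteration count are all illustrative, not drawn from Google’s system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one network layer: a fixed random filter.
# A real network would have learned what this filter responds to.
filt = rng.standard_normal(64)

def activation(img):
    # How strongly the "layer" responds to the current image.
    return float(filt @ img)

img = rng.random(64)      # start from near-random pixels
before = activation(img)
for _ in range(50):
    # Feedback step: adjust the image in the direction that makes
    # the layer respond more strongly. For this linear stand-in,
    # that direction is just the filter itself.
    img += 0.1 * filt
after = activation(img)
# `after` exceeds `before`: each pass exaggerates whatever the layer
# faintly detected, the same effect that turns a cloud into a bird.
```

Each iteration strengthens the response, so a pattern the layer barely noticed at the start ends up dominating the image—the “highly detailed bird appears, seemingly out of nowhere.”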
The images evoke Surrealism in more than appearance alone. The stated goal of Surrealism is to “resolve the previously contradictory conditions of dream and reality.” So in tackling the difficult task of teaching computers to connect images with ideas, Google’s project has been a surrealist enterprise all along, even before it started tossing out pictures that resemble something Escher might have sketched at a Phish concert.