Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

After Machine Intelligence: Machine Creativity - Can Machines do Art?

The question "Can machines think?" is rapidly becoming obsolete. I have no way of knowing whether you "think". We cannot enter someone else's brain and find out whether that person has feelings, emotions, thoughts, etc. All we know about other people's inner lives is that they generate behavior very similar to our own, and therefore we conclude that other people too must have the same kind of inner lives that we have (feelings, emotions, thoughts, etc). Since we cannot even determine with absolute certainty the consciousness of other people, it sounds a bit useless to discuss whether machines can be conscious. Can machines think? Maybe, but we'll never find out for sure, just as we'll never find out for sure whether all humans think.

The question "Can machines be creative?" is much more interesting. Humans have always thought of themselves as creative beings, but have always failed to explain what that really means. The humble spider can make a very beautiful spider-web. Some birds create spectacular nests. Bees perform intricate dances. Most humans don't think that the individual spider or the individual bird is a "creative being". Humans assume that something in its genes made it do what it did, no matter how complex and brilliant. But what exactly is different about Shakespeare, Michelangelo and Beethoven?

Humans use tools to make art (if nothing else, a pen). But the border between artist and tool has been blurred ever since Harold Cohen conceived AARON, a painting machine, in 1973. Cohen asked: "What are the minimum conditions under which a set of marks functions as an image?" I would rephrase it as "What are the minimum conditions under which a set of signs functions as art?" Even Marcel Duchamp's "Fountain" (1917), which is simply a urinal, is considered "art" by the majority of art critics. Abstract art is mostly about abstract signs. Why are Piet Mondrian's or Wassily Kandinsky's simple lines considered art? Most paintings by Vincent Van Gogh and Pablo Picasso are just "wrong" representations of the subject: why are they art, and, in fact, great art?

During the 1990s and 2000s several experiments further blurred that line: Ken Goldberg's painting machine "Power and Water" at the University of Southern California (1992); Matthew Stein's PumaPaint at Wilkes University (1998), an online robot that allowed Internet users to create original artwork; Jürg Lehni's graffiti-spraying machine Hektor in Switzerland (2002); David Cope's music-composition program Emily Howell, which was conceived in 2003 and went on to release the albums "From Darkness, Light" (2009) and "Breathless" (2012); the painting robots developed since 2006 by Washington-based software engineer Pindar Van Arman; and Vangobot (2008) (pronounced "Van Gogh bot"), a robot built by Nebraska-based artists Luke Kelly and Doug Marx that renders images according to preprogrammed artistic styles. After a Kickstarter campaign in 2010, Chicago-based artist Harvey Moon built drawing machines, set their "aesthetic" rules, and let them do the actual drawing. In 2013 Oliver Deussen's team at the University of Konstanz in Germany demonstrated e-David (Drawing Apparatus for Vivid Interactive Display), a robot capable of painting with real colors on a real canvas. In 2013 the Galerie Oberkampf in Paris showed paintings produced over a number of years by a computer program, "The Painting Fool", designed by Simon Colton at Goldsmiths College in London. The Living Machines exhibition of 2013 at London's Natural History Museum and Science Museum featured "Paul", a creative robot capable of sketching a portrait, developed by French inventor Patrick Tresset since 2011, and BNJMN (pronounced "Benjamin"), a robot capable of generating images, built for the occasion by Travis Purrington and Danilo Wanner of the Basel Academy of Art and Design.

While each of these systems made headlines in the press, none was autonomous, and the "trick" was easy to detect.

Then deep learning happened. Deep learning employs a multi-layer network that is trained to recognize an object. The training consists of showing the network many instances of that object (say, many cats). Andrew Zisserman's team at Oxford University was probably the first to think of asking a neural network to show what it was learning during this training ("Deep Inside Convolutional Networks", 2014). Basically, they used the neural network to generate an image of the object being learned (say, what the neural network had learned a cat to look like).
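The trick can be sketched in a few lines of toy code: freeze the trained weights and run gradient ascent on the input instead of the weights. This is only an illustrative numpy sketch with a made-up one-neuron linear "detector", not the Oxford team's actual convolutional method:

```python
# Toy sketch of feature visualization: the "network" is a single frozen
# linear detector w, and we run gradient ascent on the INPUT x to find
# the "image" that most excites it. (Illustrative only: the real work
# uses convolutional networks and real images.)
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # frozen, pretend-trained "cat" detector
x = np.zeros(16)                   # start from a blank "image"

for _ in range(100):
    grad = w                       # d(score)/dx for score = w . x
    x += 0.1 * grad                # ascend the detector's score
    x /= np.linalg.norm(x)         # keep the "image" bounded

# cosine similarity between the generated "image" and the detector's
# weights: it approaches 1.0, i.e. x has become the pattern the
# detector was trained to respond to
cosine = float(x @ w / np.linalg.norm(w))
```

The same logic, applied to a deep network instead of a single linear unit, yields the dream-like "what a cat looks like to the network" images.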

In May 2015 a Russian engineer at Google's Swiss labs, Alexander Mordvintsev, used that idea to make a neural network produce psychedelic images. One month later he posted a paper titled "Inceptionism" (jointly with Christopher Olah, an intern on Jeff Dean's Google Brain team in Silicon Valley, and with Mike Tyka, an artist working for Google in Seattle) that in effect founded a new art movement. Neural networks trained to recognize images can be run in reverse so that they instead generate images. More importantly, the networks can be asked to identify objects that don't actually exist, much as you might see a face in a cloud. By feeding this "optical illusion" back into the network over and over again, the network eventually produces a detailed image, which is basically the machine's equivalent of a human hallucination. For example, a neural network trained to recognize animals will identify nonexistent animals in a cloudy sky.
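The feedback loop can be illustrated with a toy numpy sketch (a made-up one-pattern "network" and random numbers, not Google's actual DeepDream code): the network faintly detects its pattern in noise, and each pass pushes the image further toward what was detected, so the "hallucination" grows sharper:

```python
# Toy sketch of the Inceptionism feedback loop: a "network" that knows
# a single pattern faintly detects it in a noisy "cloud", and each pass
# amplifies the image in the direction of what was detected.
# (Illustrative only, not Google's DeepDream code.)
import numpy as np

rng = np.random.default_rng(1)
pattern = rng.normal(size=64)            # the one thing the net "knows"
pattern /= np.linalg.norm(pattern)

cloud = rng.normal(scale=0.1, size=64)   # a "cloudy sky": mostly noise

def activation(img):
    """How strongly the net 'sees' its pattern in the image."""
    return float(img @ pattern)

before = activation(cloud)               # a faint, near-zero detection
for _ in range(50):
    cloud += 0.2 * pattern               # gradient ascent on the activation

after = activation(cloud)                # the faint detection has grown
```

Iterated on a deep network and a real photograph, this is what turns wisps of cloud into vivid dog faces.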

In August 2015 two students (Leon Gatys and Alexander Ecker) at Matthias Bethge's lab at the University of Tübingen in Germany taught a neural network to capture an artistic style and then apply that style to any picture ("A Neural Algorithm of Artistic Style", 2015). A neural network trained to recognize objects tends to separate content from style, and the "style" component can be applied to other objects, yielding a version of those objects in the style that the network learned. In other words, neural networks can imitate the style of any maestro, turning any photograph into a painting in that style.
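The content/style separation rests on a simple observation, which the following toy numpy sketch illustrates with random numbers standing in for the convolutional features of a real network: the Gram matrix of a feature map records which feature channels fire together ("style") while discarding where they fire ("content"):

```python
# Toy sketch of the Gram-matrix idea behind "A Neural Algorithm of
# Artistic Style": "style" = which feature channels fire together
# (position-independent); "content" = where they fire. Random numbers
# stand in for the features of a real trained network.
import numpy as np

rng = np.random.default_rng(2)
F = rng.normal(size=(4, 10))        # 4 feature channels x 10 positions

def gram(feats):
    return feats @ feats.T           # channel-by-channel correlations

F_shifted = np.roll(F, 3, axis=1)    # same texture, moved across the image

style_same = bool(np.allclose(gram(F), gram(F_shifted)))   # True
content_same = bool(np.allclose(F, F_shifted))             # False
```

Because the Gram matrix forgets positions, an optimizer can match the style statistics of a Van Gogh while matching the content features of your photograph, producing your photograph "painted" by Van Gogh.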

In September 2015, at the International Computer Music Conference, Donya Quick, a composer working at Paul Hudak's lab at Yale University, presented a computer program called Kulitta for automated music composition. In February 2016 she published on Soundcloud a playlist of Kulitta-made pieces.

In February 2016 Google staged an auction of 29 paintings made by its artificial intelligence at the Grand Theater in San Francisco in collaboration with the Gray Area Foundation for the Arts ("DeepDream: The Art of Neural Networks").

In March 2016, a 20-year-old Princeton University student, Ji-Sung Kim, and his friend Evan Chow created a neural network that can improvise like a jazz musician on Pat Metheny's "And Then I Knew" (1995).

In April 2016 a new Rembrandt portrait was unveiled in Amsterdam, 347 years after the painter's death: Joris Dik at Delft University of Technology created this 3D-printed fake Rembrandt consisting of more than 148 million pixels based on 168,263 fragments from 346 of Rembrandt's paintings. (To be fair, a similar feat had been achieved in 2014 by Jeroen van der Most whose computer program had generated a "lost Van Gogh" after analyzing statistically 129 real paintings of the master).

In May 2016 Daniel Rockmore at Dartmouth College organized the first Neukom Institute Prizes in Computational Arts (soon nicknamed the "Turing Tests in the Creative Arts"), that included three contests to build computer programs that can create respectively a short story, a sonnet, and a DJ set. Spanish students Jaume Parera and Pritish Chandna won the prize for the DJ set, while three students of Kevin Knight's lab at the University of Southern California won the prize for the sonnet ("And from the other side of my apartment/ An empty room behind the inner wall/ A thousand pictures on the kitchen floor/ Talked about a hundred years or more").

In May 2016 the TED crowd got to hear a talk by Blaise Aguera y Arcas, principal scientist at Google, titled "We're on the edge of a new frontier in art and creativity and it's not human".

In July 2016 a Bay Area software engineer, Karmel Allison, launched CuratedAI, an online magazine of poems and prose written by A.I. programs.

The standard objection to machine art is that the artwork was not produced by the machine: a human being designed the machine and programmed it to do what it did, hence the machine should get no credit for its "artwork". Because of their nonlinearity, neural networks distance the programmer from the working of the program, but ultimately the same objection holds.

However, if you are painting, it means that a complex architecture of neural processes in your brain made you paint, and those processes are due to the joint work of a genetic program and of environmental forces. Why should you get credit for your artwork?

If what a human brain does is art, then what a machine does is also art.

A skeptical friend, who is a distinguished art scholar at UC Berkeley, told me: "I haven't seen anything I'd take seriously as art". But that is a weak argument: many people cannot take seriously as art the objects exhibited in museums of contemporary art, not to mention performance art, body art and dissonant music. How does humankind decide what qualifies as art?

The Turing Test of art is simple. We are biased when we are told "this was done by a computer". But what if they show us the art piece and tell us it was made by an Indonesian artist named Namur Saldakan? I bet there would be at least one influential art critic ready to write a lengthy analysis of how Saldakan's art reflects the traditions of Indonesia in the context of globalization, and so on.

In fact, the way that a neural network can be "hijacked" to make art may help us understand the brain of the artist. It could lead to a conceptual breakthrough for neuroscientists. After all, nobody has ever come up with a decent scientific theory of creativity. Maybe those who thought of running the neural net in reverse told us something important about what "creativity" is.

This machine art poses other interesting questions for the art world.

What did the art collectors buy at the Google auction? The output of a neural network is a digital file, which can be copied in a split second: why would you pay for something of which an unlimited number of copies can be made? To guarantee that no other copies will ever be made, one would have to physically destroy the machine or retrain the neural network so that it can never generate those images again.

Who appreciates human art? Humans. We even have professionals called "art critics" who spend their entire lives doing just that. Who appreciates machine art? The same humans. That is where the notion of art diverges. Human art is for humans: it influences humans, and it is part of human history. Faced with machine art, we try to fit it into that human narrative. This introduces an asymmetry between human art and machine art. For full symmetry, it is not enough to have a machine that produces art: you also need machines that can appreciate that art and place it in a historical and social context; otherwise machine art still lacks something that human art has.

Machine art shows that it is not difficult to be creative, but it is difficult to be creative in a way that matters.
