Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

Semantics

Misconceptions in discussing machine intelligence

In private conversations about "machine intelligence" i like to quip that it is not intelligent to talk about intelligent machines: whatever they do is not what we do, and, therefore, is neither "intelligent" nor "stupid" (attributes invented to define human behavior). Talking about the intelligence of a machine is like talking about the leaves of a person: trees have leaves, people don't. "Intelligence" and "stupidity" are not properties of machines: they are properties of humans. Machines don't think, they do something else. Machine intelligence is as much an oxymoron as human furniture. Machines have a life of their own, but that "life" is not human life.

We apply to machines many words invented for humans simply because we don't have a vocabulary for the states of machines. For example, we buy "memory" for our computer, but that is not memory at all: it doesn't remember (it simply stores) and it doesn't even forget, the two defining properties of (biological) memory. We call it "memory" for lack of a better word. We talk about the "speed" of a processor, but it is not the "speed" at which a human being runs or drives. We don't have the vocabulary for machine behavior. We borrow words from the vocabulary of human behavior. It is a mistake to assume that, because we use the same word to name them, they are the same thing. If i see a new kind of fruit and call it "cherry" because there is no word in my language for it, it doesn't mean it is a cherry. A computer does not "learn": what it does when it refines its data representation is something else (something that we don't do).

It is not just semantics. Data storage is not memory. Announcements of exponentially increasing data storage miss the point: that statistical fact is as relevant to intelligence as the exponential increase in credit card debt. Just because a certain sequence of zeroes and ones happens to match a sequence of zeroes and ones from the past, it does not mean that the machine "remembered" something. Remembering implies a lot more than simply finding a match in data storage. Memory does not store data: you cannot retell a story accurately (you will miss, and probably distort, tons of details), and you cannot retell it twice with the same words (each time you will use slightly different words). Ask someone what her job is, something that she has been asked a thousand times, and she'll answer the question every time with a different sequence of words, even if she tries to use the same words she used five minutes earlier. Memory is "reconstructive", the crucial insight that Frederic Bartlett had in 1932. We memorize events in a very convoluted manner, and we retrieve them in an equally convoluted manner. We don't just "remember" one thing: we remember our entire life whenever we remember something. It's all tangled together. You understand something not when you repeat it word by word like a parrot (parrots can do that, and tape recorders can do that) but when you summarize it in your own words, different words from the ones you read or heard: that is what we call "intelligence". I am always fascinated, when i write something, to read how readers rewrite it in their own words, sometimes using completely different words, and sometimes saying it better than i did.
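
To make the contrast concrete, here is a minimal sketch (my illustration, not taken from any actual system) of what "finding a match in data storage" amounts to: an exact lookup that either returns the stored bytes or nothing at all. The class and method names are invented for the example.

```python
# A minimal sketch of retrieval-as-exact-matching in data storage.
# The names (Storage, save, retrieve) are invented for illustration.
from typing import Optional

class Storage:
    def __init__(self) -> None:
        self._data = {}                      # key -> stored bytes

    def save(self, key: str, value: bytes) -> None:
        self._data[key] = value              # store the exact sequence of bits

    def retrieve(self, key: str) -> Optional[bytes]:
        # Succeeds only on an exact match; there is no reconstruction,
        # no summarizing, no linking to everything else ever stored.
        return self._data.get(key)

storage = Storage()
storage.save("packer-article", b"a long piece on the culture of Silicon Valley")
print(storage.retrieve("packer-article"))                      # exact key -> the stored bytes
print(storage.retrieve("that article from a few weeks ago"))   # vague cue -> None
```

Human memory does the opposite: it retrieves from vague, partial, even wrong cues, and what it returns is a reconstruction, not the original bytes.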

It is incredible how bad our memory is. A friend recommended an article by David Carr about Silicon Valley published in the New York Times "a few weeks ago". It took several email interactions to figure out that a) the article was written by George Packer, b) it was published by the New Yorker, c) it came out one year earlier. And, still, it is amazing how good our memory is: it took only a few sentences during a casual conversation for my friend to relate my views on the culture of Silicon Valley to an article that she had read one year earlier. Her memory has more than just a summary of that article: it has a virtually infinite number of attributes linked to that article such that she can find relevant commonalities with the handful of sentences she heard from me. It took her a split second to make the connection between some sentences of mine (presumably ungrammatical and incoherent sentences because we were in a coffee house and i wasn't really trying to compose a speech) and one of the thousands of articles that she has read in her life.

All forms of intelligence that we have found so far use memory, not data storage. I suspect that, in order to build an artificial intelligence that can compete with the simplest living organism, we will first need to create artificial memory (not data storage). Data storage alone will never get you there, no matter how many terabytes it will pack in a millimeter.

Human memory confuses things easily, and it even forgets easily. In fact, human memory is extremely good at forgetting. If we forgot important facts, we would be dead; instead, we forget precisely those facts whose removal improves our memory (not just the storage, but the whole ability to retrieve and associate knowledge). For example, Blake Richards and Paul Frankland of the Canadian Institute for Advanced Research (CIFAR) have shown that forgetting is as important for human memory as remembering ("The Persistence and Transience of Memory", 2016). Machines that don't forget don't "remember" either ("remember" the way human memory does, in relation to everything else).

What computers do is what the scientific literature calls "savant syndrome": a very low intelligence quotient paired with a prodigious memory.

I am not advocating that machines should be as forgetful and slow as us. I am simply saying that we shouldn't be carried away by a faulty and misleading vocabulary.

Data is not knowledge either: having amassed all the data about the human genome does not mean that we know how human genes work. We know a tiny fraction of what they do even though we have the complete data.

I was asking a friend how the self-driving car works in heavy traffic and he said "the car knows which other cars are around". I objected that the car does not "know" it. There is a system of sensors that continuously relays information to a computer that, in turn, calculates the trajectory and feeds it into the motor controlling the steering wheel. This is not what we mean when we say that we "know" something. The car does not "know" that there are other cars around, it does not "know" that cars exist, and it doesn't even "know" that it is a car. It is certainly doing something, but what it is doing is something other than "knowing". This is not just semantics: because the car does not "know" that it is a car surrounded by other cars driving on a road, it also lacks all the common sense and general knowledge that comes with that knowledge. If an elephant fell from the sky, a human driver would at least be surprised (and probably worried about stranger things ahead), whereas the car would simply interpret it as an object parked in the middle of the highway.
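
A deliberately simplified sketch of the sensor-to-steering loop just described; every name in it (read_sensors, plan_steering, and so on) is a hypothetical placeholder, not the API of any real self-driving system.

```python
# A toy sense-compute-actuate loop. Nothing in it represents the concepts
# "car", "road" or "elephant"; it only maps numbers to numbers.
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    x: float      # meters ahead
    y: float      # meters to the side

def read_sensors() -> List[Obstacle]:
    # In a real car this would come from lidar/radar/cameras.
    return [Obstacle(x=12.0, y=-1.5), Obstacle(x=30.0, y=2.0)]

def plan_steering(obstacles: List[Obstacle]) -> float:
    # Steer slightly away from the nearest obstacle: pure geometry.
    nearest = min(obstacles, key=lambda o: o.x)
    return -0.05 if nearest.y < 0 else 0.05   # steering angle in radians

def control_loop() -> None:
    obstacles = read_sensors()
    angle = plan_steering(obstacles)
    # Send the angle to the steering actuator; nothing here "knows" anything.
    print(f"steering command: {angle:+.2f} rad")

control_loop()
```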

When (in 2013) Stanford researchers trained a robot to take the elevator, they ran into a non-trivial problem: the robot stopped in front of the glass doors of the elevator, interpreting its own reflection in the glass as another robot. The robot does not "know" that the thing in front of it is a glass door; otherwise it would easily realize that there is no approaching robot, just a reflection getting bigger, like all reflections do when you walk towards a mirror.

It is easy to claim that, thanks to Moore's law, today's computers are one million times faster than the computers of the 1980s, and that a smartphone is thousands of times faster than the fastest computer of the 1960s. But faster at what? Your smartphone is still slower than a snail at walking (it doesn't move, does it?) and slower than television at streaming videos. Plenty of million-year-old organisms are faster than the fastest computer at all sorts of biological processes. And plenty of analog devices (like television) are still faster than digital devices at what they do. If Moore's law applies to the next 20 years, there will be another million times improvement in processing speed and storage capacity. But that may do nothing to increase the speed at which a computer can summarize a film. The movie player that i use today on my laptop is slower (and a lot less accurate) at rewinding a few scenes of a film than the old videotape player of twenty years ago, no matter how "fast" the processor of my laptop is.
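
For the record, the arithmetic behind "a million times faster" is just compound doubling, and whether 20 more years yields another thousand-fold or another million-fold improvement depends entirely on which doubling period one assumes (Moore's original one-year figure versus the later two-year figure). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope arithmetic for Moore's-law claims: repeated doubling.

def moore_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(f"{moore_factor(40, 2):,.0f}x over 40 years (2-year doubling)")  # ~1,048,576x
print(f"{moore_factor(20, 1):,.0f}x over 20 years (1-year doubling)")  # ~1,048,576x
print(f"{moore_factor(20, 2):,.0f}x over 20 years (2-year doubling)")  # ~1,024x
```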

We tend to use cognitive terms only for machines that include a computer, and this habit started way back when computers were invented (the "electronic brains"!). Thus the cognitive vocabulary tempts people to attribute "states of mind" to those machines. We don't usually do this to other machines. A washing machine washes clothes. If a washing machine is introduced that washes tons of clothes in a split second, consumers will be ecstatic, but presumably nobody would talk about it as if it related to a human or superhuman intelligence. And note that appliances do some pretty amazing things. There's even a machine called "television set" that shows you what is happening somewhere else, a feat that no intelligent being can do. We don't attribute cognitive states to a television set even though the television set can do something that requires more than human intelligence.

Take happiness instead of intelligence. One of the fundamental states of human beings is "happiness". When is a machine "happy"? The question is meaningless: it's like asking when a human being needs to be watered. You water plants, not humans. Happiness is a meaningless word for machines. Some day we may start using the word "happy" to mean, for example, that the machine has achieved its goal or that it has enough electricity; but it would simply be a linguistic expedient. The fact that we may call it "happiness" does not mean that it "is" happiness. If you call me Peter because you can't spell my name, it does not mean that my name is Peter.

Semantics is important to understand what robots really do. Pieter Abbeel at UC Berkeley has a fantastic robotic arm that can fold towels with super-human dexterity. But what it does is "not" what a human does when she folds towels. Abbeel's robot picks up a towel, shakes it, turns, and folds it on a table. And it does it over and over again, implacably, without erring. What any maid in a hotel does is different. She picks up a towel and folds it unless the towel is still wet, or has a hole, or needs to be washed again, and so on. That is what "folding towels" really means. The robot is not folding towels: it is performing a mechanical movement that results in folded towels, but sometimes also results in a pile of folded towels that is useless because it contains unusable towels. The maid is not paid to do what the robot does. She is paid to fold towels, not to "fold towels" (the quotes make the difference).
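
To make the quotes concrete, here is an illustrative contrast (my sketch, not the code of any actual robot): an unconditional folding loop versus a loop that captures at least part of what "folding towels" means.

```python
# The robot's loop versus the maid's. All names and attributes are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Towel:
    wet: bool = False
    torn: bool = False
    dirty: bool = False

def robot_fold(towels: List[Towel]) -> List[Towel]:
    # The same mechanical movement, applied to every towel, no exceptions.
    return list(towels)

def maid_fold(towels: List[Towel]) -> List[Towel]:
    # Fold only the usable towels; the others go back to the laundry.
    return [t for t in towels if not (t.wet or t.torn or t.dirty)]

pile = [Towel(), Towel(wet=True), Towel(torn=True)]
print(len(robot_fold(pile)), "towels in the robot's pile (unusable ones included)")
print(len(maid_fold(pile)), "towels in the maid's pile")
```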

Cleaning up a table is not just about throwing away all the objects that are lying on the table, but about recognizing which objects are garbage and which are not, which ones must be moved elsewhere and which ones belong there (e.g. a vase full of flowers, but not a vase full of withered flowers).

It is true that machines can now recognize faces, and even scenes, but they have no clue what those scenes mean. We will soon have machines that can recognize the scene "someone picked up an object in a store", but when will we have a machine that can recognize "someone STOLE an object from a store?" A human being understands the meaning of this sentence because humans understand the context: some of those objects are for sale, or belong to the shop, and a person walking away with those objects is a thief, which is very different from being a store clerk arranging the goods on the shelves or a customer bringing the object to the counter and paying for it. We can train neural networks to recognize a lot of things, but not to understand what those things mean.

And the day when we manage to build a machine that can recognize that someone is stealing someone else's wallet, we will still have a higher level of understanding to analyze: that scene could be a prank, as indicated by the fact that one of the two people is smiling, and that we know that they are old friends. In that case you don't call the police but simply wait for the fun to begin. And if we build a machine that can even recognize a prank, we will still have to go up one more level of abstraction to consider the case in which this is happening in a movie, not in reality. And so on and on. The human mind can easily grasp these situations: the same scene can mean so many different things.

The automatic translation software that you use from Chinese to English doesn't have a clue what those Chinese words mean nor what those English words mean. If the sentence says "Oh my god there's a bomb!" the automatic translation software simply translates it into another language. A human interpreter would shout "everybody get out!", call the emergency number and... run!

Intelligence is not about the error rate in recognizing an action. Intelligence is about "recognizing" the action for what it is. Mistakes are actually fine. We make mistakes all the time. Sometimes we think we recognized an old friend and instead it turns out to be a complete stranger. We laugh and move on. And that's another way to ponder the difference in semantics. We laugh out loud when we see the kind of mistakes that a computer makes when it is trying to recognize a scene. I just searched for images related to "St Augustine what is time" and the most famous search engine returned a page of pizza images. An image search for "Popper logic scientific discovery" returns a page with the correct images of that philosophy classic, but when i clicked on the image of the original edition i got a group of "related images" that were about a porno star. (Yes, the same result showed up on all computers, not just mine). This is what humans do: humans laugh out loud when someone (or something) makes such silly mistakes. The real Turing Test is this: when will we have a computer that laughs out loud at silly mistakes made by other computers or by itself?

Even the word "mistake" needs to be properly defined. It has been widely publicized that machines can now recognize images with more accuracy than humans can; that machines make fewer mistakes. But that is a misleading statement. The mistakes that we make when we misjudge an image are important. When hiking in the forest, for example, a hiker may mistake a tree for a bear. That is probably a mistake that a machine would not make: trained to recognize trees, it would recognize a tree as a tree, not as a bear. Sometimes we even mistake a tree for a person, or for a fellow hiker; and a boulder behind the tree for a tent. A machine would probably never mistake a rock for a tent. But these mistakes are actually important: there "could" be a bear in the forest, and there "could" be a fellow hiker, and so on. These are not details. These are important facts. Our survival depends on these "mistakes". There is a big misunderstanding in what constitutes a mistake: a mistake is to think that there are no bears in the forest. Our brain is expecting bears, just like it is expecting other hikers (and many other possible encounters). A machine that does not expect bears and hikers will obviously never be mistaken, but that's a machine that knows nothing about the environment. As a tool for hikers in the wilderness, it would be a dangerous tool. I will trust the machine the day that it mistakes a tree for a bear. Then i will feel confident that the machine is ready to go into the wilderness.

Using human semantics, the most intelligent machines ever built, such as IBM's "Watson" and Google's "AlphaGo", are incredibly stupid. They can't even cook an omelette, they cannot sort out my clothes in the drawers, they cannot sit on the sidewalk and gossip about the neighborhood (i am thinking of human activities that we normally don't consider "very intelligent"). A very dumb human being can do a lot more than the smartest machines ever built, and that's probably because there is a fundamental misunderstanding about what "intelligent" means.

AlphaGo did not "win" a game of weiqi/go. AlphaGo never learned how to play weiqi. AlphaGo cannot answer any question, but, if it could, it would not know the answer to the question "what are the rules of weiqi?" AlphaGo simply calculates the most likely good move based on moves made by thousands of go masters in similar situations. AlphaGo has no idea that it is playing go, playing a game, playing with humans, etc. Therefore it does not "win" or "lose".

There is also talk of "evolving A.I. systems", which projects the image of machines getting more and more intelligent. This can mean different things: a) a program that devises a better technique to solve problems; b) a program that has improved itself by learning from human behavior; c) a program that has improved itself through self-play. None of this is what we mean when we say that a species evolved in nature. Evolution in nature means that a population makes children that are all slightly different and then natural selection rewards the ones that are best fitted to the environment. After thousands of generations, the population will evolve into a different species that will not mate with the original one. There is nothing wrong with software programs that get better at doing what they do, but calling it "evolution" evokes a metaphor (and an emotional reaction) that just does not apply to today's software. There is no software that evolves. And even if you really want to call it "evolution", you should realize that the software program has "evolved" because of the software engineer who programmed it. If tomorrow beavers start building better dams, do you talk about the evolution of dams or the evolution of beavers?

The mother of all misunderstandings is the fact that we classify some technologies under the general label "Artificial Intelligence", which automatically implies that machines equipped with those technologies will soon become as intelligent as humans. There are many technologies that have made, are making and will make machines more intelligent. For example, the escapement and the gyroscope made several machines more intelligent, from clocks to motion-sensing devices, but people are not alarmed that escapements and gyroscopes might take over the world and kill us all. Monte Carlo methods have been widely used in simulation since Stanislaw Ulam published the first paper in 1949. They are usually classified under "Numerical Analysis" and sometimes under "Statistical Analysis", and don't scare anybody. Mathematically speaking, they apply statistical methods to find a solution to problems that are described by mathematical functions with no known solution. Sounds boring, right? But the Monte-Carlo tree search is a Monte Carlo method used by AlphaGo for determining the best move in a game. Now it doesn't sound boring anymore, right? If we now classify the Monte Carlo method under "Artificial Intelligence", we suddenly turn a harmless statistical technique into some kind of dangerous intelligent agent, and the media will start writing articles about how this technique will create super-intelligent machines. That is precisely what happened with "neural networks". When in 1958 the psychologist Frank Rosenblatt built the first "neural network", his aim was indeed to model how the human brain works. Today we know that the similarities are vague at best. It is like comparing a car to a horse because the car was originally called the "horseless carriage" (we still measure a car's power in horsepower!) Progress in neural networks has not been based on neuroscience but on computational mathematics: we need mathematical functions that can be implemented in computers and that can yield solutions in a finite time. Calling them "neural networks" makes people think of brains, and turns them into ideas for Hollywood movies. If we called them "constraint propagation" (which is what they are), they would only make people think of the algebra that they hated in high school.
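
To see how unglamorous a Monte Carlo method is once the "Artificial Intelligence" label is removed, here is the textbook example of estimating pi by random sampling; it illustrates the general statistical idea, not AlphaGo's tree search.

```python
# The textbook Monte Carlo method: estimate pi by sampling random points in
# the unit square and counting how many fall inside the quarter circle.
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi())   # ~3.14, no intelligent agent involved
```
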
The very notion of what is supposed to be as "intelligent" as humans is frequently flawed. When they tell us that an A.I. system (a neural network) has reached or surpassed human-level performance, what are they really comparing? They are comparing a human being, which is a body equipped with many organs, including eyes, mouth and arms, and encapsulating a brain, with a piece of software that takes some data as input and produces some other data as output. The human "performance" in, for example, recognizing an object consists in our eyes seeing the object and sending signals to the brain, which then recognizes the object. This process is completely different from the process of computing zeroes and ones inside a neural network that has no eyes. The neural network does not "see" the object and therefore what it is doing is not "recognizing" it. A fair comparison would be between a robot whose "brain" is such a neural network and which is fed not data files but natural images of the world. For example, let the human and the robot walk around the city and then compare their performance in recognizing objects. A blind person does not recognize an image. A neural network without a body does not "recognize" an image either: what it does is some mathematical computation that eventually outputs a result that one can compare with the results coming out of other machines. We humans cannot enter our brain alone into a competition with an A.I. system that has no body and no brain. We cannot pull our brain out of the skull and feed it data files to see if it is as good as an A.I. system at naming the objects represented by those data.

Incidentally, that's why sometimes i quip that A.I. is not a science but an art. If i draw two small circles, a straight vertical line between them and a curved line under the straight line, most people "recognize" a face. But obviously those signs are not a face at all: they are just signs on a piece of paper. Show Picasso's portrait of Ambroise Vollard and ask "what is it?" and most people would probably reply "a face". But that is not a face at all either. Such a face never existed. It is just an imaginary object that looks like someone's face. Now show the "Mona Lisa": what is it? A face? No, even that is not a face, even if this time the painter was very faithful to the real face of a real person. It is still not a face: it is a painting of someone's face. A face is the thing, made of flesh, in the front of my head. A painting of it is not a face, no matter how accurate it is.

"Machine learning" is the buzzword of 2017. Countless programs are being re-branded as "machine learning". Many of them do exactly the same thing that they were doing before being promoted to "machine learning". They do statistical analysis. They classify data. Call it "data classification" and nobody has visions of Hollywood monsters attacking humankind. But call it "machine learning" and you trigger a debate on whether these machines threaten humankind. Data classification is just one of the things that computers can do faster than humans, and that's because it can be reduced to computation using fairly old statistical methods. Neural networks are a new way of performing data classification. You can use a properly annotated set of male and female faces to train a neural network, and then the neural network will be able to classify a face as male or female. If want it to classify young and old faces, you need to train it again with a set of faces annotated in a different way. They can classify data much faster than any human the same way that they can compute much faster than any human. Whether "data classification" is all there is to "learning" is a different story, but calling it "machine learning" implies just that.

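Here is a minimal sketch of "machine learning" in the sense described above: fit a statistical rule to annotated examples, then use it to label new data. The features are made-up numbers standing in for measurements of an image, and the nearest-centroid rule is just one of the oldest classification methods.

```python
# "Training" = averaging the feature vectors of each annotated class;
# "prediction" = picking the closest class centroid. Plain statistics.
from statistics import mean
from typing import Dict, List, Tuple

def fit_centroids(examples: List[Tuple[List[float], str]]) -> Dict[str, List[float]]:
    by_label: Dict[str, List[List[float]]] = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(col) for col in zip(*rows)] for label, rows in by_label.items()}

def classify(features: List[float], centroids: Dict[str, List[float]]) -> str:
    def dist(a, b): return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

training_set = [([0.9, 0.1], "A"), ([0.8, 0.2], "A"), ([0.1, 0.9], "B"), ([0.2, 0.8], "B")]
centroids = fit_centroids(training_set)
print(classify([0.85, 0.15], centroids))   # -> "A"
```
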
Is the hammer a conscious being? No, of course, it is a hammer; at best, a tool that can be used in many circumstances. But not a conscious being like us. The name "artificial intelligence" is misleading. Call a hammer "artificial creator" and people will start asking "can it become conscious"? Call a neural network what it is ("image recognition", "speech recognition", etc) and people will be more likely to ask "how much does it cost" than "will it become conscious"?

The reason that sometimes i am skeptical that we will ever get machines of any significant degree of intelligence is that futurists use a definition of "intelligence" that has nothing to do with the definition of intelligence used by ordinary people. When the computer displayed "Would you like to download new updates?" live on the giant screen of the Stanford auditorium, so that 200 people could see it and laugh out loud while the elderly physicist was focused on explaining the exciting new findings of the particle accelerator, it was obviously an incredibly stupid moment by an incredibly stupid machine; but futurists would instead point out the number of floating-point operations (ironically abbreviated as FLOPs) that this cheap portable computer can perform in a split second. This will not help build a better machine; it will just help it flop harder (sorry for the pun).
