Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")


How not to Build an Artificial General Intelligence - Part 1: The Many-task Mind

In April 2013 I saw a presentation at Stanford's Artificial Intelligence Lab by the team of Kenneth Salisbury, in collaboration with Willow Garage, about a robot that can take the elevator upstairs to buy a cup of coffee. This implies operations that are trivial for humans: recognizing that a transparent glass door is a door (not just a hole in the wall, and never mind the reflection of the robot itself in the glass); identifying the right type of door (rotating, sliding or automatic); finding the handle to open the door; realizing that it is a spring-loaded door, so it doesn't open as easily as regular doors; finding the elevator door; pressing the button to call the elevator; entering the elevator; finding the buttons for the floors inside an elevator whose walls are reflective glass (so the robot keeps seeing reflections of itself); pressing the button to go upstairs; locating the counter at which to place the order; paying; picking up the coffee; all the while dealing with humans (people coming out of the door, sharing the space in the elevator, waiting in line) and avoiding unpredictable obstacles; and, if instructions are posted, reading them, understanding what they mean (e.g. that this elevator is out of order, or that the coffee shop is closed) and changing the plan accordingly. Eventually, the robot got it right: it took 40 minutes to return with the cup of coffee. The task is not impossible, and this kind of robot is certainly coming. I'll let the experts estimate how many years it will take to have a robot that can go and buy a cup of coffee upstairs in all circumstances (not just those programmed by the engineer) and do it in five minutes, like humans do. The fundamental question, however, is whether this robot can be considered an intelligent being because it can go and buy a cup of coffee, or whether it is simply another kind of appliance.

It will take time (probably much longer than the optimists claim) but some kind of "artificial intelligence" is indeed coming. How soon depends on your definition of artificial intelligence. One of the last things that John McCarthy wrote before his death was: "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent" (2007).

Nick Bostrom wrote that the reason A.I. scientists have failed so badly in predicting the future of their own field is that the technical difficulties have been greater than they expected. I don't think so. I think those scientists had a good understanding of what they were trying to build. The reason why "the expected arrival date [of artificial intelligence] has been receding at a rate of one year per year" (Bostrom's estimate) is that we keep changing the definition. There never was a proper definition of what we mean by "artificial intelligence" and there still isn't. No wonder the original A.I. scientists were not concerned with safety or ethics: of course, the machines that they had in mind were chess players and theorem provers. That's what "artificial intelligence" originally meant. Being poor philosophers and poor historians, they did not realize that they belonged to the centuries-old history of automation, a history leading to greater and greater automata. And they couldn't foresee that, within a few decades, all these automata would become millions of times faster, billions of times cheaper, and massively interconnected. The real progress has not been in A.I. but in miniaturization. Miniaturization has made it possible to use thousands of tiny cheap processors and to connect them massively. The resulting "intelligence" is still rather poor, but its consequences are much more intimidating.

To start with, it is wise to make a distinction between an artificial intelligence and an A.G.I. (artificial general intelligence). Artificial intelligence is coming very soon if you don't make a big deal of it, and it might already be here: we are just using a quasi-religious term for "automation", a process that started with the waterwheels of ancient Greece if not earlier. Search engines (using very old-fashioned algorithms and a huge number of very modern computers housed in "server farms") will find an answer to any question you may have. Robots (thanks to progress in manufacturing and to rapidly declining prices) will become pervasive in all fields, and become household items, just like washing machines and toilets; and eventually some robots will become multifunctional (just like today's smartphones combine the functions of yesterday's watches, cameras, phones, etc.; and, even before smartphones, cars acquired a radio and an air-conditioning unit, and planes acquired all sorts of sophisticated instruments).

Millions of jobs will be created to take care of the infrastructure required to build robots, and to build robots that build robots; and ditto for search engines, websites and whatever comes next. Some robots will come sooner, some will take centuries. And miniaturization will make them smaller and smaller, cheaper and cheaper. At some point we will be surrounded for real by Neal Stephenson's "intelligent dust" (see his novel "The Diamond Age"), i.e. by countless tiny robots, each performing one function that used to be exclusive to humans. If you want to call these one-function programs "artificial intelligence", suit yourself.

We wouldn't call "intelligent" a human being whose brain can do only one thing.

An AGI, instead, would be more like us: maybe none of us does anything well, but we do many things, and we are capable of doing many things that we will never actually do. An AGI would not be limited to one or two or twenty tasks: it would be able to perform ALL the tasks that human beings perform, although not necessarily to excel at any of them.

Making predictions about the coming of an AGI without having a clear definition of what constitutes an AGI is as scientific as making predictions about the coming of Jesus. An AGI could be implemented as a collection of one-function programs, each one specialized in performing one specific task. In this case someone has to tell the A.I. specialist which tasks we expect from an AGI. Someone has to list whether an AGI needs to be able to ride a bus in Zambia and to exchange money in Haiti, or whether it only needs the ability to sort through huge amounts of data at lightning speed, or what else. Once we have that list, we can ask the world's specialists to make reasonable estimates and predictions on how long it will take to achieve each of the functions that constitute the AGI.

This is an old debate. Many decades ago the founders of computational mathematics (Alan Turing, Claude Shannon, Norbert Wiener, John von Neumann and so forth) discussed which tasks can become "mechanical", i.e. performed by a computing machine: what can and cannot be computed, what can be outsourced to a machine and what kind of machine it has to be. Today's computers that run today's deep-learning algorithms, such as playing go/weichi, are still Universal Turing Machines, subject to the theorems proven for that class of machines. Therefore, Alan Turing's original work still applies. The whole point of inventing (conceptually) the Turing Machine in 1936 was to determine whether there exists a general algorithm to solve the "halting problem" for all possible program-input pairs, and the answer was a resounding "no": no single algorithm can decide, for every program and input, whether that program will eventually halt. And in 1951 Henry Gordon Rice generalized this conclusion with an even more formidable statement, "Rice's Theorem": any nontrivial property of the behavior of a Turing machine (i.e. of what it computes, not of how it is built) is undecidable. In other words, it is proven that there is a limit to what machines can "understand", no matter how much progress there will be, as long as they are Universal Turing Machines (as virtually all of today's computers are).
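
To make Turing's argument concrete, here is a minimal Python sketch of the diagonal argument behind that "no". The function names halts and paradox are illustrative, not a real library; halts is the hypothetical oracle whose existence the argument refutes:

    # A minimal sketch of Turing's diagonal argument, in Python.
    # "halts" is a hypothetical oracle, assumed to return True exactly
    # when program(data) would eventually halt. No such total function
    # can exist; the point of the sketch is to show why.

    def halts(program, data):
        # Hypothetical decider for the halting problem (cannot be written).
        raise NotImplementedError("no general halting decider can exist")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about
        # running "program" on its own source.
        if halts(program, program):
            while True:      # the oracle said "halts", so loop forever
                pass
        else:
            return           # the oracle said "loops", so halt at once

Now consider paradox(paradox). If halts(paradox, paradox) returned True, paradox would loop forever, i.e. not halt; if it returned False, paradox would halt immediately. Either answer contradicts the oracle, so a correct, always-terminating halts cannot exist.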

Nonetheless, by employing thousands of these machines, the "brute force" approach has achieved sensational feats such as machines that can beat go/weichi champions and recognize cats. So you might be tempted to accept that an AGI will be created by sheer "brute force": creating a one-function program for each possible task and then somehow putting them all together in one machine that will then be able to carry out any human function.
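
For illustration, here is a deliberately naive Python sketch of that brute-force architecture. Every function here is a hypothetical stand-in, not a real system: an "AGI" assembled as a dispatcher over one-function programs handles exactly the tasks its builders anticipated, and nothing else.

    # Toy sketch (hypothetical stand-ins only): an "AGI" assembled by
    # bundling one-function programs into a lookup table. It can only
    # dispatch to tasks that were anticipated at build time.

    def play_go(position):
        return "some move"            # stand-in for a Go engine

    def recognize_cat(image):
        return image == "cat photo"   # stand-in for an image classifier

    SKILLS = {"play_go": play_go, "recognize_cat": recognize_cat}

    def brute_force_agi(task, data):
        if task not in SKILLS:
            # No module was ever written for this task; unlike a human
            # mind, the system cannot improvise one from its other skills.
            raise NotImplementedError("no one-function program for " + task)
        return SKILLS[task](data)

    print(brute_force_agi("recognize_cat", "cat photo"))   # True
    brute_force_agi("buy_coffee_upstairs", None)           # fails

The failure on the unanticipated task is the whole point: whatever "putting them all together" means, a lookup table of skills is not enough.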

Some of us doubt that the human mind works that way. We have seen no neurological evidence that the human brain is a collection of one-function programs. We have seen evidence of the opposite: that the human mind is capable of applying the skills of one function to a different function, and sometimes without even being told to do so. We are AGIs because our brains can approach new tasks and find a way to perform them even if nobody trained us to carry them out.

Van Gogh and Nietzsche went mad in the same year, 1888, one year after Emile Berliner invented the gramophone (which records sounds) and in the same year in which Kodak introduced the first consumer camera (which records images). What about it? Your brain is already at work to find some connection, right? Nobody programmed your brain to find out why a painter and a philosopher went mad in the same year that an invention hit the market, but you can do it effortlessly. And, yes, it's probably a useless and pointless exercise. Nonetheless, that's what a multi-tasking intelligence can do, and does all the time.
