Intelligence is not Artificial

by piero scaruffi

(Copyright © 2018 Piero Scaruffi)

(These are excerpts from my book "Intelligence is not Artificial")

The Importance (or Redundancy?) of Definitions

The founders of Artificial Intelligence never really defined it. In 1956 computers were considered "giant brains" because it was already obvious that, potentially, they could do things that humans couldn't (for example, complex calculations). But it was also obvious that the computers of 1956 were not designed to do things that humans do naturally, even when they are not particularly intelligent: speaking, listening, seeing, recognizing objects and people, walking, grasping objects, and learning. The Turing Test was soon taken more seriously than Turing had asked for. In order to pass the Turing Test, a computer needs to be able to speak, listen, see, recognize objects, etc. So that became the indirect definition of Artificial Intelligence. Nobody came up with a proper definition, but it was understood that all those fields were difficult for traditional programs and that something new was needed in the design or in the programming of the computer.

What is "intelligence"? Ask one thousand psychologists and you'll probably get one thousand definitions (don't ask philosophers because you'll get a 10-week class on the myriad theories of mind from "dualism" to "dual aspect monism"). What is "artificial"? Even that is not really obvious: is a spiderweb artificial or natural? If it's natural, then why would a computer be artificial? What makes something artificial instead of natural? One day John McCarthy defined Artificial Intelligence as "the science and engineering of making intelligent machines." Marvin Minsky defined it once as the "things that would require intelligence if done by men" (1968) and another time as "the ability to solve hard problems" ("Communication with Alien Intelligence", 1985). Socrates, who thought that conversations were pointless without proper definitions, must have turned in his grave. In 1982 Greg Chaitin added one last line to a to-do list in his essay "Goedel's Theorem and Information" and it reads: "Develop formal definitions of intelligence". A.I. practitioners became a little more articulate in the 2000s. Ray Kurzweil in his book "The Age of Spiritual Machines" (2000): "Intelligence is the ability to use optimally limited resources - including time - to achieve goals". Nils Nilsson in his book "Quest for Artificial Intelligence" (2009): "Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment."

One of the last things that John McCarthy wrote before dying was: "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent" (2007).

Where is the limit? There are many humans who can't speak, can't hear, can't walk, etc., but we still consider them human beings. Philosophers love Artificial Intelligence because you can argue forever about what constitutes "intelligence" and what does not. Today the term is so popular that it is applied to everything, even to programs that were already possible before 1956. I really don't know what the difference is between Computer Science and Artificial Intelligence: when does an algorithm become an A.I. algorithm? If A.I. is not popular, then no algorithm is called A.I. If A.I. is popular (like today), then every algorithm is called A.I.! For example, ten years ago nobody called the statistical methods used in thousands of data-analysis programs "A.I." Now they are called "machine learning".

David Chapman, who has been working since 2010 on the hypertext book "Meaningness", is one of the people who fear that the discipline of Artificial Intelligence is "doomed to recapitulate its own history, rediscovering the same dead ends over and over." The difference this time, however, is that we are changing the definition of what Artificial Intelligence is. If the cuckoo clock were invented today, it would be tagged as "Artificial Intelligence".

But definitions do matter, because you may call the wrong expert. If you call the electrical wires of your house "plumbing", you will call a plumber to fix an electrical short. If all your firm needs is some statistical analysis and you call it "A.I.", you will hire an expert in so-called "deep learning" who will probably not solve your problem but will charge you a lot of money.

On the other hand, we who grew up with A.I. may be too "puritanical". In September 2017 i had a meeting with Chinese venture capitalists who were using the "A.I." buzzword repeatedly. For the first 20 or 30 minutes i kept objecting "that's not A.I.!" but then i gave up: it was pretty obvious that they had a simple definition of A.I., i.e. A.I. is everything that is done by a machine. There is human intelligence, and there is machine intelligence (and maybe animal intelligence). The West has been fixated on a (non) definition of intelligence that should apply equally to humans and machines, but these Chinese investors, who had missed 60 years of A.I., had come up with a much simpler and cleaner definition of A.I.: whatever chore is done by machines. They even mentioned the options/preferences tab of many applications as A.I.: the machine "learns" your preferences. I objected, of course, that the machine simply stores the preferences that i select; but they were hardly interested in splitting hairs. The distinction between "learning" and "storing" is philosophical. The net result is the same: some information is now available inside the machine that was not available before, and it is in a format that can be used for some practical application. That qualifies as "learning". This attitude is rapidly spreading among the new generations: they too missed the first 60 years of A.I.

I think that the motivation to start a whole new discipline came from the realization that the computer was ill-equipped for two of the most common of human activities: reasoning and recognizing. Computers were told what to do (they were "programmed"), and they were very good at obeying the instructions. But they were not good at doing those two things that all humans do all the time. Therefore those became the two pillars of A.I. research: how to build computers that can "reason" about a situation (typically, plan an action) and that can "recognize" things (typically, images and sounds). If you can't recognize patterns, you can't speak. If you can't reason, you can't answer a serious question. De facto, A.I. was the science of automated reasoners (e.g., expert systems, natural language processing and robotic arms) and recognizers (e.g., speech recognition and vision). I didn't say that reasoning and recognizing are all that humans do, but they may be all that is required to pass the Turing Test.

Just for argument's sake, let me tell you my rough definition of A.I. I honestly don't have a definition of "artificial intelligence", and that's because i don't have a definition of "intelligence" to start with, and, furthermore, because i never liked the term itself (if it's "artificial intelligence" then by definition it is not "intelligence", just like an "artificial leg" is not a "leg", although, hopefully, it works just as well as a leg); but i will try. A.I. is not automation and it is not Hollywood movies; it is something in between. A repetitive task does not require A.I.: if it is repetitive, a simple algorithm can implement it. A philosopher can easily analyze that statement of mine and argue that every human behavior is repetitive if you look carefully enough, so even this concept can become a lengthy debate, but i offer philosophers two simple cases so that they can come up with a better concept. Who is the president of Russia? Vladimir Putin. Answering this question is only about information: a database contains the answer. Who will be the next president of Russia? This question is different: it requires in-depth knowledge of Russia, the ability to reason about it, and the answer won't be 100% certain. Or think of the questions "Where is Rome?" and "Where is Atlantis?" One is about information that can be found in a database of cities, whereas the other requires knowledge of archeology and geology, and the answer won't be 100% certain. You don't need to be an expert to find out that Rome is in Italy, but only experts can tell you why the island of Thera (aka Santorini) is the likely location of Atlantis.

Philosophers make a big deal of the distinction between information and knowledge, but let's take the intuitive meaning of these two words: information is something that i can look up in an encyclopedia; knowledge is something that (currently) is only in the mind of experts and is really difficult to express in the formal language of computers. A.I. is not about processing information but about processing knowledge, so that a machine can answer questions such as "Who will be the next president of Russia?" and "Where is Atlantis?"
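To make the contrast concrete, here is a minimal sketch in Python (my own toy illustration, not from any actual A.I. system; the city table, the Atlantis hypotheses and their probabilities are all invented): answering an "information" question is a certain lookup, while answering a "knowledge" question is an inference that returns a ranked, uncertain hypothesis.

```python
# Toy contrast between "information" (lookup) and "knowledge" (inference).
# All facts and probabilities below are invented for illustration.

CITIES = {"Rome": "Italy", "Paris": "France"}  # a tiny "encyclopedia"

def where_is(city):
    """Information: a database lookup, mechanical and certain."""
    return CITIES.get(city, "not in the encyclopedia")

def where_is_atlantis():
    """Knowledge: an expert's judgment, expressed as ranked hypotheses
    with degrees of belief that never reach 100%."""
    hypotheses = {
        "Thera (Santorini)": 0.6,   # archeological/geological arguments
        "pure myth": 0.3,
        "somewhere else": 0.1,
    }
    return max(hypotheses.items(), key=lambda kv: kv[1])

print(where_is("Rome"))        # -> Italy (certain)
print(where_is_atlantis())     # -> ('Thera (Santorini)', 0.6): plausible, not certain
```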

We can turn information into knowledge in two basic ways: we can elicit the knowledge from human experts who have acquired it over the years, or we can simulate the way a human expert creates knowledge: by trial and error. The knowledge-driven approach simulates the way the human mind thinks, and the data-driven approach simulates the way the brain is structured. They sound like the same thing, but these two approaches lead to wildly different technologies. One deals with symbols (just as human language is made of words), the other deals with numbers (just as the human brain is made of electrochemical signals).
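A toy sketch of my own may help show how differently the two approaches feel in practice (the apple-recognition task, the training data and the learning rate are all invented for illustration): the knowledge-driven approach writes the expert's rule down in symbols, while the data-driven approach, here a one-neuron perceptron, arrives at the same behavior by trial and error on examples, leaving the "rule" buried in numbers.

```python
# Two ways to decide if a fruit is an apple: a hand-written symbolic rule
# (knowledge elicited from an "expert") versus numeric weights learned by
# trial and error. Task and data are toy inventions for illustration.

def symbolic_rule(color, shape):
    # Knowledge-driven: explicit symbols, readable and debatable.
    return color == "red" and shape == "round"

# Data-driven: each example is ((is_red, is_round), is_apple) as numbers.
examples = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w = [0.0, 0.0]
b = 0.0
for _ in range(10):                      # trial and error (perceptron rule)
    for (x1, x2), target in examples:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # how wrong was the guess?
        w[0] += 0.1 * err * x1           # nudge the numbers toward the answer
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(symbolic_rule("red", "round"))     # True, by explicit reasoning
print(w, b)                              # the same "rule", buried in numbers
```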

Consciously or subconsciously, the discipline of Artificial Intelligence came to encompass all the technologies needed for a machine to pass the Turing Test that back then seemed impossible to program in the Von Neumann way (instruction by instruction): speech recognition, computer vision, natural language processing, reasoning, learning, common sense. There was an entire range of applications that didn't seem feasible with the Von Neumann architecture. This included tasks for which the algorithm does not exist, i.e. "expert" tasks (a medical encyclopedia is not equivalent to an experienced physician); tasks for which the algorithm exists but is useless because "heuristics" prevail (you don't need to calculate the effects of hot temperature on your skin in order to conclude that you shouldn't touch boiling water); tasks for which a deterministic algorithm is not possible because they involve uncertain quantities (e.g., "Brazil will win the next world cup"); and tasks for which the algorithm would be too complicated (e.g., designing a cruise ship). In order to tackle these categories of problems, the new discipline had to invent new techniques to program computers in a different way.

With hindsight, these categories of problems can be divided into two groups: one can be solved using logical (exact) reasoning, and the other is usually approached with probabilistic reasoning. For example, we can calculate logically the best move in chess, whereas we can only be reasonably certain (not absolutely certain) that the object in the room is an apple (nobody would be shocked if it turned out to be a painted baseball or a giant strawberry). Likewise, we can determine which action is best when we have all the facts, whereas we can only guess what the speaker intended to say when the speaker has a strong foreign accent or when we couldn't hear clearly all the words that were uttered. In most cases the latter prevails: we settle for a plausible explanation of what happened, aware that it may not be what we initially thought it was. Sometimes our thinking is exact (logical), but in most cases it is probabilistic. Speech, vision, language and common sense fall into the latter category. Time would tell that the aspects related to exact reasoning were not the problem: the real problem was all the aspects of human behavior that belong to inexact reasoning. Most of what we do daily is based on inexact reasoning: language is ambiguous (otherwise philosophers and lawyers would not have a job), vision is tentative (especially at a distance and in conditions of poor visibility), judgment is almost always probabilistic, and even "infallible experts" make mistakes. The human world is not an exact world. It is, in fact, a wildly inexact world in which "plausible" reasoning prevails over mathematical reasoning.
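The apple example can even be worked out in a few lines of Bayes' rule (a sketch under invented probabilities; nothing here comes from an actual vision system): exact reasoning deduces with certainty from complete facts, while plausible reasoning only updates a degree of belief that never reaches 100%.

```python
# Exact (logical) versus plausible (probabilistic) reasoning about the
# apple in the room. All probabilities are invented for illustration.

# Exact: with complete facts, the conclusion is certain.
facts = {"it_is_fruit", "it_grew_on_an_apple_tree"}
if "it_grew_on_an_apple_tree" in facts:
    print("apple: certain (deduction from complete facts)")

# Inexact: from appearance alone we only get a degree of belief.
p_apple = 0.7                      # prior: most round red things here are apples
p_red_round_given_apple = 0.9      # apples usually look red and round
p_red_round_given_other = 0.2      # painted baseballs, giant strawberries...

evidence = (p_red_round_given_apple * p_apple +
            p_red_round_given_other * (1 - p_apple))
posterior = p_red_round_given_apple * p_apple / evidence   # Bayes' rule
print(f"apple: {posterior:.2f} plausible, never 1.0")      # ~0.91
```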

Today A.I. (as defined above) encompasses four main classes of technologies: computer vision (object detection, face recognition, scene analysis), natural language processing (speech recognition, automatic translation, discourse analysis, sentiment analysis), reasoning (exact and inexact inference, common sense, planning), and learning (which ranges from mere classification to theory formation).

There is progress in all of them, as long as we don't expect miracles, just like there is progress in every field of technology: lithium batteries, dishwashers, bicycle tyres, water heaters, nail clippers...

In retrospect, the funny thing about A.I. is that it was not obvious at all what was difficult and what was easy. Originally many A.I. scientists thought that the ultimate sign of "intelligence" was the ability to solve mathematical problems (especially since most of them were mathematicians). But anything that can be automated will be automated. So, given that logic had been largely automated during the 20th century, it was actually not difficult for a computer to solve logical problems, such as mathematical theorems or chess. However, "reasoning" in general is not trivial, and is still an unsolved problem, because most of our "reasoning" is not logical at all. Most of our reasoning is better called "guessing". Our daily lives are not about proving mathematical theorems but about guessing what to wear, what to do about a contract, whom to call to get help, whether it is best to visit mother this weekend or next month, whether it is cheaper to buy a car this year or next year, and so on. Reasoning in the real world involves so many uncontrollable and unpredictable factors that it is inherently imprecise and uncertain. We mostly engage in "plausible" reasoning, not mathematical reasoning.

And, amazingly, out of this messy inexact reasoning we also learn. We are capable of a virtually infinite number of tasks. The steam engine cannot be programmed: it can do only one thing. The digital electronic computer can be programmed and can therefore do many things, but it needs a program for each task. In order to match human intelligence, A.I. also needs to build machines that can program themselves.

To start with, we don't have a viable definition of "intelligence". The concept of a general intelligence was formalized by the British psychologist Charles Spearman in his book "The Abilities of Man, their Nature and Measurement" (1927), whereas Louis Thurstone of the University of Chicago, in his book "Primary Mental Abilities" (1938), criticized the notion of a general intelligence and argued instead for several independent abilities. This theory of multiple intelligences was embraced decades later by the psychologist Howard Gardner, then at Boston's Veterans Administration Hospital, in his book "Frames of Mind" (1983). Anecdote: when i asked a psychologist "did any of these definitions succeed?" he asked me to define "succeed"... The Turing Test has been used to define "intelligence" as "what humans do", but that is hardly a definition because it depends on which human you use as a model of intelligence (and which human you use as the judge). Nor do we have a viable definition of intelligence that is applicable to all kinds of systems: humans, animals, machines (and maybe other natural systems).

But the lack of a definition often benefits the field, because success and failure are measured against the definition: the less precise the definition, the easier it is to claim success. My guess is that one of these days the singularists will simply change the definition of "singularity" and declare that it is already here, even ahead of time (except that obnoxious skeptics like me will then show that, based on the new definition, it has always been here).

My Chinese friends couldn't care less about my definition of A.I. They only want to know whether the program (call it A.I. or not) is useful. As Deng Xiaoping famously said when he launched the capitalist reforms in China, it doesn't matter whether the cat is black or white as long as it catches mice.
