Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

A Look at the Evidence: A Comparative History of Accelerating Progress

A postulate at the basis of many contemporary books by futurists and self-congratulatory technologists is that we live in an age of unprecedented rapid change and progress. But look closer and our age won't look so unique anymore.

As i wrote in the chapter titled "Regress" of my book "Synthesis", this perception that we live in an age of rapid progress is mostly based on the fact that we know the present much better than we know the past. One century ago, within a relatively short period of time, the world adopted the car, the airplane, the telephone, the radio and the record, while at the same time the visual arts went through Impressionism, Cubism and Expressionism. Science was revolutionized by Quantum Mechanics and Relativity. Office machines (cash registers, adding machines, typewriters) and electrical appliances (dishwasher, refrigerator, air conditioning) dramatically changed the way people worked and lived. Debussy, Schoenberg, Stravinsky and Varese changed the concept of music. These all happened in one generation. By comparison, the years since World War II have witnessed innovation that has been mostly gradual and incremental. We still drive cars (invented in 1886) and make phone calls (invented in 1876), we still fly on airplanes (invented in 1903) and use washing machines (invented in 1908), etc. Cars still have four wheels and planes still have two wings. We still listen to the radio and watch television. While the computer and Genetics have introduced powerful new concepts, and computers have certainly changed daily lives, i wonder if any of these "changes" compare with the notion of humans flying in the sky and of humans located in different cities talking to each other. There has been rapid and dramatic change before.

Does the revolution in computer science compare with the revolutions in electricity of a century ago? The smartphone and the Web have certainly changed the lives of millions of people, but didn't the light bulb, the phonograph, the radio and kitchen appliances change the world at least as much if not much more?

A history of private life in the last 50 years would be fairly disappointing: we wear pretty much the same clothes (notably T-shirts and blue jeans), listen to the same music (rock and soul were invented in the 1950s), run in the same shoes (sneakers date from the 1920s), and ride, drive and fly in the same kinds of vehicles (yes, even electric ones: Detroit Electric began manufacturing electric cars in 1907). Public transportation is still pretty much what it was a century ago: trams, buses, trains, subways. New types of transportation have been rare and have not spread widely: the monorail (that became reality with the Tokyo Monorail in 1964), the supersonic airplane (the Concorde debuted in 1976 but was retired in 2003), the magnetic levitation train (the Birmingham Maglev debuted in 1984, followed by Berlin's M-Bahn in 1991, but in practice the Shanghai Maglev Train built in 2004 is the only real high-speed magnetic levitation line in service). The "bullet train" (widely available in Western Europe and the Far East since Japan's Shinkansen of 1964) is probably the only means of transportation that has significantly increased the speed at which people travel long distances in the last 50 years.

We chronically underestimate progress in previous centuries because most of us are ignorant about those eras. Historians, however, can point at the spectacular progress that took place in Europe during the Golden Century (the 13th century) when novelties such as spectacles, the hourglass, the cannon, the loom, the blast furnace, paper, the mechanical clock, the compass, the watermill, the trebuchet and the stirrup changed the lives of millions of people within a few generations; or the late 15th century when (among other things) the printing press enabled an explosive multiplication of books and when long-distance voyages to America and Asia created a whole new world.

The expression "exponential growth" is often used to describe our age, but the trouble is that it has been used to describe just about every age since the invention of exponentials. In every age there are always some things that grow exponentially, while others don't. For every technological innovation there was a moment when it spread "exponentially", whether it was church clocks or windmills, reading glasses or steam engines; and its "quality" improved exponentially for a while, until the industry matured or a new technology took over. Moore's law is nothing special: similar exponential laws can be found for many of the old inventions. Think how quickly radio receivers spread: in the USA there were only five radio stations in 1921 but already 525 in 1923. Cars? The USA produced 11,200 in 1903, but already 1.5 million in 1916. By 1917 a whopping 40% of households in the USA had a telephone, up from 5% in 1900. There were fewer than one million subscribers to cable television in 1984, but more than 50 million by 1989. The Wright brothers flew the first airplane in 1903: during World War I (1915-18) France built 67,987 airplanes, Britain 58,144, Germany 48,537, Italy 20,000 and the USA 15,000, for a grand total of almost 200,000 airplanes, within just 15 years of the airplane's invention. In 1876 there were only 3,000 telephones: 23 years later there were more than a million. Neil Armstrong stepped on the Moon in 1969, barely eight years after Yuri Gagarin had become the first human to leave the Earth's atmosphere.
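The figures above already imply steep "exponential" curves; a back-of-the-envelope computation of compound annual growth rates, using only the numbers quoted in this paragraph, makes the point:

```python
# Compound annual growth rates implied by the adoption figures quoted above.
# The rate g satisfies: end = start * (1 + g) ** years

def annual_growth_rate(start, end, years):
    """Compound annual growth rate between two counts, years apart."""
    return (end / start) ** (1.0 / years) - 1.0

examples = [
    ("US radio stations, 1921-1923", 5, 525, 2),
    ("US car production, 1903-1916", 11_200, 1_500_000, 13),
    ("US telephones, 1876-1899", 3_000, 1_000_000, 23),
]

for label, start, end, years in examples:
    g = annual_growth_rate(label and start, end, years) if False else annual_growth_rate(start, end, years)
    print(f"{label}: {g:.0%} per year")
```

The implied rates range from roughly 29% per year for telephones to well over 900% per year for radio stations: growth curves as steep as anything Moore's law produced.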

Most of these fields then slowed down dramatically. And 47 years after the Moon landing we still haven't sent a human being to any planet, and we haven't even returned to the Moon since Apollo 17 in 1972. Similar statistics of "exponential growth" can be found for other old inventions, all the way back to the invention of writing. Perhaps each of those ages thought that growth in those fields would continue at the same pace forever. The wisest, though, must have foreseen that growth eventually slows in every field. Energy production increased 13-fold in the 20th century and freshwater consumption increased 9-fold, but today there are many more experts worried about a decline (relative to demand) than experts who believe in one more century of similar growth rates.

Furthermore, there should be a difference between "change" and "progress". Change for the sake of change is not necessarily "progress". Most "updates" to my software applications have negative, not positive, effects, and we all know what it means when our bank announces "changes" in policies. If i randomly change all the cells in your body, i may boast of "very rapid and dramatic change" but not necessarily of "very rapid progress". Assuming that any change equates with progress is not just optimism: it is a recipe for ending up with exactly the opposite of progress. Out of the virtually infinite set of possible changes, only a tiny subset constitutes progress.

There has certainly been progress in telecommunications; but what difference does it make for ordinary people whether a message is sent in a split second or in two split seconds? In 1775 it took 40 days for the English public to learn that a revolution had started in the American colonies. Seven decades later, thanks to the telegraph, it took minutes for the news of the Mexican War to travel to Washington. That is real progress: from 40 days to a few minutes. The telegraph did indeed represent "exponential" progress. Email, texting and chatting have revolutionized the way people communicate over long distances, but it is debatable whether that is (quantitatively and qualitatively) the same kind of revolution that the telegraph and the telephone caused.

There are many "simpler" fields in which we never accomplished what we set out to accomplish originally, and pretty much abandoned the fight after the initial enthusiasm. We simply became used to the failure and forgot our initial ambitions. For example, domestic lighting progressed dramatically from gas lighting to Edison's light bulbs and Brush's arc lights of the 1880s, then to the first tungsten light-bulbs, and on to the light-bulbs of the 1930s; but since then there has been very little progress: as everybody whose eyesight is aging knows too well, we still don't have artificial lighting that compares with natural sunlight, and so we need to wear reading glasses in the evening to read the same book that we can easily read during the day. A century of scientific and technological progress has not given us artificial lighting that matches sunlight.

I can name many examples of "change" that is often equated with "progress" when in fact it is not clear what kind of progress it brings. The number of sexual partners that a person has over a lifetime has greatly increased, and social networking software allows one to have thousands of friends all over the world; but i am not sure that these changes (which qualify as "progress" from a strictly numerical point of view) result in happier lives. I am not sure that emails and text messages create the same bonds among people as the phone conversation, the letter on paper, the postcard and the neighbor's visit did.

One can actually argue that there is a lot of "regress", not "progress". We now listen to lo-fi music on computers and digital music players, as opposed to the expensive hi-fi stereos that were commonplace a generation ago. Mobile phone conversations are frequently of poor quality compared with the old land lines. We have access to all sorts of food 24 hours a day but the quality of that food is dubious. Not to mention "progress" in automated customer support, which increasingly means "search for the answer by yourself on the Web" (especially from high-tech software giants like Microsoft, Google and Facebook) as opposed to "call this number and an expert will assist you".

In the early days of the Internet (1980s) it was not easy to use the available tools, but any piece of information on the Internet was written by very competent people. Basically, the Internet only contained reliable information written by experts. Today there might be a lot more data available, but the vast majority of what travels on the Internet is: a) disinformation, b) advertising. It is not true that in the age of search engines it has become easier to search for information. Just the opposite: the huge amount of irrelevant and misleading data is making it more difficult to find the one webpage that has been written by the one great expert on the topic. In the old days her webpage was the only one that existed. (For a discussion of Wikipedia see the appendix).

Does the Internet itself represent true progress for human civilization if it causes the death of all the great magazines, newspapers, radio and television programs and the extinction of bookstores and record stores, if it makes it much rarer and harder to read and listen to the voices of the great intellectuals of the era, while at the same time massively increasing the power of corporations (via targeted advertising) and of governments (via systemic surveillance)? From the Pew Research Center's "State of the News Media 2013" report: "Estimates for newspaper newsroom cutbacks in 2012 put the industry down 30% since its peak in 2000. On CNN, the cable channel that has branded itself around deep reporting, produced story packages were cut nearly in half from 2007 to 2012. Across the three cable channels, coverage of live events during the day, which often require a crew and correspondent, fell 30% from 2007 to 2012... Time magazine is the only major print news weekly left standing".

Even the idea that complexity is increasing relies on a weak definition of "complexity". The complexity of using the many features of a smartphone is a luxury and cannot be compared with the complexity of defending yourself from wild animals in the jungle or even with the complexity of dealing with weather, parasites and predators when growing food in a farm. The whole history of human civilization is a history of trying to reduce the complexity of the world. Civilization is about creating stable and simple lives in a stable and simple environment. By definition, what we call "progress" is a reduction in complexity, although to each generation it appears as an increase in complexity because of the new tools and the new rules that come with those tools. Overall, living has become simpler (not more complicated) than it was in the stone age. If you don't believe me, go and camp in the wilderness by yourself with no food and only stone tools.

In a sense, today's Singularity prophets assume that machine "intelligence" is the one field in which growth will never slow down; in fact, it will keep accelerating forever.

Again, i would argue that it is not so much "intelligence" that has accelerated in machines (their intelligence is the same that Alan Turing gave them when he invented his "universal machine") but miniaturization. Moore's law (which was indeed exponential while it lasted) had nothing to do with machine intelligence, but simply with how many transistors one can squeeze on a tiny integrated circuit. There is very little (in terms of intelligent tasks) that machines can do today that they could not have done in 1950 when Turing published his paper on machine intelligence. What has truly changed is that today we have extremely powerful computers squeezed into a palm-size smartphone at a fraction of the cost. That's miniaturization. Equating miniaturization to intelligence is like equating an improved wallet to wealth.

Which progress really matters for Artificial Intelligence: hardware or software? There has certainly been rapid progress in hardware technology (and in the science of materials in general) but the real question to me is whether there has been any real progress in software technology since the invention of binary logic and of programming languages. And a cunning software engineer would argue that even that question is not correct: there is a difference between software engineering (which simply finds ways to implement algorithms in programming languages) and algorithms. The computer is a machine that executes algorithms. Anybody trying to create an intelligent machine using a computer is trying to find the algorithm or set of algorithms that will match or surpass human intelligence. Therefore it is neither progress in hardware nor progress in software that really matters (those are simply enabling technologies) but progress in Computational Mathematics.

Ray Kurzweil's book "The Singularity is Near" (2005) used a diagram titled "Exponential Growth in Computing", but i would argue that it is bogus because it starts with the electromechanical tabulators of a century ago: it is like comparing the power of a windmill with the power of a horse. Sure, there is an exponential increase in power, but it doesn't mean that windmills will keep improving forever by the difference between horsepower and windpower. The diagram also fails to distinguish between progress in hardware and progress in software, or between progress in software and progress in algorithms. What we would like to see is a diagram titled "Exponential Growth in Computational Math". As i am writing, most A.I. practitioners are looking for abstract algorithms that improve automatic learning techniques.

Others believe that the correct way to achieve artificial intelligence should be to simulate the brain's structure and its neural processes, a strategy that greatly reduces the set of interesting algorithms. In that case, one would also want to see a diagram titled "Exponential Growth in Brain Simulation". Alas, any neurologist can tell you how far we are from understanding how the brain performs even the simplest daily tasks. Current brain simulation projects are modeling only a small fraction of the structure of the brain, and provide only a simplified binary facsimile of it: neuronal states are represented as binary states, the variety of neurotransmitters is reduced to just one kind, the emphasis is on feed-forward rather than on feedback connections, and, last but not least, there is usually no connection to a body. No laboratory has yet been able to duplicate the simplest brain we know, the brain of the 300-neuron roundworm: where's the exponential progress that would lead to a simulation of the 86 billion-neuron brain of Homo sapiens (with its 100 trillion connections)? Since 1963 (when Sydney Brenner first proposed it), scientists worldwide have been trying to map the neural connections of the simplest roundworm, the Caenorhabditis elegans, thus jump-starting a new discipline called Connectomics. So far they have been able to map only subsets of the worm's brain responsible for specific behaviors.

If you believe that an accurate simulation of brain processes will yield artificial intelligence (whatever your definition of "artificial intelligence" is), how accurate does that simulation have to be? This is what neuroscientist Paul Nunez has called the "blueprint problem". Where does that simulation terminate? Does it terminate at the computational level, i.e. at simulating the exchanges of information within the brain? Does it terminate at the molecular level, i.e. simulating the neurotransmitters and the very flesh of the brain? Does it terminate at the electrochemical level, i.e. simulating electromagnetic equations and chemical reactions? Does it terminate at the quantum level, i.e. taking into consideration subatomic effects?

Ray Kurzweil's "Law of Accelerating Returns" is nothing but the usual enthusiastic projection of the present into the future, a mistake made by millions of people all the time. Alas, millions of people buy homes when home values are going up, believing that they will go up forever. Historically, most technologies grew quickly for a while, then stabilized and continued to grow at a much slower pace until they became obsolete.

We may even overestimate the role of technology. Some increase in productivity is certainly due to technology, but in my opinion other contributions have been neglected too quickly. For example, Luis Bettencourt and Geoffrey West of the Santa Fe Institute have shown that doubling the population of a city causes on average an increase of 130% in its productivity ( "A Unified Theory of Urban Living", 2010). This has nothing to do with technological progress but simply with urbanization. The rapid increase in productivity of the last 50 years may have more to do with the rapid urbanization of the world than with Moore's law: in 1950 only 28.8% of the world's population lived in urban areas but in 2008 for the first time in history more than half of the world's population lived in cities (82% in North America, the most urbanized region in the world).
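The 130% figure quoted above corresponds to a superlinear scaling law. A short sketch, using only that number (the exact exponent in Bettencourt and West's data varies by indicator), shows the implied exponent:

```python
import math

# If urban productivity scales as population ** beta, then doubling the
# population multiplies productivity by 2 ** beta. An increase of 130%
# means a multiplier of 2.3, so beta = log2(2.3).
multiplier = 2.3                     # 1.0 + the 130% increase quoted above
beta = math.log2(multiplier)
print(f"implied scaling exponent: {beta:.2f}")   # ~1.20, i.e. superlinear

# Sanity check: a city of 2,000,000 vs a city of 1,000,000
ratio = (2_000_000 / 1_000_000) ** beta
print(f"productivity ratio on doubling: {ratio:.2f}")
```

An exponent above 1 means output grows faster than population, which is exactly the point: the gain comes from concentration of people, not from any particular technology.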

Predictions about future exponential trends have almost always been wrong. Remember the prediction that the world's population would "grow exponentially"? In 1960 Heinz von Foerster predicted that population growth would become infinite by Friday the 13th of November 2026. Now we are beginning to fear that it will actually start shrinking (it already is in Japan and Italy). Or the prediction that energy consumption in the West would grow exponentially? It peaked a decade ago; and, as a percentage of GDP, it is actually declining rapidly. Life expectancy? It rose rapidly in the West between 1900 and 1980 but since then it has barely moved. War casualties were supposed to grow exponentially with the invention of nuclear weapons: since the invention of nuclear weapons the world has experienced the lowest number of casualties ever (see Steven Pinker's book "The Better Angels of Our Nature"), and places like Western Europe, which had been at war nonstop for 1500 years, have not had a major war since 1945.

There is one field in which i have witnessed rapid (if not exponential) progress: Genetics. This discipline has come a long way in just 70 years, since Oswald Avery and others identified DNA as the genetic material (1944) and James Watson and Francis Crick discovered the double-helix structure of DNA (1953). Frederick Sanger produced the first full genome of a living being in 1977, Kary Banks Mullis developed the polymerase chain reaction in 1983, Applied Biosystems introduced the first fully automated sequencing machine in 1987, William French Anderson performed the first procedure of gene therapy in 1990, Ian Wilmut cloned a sheep in 1997, the sequencing of the human genome was achieved by 2003, and Craig Venter and Hamilton Smith reprogrammed a bacterium's DNA in 2010. The reason that there has been such dramatic progress in this field is that a genuine breakthrough happened with the discovery of the structure of DNA. I don't believe that there has been an equivalent discovery in the field of Artificial Intelligence.

Economists would love to hear that progress is accelerating because it has an impact on productivity, which is one of the two factors driving GDP growth. GDP growth is basically due to population growth plus productivity increase. Population growth is coming to a standstill in all developing countries (and declining even in countries like Iran and Bangladesh) and, anyway, in the 20th century the biggest contributor to workforce growth was actually women, who entered the workforce by the millions; but that number has now stabilized.

If progress were accelerating, you'd expect productivity growth to accelerate. Instead, despite all the hoopla about computers and the Internet, productivity growth over the last 30 years has averaged 1.3%, compared with 1.8% over the previous 40 years. Economists like Jeremy Grantham now predict a future of zero growth ("On The Road To Zero Growth", 2012). Not just deceleration but a screeching halt.
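The gap between 1.3% and 1.8% looks small, but compounded over 30 years it is substantial. A quick compounding sketch, using only the two rates quoted above:

```python
# Compound the two average annual productivity growth rates quoted above
# over a 30-year span and compare the cumulative results.
years = 30
slow = 1.013 ** years   # 1.3% per year (the last 30 years)
fast = 1.018 ** years   # 1.8% per year (the previous era)

print(f"cumulative growth at 1.3%/yr over {years} years: {slow - 1:.0%}")
print(f"cumulative growth at 1.8%/yr over {years} years: {fast - 1:.0%}")
print(f"shortfall of the slow path vs the fast path: {fast / slow - 1:.0%}")
```

At 1.3% per year the economy grows about 47% over three decades; at 1.8% it would have grown about 71%. Half a percentage point per year, compounded, is a very different world.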

Whenever i meet someone who strongly believes that machine intelligence is accelerating under our nose, i ask him/her a simple question: "What can machines do today that they could not do five years ago?" If machine skills are "accelerating" and within 20-30 years machines will have surpassed human intelligence, it shouldn't be difficult to answer that question. So far the answers to that question have consistently been about incremental refinements (e.g., the new release of a popular smartphone that can take pictures at higher resolution) and/or factually false claims ("they can recognize cats", which is not true because in the majority of cases these apps still fail, despite the results of the ImageNet Competitions).

In 1939 at the World's Fair in New York the General Motors Futurama exhibit showed how life would be in 1960 thanks to technological progress: the landscape was full of driverless cars. The voiceover said: "Does it seem strange? Unbelievable? Remember, this is the world of 1960!" Twenty-one years later, the world of 1960 turned out to be much more similar to the world of 1939 than to the futuristic world of that exhibit.

On the 3rd of April 1988 the Los Angeles Times Magazine ran a piece titled "L.A. 2013" in which experts predicted what life would look like in 2013 (the year i am writing this piece). They were comfortable predicting that the average middle-class family would have two robots to carry out all household chores including cooking and washing; that kitchen appliances would be capable of intelligent tasks; and that people would commute to work in self-driving cars. How many robots do you have in your home and how often do you travel in a self-driving car?

In 1964 Isaac Asimov wrote an article in the New York Times (August 16) titled "Visit to the World's Fair of 2014" in which he predicted what the Earth would look like in 2014. He envisioned that by 2014 there would be Moon colonies and all appliances would be cordless.

I am told that you must mention at least one Hollywood movie in a book on A.I. The only one that deserves to be mentioned is "2001: A Space Odyssey" (1968) by Stanley Kubrick. It is based on a book by Arthur C. Clarke. It features the most famous artificial intelligence of all time, HAL 9000. In the book HAL was born in 1997. 1997 came and went with no machines even remotely capable of what HAL does in that film (and so did 2007, and so will 2017).

The future is mostly disappointing. As Benjamin Bratton wrote in December 2013: "Little of the future promised in TED talks actually happens".

People who think that progress has been dramatic are simply not aware of how fast progress was happening before they were born, how high the expectations were, and how badly current technology has fallen short of those expectations. Otherwise they would be more cautious about predicting future progress.
