Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

What this Book is about

Writers, inventors and entrepreneurs, impressed by progress in several scientific fields and notably in Artificial Intelligence, are debating whether we may be heading for a "singularity" in which machines with super-human intelligence will arise and multiply. At the same time, enthusiastic coverage in the media has widely publicized machines performing sophisticated tasks, from beating masters of go/weichi to driving a car, from recognizing cats in videos to outperforming human experts on TV quiz shows. These stories have re-ignited interest in Artificial Intelligence, whose goal is to create machines that are as intelligent as humans, and generated fears in the public that these intelligent machines might cause harm to humans, or at least steal our jobs.

First of all, this book provides a "reality check" of sorts on Artificial Intelligence. I show that, in a society driven by news media that desperately need sensational news to make money and in an academic world increasingly driven by the desire to translate academic research into Silicon Valley start-ups, technological progress in general, and progress in computer science in particular, are often overrated. I wanted to dispel some notions and misconceptions, and my version of the facts may sound controversial until you read my explanations. I think that (real) progress in (real) Artificial Intelligence since its founding has been negligible, and one reason is, ironically, that computers have become so much more (computationally) powerful.

In general, we tend to exaggerate the uniqueness of our age, just as previous generations have done. The very premise of the singularity theory is that progress is accelerating like never before. I argue that there have been other eras of accelerating progress, and it is debatable whether ours is truly so special. The less you know about the past, the more likely you are to be amazed by the present.

There is certainly a lot of change in our era. But change is not necessarily progress, or, at least, it is not necessarily progress for everybody. Disruptive innovation is frequently more about disruption than about innovation because disruption creates huge new markets for the consumer electronics industry. This has nothing to do with machine intelligence, and sometimes not even much to do with innovation.

There is also an excessive metaphysical belief that human intelligence is some sort of evolutionary climax. Maybe so, but it is worth pointing out that non-human intelligence is already among us, and multiplying rapidly, and it is not a machine: countless animals are capable of feats that elude the smartest humans. For a long time we have also had machines capable of performing “superhuman” tasks. Think of the clock, invented almost 1,000 years ago, which can do something that no human can do: it can tell exactly how many hours, minutes and seconds elapse between two events.

Once we realize that non-human intelligence has always been around, and that we were already building super-human machines centuries ago, the discussion about super-intelligent machines can be reframed in more historically and biologically meaningful terms.

The last generation or two missed out on the debates of the previous decades (the "Turing test", the "ghost in the machine", the "Chinese room", etc). Therefore it is much easier for new A.I. practitioners to impress the younger generations. I have summarized the various philosophical arguments in favor of and against the feasibility of machine intelligence in my book "Thinking about Thought" and i won't repeat them here. I will, however, at least caution the new generations that i "grew up" (as far as cognitive science goes) at a time when the term "intelligence" was not "cool" at all: too vague, too unscientific, too abused in popular literature to lend itself to scientific investigation. It is regrettable that the term is being abused again, and, just like back then, without a proper definition of what we mean by “intelligence”. Ask one hundred psychologists and you will get one hundred different definitions. Ask philosophers and you will get thick tomes written in a cryptic language. Ask neurobiologists and they may simply ignore you.

This is the mother of all problems in the debate on the “singularity”: "singularity" and "superhuman intelligence" are non-scientific terms based on non-scientific coffee-house chatting.

The term "artificial intelligence" is even more confusing, a veritable moving target.  In this book i capitalize Artificial Intelligence when i am referring to the discipline, while using lowercase "artificial intelligence" to refer to an intelligent machine or an intelligent software.  A.I. practitioners also use the term "Artificial General Intelligence" (A.G.I.) to refer to a machine that will exhibit human-level intelligence, not just one intelligent skill.

I also feel that any discussion on machine intelligence should be complemented with an important (more important?) discussion about the changes in human intelligence due to the increased "intelligence" of machines. This change in human intelligence may have a stronger impact on the future of human civilization than the improvements in machine intelligence. To wit: the program of turning machines into humans is not very successful yet, but the program of turning humans into machines (via an endless repertory of rules and regulations) is very successful.

My perspective is a little different from the perspective of the many writers who have written, or are writing, books on Artificial Intelligence: i am a historian, not a futurist. I may not know the future, but at least i know the past.

I am intrigued by another sociological/anthropological aspect of this discussion: humans seem to have a genetic propensity to believe in higher forms of intelligence (gods, saints, UFOs, ...) and the Singularity (capitalized “S”) could simply be its latest manifestation in our post-religious 21st century.

However, most people don’t really care what we call it: they are afraid not of some electromechanical monster that will kill the human race, but, quite simply, of losing their jobs to smarter and smarter machines. This too seems to me a wild exaggeration. New machines have always created new jobs, and better-paid ones. I fail to see why this time should be different. Remove all the sensational hyperbole, and it should be obvious that smarter machines will create more jobs, and better-paid jobs.

All of this explains why i am not afraid of Artificial Intelligence: 1. A reality check shows that most of its achievements are not that impressive; 2. Most of the “intelligence” displayed by machines is actually due to the structured environment that humans build for them; 3. The accelerating progress that we perceive is not unique in history; 4. We have always been surrounded by super-human (or, better, non-human) intelligence; 5. I am more concerned about the future of human intelligence than about the future of machine intelligence.

We actually need intelligent machines. Technological progress has solved many problems, but there are still people dying of diseases and in dangerous jobs, and we will soon have an aging society that will need even more help from technology. I am not afraid that “intelligent” machines are coming. I am afraid that they will not come soon enough.

This book was started in September 2013 and this revised edition was completed in June 2016.

 

piero scaruffi

 

P.S.: Yes, i don't like to capitalize the first person pronoun "i".
