(These are excerpts from my book "Intelligence is not Artificial")
The Case for Superhuman Intelligence and Against It
The case for the coming of an artificial intelligence, of an artificial general intelligence and then of the Singularity rests on the simple assumptions that A.I. is making dramatic progress and that progress is accelerating. If you believe these two statements, then you probably believe that we will soon have machines that can have a philosophical conversation with us and write books like this one.
That is precisely the conclusion that Moravec and Kurzweil reached. Hans Moravec, the author of "Mind Children" (1988) and "Robot: Mere Machine to Transcendent Mind" (1998), predicted that machines will become smarter than humans by 2050. Ray Kurzweil, the author of "The Singularity is Near" (2005), predicted that machine intelligence will surpass human intelligence by 2045.
Moravec and Kurzweil were not the first futurists to put those two assumptions together. In 1957 Herbert Simon, one of the founders of A.I., had said: "there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly".
The case against A.I. (and therefore the Singularity) dates from the 1960s, when philosophers started looking into the ambitious statements coming out of the A.I. world.
The first philosopher to look into these claims was Hubert Dreyfus, who wrote in "Alchemy and Artificial Intelligence" (1965): "Significant developments in Artificial Intelligence must await an entirely different sort of computer. The only existing prototype for it is the little-understood human brain."
Mortimer Taube, author of "Computers and Common Sense" (1961), and John Lucas, author of "Minds, Machines and Gödel" (1959), had already argued that full machine intelligence is incompatible with Kurt Gödel's incompleteness theorem. In 1936 Alonzo Church proved a theorem that is basically an extension of Gödel's incompleteness theorem to computation: first-order logic is "undecidable", i.e. no mechanical procedure can decide, for every formula of first-order logic, whether it is provable. Similarly, in 1936 Alan Turing proved that the "halting problem" is undecidable for Universal Turing Machines: no program can decide, for every program and every input, whether that program will eventually halt on that input. What these two theorems say, roughly, is that no computer can be guaranteed to find a solution to every problem; and both are close relatives of Gödel's theorem, a highly respected mathematical result. Several thinkers have used similar arguments based on Gödel's theorem, notably Roger Penrose in "The Emperor's New Mind" (1989).
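To give a flavor of Turing's argument, here is a minimal sketch of the diagonal reasoning in Python (the code and the names halts and paradox are mine, purely for illustration; they appear in none of the works cited above). Suppose a function halts(program, data) existed that always answered correctly whether a given program halts on a given input; the following program would then contradict it.

    # Hypothetical oracle: returns True if program(data) eventually halts, False otherwise.
    # No such total, always-correct function can exist; it is assumed here only to derive a contradiction.
    def halts(program, data):
        raise NotImplementedError("assumed for the sake of argument")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about the program run on itself.
        if halts(program, program):
            while True:      # the oracle said "it halts", so loop forever
                pass
        else:
            return           # the oracle said "it loops forever", so halt immediately

    # paradox(paradox) now refutes any answer the oracle could give:
    # if halts(paradox, paradox) returns True, paradox(paradox) loops forever;
    # if it returns False, paradox(paradox) halts.

Whatever answer the oracle gives about paradox run on itself is wrong; hence no such oracle can exist, and that is the sense in which the halting problem is "undecidable".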
The most famous critique of Artificial Intelligence was contained in John Searle's article "Minds, Brains and Programs" (1980) and came to be known as the "Chinese room" argument. Lock a person who knows no Chinese in a room with a complete rulebook, written in English, for manipulating Chinese symbols: someone standing outside the room, passing in questions written in Chinese and receiving back the appropriate Chinese answers, would be fooled into thinking that the person inside knows Chinese, when in fact that person is mechanically following instructions to manipulate symbols that are meaningless to her. She has no clue what the Chinese sentences say, but she produces the correct answers. Philosophers have written endless papers discussing the validity of Searle's argument. However, Searle was not attacking the feasibility of machine intelligence but simply questioning whether an intelligent machine would also be conscious.
Today's computers, including the superfast GPUs used by AlphaGo, are Turing machines. Critics of A.I. need to prove that Turing machines cannot match human intelligence. Many have written books along those lines, but what precisely Turing machines can't do is a conveniently movable target. To my knowledge, nobody has spelled out what it is that Turing machines will never do better than us.
I suspect that here another kind of "religion" plays a role in the opposite direction, making us reluctant to accept that machines can become as intelligent as us, and even more intelligent. Astrophysics showed that there is nothing special about the location of the Earth, Biology showed that there is nothing special about human life, Neuroscience is showing that there is nothing special about the human brain, and now Artificial Intelligence might show that there is nothing special about our intelligence. Each of these revelations seems to make humankind less relevant, more insignificant.
But I have seen no convincing proof that machines can never reach human-level intelligence. Hence: why not?
Assuming that some day we will have fully intelligent machines, will they evolve into a superior level of intelligence that is unattainable by humans? That is a different question. I have seen no proof that machine intelligence inevitably leads to machines becoming more intelligent than humans.
Let me use a metaphor. Just because we built a ladder doesn't mean that we can fly: it only means that we can build taller and taller ladders, and maybe those ladders will help us climb onto the roof and fix a leak; but the technology of flying is different from the technology of climbing ladders, and therefore virtually no progress towards flying will be achieved by building better and better ladders. And I doubt that ladders will spontaneously evolve into flying beings. Both the ladder and the bird have to do with "heights", and naive media may conclude that one leads to the other, but people who build ladders should know better.
What exactly are the things that a superhuman intelligence can do and no human being can ever do? If the answer is "we cannot even conceive them", then we are back to the belief that angels exist and miracles happen, the belief that eventually gave rise to organized religions. If instead there is a simple, rational definition of what a superhuman intelligence can do that no human can ever do, I have not seen it; or, rather, I have seen one, but also the opposing view. I will briefly discuss these two opposite views.
On one hand, superhuman intelligence should exist because of the "cognitive closure", a concept popularized by Colin McGinn in "The Problem of Consciousness" (1991). The general idea is that every cognitive system (e.g., every living being) has a "cognitive closure": a limit to what it can know. A fly or a snake cannot see the world the way we see it because they do not have the visual system that humans have. In turn, we can never know how it feels to be a fly or a snake. A blind person can never know what "red" is, even after studying everything that there is to be studied about it. According to this idea, each brain (including the human brain) has a limit to what it can possibly think, understand, and know. In particular, the human brain has a limit that will preclude humans from understanding some of the ultimate truths of the universe; these may include spacetime, the meaning of life, and consciousness itself. There is a limit to how "intelligent" we humans can be. According to this view, there should exist cognitive systems that are "superhuman", i.e. systems that do not have the limitations that our cognition has.
However, I am not sure that we (humans) can intentionally build a cognitive system whose cognitive closure is larger than ours, i.e. a cognitive system that can "think" concepts that we cannot think. It sounds like a bit of an oxymoron that a lower form of intelligence can intentionally build a higher form of intelligence. It is not a contradiction, however, that a lower form of intelligence can accidentally (by sheer luck) create a higher form of intelligence.
That is the argument in favor of the feasibility of superhuman intelligence. A brilliant argument against such feasibility is indirectly presented in David Deutsch's "The Beginning of Infinity" (2011). Deutsch argues that there is nothing in our universe that the human mind cannot understand, as long as the universe is driven by universal laws. I tend to agree with Colin McGinn that there is a "cognitive closure" for any kind of brain, that any kind of brain can only do certain things, and that our cognitive closure will keep us from ever understanding some things about the world (perhaps the nature of consciousness is one of them); but in general I also agree with Deutsch: if something can be expressed in formulas, then we humans will eventually "discover" it and "understand" it; and, if everything in nature can be expressed in formulas, then we (human intelligence) will eventually "understand" everything, i.e. we are the highest form of intelligence that can possibly exist. So the only superhuman machine that would be too intelligent for humans to understand is a machine that does not obey the laws of nature, i.e. something that is not a machine at all.
If you lean towards the "cognitive closure" argument, you also have to show that we haven't reached that closure yet. The progress of the human mind did not necessarily end with you. If human intelligence hasn't reached its cognitive closure yet, then there is still room for improvement in human intelligence. I see no evidence that the human mind has reached a maximum of creativity and will never go any further. We build machines based on today's knowledge and creativity. Maybe, some day, those machines will be able to do everything that we do today; but why should we assume that, by then, the human mind will not have progressed to new levels of knowledge and creativity? By then, humans may be thinking in different ways and may invent things of a different kind. Today's electronic machines may continue to exist and evolve for a while, just as windmills existed and evolved and did a much better job than humans at what they were doing; but some day electronic machines may look as archaic as windmills look today. I suspect that there is still a long way to go for human creativity. The Singularity crowd cannot imagine the future of human intelligence, just as someone in 1904 could not imagine Relativity and Quantum Mechanics.
Some day the Singularity might come, but I wouldn't panic. Unicellular organisms were neither destroyed nor marginalized by the advent of multicellular organisms. Bacteria are still around, and probably more numerous than any other form of life in our part of the universe. The forms of life that came after bacteria were presumably inconceivable to bacteria but, precisely because they are on a different plane, they hardly interact with them. We kill bacteria when they harm us, but we also rely on many of them to work for us (our body has more bacterial cells than human cells). In fact, some argue that a superhuman intelligence already exists, and it is the planet as a whole, Gaia, of which we are just one of the many components.
In some cases we are "afraid" of a machine simply because we can't imagine the consequences. Imagine the day when machines will be able to understand natural language. A human can read only a few books a week. Such a machine, instead, will be able to read in a few seconds all the texts ever produced and digitized by the human race. It is hard to imagine what this implies.
In theory, an artificial intelligence that talks to another artificial intelligence could learn a lot faster than us. We humans need to relocate ourselves to places called universities and take lengthy classes to learn just a fraction of what the experts know. An artificial intelligence could learn in just a few seconds everything that another artificial intelligence knows (with a single "memory dump"). In fact, some day (if computer speed keeps improving) an artificial intelligence could learn everything that EVERY artificial intelligence knows. Imagine if you could learn in a few seconds everything that all humans know.
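To make the idea of a "memory dump" concrete, here is a minimal sketch (mine, purely for illustration, not an actual system), assuming that both artificial intelligences are ordinary neural networks whose entire "knowledge" lives in their learned parameters; under that assumption the transfer amounts to nothing more than a copy.

    import copy

    class TinyModel:
        """A toy stand-in for a neural network: its 'knowledge' is just a set of numbers."""
        def __init__(self):
            self.parameters = {"weights": [0.0, 0.0, 0.0]}

    expert = TinyModel()
    expert.parameters["weights"] = [0.7, -1.2, 3.4]         # acquired through long, expensive training

    student = TinyModel()
    student.parameters = copy.deepcopy(expert.parameters)   # the "memory dump": an instantaneous transfer

    assert student.parameters == expert.parameters          # the student now "knows" what the expert knows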
The way our bodies and brains are built by Nature makes it impossible for us to do the same. One possibility is that Nature couldn't do any better. The other possibility is that, maybe, over millions of years of natural selection, Nature figured out that it is better that way.
Critics of A.I. cannot tell us what exactly machines will never be able to do that humans can do. Believers in the Singularity cannot tell us what exactly humans will never be able to do that machines will do. My tentative conclusion is that machines as intelligent as humans are possible (the question is not "if" but "when") whereas machines more intelligent than humans are not possible. Alas, this conclusion hinges on a very vague definition of "intelligence".