Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")


Can an A.I. system win a Nobel Prize?

In 2014 there were more than 28,000 peer-reviewed journals. In 2009 the University of Ottawa calculated that 50 million papers had been published since 1665, and about 2.5 million new scientific papers are published every year. In 2014 John Ioannidis at Stanford University estimated that about 15 million scientists published at least one paper between 1996 and 2011, but only about 1% of them (150,608) published a paper in every single year of that period. In 2016 more than 1.2 million papers were published in life-science journals alone, on top of the 25 million already in print. On the other hand, a survey of scientists conducted by Carol Tenopir and Donald King at the University of Tennessee found that on average a scientist reads about 264 papers per year. That means that a biomedical scientist, over a normal career of about 50 years, will have read about 13,200 papers out of 50-100 million: a tiny fraction. Meanwhile, a new article is published every 30 seconds. For many biomedical topics a researcher can find tens of thousands of citations: for example, more than 70,000 papers have been published on the tumor suppressor p53. The need for tools that can scan all those papers, analyze data, carry out experiments and formulate hypotheses (the job of a scientist) is greater than ever.
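A back-of-the-envelope calculation makes the imbalance concrete (a minimal sketch using only the figures quoted above; the 50-million figure is the lower bound of the estimate):

```python
# Back-of-the-envelope check of the figures quoted above.
papers_read_per_year = 264     # Tenopir & King survey average
career_years = 50              # the career length assumed above
papers_in_print = 50_000_000   # lower bound of the 50-100 million estimate

papers_read = papers_read_per_year * career_years
print(papers_read)                              # 13200 papers read in a career
print(f"{papers_read / papers_in_print:.4%}")   # 0.0264% of the literature
```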

In a 2003 talk at the American Association for Artificial Intelligence, Lawrence Hunter of the University of Colorado proposed a revised Turing Test that would use the publication of a paper in a peer-reviewed scientific journal as a better test for human-level intelligence.

In October 2016 Hiroaki Kitano of Sony, president of the Systems Biology Institute (and founder of the RoboCup competition), delivered a talk titled "Artificial Intelligence to Win the Nobel Prize and Beyond" in which he proposed a new grand challenge for A.I.: to develop an A.I. system capable of scientific research in the biomedical sciences, one that can discover something worthy of the Nobel Prize. He asked: "What is the single most significant capability that Artificial Intelligence can deliver?" His argument was that the future of humankind depends on scientific discovery, just as it did in the past, and therefore that is the field in which the new technology can provide the biggest benefits.

A kinder challenge for Artificial Intelligence would be to build a system that can discover something worthy not of the Nobel Prize but simply of a patent: a program that will discover or invent something for which the patent office will accept a patent application. In 2015 almost three million patent applications were filed worldwide. It can't be that difficult.

Nature has a unique way of hiding herself from the human brain, but the human brain can occasionally find her out: it collects data about her behavior, inspects the correlations among the data, and then occasionally devises a theory that explains the data. The job of the scientist is not trivial. As John Banville puts it in his novel "Kepler": "He was a blind man who must reconstruct a smooth and infinitely complex design out of a few scattered prominences that gave themselves up, with deceptive innocence, under his fingertips". What Kepler did was to renounce a postulate that had stood for thousands of years: that the planets move at a constant speed and in circles. Nicolaus Copernicus failed to explain the motion of the planets, and so did Galileo Galilei and so did Tycho Brahe: they all stood by the postulate that the planets must move at a constant speed and in circles. Kepler used data, lots of data, his own tables plus Tycho Brahe's tables, the work of decades of observations and calculations; but those data were not simply used to recognize patterns: Kepler realized that the dogma of uniform circular motion was false, that planets don't move in circles and that their speed changes as they move along their ellipses. Newton and Einstein did something similar: they threw out postulates that had been accepted by everybody for centuries, for example that the natural state of things must be rest and that space must be flat.
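Kepler's realization about non-uniform speed is easy to verify with modern orbital mechanics. The following is a minimal numerical sketch, using the textbook vis-viva equation and standard values for Earth's orbit (illustrative assumptions, not data from Kepler's or Brahe's tables):

```python
import math

# Numerical illustration of Kepler's insight that a planet's speed varies
# along its elliptical orbit, via the vis-viva equation (textbook values).
GM_SUN = 1.327e20    # gravitational parameter of the Sun, m^3/s^2
a = 1.496e11         # semi-major axis of Earth's orbit, m
e = 0.0167           # eccentricity of Earth's orbit

def speed(r):
    """Orbital speed (m/s) at distance r from the Sun, for semi-major axis a."""
    return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))

r_perihelion = a * (1 - e)   # closest point to the Sun
r_aphelion = a * (1 + e)     # farthest point from the Sun
print(round(speed(r_perihelion)), "m/s at perihelion")   # about 30300 m/s
print(round(speed(r_aphelion)), "m/s at aphelion")       # about 29300 m/s
```

The planet moves measurably faster at perihelion than at aphelion: exactly the deviation from uniform motion that the old dogma could not accommodate.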

A popular joke in Silicon Valley is that what the contract says and what the engineer does are never the same. That turns out to be a good thing: if humans simply did what the contract requires, and nothing more, there would be no progress. Luckily the engineer does something that was not in the contract, and ends up inventing the microprocessor or the World Wide Web. Indian classical music (such as the raga genre) is not written down because it is not about composing but about understanding, assimilating, and reinterpreting (what Westerners normally summarize as "improvising"). Two performances of the same raga are never the same, even when performed by the same musicians. Technological innovation is a similar process. The whole secret of evolution is to make variations, not exact copies.

"The worst labyrinth is not that intricate form that can entrap us forever, but a single and precise straight line" (Jorge-Luis Borges)

Attempts at creating programs that can do what a scientist does, programs capable of scientific discovery, date back to the early days of A.I., at least to 1962, when Saul Amarel at RCA Laboratories in New Jersey published the paper "An Approach to Automatic Theory Formation". In the same year Thomas Kuhn, then a philosopher at UC Berkeley, published "The Structure of Scientific Revolutions", in which he argued that the history of science is a history of "paradigm shifts", of sudden realizations that the old theories were wrong and that a whole new way of thinking, a conceptual restructuring, is required. Human intelligence came to rule the world because it is capable of these paradigm shifts, whereas other animals keep repeating the same way of thinking. A debate soon raged on this topic: the Austrian philosopher Karl Popper at the University of London had basically claimed, in "The Logic of Scientific Discovery" (1959), that there is no logic of scientific discovery, to which Herbert Simon at Carnegie Mellon University responded with "Scientific Discovery and the Psychology of Problem Solving" (1966). In 1977 his student Pat Langley unveiled Bacon, a program to discover scientific laws from data.
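Among Bacon's demonstrations was the rediscovery of Kepler's third law from planetary data. The following toy sketch works in that spirit, though the brute-force search over exponents and the rounded textbook data are simplifications for illustration, not Langley's actual program:

```python
# A toy sketch in the spirit of Bacon: search for exponents (i, j) such that
# distance^i / period^j is (nearly) constant across the planets.
# Distances in astronomical units, periods in years (rounded textbook values).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
    "Saturn":  (9.537, 29.46),
}

def spread(values):
    """Relative spread of a set of numbers (0 means perfectly constant)."""
    return max(values) / min(values) - 1.0

best = None
for i in range(1, 4):
    for j in range(1, 4):
        ratios = [d ** i / t ** j for d, t in planets.values()]
        score = spread(ratios)
        if best is None or score < best[0]:
            best = (score, i, j)

score, i, j = best
print(f"distance^{i} / period^{j} is constant to within {score:.2%}")
# -> distance^3 / period^2 is constant to within about 0.35% (Kepler's third law)
```

Given only the table of distances and periods, the search settles on distance cubed divided by period squared, the invariant that Kepler found after years of calculation.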

Simon was intrigued by the cognitive process that leads to novel, creative ideas. This process involves the discovery of natural laws from experimental data, i.e. the formation of theories that explain those data, and then the design of experiments to confirm those theories. His former student Edward Feigenbaum started work on Dendral (DENDRitic ALgorithm) in 1965 at Stanford to help chemists identify organic molecules (a collaboration with Nobel laureate Joshua Lederberg, founder of Stanford's department of genetics). This was a system that reasoned about knowledge provided by human experts. Then in 1970 Feigenbaum and Bruce Buchanan conceived Meta-Dendral, a system to learn the knowledge that Dendral needed to do its job. This was the first system for hypothesis formation. Meta-Dendral's machine-learning algorithms were generalized by Tom Mitchell in his 1978 dissertation and became his "version spaces". In 1978 Feigenbaum and Buchanan also launched Molgen, a system to plan experiments in molecular genetics, developed by their students Peter Friedland and Mark Stefik.
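A version space is, in essence, the set of all hypotheses that remain consistent with the training examples seen so far. Here is a minimal brute-force sketch (the toy attribute language and the examples are invented for illustration; Mitchell's candidate-elimination algorithm maintains only the boundaries of this set rather than enumerating it):

```python
from itertools import product

# A brute-force illustration of a "version space": the set of all hypotheses
# still consistent with the labeled examples seen so far.
# A hypothesis is a conjunction over two attributes; '?' means "any value".
DOMAINS = {"shape": ["round", "long"], "color": ["red", "green"]}

def all_hypotheses():
    choices = [values + ["?"] for values in DOMAINS.values()]
    return [dict(zip(DOMAINS, combo)) for combo in product(*choices)]

def matches(hypothesis, instance):
    return all(v == "?" or instance[a] == v for a, v in hypothesis.items())

def version_space(examples):
    """Keep every hypothesis that classifies all labeled examples correctly."""
    return [h for h in all_hypotheses()
            if all(matches(h, x) == label for x, label in examples)]

examples = [
    ({"shape": "round", "color": "red"}, True),   # a positive example
    ({"shape": "long",  "color": "red"}, False),  # a negative example
]
for h in version_space(examples):
    print(h)
# Two hypotheses survive: {'shape': 'round', 'color': 'red'} and
# {'shape': 'round', 'color': '?'}; further examples would narrow the space.
```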

Again, not much came out of these attempts, probably because, surprisingly, we don't really know how we come up with bright ideas.

"The real voyage of discovery consists not in seeking new landscapes, but in having new eyes" (Marcel Proust)
