(These are excerpts from my book "Intelligence is not Artificial")
What this Book is About - 2019 Edition
(I apologize for the long preface, but, paraphrasing Mark Twain, "I didn't have time to write a short one, so I wrote a long one instead").
This book contains a boring history of Artificial Intelligence but also something more interesting. This book, basically, deals with three fascinating sociological and anthropological phenomena: 1. an old-fashioned apocalyptic religion, according to which the robots are coming and will kill us all (or, at least, steal our jobs and make us irrelevant); 2. an old-fashioned prophetic religion, according to which a superhuman entity (the singularity) is coming and will save us all (i.e. will make us immortal); and 3. a new kind of society, the "vast algorithmic bureaucracy", in which everything that is not forbidden is mandatory (which, i feel, is the real news, not just an irrational belief like the other two).
Since the publication of the first edition of this book (the "philosophical" edition), it has become difficult to say something intelligent about Artificial Intelligence. The market is flooded with books that alternately predict the apocalypse or a panacea for all problems.
Luckily, an increasing number of scholars are coming out to speak against the hype, the unreasonable expectations and the exaggerated fears that have been created by the not-too-well-informed media and by not-too-honest writers and publishers.
I originally wrote the book "Intelligence is not Artificial" in 2013, when the media started reporting stunning progress in Artificial Intelligence that, in my opinion, was wildly exaggerated, and when discussions about the Singularity were becoming a bit ridiculous. My 2013 book was more philosophical than technical.
Now the term "Artificial Intelligence" has become so popular that i literally don't know anymore what we are talking about: just about everything is being tagged "Artificial Intelligence". The devices of the so-called "Internet of Things" are regularly marketed as "Artificial Intelligence", as are all of data science and most of statistics. This fad was almost singlehandedly created by one corporation's press releases, which hailed humble experiments in neural networks (often based on very old theory) as steps towards a technological, social and economic revolution. Countless firms are rushing to reprint and restyle their marketing material to include popular terms such as "machine learning" and "bot" (short for "robot", typically a software robot).
The term "Artificial Intelligence" is so abused that i wonder why a light switch is not called "Artificial Intelligence": after all, it does something nothing short of miraculous: it turns a dark room into a bright room. When i asked a startup founder why he was calling his device "Artificial Intelligence" but not his TV set, he couldn't come up with a good explanation. A TV set uses sophisticated algorithms to "learn" what the original image was, and the "app" is pretty spectacular: i press a button and i see someone who is in another city. I can keep pressing buttons and see people in different cities. It looks like a pretty amazing application to me, certainly more amazing than that startup's wearable device, which checks some bodily data and displays a warning if the readings are too high or too low.
Much has changed since i first published this book in 2013. The main change is not in technological progress, but in the definition of Artificial Intelligence. What a departure from the 1990s, when the expression "Artificial Intelligence" was ridiculed. "Artificial Intelligence" is rapidly becoming synonymous with "automation". All automation is now "Artificial Intelligence". For example, the A.I. community never considered a factory robot that simply repeats the same movement over and over to be "intelligent", but now it is. Some of these robots replace the dumbest of human activities, yet they are now routinely classified into the "Artificial Intelligence" category, to the point that old-school A.I. researchers literally don't know what "automation" means anymore: is there anything that is "automation" but not "Artificial Intelligence"?
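To appreciate how little "intelligence" such a factory robot involves, here is roughly all the software it needs. This is a minimal sketch in Python; the joint angles and the move_to() interface are invented for illustration (real robots use vendor-specific motion controllers):

```python
# A "dumb" factory robot reduced to its software essence: it replays
# the same fixed motion forever. The joint angles and the move_to()
# interface below are invented for illustration.

FIXED_MOTION = [
    (0.0, 90.0, 45.0),   # reach toward the conveyor
    (30.0, 60.0, 45.0),  # close on the part
    (0.0, 90.0, 45.0),   # lift
    (-45.0, 90.0, 0.0),  # release over the bin
]

def move_to(joint_angles):
    """Stand-in for a motion-controller call; here it just prints."""
    print("moving joints to", joint_angles)

def run_forever():
    # No sensing, no learning, no decisions: pure repetition.
    while True:
        for pose in FIXED_MOTION:
            move_to(pose)
```

Nothing in that loop perceives the world or adapts to it; whether to file it under "automation" or "Artificial Intelligence" is purely a marketing decision.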
Railway and airline seat-booking systems are sophisticated computerized systems that we never called A.I., but today they apparently would be. They certainly do something much more useful than playing a board game, and they serve billions of passengers. If they weren't decades old, the various weather-forecast systems (some of the most challenging simulation programs in the world) would also be classified as A.I. And what about the various "malware" programs that infiltrate millions of computers worldwide? Aren't those A.I.? In the 1960s, A.I. scientists had some dignity and never claimed that the Apollo mission guidance system was A.I., but today far more trivial guidance systems for drones are routinely called A.I.
John McCarthy is credited with saying: "As soon as it works, no one calls it A.I. anymore" (incidentally, i have never been able to prove that he really said it). Today we are rapidly moving towards the opposite bias: "If it works, everybody calls it A.I."
We have seen this movie before. In the 1980s the most reputable names in business studies were counting billions of dollars of investment in A.I. simply because everything was being tagged A.I. It was popular to be an A.I. company, group or researcher (i was one of them, the founding director of Olivetti's Artificial Intelligence Center in California). In the 2010s, we are witnessing a similar craze. A few months after Bloomberg estimated the total 2015 investment in A.I. startups at $128 million (a 50% decline from the previous year), VentureScanner estimated $2.2 billion, more than 17 times as much. How could Bloomberg be so wrong? It all depends on what you count as A.I. What happened in those few months between one study and the other was that just about every startup rebranded itself as an A.I. company, or at least as having an A.I. component. A similar phenomenon is spreading through the corporate world, which is rebranding old projects and products as A.I.-based. Artificial Intelligence will soon encompass every piece of software on your mobile phone.
I have been using a messaging application since 2014. Recently i noticed that they changed the top line of their website: it now boasts "Voice Calls: Secure, Crystal-Clear, AI-Powered", but it is the exact same app as in 2014. The camera feature that Canon dubbed "image stabilization" is now routinely marketed as an "intelligent" feature. Canon introduced it in 1995 with the EF 75-300/4-5.6 IS zoom lens, and the other camera manufacturers followed suit (Nikon called it "vibration reduction", but now the acronym VR is being monopolized by virtual reality). It is based on simple optical formulas, and in 1995 nobody would have dreamed of relating it to Artificial Intelligence.
In 1992 Mattel released a talking Barbie doll that spoke a few sentences such as "Wanna have a pizza party?" Today this would probably be hailed as another feat of A.I. (In a prelude to all the controversies that would arise in the age of chatbots, this talking doll was parodied on the TV show "The Simpsons", and some dolls were later recalled after the American Association of University Women accused them of being sexist.) When in 1999 Tim Westergren and Will Glaser wrote the algorithm (the Music Genome Project) to classify musical compositions based on a few hundred features, and later (in 2000) launched the application Pandora that picks music for you based on your taste, they didn't call it "Artificial Intelligence". But that's what it is now; it would be silly not to call it A.I., given that much simpler algorithms are marketed as A.I. I suspect that today Akihiro Yokoi's Tamagotchi pets, released in 1996, whose life story depends on the actions of the owner, would be marketed as A.I. And certainly EyePet, the Sony PlayStation 3 game of 2009 developed by Playlogic in the Netherlands (a top-selling game), should qualify as A.I.: this virtual pet (augmented reality before it became fashionable) reacts to objects and people.
In 2017 Huawei introduced the smartphone Mate 10 Pro, equipped with a "neural processing unit" (NPU) that reportedly accelerates Microsoft's translation software: it is just a faster processor. In Hangzhou i was told that they are building an "A.I. hotel". I asked what an "A.I. hotel" is, and they told me it's a hotel where guests use a card to enter the front door and register themselves at booths. There is no reception. I have stayed in similar hotels twice, in Sweden and in France, but back then nobody thought of calling them "A.I. hotels". Your dishwasher will soon be called A.I. In fact, you don't know it, but your house is already full of A.I. machines: the manufacturers are changing the brochures while you are using them.
The "smartphone" somehow contributed to the misunderstanding. Ericsson coined the term "smartphone" with the release of its GS88 in 1997, and the term took off around 1999-2002, especially after the launch of Research in Motion's first BlackBerry phone (the 5810) in 2002. By accident, Apple launched the iPhone in 2007 just when "deep learning" was invented. The two events had nothing in common but, one being called "smart" and the other "intelligent", some confusion was inevitable. And, by accident, 2012 (the annus mirabilis of deep learning) was the year when the world went mobile: smartphone sales skyrocketed to 680 million units, up 30% year-on-year (according to Gartner Group), a feat matched the following year and never again.
The peak of the smartphone frenzy was the fourth quarter of 2012: 208 million smartphones were sold, a 38% increase over the fourth quarter of the previous year. By coincidence, that's exactly when the A.I. world was shaken by deep learning's spectacular success in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), whose competition results were announced in December. "Smart" and "intelligent" became mandatory adjectives for just about anything. The word "intelligent" is being applied to all sorts of features in all sorts of appliances, gadgets and devices, but the founders of Artificial Intelligence would turn in their graves if they were told what features now qualify as "intelligent".
The prediction that "A.I. will be pervasive" is becoming a self-fulfilling prophecy: if we call everything "A.I.", then, yes, A.I. will be pervasive. Just as, if we called everything "Nonsense", Nonsense would be pervasive.
My feeling today is that 99% of the research in A.I. is not used for practical commercial applications, and that 99% of the commercial applications that are being marketed as A.I. have little or nothing in common with research in A.I. The market is coming up with a definition of A.I. that the founders and assorted philosophers would never endorse.
Technically speaking, renaming all of Computer Science as Artificial Intelligence is not completely wrong, because the very first computers were publicized as "electronic brains"; hence every piece of software ever written is the by-product of an Artificial Intelligence project that began with the very first computer. The border between A.I. and plain Computer Science has always been blurred.
Whatever the current definition, it is important to understand that A.I. is not magic: the border between A.I. and magic is NOT blurred! A.I. is just computational mathematics applied to automation.
Therefore this book tries to explain what A.I. scientists do. Whatever your theoretical definition of A.I. is, and whatever your theoretical definition of "intelligence" is, there is a history of work on very interesting mathematics. Artificial Intelligence practitioners are more like artisans than scientists: the artisan doesn't care what the scientists proved; the artisan keeps doing what he thinks can be done.
While the hype was growing, i had the opposite problem. I kept wondering why algorithms are so stupid. We are increasingly surrounded by incredibly stupid algorithms that want to turn us into dumb robots. I was tempted to write a book subtitled "A manual on how to cope with the age of hyper-stupid machines". I am not a dumb algorithm, but i increasingly have no way to tell the dumb algorithm that i am not a dumb algorithm: the dumb algorithm insists on treating me like a dumb algorithm. I feel like shouting "I am intelligent!" to a crowd of incredibly stupid algorithms that are closing in on me.
As we lower the degree of intelligence that is expected from humans, it becomes natural to see machines as "intelligent". The lower your intelligence, the more intelligent the machines around you will look. How can we tell whether machines are getting more intelligent or we are getting less intelligent? Everything is relative, as Albert Einstein said: you see the train as moving forward, but someone on the train sees you as moving backwards.
I have a vision of a world increasingly dominated by "vast algorithmic bureaucracies". That's the real dystopia. The algorithm is a consequence, not a cause. Those bureaucracies are created by humans to get society organized. Initially the algorithm is performed by a human being. You can still see human algorithms when you order a sandwich at one of those fast-food chains: you pick the kind of sandwich, the kind of bread, the kind of vegetables, and so on; then the kids follow a number of repetitive steps to prepare your sandwich; you move down the line and pay at the register. Once you turn a service into an algorithm, it is trivial to replace the human being with a computer (a sketch below makes this concrete). But it is important to realize that we are the ones replacing human interaction with algorithms. A "smart city" is a city where everything has been turned into an efficient algorithm connected to all other algorithms. The problem, of course, is that cities are not just buildings, streets and cars. There are also people. The term "smart city" makes you think of an intelligent city working for its citizens, when in fact a "smart city" is simply a high-tech concentration camp where citizens are treated like numbers.
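To make the sandwich example concrete, here is the entire "service" as a few lines of Python. It is a toy sketch with invented menu items and prices, but structurally it is what both the kid behind the counter and a touch-screen kiosk execute:

```python
# A fast-food counter as an algorithm: a fixed menu of choices
# followed by fixed preparation steps. The menu items and prices
# are invented for illustration.

MENU = {
    "bread":   ["white", "wheat", "rye"],
    "filling": ["turkey", "ham", "veggie"],
    "extras":  ["lettuce", "tomato", "onion"],
}
PRICE = {"turkey": 7.50, "ham": 7.00, "veggie": 6.50}

def take_order(bread, filling, extras):
    # Step 1: validate the customer's choices against the menu.
    assert bread in MENU["bread"] and filling in MENU["filling"]
    assert all(extra in MENU["extras"] for extra in extras)
    # Step 2: the repetitive preparation steps, in fixed order.
    steps = [f"slice the {bread} bread", f"add the {filling}"]
    steps += [f"add {extra}" for extra in extras]
    steps.append("wrap the sandwich")
    # Step 3: the register.
    return steps, PRICE[filling]

steps, total = take_order("wheat", "turkey", ["lettuce", "tomato"])
for step in steps:
    print(step)
print(f"pay at the register: ${total:.2f}")
```

Whether a teenager or a kiosk executes these steps changes nothing about the algorithm itself, which is precisely why the replacement is so easy.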
Humanity is not at risk because very intelligent machines will take over. Humanity is at risk because it is increasingly forced to coexist with very stupid machines in these vast algorithmic bureaucracies. The risk is that we will end up creating not superhuman technology but subhuman societies.
This book is now many books in one. It is an introduction to the methods of Artificial Intelligence, and probably one of the most extensive histories of the field ever published. It is also a book on the risk of declining human intelligence when human minds are constantly surrounded by incredibly stupid machines. It is a book about these "vast algorithmic bureaucracies". And it is still the original book, a book on the emerging religion of the 21st century, a religion that replaces even the God of monotheistic religions with an algorithm. The ultimate thesis of this book is perhaps more sociological than technological.
Alas, A.I. has not solved the mystery of the mind at all, and is not even remotely close to doing so. We understand so little of how our brain works.
In fact, what we understand is not enough to understand why we understand it.
A Preface that was Originally Omitted
I published my first two books in the late 1980s: the first one was a book on Artificial Intelligence and the second one, published almost at the same time, was a history of rock music. Since then, i have published six books on A.I. and seven on rock music. A friend once asked me what A.I. and rock music have in common. It took me almost 30 years to find the answer: the potential and the hype.
Both rock music and A.I. had, and still have, a huge potential to impact the world (respectively, music and science).
But both rock music and A.I. suffer from poor historiography and even worse commentary. There is a popular website that "aggregates" reviews of popular music: metacritic.com. Judging from the average ratings on that website, popular music is blessed with an incredible number of Beethovens who are constantly and effortlessly producing scores of masterpieces every year. The truth? There are few albums of popular music that are worth your time, let alone your money.
At the same time, there are scores of articles published in all sorts of magazines and newspapers describing the amazing feats of "intelligent" machines. Every year it looks like machines are about to get smarter than the smartest humans, and that we are doomed to become their slaves, if not to go extinct. The truth? Most machines "beep". That is the best that they can do. The most intelligent of them are incapable of doing what even the least intelligent of all animals do every single day: survive.
Alas, my detractors think that i too fit the pattern of A.I. and rock music: i have become somewhat famous for my skepticism, and they think that the word "hype" applies to me too.
Coincidentally, A.I. and rock music were born in the same year. (Coincidentally, that's also the year that i was born).
Dedication
Two hundred years ago, in 1818, the novel "Frankenstein" was published anonymously in Britain. Its author was a teenage woman, Mary Godwin, daughter of the pioneering anarchist William Godwin and of the pioneering feminist Mary Wollstonecraft, and already married to the poet Percy Shelley. Her mother had attempted suicide twice and her husband's first wife had committed suicide. The novel was written during a year of extreme climate instability due to the eruption of Mt Tambora in Indonesia. Among her inspirations, Mary Shelley credited the Italian physician Luigi Galvani, whose "Commentary on the Effects of Electricity on Muscular Motion" (1791) had just established a correlation between the nervous system and electricity, and can therefore be considered the beginning of neuroscience.
This expanded edition was completed in 2018