Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

Why we need A.I., or The Robots are Coming - Part 2: The Near Future of A.I., or Don't be Afraid of the Machine

The media are promising a myriad of applications of A.I. in all sectors of the economy. So far we have seen very little compared with what has been promised. In 2016 Bloomberg counted about 2,600 startups working on A.I. technology, but IDC calculated that the combined sales of all companies selling A.I. software barely totaled $1 billion in 2015. There is a lot of talk but, so far, very few actual products that people are willing to pay for.

The number-one application of A.I. is, and will remain (drum roll), making you buy things that you don't need. All major websites employ some simple form of A.I. to follow you, study you, understand you and then sell you something. Your private life is a business opportunity for them, and A.I. helps them figure out how to monetize it. The founders of A.I. are probably turning in their graves.

And sometimes these "things" can even kill you, as in the case of Wei Zexi, who in 2016 was induced by an advertisement posted on Baidu to buy the cancer treatment that killed him.

Mark Weiser famously wrote: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" ("The Computer for the 21st Century", 1991). Unfortunately, it turned out to be a prophecy about the ubiquitous "intelligent" agents that make us buy things.

Perhaps the most sophisticated (or at least most widely used) A.I. system is Facebook's machine-learning platform FBLearner Flow, designed by Hussein Mehanna's team and in use since 2014, which runs on a cluster of thousands of machines. It is used in every part of Facebook for quickly training and deploying neural networks. Neural networks can be fine-tuned by playing with several parameters, and optimizing these parameters is not trivial: it requires a lot of "trial and error". But even a 1% improvement in machine-learning accuracy can mean billions of dollars of additional revenues for Facebook. So Facebook is now developing Asimo, which performs thousands of tests to find the best parameters for each neural network. In other words, Asimo does the job that is normally done by the engineers who build the deep-learning system.
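
Facebook has not published Asimo's internals, but the "trial and error" it automates is easy to illustrate. Here is a minimal sketch of random hyperparameter search in Python; the search space and the train_and_score stand-in are hypothetical, not Facebook's actual system:

```python
import random

# Hypothetical search space: a few of the "knobs" of a neural network.
SEARCH_SPACE = {
    "learning_rate": [0.1, 0.01, 0.001, 0.0001],
    "hidden_units": [64, 128, 256, 512],
    "dropout": [0.0, 0.25, 0.5],
    "batch_size": [32, 64, 128],
}

def train_and_score(params):
    # Stand-in for a full training run that returns validation
    # accuracy; here it just returns a noisy score for illustration.
    base = 0.8 - abs(params["learning_rate"] - 0.01)
    return base + random.uniform(-0.05, 0.05)

def random_search(n_trials=1000):
    # Try n_trials random configurations and keep the best one.
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {key: random.choice(values)
                  for key, values in SEARCH_SPACE.items()}
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```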

While Jeff Hammerbacher's lament ("The best minds of my generation are thinking about how to make people click ads") remains true, we must recognize that progress in deep learning has been driven by funding from companies like Google and Facebook whose main business interest is to convince people to buy things. If the world banned advertising from the Web, the discipline of deep learning would probably return to the obscure university laboratories from which it came.

Remember Marshall McLuhan's comment in "Understanding Media" (1964) that "Far more thought and care go into the composition of any prominent ad in a newspaper or magazine than go into the writing of their features and editorials"? The same can be said today: far more thought and care have been invested in designing the algorithms that make you buy things while you are reading something on the Web than in the writing that you are reading.

Speech recognition (e.g., Apple's Siri) and image recognition (e.g., Facebook's DeepFace and Microsoft's CaptionBot) will benefit from the progress in neural networks. For example, Apple's Siri, which was built on speech-recognition technology developed at Nuance before deep learning matured, and which is mainly used to check the weather, will probably benefit from Apple's acquisition of VocalIQ, a spinoff of Cambridge University with experience in deep learning. Microsoft's Skype Translator, capable of translating speech in real time, was first demonstrated in 2014 and went live in 2016. In 2016 Google made its Cloud Speech API available to outside developers, so that anyone can power their apps with Google's speech recognition.
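
For instance, this is roughly what a transcription request to Google's Cloud Speech API looks like from Python, assuming the google-cloud-speech client library and valid credentials; the storage path is a hypothetical placeholder, and field names vary across versions of the library:

```python
from google.cloud import speech  # pip install google-cloud-speech

client = speech.SpeechClient()  # uses your Google Cloud credentials

# "gs://my-bucket/..." is a hypothetical Cloud Storage path.
audio = speech.RecognitionAudio(uri="gs://my-bucket/recording.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries one or more candidate transcriptions.
    print(result.alternatives[0].transcript)
```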

The next generation of "conversational" agents will be able to access a broader range of information and of apps, and therefore answer more complicated questions; but they are not conversational at all: they simply query databases and return the result in your language, adding a speech-recognition system and a speech-generation system to a traditional database management system.
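
A minimal sketch makes the point: strip away the speech layers and what remains is a database lookup. Everything below is a hypothetical placeholder, not any vendor's API; it assumes a local SQLite file weather.db with a weather(city, forecast) table:

```python
import re
import sqlite3

def answer(question, db_path="weather.db"):
    # 1. "Understand" the question with a regular expression.
    match = re.search(r"weather in (\w+)", question.lower())
    if not match:
        return "Sorry, I did not understand."
    city = match.group(1)
    # 2. Query a traditional database.
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT forecast FROM weather WHERE city = ?", (city,)
    ).fetchone()
    conn.close()
    # 3. Wrap the result in a canned sentence.
    if row is None:
        return f"I have no forecast for {city.title()}."
    return f"The weather in {city.title()} is {row[0]}."

# In a deployed agent, speech recognition would produce `question`
# and speech synthesis would read the returned string aloud.
print(answer("What is the weather in Lisbon?"))
```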

There are actual "dream" applications for deep learning. Health care is always at the top of the list, because the impact on ordinary people can be significant. The medical world produces millions of images every year: X-rays, MRIs, Computed Tomography (CT) scans, etc. In 2016 Philips Health Care estimated that it manages 135 billion medical images, adding 2 million new images every week. These images are typically viewed by only one physician (the one who ordered them), and only once. That physician may not realize that an image contains valuable information about something outside the specific disease for which it was ordered. There might be scientific discoveries that affect millions of those images, but nobody is checking them against the latest scientific announcements. First of all, we would like deep learning to help radiology, cardiology and oncology departments understand all their images in real time. And then we would like to see the equivalent of a Googlebot (the "crawler" that Google uses to scan all the webpages of the world) for medical images: imagine a Googlebot that continuously scans Philips' database and carries out a thorough analysis of each medical image using the latest findings of medical science. Enlitic in San Francisco, Stanford's spinoff Arterys, and Israel's Zebra Medical Vision are the pioneers, but their solutions are very ad hoc. A medical artificial intelligence would know your laboratory tests of 20 years ago, would know the lab tests of millions of other people, and would be able to draw inferences that no doctor can draw.
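
No such medical Googlebot exists yet, and the diagnostic models are the hard part, but its control loop would be simple. A hypothetical sketch, in which every name is a placeholder:

```python
def medical_crawler(image_store, classifiers, alert_physician):
    # Hypothetical crawler: re-examine every stored medical image
    # with the current set of diagnostic models, not just the one
    # disease for which the image was originally ordered.
    for image_id in image_store.list_images():
        pixels = image_store.load(image_id)
        for model in classifiers:  # e.g., one model per disease
            finding = model.predict(pixels)
            if finding.confidence > 0.95:  # threshold is illustrative
                alert_physician(image_id, finding)
```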

In 2015 the USA launched the Precision Medicine Initiative, which consists of collecting and studying the genomes of one million people and then matching those genetic data with their health records, so that physicians can deliver the right medicines in the right dose to each individual. This project would be virtually impossible without machines that can identify patterns in that vast database.

There are also disturbing applications of the same technology that are likely to spread. The smartphone app FindFace, developed by two Russian kids in their twenties, Artem Kukharenko and Alexander Kabakov, identifies strangers by matching their pictures against pictures posted on social media. If you have a presence on social media, the user of something like FindFace can find out who you are by simply taking a picture of you. In 2016 Apple acquired Emotient, a spinoff of UC San Diego, which is working on software to detect your mood from your facial expression.
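
FindFace's pipeline is proprietary, but face search in general works by reducing each face to a numeric "embedding" and finding the nearest stored embedding. A minimal sketch with NumPy; the gallery of embeddings (which a trained face-recognition network would extract from social-media photos) is assumed to exist:

```python
import numpy as np

def identify(query_embedding, gallery, names, threshold=0.6):
    # gallery: (n, d) array of face embeddings harvested from social
    # media; names: the n identities they belong to.
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    similarities = g @ q  # cosine similarity with every known face
    best = int(np.argmax(similarities))
    if similarities[best] < threshold:
        return None  # the stranger is not in the gallery
    return names[best]
```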

An example of unreasonable expectations is Google's self-driving car. The project was launched in 2009 by Sebastian Thrun, the Stanford scientist whose team had won the DARPA "Grand Challenge" of 2005, a 212-km race across the Nevada desert. Thrun quit in 2013 and was replaced by Chris Urmson, formerly of Carnegie Mellon University, who in 2007 had worked on William Whittaker's victorious team at the DARPA "Urban Challenge", held at the former George Air Force Base near Los Angeles. (For the record, Chris Urmson left Google in 2016, as had most of the original team.)

The self-driving car may never fully materialize, but the "driver assistant" is coming soon. Mobileye, the Israeli company founded in 1999 that is widely considered the leader in machine-vision technology (and that does not use deep learning), has a much more realistic strategy of incremental steps: Advanced Driver Assistance Systems (ADAS) that assist, not replace, drivers. Otto, founded by Anthony Levandowski, one of the engineers who worked on Google's self-driving car, does not plan to replace the truck driver but to assist the truck driver, especially on long highway drives. Otto, which was acquired by Uber in 2016, does not plan to build a brand-new kind of truck, but to provide a piece of equipment that can be installed on every truck. In 2014 a total of 3,660 people died in the USA in accidents that involved large trucks.

The need for robots is even greater. There are dangerous jobs in construction and steel work that kill thousands of workers every year. According to the International Labor Organization, mining accidents kill more than 10,000 miners every year; and that number does not include all the miners whose life expectancy is greatly reduced by their job conditions.

Robots and drones need eyes to see and avoid obstacles. There will be a market for computer-vision chips that you can install in your home-made drone, and there will be a market for collision-avoidance technology to install in existing cars. Israel's Mobileye and Ireland's Movidius have been selling computer-vision add-ons for machines for more than a decade.

We also need machines to take care of an increasingly elderly population. The combination of rising life expectancy and declining fertility rates is completely reshaping society. The most pressing problem of every country used to be the well-being and the education of children; that was when the median age was 25 or even lower. Ethiopia has a median age of about 19, like most of tropical Africa, and Pakistan has a median age of 21; but the median age in Japan and Germany is 46, which means that half the population is over 46. Remove teenagers and children, and Japan and Germany don't have enough young people to take care of those over 46; and that median age goes up every year. There are more than one million people in Japan who are 90 or older, including some 60,000 centenarians. In 2014, 18% of the population of the European Union was already over 65 years old: more than 90 million people. We don't have enough young people to take care of so many elderly people, and it would be economically senseless to devote so many young people to such an unproductive task. We need robots to help elderly people do their exercises, to remind them to take their medicines, to pick up packages at the front door for them, and so on.

I am not afraid of robots. I am afraid that robots will not come soon enough.

The robots that we have today can hardly help. Based on a 2015 IDC report, one can estimate that about 63% of all robots are industrial robots, with robotic assistants (mostly for surgery), military robots and home appliances (like the Roomba) sharing the rest in roughly equal slices. The main robot manufacturers, like ABB (Switzerland), Kuka (Germany, being acquired by China's Midea in 2016) and the four big Japanese companies (Fanuc, Yaskawa, Epson and Kawasaki), sell mostly or only industrial robots, and not very intelligent ones. Robots that don't work on the assembly line are a rarity. Mobile robots are a rarity. Robots with computer vision are a rarity. Robots with speech recognition are a rarity. In other words, it is virtually impossible today to buy an autonomous robot that can help humans in any significant way outside the very controlled environment of the factory or the warehouse. Nao (developed by Bruno Maisonnier's Aldebaran in France and first released in 2008), RoboThespian (developed by Will Jackson's Engineered Arts in Britain since 2005, and originally designed to be an actor), the open-source iCub (developed by the Italian Institute of Technology and first released in 2008), Pepper (developed by Aldebaran for Japan's SoftBank and first demonstrated in 2014) and the autonomous robots of the Willow Garage "diaspora" (Savioke, Suitable, Simbe, etc.) are the vanguard of the "service robot" that can welcome you at a hotel or serve you a meal at a restaurant: "user-friendly" humanoid robots for social interaction, communication and entertainment at public events. In 2016 Knightscope's K5 robot security guard patrolled the garage of the Stanford Shopping Center; Savioke's Botlr delivered items to guests at the Aloft hotel in Cupertino; Lowe's superstore in Sunnyvale employed an inventory-checking robot built by Bossa Nova Robotics; and Simbe's Tally checked the shelves of a Target store in San Francisco. But these are closer to novelty toys than to artificial intelligence. A dog is still a much more useful companion for an elderly person than the most sophisticated robot ever built.

The most used robot in the home is iRobot's Roomba, a small cylindrical box that vacuums floors: not exactly the tentacular monster depicted in Hollywood movies. Unfortunately, it will also vacuum your money if you drop it on the floor: we cannot trust machines with no common sense, even for the most trivial of tasks.

An industry that stands to benefit greatly from the "rise of the robots" is the toy industry. In 2016 San Francisco-based startup Anki introduced Cozmo, a robot with "character and personality". That's the future of toys, especially in countries like China where the one-child policy has created a generation of lonely children. In fact, we have already been invaded by robots: there are millions of Robosapien robots. The humanoid Robosapien robot was designed by Mark Tilden, a highly respected inventor who used to work at the Los Alamos National Laboratory, and introduced in 2004 by Hong Kong-based WowWee (a company founded in the 1980s by two Canadian immigrants). Most robots will be an evolution of Pinocchio, not of Shakey.

If you consider them robots, exoskeletons are a success story. These are basically robots that you can wear. The technology was originally developed with DARPA funding to help soldiers carry heavy loads, but it is now used in several rehabilitation clinics to help victims of brain injuries and spinal-cord injuries.

ReWalk, founded by an Israeli quadriplegic (Amit Goffer), Ekso Bionics and SuitX (two UC Berkeley spinoffs) and SuperFlex (an SRI spinoff) have already helped paraplegics and seniors walk. Panasonic's ActiveLink has announced an exoskeleton that will help weak nerdy people like me with manual labor that requires physical strength. The cost is still prohibitively high, but one can envision a not-too-distant future in which we will be able to rent an exoskeleton at the hardware store for gardening and home-improvement projects: wearing it, you can lift heavy weights and hammer with full strength.

In a more distant future, robots may take advantage of projects such as OpenEase, a platform for machines to share knowledge; RoboHow (2012), which enables robots to learn new tasks; and RoboBrain (2014), which learns new tasks from human demonstrations and advice.

But first we will need to build robotic arms whose dexterity matches at least the dexterity of a squirrel.

Our hand has dozens of degrees of freedom. Let's say that it has only ten (it actually has many more). I can easily plan the movement of my hand ten steps ahead: if each degree of freedom could assume just ten positions, each step would allow 10^10 configurations, and a ten-step plan would have (10^10)^10 = 10^100 possible sequences, a truly huge number. And I can do it without thinking, in a split second. For a robot this is a colossal computational problem. In 2016 Sergey Levine's team at Google Brain trained robots to pick up things that they had never seen before, and to pick up soft and hard objects in different ways. Two groups had already applied deep learning to improving the dexterity of robots: Abhinav Gupta's at Carnegie Mellon University and Ashutosh Saxena's (a former pupil of Andrew Ng at Stanford, and the brain behind RoboBrain) at Cornell University. But the real issue is dexterity, not deep learning. "High-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources" (Moravec's paradox, as quoted by Erik Brynjolfsson).
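
The arithmetic of that explosion, in a few lines of Python (the discretization to ten settings per joint is, of course, a simplification):

```python
# 10 degrees of freedom, each discretized to 10 settings, gives
# 10**10 configurations per step; planning 10 steps ahead means
# (10**10)**10 = 10**100 candidate sequences -- a googol.
dof, settings, steps = 10, 10, 10
per_step = settings ** dof
plan_space = per_step ** steps
print(f"{per_step:,} configurations per step")
print(f"10^{len(str(plan_space)) - 1} candidate {steps}-step plans")
```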

Earlier in the book I mentioned that two of the motivations for doing A.I. were the business opportunity and the ideal of improving the lives of ordinary people. Both motivations are at work in these projects. Unfortunately, the technology is still primitive. Don't even think for a second that this very limited technology can create an evil race of robots any time soon.

"Nothing in life is to be feared, it is only to be understood" (Marie Curie).
