Humankind 2.0

a book in progress...
Meditations on the future of technology and society... to be published in China in 2016

These are raw notes taken during and after conversations between piero scaruffi and Jinxia Niu of Shezhang Magazine (Hangzhou, China). Jinxia will publish the full interviews in Chinese in her magazine. I thought of posting on my website the English notes that, while incomplete, contain most of the ideas that we discussed.
(Copyright © 2016 Piero Scaruffi | Terms of use )

Back to the Table of Contents

(For A.I. Jinxia interviewed piero 4 times, hence there are 4 parts).

Artificial Intelligence/ Take 1: History, Trends and Future

(See also the slide presentation)

Narnia: Are you afraid of AI?


I am not scared of AI. I am scared that it will not arrive soon enough. Great people like Elon Musk, Bill Gates and even Stephen Hawking, as well as great AI scientists like Stuart Russell, have created this fear that AI is a danger for humankind, but i am more afraid of the opposite. (For the record, they are all intellectual descendants of the great computer scientist Bill Joy, who in 2000 wrote an article titled "Why the Future Doesn't Need Us" warning against the threat posed by robotics, genetics and nanotech). Let's take robots, which are a physical manifestation of AI (a very primitive one, as i will tell you later). Imagine a world without robots. Imagine a world in which all products are manufactured by human beings. For that world to function, salaries have to be very low, which means that people have to be very poor. If salaries increase, products become too expensive, and the economy goes into inflation. If it is an export economy, the economy collapses because the products cannot be sold abroad anymore. Either way, an economy without robots needs millions of poor people. That's why i keep saying that China needs hundreds of millions of poor people as long as it remains an export economy with little automation. Every country that depends on people to make things and that relies on exporting those things needs millions of very poor people, people who are willing to work very hard for very little. This is true of every country, but especially of countries that mainly export goods abroad: they compete on price, and the price has to be low, otherwise they suffer terribly. Human beings are particularly "expensive" when it comes to manufacturing complex equipment that is vital for improving lives, like medical equipment or cars. In a world without robots, only rich people would have a car and only rich hospitals would have medical equipment.

The truth is that robots have allowed countries like Germany and Japan to pay high salaries to workers while remaining very competitive. Germany and Japan have more robots per capita than any other nation. They are the third and fourth largest economies in the world. Germany has the lowest unemployment in Europe. Japan has the lowest unemployment in Asia. Their people enjoy a high standard of living. That happens because they have automated so much of their production. Because they have so many robots, and such sophisticated ones, they can also manufacture complex, expensive equipment that other countries cannot even dream of making. Robots create millions of jobs in Germany and Japan. Now take Italy, a country that traditionally has succeeded in handmade goods like fashion and sports cars. Italy is still a leader in fashion (Versace, Trussardi, Armani, and so on) and sports cars (Ferrari, Lamborghini, etc) but these industries create very few jobs. These industries produce high-quality goods that are too expensive to scale up. The result is that Italy has one of the highest unemployment rates in Europe even though it makes some of the most exclusive goods in the world.

And now think of the terrible working conditions of those who provide us with basic services, from mining coal to cleaning up the Fukushima nuclear disaster, disarming a suicide bomber or clearing landmines. Imagine a world in which these tasks are carried out by human beings. A world without robots is a terrible world.

In the future, if we want to reduce climate change, we will need to produce more nuclear energy. That means that we will have many more nuclear power plants. It is difficult, expensive and dangerous for human beings to check that all the parts of a nuclear plant are working appropriately. Robots can do it 24 hours a day. In a sense, robots will help us save the planet because they will make nuclear power plants safer and cheaper, which will help replace fossil-fuel energy with nuclear energy, which is cleaner, which will reduce carbon emissions.

I think that robots suffer from poor marketing. Robots are always presented as big scary beasts. We should instead publicize the fact that some day every household will have little robots for chores such as unclogging the bathroom pipes. Little robots can travel inside the pipes of your bathroom and remove a clog with no need to call a plumber. Today these robots are too expensive, but they are already feasible. Publicize simple and useful applications like these, and people will be less afraid of robots. Wearable robots can help people carry heavy goods. Conor Walsh, the founder of Harvard University's Biodesign Lab, designs "exosuits" that soldiers can wear but that are actually very strong robots, so that the soldiers can easily and quickly carry very heavy material. The same technology will help people who have problems with their limbs or who cannot walk steadily (the "rehabilitation robot"). Walsh's robots constitute a major improvement over the old wearable robots, called "exoskeletons," because they are now made of flexible materials and can adapt to the body and harmoniously follow the person's movements. Unfortunately, we still need progress in batteries before we can make these robots affordable and really useful. A lot of progress has happened since 2005, when Berkeley Bionics, founded by Homayoon Kazerooni at UC Berkeley, introduced Bleex, the first exoskeleton. There are medical startups everywhere: Utah-based Sarcos, Israel-based ReWalk Robotics, Italy-based Wearable Robotics, England-based Medexo Robotics, etc. In 2016 IBM announced that its machine-learning technology Watson would be used by the Pepper robot made by Japan's SoftBank to analyze the myriad of data, text, images and videos that constitute the real world. The goal is to provide better customer service at the "information booths" that today are mostly a monitor with very limited functionalities.
The quality of customer service has been declining for decades while humans were being replaced by machines. Today's customer service is often "good luck with our product" or "good luck finding what you are looking for". In many cases we use the Web to find the information that we are looking for. It is self-service customer support. In the old days you were able to speak to a human being on the phone or even find one in person, and get some real help. Today you have to struggle with unfriendly kiosks. Pepper is a machine that could recreate the old-fashioned customer service. Instead of a dumb machine that basically asks you to find the information by yourself, you will have a well-behaved and friendly robot that answers questions such as "Do i have to stand in this line?", "Which agent handles my case in this government building?", "I forgot my watch on the airplane: what do i do now?" Customer service will become interactive again thanks to robot assistants.

A world without robots is a scary place. A world without robots is a world of very poor people, constantly fighting for resources and for markets, and of people with permanent disabilities. It is not today's world in which the US and China happily trade goods, and medicine keeps improving, but instead a world of wars and famine.

The concern is that robots are stealing jobs from humans. People have always been afraid of technology. History has proven them wrong over and over again. Every new technology has initially scared people but later created more jobs than it destroyed. The general rule is that technology increases productivity, which increases employment. Deloitte economists studied data about England and Wales from 1871 until today and found that technology has always created more jobs than it has destroyed. But in 2008-12 the Western world suffered one of the worst economic crises in a century and automation was an easy scapegoat; and in particular artificial intelligence looked terrifying. In 2013 a famous study by Carl Frey and Michael Osborne published by Oxford University (titled "The Future of Employment") stated that 47% of workers in the USA were likely to be replaced by machines over the next 20 years. The media kept quoting this report but more recent studies show that it was just wrong. Unfortunately, books such as James Barrat's "Our Final Invention" (2013) and Martin Ford's "Rise of the Robots" (2015) have become very popular even if they contain little science. People get easily scared by what they don't understand. Fewer people read a little book titled "Race Against the Machine" (2011), written by MIT's Erik Brynjolfsson and Andrew McAfee, later expanded into "The Second Machine Age" (2014), which was more optimistic about the transition to the robot age. In 2015 many reports and articles started telling a different story: robots will destroy jobs, but also create many jobs. For example, the Association for Advancing Automation published a white paper titled "Robots Fuel the Next Wave of US Productivity and Job Growth" ( ) that shows how manufacturing companies have ADDED jobs while they were adding robots. Manufacturing has been declining in the USA for a long time. But in 2010-13 manufacturing added 646,000 net new jobs.
And that was in the middle of a big economic crisis, and in the middle of the boom of robotics. A Robotenomics report ( ) shows that more than one million jobs were created between the end of 2009 and the end of 2014 by corporations that have deployed lots of robots. Towards the end of 2015 McKinsey published "Four fundamentals of workplace automation", which concluded: "As the automation of physical and knowledge work advances, many jobs will be redefined rather than eliminated, at least in the short term." Jerry Kaplan, a famous Silicon Valley entrepreneur, had written the book "Humans Need Not Apply". In December 2015 i was at Xerox PARC when John Markoff of the New York Times asked him if the recent reports had changed his mind, and Kaplan admitted that they had. (The video of the entire interview will be published at ). In 2015 i also saw John Tamny's article "Why Robots Will Be The Biggest Job Creators In World History" in Forbes magazine.

But i have no doubt that Artificial Intelligence will be blamed again for the next economic crisis. The Great Recession of 2008 was caused by the banks. The "dotcom crash" of 2000 was caused by Wall Street speculators. The recession of 1991 was caused by high interest rates, a large budget deficit, the 1987 stock market crash, the 1989 Savings & Loan crisis and the spike in oil prices due to the 1990 invasion of Iraq... none of which were due to automation. But every time the media blamed automation for the unemployment caused by those economic crises. The first reaction when a job is lost is always to blame the machine that still has its job or that has just taken someone's job.

It is always easy to imagine which jobs will be destroyed and very difficult to imagine the new jobs that technology will create. So we exaggerate the reality of the disappearing jobs and underestimate the reality of the new ones. The Deloitte study found that, in general, dangerous and stupid jobs have declined (what's wrong with this?) while many new jobs have been created because people have more money to spend. For example, people buy more appliances and spend more on entertainment. This means that more jobs are created in the industries of appliances and entertainment. People also buy more food and clothes, because they have become much cheaper. The Deloitte study showed that the number of hairdressers and barbers per capita has increased sixfold. Again, if people make more money and goods/services cost less, people can spend more money on new "luxuries" and this creates more jobs. The jobs that have been lost because of automation are regained in other fields because of higher incomes and lower prices.
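The arithmetic behind this argument can be sketched in a few lines of code. The numbers below are hypothetical, invented purely for illustration (they are not from the Deloitte study): when automation lowers the price of basic goods, the income freed up can pay for new services, which employ new workers.

```python
# Toy model: how many service jobs (e.g. hairdressers) can a population
# support once its surplus income (income minus cost of basics) grows?
# All figures are invented for illustration.

def jobs_supported(income, basics_price, service_price, workers):
    """Service jobs a population of `workers` can pay for with its surplus."""
    surplus = max(income - basics_price, 0)  # income left after basics
    return int(surplus * workers / service_price)

# Before automation: basics eat up almost the whole wage.
before = jobs_supported(income=100, basics_price=90, service_price=50, workers=1000)

# After automation: basics are cheaper and wages slightly higher.
after = jobs_supported(income=110, basics_price=60, service_price=50, workers=1000)

print(before, after)  # 200 1000
```

The point of the sketch is only that the effect is multiplicative: a modest drop in the price of basics can support several times as many service jobs as before.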

What is certainly true is that many of today's jobs will disappear. The Department of Labor of the USA published a study according to which 65% of children currently in primary schools, when they grow up, will have jobs that do not exist today. I don't see that as a problem, though. Of course, it will be a problem for all the uneducated people who will not be able to learn a new job after they lose the old one, and the governments will have to provide a solution to make sure that these people are not left behind; but, in general, their children will have BETTER jobs. Gallup's CEO Jim Clifton published a book titled "The Coming Jobs War" (2011) in which he polled ordinary people about what they really want, and the #1 wish was "a better job". That wish ranked higher than democracy, peace, security, money and even food. Jessica Davis Pluess' study "Good Jobs in the Age of Automation" (2015) for the nonprofit organization BSR ( ) is the kind of report that young people should read to prepare for the future.

Instead of worrying about the jobs that may be "stolen" by machines let's worry about the jobs that we will soon need and for which we are not prepared. Taking care of elderly people should be a prime concern. If you look at the recent statistics of the World Bank, there is virtually no country where population growth is accelerating. In most countries of the world, population growth is decelerating. In some countries it is practically zero. In Russia and Japan the population has started declining. In most of Europe the population is not declining only because of African immigrants. There is a general trend toward having fewer children, and having them later in life. So the population is eventually destined to 1. age and 2. decrease. This will mean a lot of older people with fewer younger people who can take care of them. There is also a changing attitude towards children's duties. When older people died at 60, it was easy to ask their children to take care of their last years; but now older people live to be 90 or even 100, and it sounds a bit unfair to ask their children, grandchildren and great-grandchildren to take care of them for so many years. Inevitably, older people will be left alone.

The big social revolution of the 21st century will be the boom of elderly people. In the Western world the 1950s and 1960s were the age of the "baby boom". The people born in those decades, roughly from Bill Clinton to Barack Obama, are called "the baby boomers". The rich world is now entering the age of the "elderly boomers". Too many people are still talking about the population explosion while the real problem will be the population "implosion".

People are afraid of robots, but who is going to take care of that aging population? We are approaching the aging apocalypse. Most of these aging people will not be able to afford human care. Nurses are just too expensive if you want them around 24/7. The solution is robots. Robots who can shop for you, clean the house for you, remind you to take your medicines, check your blood pressure, etc. And maybe even keep you company when you feel lonely. Robots can do all of these things day and night, no holidays and no illness, and for a fixed price: you pay only when you buy them, not on a daily basis. Your last friend on this planet will probably be a robot.

I am afraid that A.I. will not come soon enough and we will face the aging apocalypse.

I even want A.I. to provide a better definition of "good health". "Health care" in the USA is mostly a business. It is hard to believe, but a doctor gets rich if you are sick. You have to trust a person whose wealth, whose big car, whose seaside villa and whose exotic vacation depend on your being sick. Most doctors are honest but i think this system inevitably influences their decisions. The USA spends three trillion dollars on health care. It is "big business", not necessarily "big health". Doctors prescribe all sorts of medicines to their patients. These medicines don't seem to make people healthier, nor happier. In many cases i would trust a machine over a human being. The machine doesn't get rich if i remain sick. The machine can use the latest data to prescribe only the medicines that i really need. And the machine will know the latest medical studies right away. I am not sure how long it takes for a doctor to learn that a medicine has been proven ineffective or even dangerous. And a machine will provide the same health care to everybody: rich or poor, Chinese or European, African or Arab.

Today robots like Luna (2011), Jibo (2014) and Pepper (2014) are a luxury, but tomorrow they will be a necessity. I want to see a lot of these domestic robots that can help elderly people, sick people, disabled people or just busy people.

We want all the people of the world to become rich like in the rich Western countries, but the truth is that any "rich" society needs poor people. Poor people take care of most of the chores that keep society working and that keep us alive. Those are humble and low-paid jobs that the richer people don't want to do. We need poor people in the USA to collect garbage and make sandwiches. The average person in the USA doesn't want to do those jobs. I want all eight billion people of the planet to have the same income that i have, but what happens when all eight billion people become rich enough that nobody wants to do those humble and low-paid jobs? Who is going to collect the garbage once a week, who is going to make sandwiches at the lunch cafeteria, who is going to clean the public bathrooms, who is going to wash the windows of the office buildings? We don't want to admit it, but today we rely on the existence of millions of poor people who are willing to do those jobs that we don't want to do. I hope that everybody gets rich enough to have much better jobs, but then who is going to collect the garbage, clean the restrooms, etc? I hope that we will solve the problem of poverty in 50 years or even less. But that means that we only have 50 years to invent robots that can do all the jobs that people will not want to do. I am not scared of robots, i am scared of what will happen in 50 years if we don't have intelligent robots to collect garbage, make sandwiches, clean bathrooms, etc.

Narnia: So you don't think there is enough progress in AI?


It depends. I wrote a book ( "Intelligence is not Artificial" ) to explain that most of the recent progress is really just refinement of old ideas and most of it is due to more powerful computers, not to great ideas. We are developing technologies that allow machines to "recognize" but not to "think". That's why machines lack common sense. There is real progress in the "agility" of robots. Prices for sensors and electronics have declined significantly, making robots more affordable but also making it possible to cram more sensors into one "arm" so that the dexterity of the robotic arm can be improved to almost human level. There is a conference called ICRA, organized by IEEE, that in 2015 was held in Seattle, during which a number of "robot challenges" are held. One is the "Humanitarian Robotics and Automation Technology Challenge", one is for nano robotics, etc, but the most interesting to me is the "Amazon Picking Challenge", whose prize went to the robot that could pick a single object out of a container of randomly arranged objects. In 2015 Gill Pratt, who was the manager of the DARPA Robotics Challenge, said that robotics might soon be headed for a "Cambrian Explosion", the moment when thousands of new species suddenly appeared on Earth. Certainly Amazon and Google believe it. Amazon acquired Kiva Systems for $775 million, and Google has acquired eight robot companies for a total of about $100 million, followed by the acquisition of DeepMind for about $500 million. But think carefully. Pieter Abbeel at UC Berkeley has shown a robot that can fold towels. The video looks really amazing. However, that robot is not really "folding towels". Folding towels means more than simply picking up a towel and folding it. A human who does the same job would put a towel aside if the towel is wet or has a hole. A human who simply folds a wet towel with the other (dry) towels is considered an idiot. It is not enough to perform a task.
There is a whole context within which that task is performed. If you don't understand the context, you may look really dumb even if you are the world champion of folding towels.

There is also progress in a less advertised field, bionics, which doesn't require a lot of intelligence but improves our ability to do things.

Narnia: What is bionics?


Augmenting the brain with electronic chips. The history of "bionics" dates from 1961, when the first electronic chip was implanted in an ear (William House's "cochlear implant"). Jose Delgado shocked the world in 1965 when he controlled a bull via a remote device and electrodes sending signals to the bull's brain. In 1998 Philip Kennedy developed a brain implant that could capture the "will" of a paralyzed man to move an arm (output neuroprosthetics: getting data out of the brain into a machine). In 2000 William Dobelle developed an implanted vision system that allows blind people to see outlines of the scene. His patients Jens Naumann and Cheri Robertson became "bionic" celebrities. In 2002 Miguel Nicolelis, who is probably the most famous researcher in this field, made a monkey's brain control a robot's arm via an implanted microchip. In 2004 Theodore Berger demonstrated a hippocampal prosthesis that can provide the long-term-memory function lost by a damaged hippocampus. In 2005 Cathy Hutchinson, a paralyzed woman, received a brain implant from John Donoghue's team that allowed her to operate a robotic arm (still output neuroprosthetics). In 2011 Matti Mintz replaced a rat's cerebellum with a computerized cerebellum. In 2013 Nicolelis made two rats communicate by capturing the "thoughts" of one rat's brain and sending them to the other rat's brain over the Internet: two-way neural transmission. Rajesh Rao and Andrea Stocco devised a way to send a brain signal from Rao's brain to Stocco's hand over the Internet, i.e. Rao made Stocco's hand move, the first time that a human controlled the body part of another human. In 2014 an amputee, Dennis Aabo, received an artificial hand from Silvestro Micera's team, capable of sending electrical signals to the nervous system so as to create the touch sensation. There have been neural implants to improve the vision of semi-blind people and to help paralyzed people.
In 2006 Amal Graafstra became famous for having a microchip in each hand, one for storing data (that could be uploaded and downloaded from/to a smartphone) and one for a code that unlocked his front door and logged into his computer. In 2012 he started implanting chips in attendees of Toorcamp for $50 each. The cyborgs are coming, and it is not bad news, just like eyeglasses and hearing aids were not bad news. I suspect that neural implants will make a difference in many people's lives before Artificial Intelligence does.

There are two major centers of research in bionics. One is at Brown University, especially Arto Nurmikko's laboratory, which in 2008 developed BrainGate in collaboration with Cyberkinetics: a wireless transmitter for paralyzed patients with a neural implant that bypasses the spinal cord. In 2011 Leigh Hochberg of that team used BrainGate to let a paralyzed woman operate a robotic arm simply by thinking about the movement. The other is at the Federal Institute of Technology (EPFL) in Switzerland, which in 2015 built a robotic wheelchair for paralyzed people that combines brain control with artificial intelligence. In 2016 Gregoire Courtine at EPFL used Brown's BrainGate to restore movement to a primate's paralyzed leg.

Narnia: There must be something that worries you about AI...?


Yes, of course, but most of it has to do with the industry in general. For example, the biggest success story of A.I. is the way search engines display advertising on your computer. A former Facebook scientist, Jeffrey Hammerbacher, wrote: "The best minds of my generation are thinking about how to make people click ads." That is the sad truth: artificial intelligence could be used to improve people's lives in many ways, but the first application was to make people spend money on the web. I am worried that the applications of A.I. will depend on how corporations like Google decide to use the technology that they acquire.

People complain that the printed newspaper and the printed magazine are disappearing, but what is disappearing is not the writer, it is the reader: 30% of the readers of texts published on the web are robots. Most of those robots belong to big corporations. I am not comfortable when i think that in the future the robot readers will outnumber the human readers. Most of the content is still written by humans, but soon it will be read mostly by machines. Anything you do online is read and analyzed by machines; and not because they "like" what you write. They do it because your private life is a business opportunity. Human readers read my writings because they are interested in what i write. A small percentage of them might also do it professionally, but not many. It is too costly and time-consuming for humans to read all the texts published on the Internet. Machines, instead, read "professionally". They don't read what i write because they are interested in the subject: they are interested in checking if they can "use" what i write for their purposes. There is something sinister in this new kind of reader.

I am worried about what will happen to humanity in a fully automated world. Today we live in a partially automated world. Our interaction with other humans is increasingly limited because machines perform many of the functions that used to be performed by humans. Who gives you cash at the bank? An automatic teller machine. Who hands you the ticket at the parking garage? A machine. We tend to look at the machines that replace humans purely in economic terms: the service is now available 24/7 and it is cheap or even free; a job is lost; we can create more jobs elsewhere because we saved money here; etc. But to me there is a much more important story: if the people around me are replaced by machines, it means that i will interact less with humans. Every time a human is replaced by a machine, it decreases the interaction that i will have with other humans. We talk a lot about human-machine interaction, and tend to ignore the fact that a consequence of human-machine interaction is the decline of human-human interaction. This trend has been going on for at least a century (there used to be armies of telephone operators to direct phone calls, there used to be armies of secretaries typing documents, etc) and will continue into the age of Artificial Intelligence to the point that many individuals, especially the older ones, will only interact with machines. Machines will take care of your house, of your errands, of your health, of your entertainment. This will dramatically reduce your interaction with other human beings, even with your own family (family support will become less and less important). You will make fewer friends. Your coworkers will be robots. Your friends will be robots. Maybe your lovers will be robots. What happens to humanity when you don't interact with humans anymore?

Narnia: Will A.I. create more equality or inequality?


There is a key point. I am not worried about artificial intelligence and accelerating progress but there is something worrying about the different paces of progress around the world, the inequality that accelerating progress is creating. In the USA we are talking about super-intelligent machines, while parts of the Islamic world are struggling to get rid of the burqa and some parts of Africa are still trying to feed the population. Very few African children will go to a good university. While we build the Internet of Things, many villages of the world don't have electricity. The gap between the high-tech societies of the rich world and the low-tech societies of the developing world is increasing every day. I think that this accelerating progress in technology is creating a form of "inequality" that is even bigger than when inequality was only about money.

Narnia: Is the self-driving car an example of the benefits of AI?


No, that is an example of how disconnected our high-tech companies can be from the real world. Traffic is one of the main problems in every country of the world, especially in developing countries that have no railways and often not even good roads. The traffic jam has become the common factor from the Caribbean islands to Beijing. The self-driving car is a ridiculous idea: it will increase traffic, not reduce it. People who today cannot or don't want to drive will start driving when they can buy a car that drives itself. Every time someone has tried to "optimize" the process of driving, the result has been more traffic. You can share a car in San Francisco with at least four services: City CarShare, ZipCar, RelayRides, Getaround. The result has been that people started using cars instead of buses and trains. Uber and Lyft have added thousands of cars to already congested roads. The whole idea of "sharing cars" was that the car is a valuable asset that shouldn't be left in the garage when you are not using it. But every car that leaves the garage for the road is obviously increasing traffic, pollution and oil consumption. The fans of the self-driving car say that it will save lives. Today cars kill 1.2 million people a year. But trains save lives too: put everybody on trains and there will be no car accidents. We want more public transportation, not more cars.

I grew up when sociologists were talking about the virtual movement of information replacing the physical movement of people, but instead the car is becoming a second home, a second center of personal life, equipped with communications, entertainment and, soon, ecommerce. In 2014 Rinspeed in Switzerland demonstrated the "XchangE Concept", an autonomous car equipped with all sorts of digital gadgets. Something like that will put 8 billion people on the road, even toddlers and certainly a lot of elderly people.

Narnia: Will intelligent machines make us more intelligent or more stupid?


There are three main ways to look at technology. One is the pessimistic way: new technologies cause us to forget how to do things manually, so in a sense each new generation of humans is a little more stupid than the previous one. You give intelligence to the machine, you take away intelligence from humans. The second one is the exact opposite. You can view technology as an extension of the body. In nature each animal uses "technology" to survive. The spider cannot live without the spiderweb. The beaver cannot live without its dams. The bee cannot live without the beehive. Etc. Richard Dawkins calls this the "extended phenotype". Technology "augments" the human body. It allows us to do things that our body cannot do. Ray Kurzweil thinks that digital technology will cause a jump in intelligence comparable to the "invention" of the neocortex. Thanks to the neocortex the human brain can do things like poetry and science that were impossible before. That invention opened up a whole new range of activities for the brain. Kurzweil believes that digital technology will constitute a similar increase in intellectual power. This new augmented brain will be capable of doing things that don't have a name because human intelligence cannot do them. The third way to look at technology is to invert the relationship between inventors and invention. I think this view dates back to a French philosopher of the 1960s, Jean Baudrillard, but recently it has been made popular by Kevin Kelly in "What Technology Wants". We tend to think of objects as made by someone. But, if you were an object capable of thinking, you would probably see it the other way around: the human being is just a tool to build you. We see the story of technology as a story of inventors inventing inventions, but one can also view the story of technology as a story of inventions that need inventors to be born; an evolution not of life but of objects. Objects evolve, and use life, especially humans, to evolve.
According to this view, it is not humanity that needs a new technology but technology that creates human needs, so that those needs will make humans create better technology. After all, what is progressing? Technology, not humans. Humans remain more or less the same, but technology changes dramatically. I think that all three of these views are true to some extent. Each of them is correct. If you put them together, you get a fairly accurate view of what is happening.

Of course i am not too happy that people, especially young people, are becoming so dependent on "smart" devices. Those devices are not particularly more intelligent than the old ones, but sometimes young people look a lot less intelligent than their parents and grandparents, who had to use their brains instead of smart devices. It would be great if smart devices were used for more intelligent activities but instead they are mostly used for e-commerce, entertainment and socializing. Maybe it is not correct to say that smart devices are making us "stupid", but it is true that many people are happy to be "mediocre" and let the devices do the "thinking". Humans always wanted to become omnipotent but now they seem content with letting machines become omnipotent while they (the humans) settle for mediocrity.

Artificial Intelligence/ Take 2

Narnia: how much progress has there been?


I think people started believing that Artificial Intelligence is real in 2011, when a question-answering software called Watson, designed by IBM, appeared on the television show Jeopardy and beat the human contenders. The following year a Stanford/Google team led by Andrew Ng scanned millions of videos on Youtube and identified cats with a very low error rate. That year we actually witnessed a real milestone. Imagenet is a very large database of images that have been tagged by humans. Every year Artificial Intelligence teams from around the world compete to recognize those images and are given a score that is their error rate. In 2012 a new technique, "deep learning" (largely the brainchild of Geoffrey Hinton at the Univ of Toronto, who spent 30 years refining neural networks, and specifically of his student Alex Krizhevsky), made that error rate collapse to a low number, and since then the error rate has kept declining, rapidly approaching the human level.

In 2014 Fei-Fei Li at Stanford Univ demonstrated a computer-vision algorithm that can describe photos. There was already software capable of recognizing faces (typically, within your group of friends), but Stanford's system (as well as a similar system developed at Yahoo/Flickr) can recognize what is in a picture, e.g. two young people playing frisbee in a park.

In 2014 Vladimir Veselov's and Eugene Demchenko's program Eugene Goostman, which simulates a 13-year-old Ukrainian boy, managed to fool 33 percent of the judges of a Turing Test held at the Royal Society in London. It was not the first time that a program had fooled a significant number of humans, but it was the first time that the humans were allowed to ask any kind of question.

Alan Turing wrote a paper in 1950 in which he toyed with the idea of a machine that can fool people into believing that it is a human being. Turing predicted that some day a machine would fool 30% of people. So if you create a program that can fool 30% of the judges, it is said to have passed the Turing Test.

Most of this progress is really about "recognizing". And that's because the progress has been in neural networks, which Artificial Intelligence has always used for tasks of the recognizing type. AI was founded in 1955. Initially neural networks were the favorite technique. But it was difficult and expensive to build large neural networks. Our brain has billions of neurons and trillions of connections. Creating a neural network of that size was impossible in the past. So neural networks were abandoned in favor of another technique, called "knowledge-based". The simplest way to understand "knowledge-based" AI is to think of the difference between information and knowledge. In an information-based system there is a database that contains the answers. When someone asks a question, such as "Who is the president of the USA?" or "Where is Rome?", the system looks up the answer: Barack Obama, Italy. But now imagine that I ask the questions "Who WILL be the next president of the USA?" or "Where is Atlantis?". A knowledge-based system cannot find the answer anywhere: it has to THINK. It has to use the available knowledge, which is incomplete and imprecise, and "guess" who could become president of the USA; just like we do.
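The distinction can be sketched in a few lines of Python. This is only an illustration: the facts, the candidates and the scoring heuristic below are all invented, not taken from any real system.

```python
# Information-based: the answer is simply looked up in a database of facts.
facts = {"capital of Italy": "Rome", "president of the USA": "Barack Obama"}

def lookup(question):
    return facts.get(question, "unknown")

# Knowledge-based: the answer is not stored anywhere; the system combines
# pieces of incomplete knowledge to produce a plausible guess.
# (Hypothetical candidates and traits, purely for illustration.)
candidates = {"Alice": {"governor": True, "well_funded": True},
              "Bob":   {"governor": False, "well_funded": True}}

def guess_next_president(candidates):
    # A toy heuristic: score each candidate by how many favorable
    # factors they have, then pick the best-scoring one.
    def score(traits):
        return sum(1 for v in traits.values() if v)
    return max(candidates, key=lambda name: score(candidates[name]))

print(lookup("capital of Italy"))        # found in the database: "Rome"
print(lookup("Where is Atlantis"))       # not in the database: "unknown"
print(guess_next_president(candidates))  # inferred, not looked up: "Alice"
```

The lookup function can only return what was stored; the guessing function produces an answer that exists nowhere in its data, which is the essence of the knowledge-based approach.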

Or think of the doctor. What does the doctor do? When you are sick, is there a place where the doctor can find the disease? No medical encyclopedia is the same as a doctor. The doctor uses knowledge, experience, common sense, intuition, etc to "GUESS" what your disease is. If you drank water from a polluted source, he guesses that maybe the water made you sick. If there is an epidemic of flu, he guesses that maybe you have the flu. Plausible reasoning.

This type of AI prevailed until the 1990s. But the results were not very exciting. It was too difficult to encode human knowledge. So AI actually did not have a good reputation: it was theoretical research that never kept its promises. When Hinton and others developed deep learning (which is an evolution of neural networks), AI became popular again, because it can really do interesting things like recognizing faces or recognizing voices or recognizing scenes. But the limitation of neural networks is that they are really a form of "pattern matching", which ultimately really means "recognizing". So you have to translate all your problems into problems of "recognition". It is not impossible, it just feels a little unnatural to turn the question "Who will be president?" into a question of recognizing a pattern.

Neural networks are statistical methods. Fundamentally, they work well if you have many cases and you want to guess the next case. Modern translation systems are a good example. They are NOT based on knowledge of the language (grammar). They are based on statistical analysis. The system "learns" by comparing thousands and thousands of texts that have been translated. When you ask it to translate a new sentence, the system guesses based on previous translations. It "recognizes" the closest likely translation. In a knowledge-based approach, the system would know the grammar of the language and use reasoning to understand the sentence, and then it would know the other language's grammar and create the translation.
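A toy version of the statistical approach makes the point concrete. The tiny "translation memory" and the word-overlap similarity below are invented for illustration; real systems are vastly more sophisticated, but the principle is the same: reuse past human translations, with no grammar anywhere.

```python
# A toy "statistical" translator: it knows no grammar of either language.
# It simply finds the most similar sentence among past human translations
# and reuses that translation: pattern matching, i.e. "recognizing".
memory = {
    "good morning": "buongiorno",
    "good night": "buonanotte",
    "thank you very much": "grazie mille",
}

def translate(sentence):
    words = set(sentence.split())
    # "Recognize" the closest previously translated sentence by word overlap.
    best = max(memory, key=lambda s: len(words & set(s.split())))
    return memory[best]

print(translate("good morning"))          # exact match: "buongiorno"
print(translate("thank you very much"))   # exact match: "grazie mille"
print(translate("good evening"))          # never seen: returns the closest pattern, a guess
```

For a sentence it has never seen, the system still returns something, but only because a past sentence "looks" similar; nothing in the program understands either language.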

The neural network approach was abandoned in the 1960s because we didn't have computers fast enough. But now we can use thousands of servers, each of which is very powerful, to run big computations. So in a sense "deep learning" has been enabled by cheap computing. Of course, there has also been progress in the math of neural networks. Geoffrey Hinton and others have come up with more and more efficient computational methods. But it would be pointless if we couldn't use thousands of powerful computers to implement those neural networks.

There is nothing wrong with this method. I am just not sure that abandoning the knowledge-based approach was a good idea, just like in the past I thought that abandoning neural networks was not a good idea. If you want to guess who will be the next president of the USA, you don't have a statistical sample to work on. What we do is analyze all the factors that help politicians win presidential elections. We can spend hours or days arguing who will be the next president. We use reasoning based on knowledge. That is not what a neural net does.

Deep Learning has become extremely popular, and this year Google/DeepMind's AlphaGo beat a go/weichi master. But if you analyze all these great successes you realize that they depend on 2 factors: 1. thousands of very fast processors; and 2. large collections of human examples. The success in recognizing images started after the creation of ImageNet, where thousands of students upload images and "tag" them. The success in playing chess started after humans created a huge database of chess games played by masters. The success in playing weichi came after humans created a huge collection of weichi games played by masters.

That's why IBM's Watson of 2013 consumes 85,000 Watts, compared with the human brain's 20 Watts, and AlphaGo of 2016 consumes 440,000 Watts. Your brain can do an infinite number of things with 20 watts. AlphaGo can do only one thing with 440,000 watts. I will be impressed when someone builds a machine that uses only 20 watts and can do just 2 things. No cheating: you can build two machines that do two things and put them inside the same box, but that's cheating. We don't have one million brains to do one million things. We have one brain that can do one million things, and it uses only 20 watts. Obviously we are very very far from building truly intelligent machines. We are building better and better appliances: the light bulb, the washing machine, the refrigerator, the microwave oven, the chess player, AlphaGo... They can do one thing, and they do it better than me.

That's why sometimes I joke about "the curse of Moore's law". In the past the motivation to come up with creative ideas in A.I. came from slow, big and expensive machines. You had to find a creative way to make machines "intelligent" when machines were slow, big and expensive. So AI scientists came up with a lot of interesting ideas. Today we are in the opposite situation: brute force (hundreds of supercomputers running in parallel) can find solutions using relatively simple mathematical techniques. Actually, you can find the answer to most questions by simply using a search engine: no need to think, no need for intelligence. In a sense, this has reduced our motivation to come up with creative ideas about intelligent machines.

Then there is common sense. The vast majority of what we do in a day is done without thinking. You don't touch something that is very hot. You don't walk out of a window if you are in a hurry. You don't walk out in heavy rain without an umbrella. There are many actions that would make people laugh: common sense tells them that those actions are wrong. Most of what we do on a daily basis is due to common sense.

A statistical method yields a plausible result, but the system has not learned why. And that's why the learned skills cannot be applied to other fields. Philosophers like John Searle have always argued that whatever the machine does, it is not what it "does"; meaning that the machine may have done something, but it doesn't know that it has done it. Searle explained it in 1980 with the "Chinese room" example. If you give me a book that has the answers in Chinese to all the possible Chinese questions, and then you ask me a question in Chinese, I will find the answer in Chinese. I give you the correct answer. But I still don't know Chinese. In fact, I only know 3 sentences in Chinese. So when I answer in Chinese, I am NOT answering in Chinese. That applies to neural networks too: they may find the correct answer, but they don't know why. They may correctly translate a sentence from English to Chinese, but they don't know why. It is just that thousands of people translated it that way, so they guess it is the correct translation. But the translating machine doesn't know English and doesn't know Chinese.
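Searle's thought experiment is easy to render as code, and that is exactly the point: the program below (whose question/answer pairs are invented for illustration) answers Chinese questions correctly by pure symbol lookup, yet nothing in it "knows" Chinese.

```python
# Searle's "Chinese room" as a program: the rule book maps Chinese
# questions to Chinese answers. Matching symbols to symbols is all
# that happens; no understanding is involved anywhere.
rule_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫王小明。",    # "What is your name?" -> "My name is Wang Xiaoming."
}

def chinese_room(question):
    # The "person in the room" just looks up the incoming symbols
    # and copies out the corresponding outgoing symbols.
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a correct Chinese answer, with zero understanding
```

From the outside, the room converses in Chinese; from the inside, it is a dictionary lookup. Searle's claim is that a neural network's "correct" answers have the same status.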

Narnia: What about the Singularity? Ray Kurzweil believes that in a few years we will have machines that are smarter than us, in fact so smart that we cannot understand their intelligence. Kurzweil's predictions:
* Infinite life extension: "Medical technology will be more than a thousand times more advanced than it is today... every new year of research guaranteeing at least one more year of life expectancy" (2022)
* Precise computer simulations of all regions of the human brain (2027)
* Small computers will have the same processing power as human brains (2029)
* 2030s: Mind uploading - humans become software-based
* 2045: The Singularity


This has become almost a new religion in Silicon Valley. The Singularity is coming and will make us immortal. This sort of religion is based on five "dogmas": 1. Artificial Intelligence systems are producing mindboggling results; 2. Progress is accelerating like never before; 3. Technology is creating super-human intelligence; 4. Humans benefit from smarter machines; 5. Machines that pass the Turing Test are at least as intelligent as humans. I wrote a book titled "Intelligence is not Artificial" to explain why I don't agree. Five arguments against the Singularity: 1. Reality Check; 2. Accelerating Progress?; 3. Non-human Intelligence; 4. Human Intelligence; 5. A Critique of the Turing Test.

1. Reality Check. A.I. has always been a bit optimistic. In 1965 the A.I. pioneer Herbert Simon wrote "Machines will be capable, within twenty years, of doing any work that a man can do". Well... we are still waiting for that day. My reaction to recent achievements is the opposite of the enthusiastic headlines of the news media. Recognizing a cat is something that any mouse can do (it took 16,000 processors working in parallel to do the same). NASA's Curiosity is one of the most sophisticated robots ever built. In 2013 my friend the NASA planetary scientist Chris McKay told me "What Curiosity (robot) has done in 200 days a human field researcher could do in an easy afternoon". A video showing a robot that rides a bicycle went viral, but human-looking automata that mimic human behavior have been built since ancient times, particularly in China. It is true that machines can now recognize faces, and even scenes, but they have no clue what those scenes mean: we will soon have machines that can recognize the scene "someone picked up an object in a store", but when will we have a machine that can recognize "someone STOLE an object in a store"? A human being understands the meaning of this sentence because humans understand the context: some of those objects are for sale, or belong to the shopper, and a person walking away with those objects is a thief, which is very different from being a store clerk or a customer. We can train neural networks to recognize a lot of things, but not to understand what those things mean. The automatic translation software that you use from Chinese to English doesn't have a clue what those Chinese words mean nor what those English words mean. If the sentence says "Oh my god there's a bomb!" the automatic translation software simply translates it into another language. A human interpreter would shout "everybody get out!", call the emergency number and... run!

Nothing puts the progress in A.I. better in perspective than the progress in robots. The first car was built in 1886. 47 years later (1933) there were 25 million cars in the USA, probably 40 million in the world, and those cars were much better than the first one. The first airplane took off in 1903. 47 years later (1950) 31 million people flew in airplanes, and those airplanes were much better than the first one. The first television set was built in 1927. 47 years later (1974) 95% of households in the USA owned a tv set, and mostly a color tv set. The first commercial computer was delivered in 1951. 47 years later (1998) more than 40 million households in the USA had a computer, and those personal computers were more powerful than the first computer. The first (mobile) robot was demonstrated in 1969 (Shakey). In 2016 (47 years later) how many people own a robot that is at least as good as Shakey?

What i see around me is depressing. It is not machines that learned to understand human language, but humans who got used to speaking like machines in order to be understood by automated customer support (and mostly they don't even speak: they simply press keys).

What "automation" really means: in most cases the automation of those jobs has required the user/customer to accept a lower (not higher) quality of service. The more automation around you, the more you (you) are forced to behave like a machine to interact with machines.

A lot of "intelligent behavior" by machines is actually due to an environment that has been structured by humans so that even an idiot can perform. For example, the self-driving car is a car that can drive on roads that are highly structured. Over the decades we structured traffic in a way that even really bad drivers can drive safely. We made it really easy for cars to drive themselves.

We structure the chaos of nature because it makes it easier to survive and thrive in it. The more we structure the environment, the easier it is for extremely dumb people and machines to survive and thrive in it. It is easy to build a machine that only has to operate in a highly structured environment. What really "does it" is not the machine: it's the structured environment.

2. Accelerating progress. That's the other dogma of the Singularity movement. Because progress is so rapid, and even accelerating, machines will become more and more intelligent more and more rapidly. My problem with this dogma is that it is just not true. There has been the same kind of rapid progress in the past, and maybe even more rapid. Think of the period between 1880 and 1915 when the car, the airplane, the telephone, the radio, the record and cinema were invented. Suddenly people were "flying", and one could talk to one's mother thousands of kms away, and one could hear the voice of someone dead, etc. Those inventions must have looked like magic. They all happened in that short period of time. At the same time science invented Quantum Mechanics and Relativity. The USA produced 11,200 cars in 1903, but already 1.5 million in 1916. The Wright brothers flew the first plane in 1903: during World War I (1915-18) more than 200,000 planes were built. That's accelerating progress. On the other hand, are we really sure that today there is so much progress? 47 years after the Moon landing we still haven't sent a human being to any planet. The only supersonic plane (the Concorde) has been retired. So I am not even sure that progress in our age is really so special. I am not denying that today there is a lot of change; but change is not necessarily progress. Sometimes it is fashion created by marketing. Sometimes it is change of a commercial type, that mainly benefits the big corporations. Maybe it is "progress", but progress for whom?

Argument number 3: non-human intelligence has always been around. We are surrounded by non-human intelligences, and frequently these non-human intelligences can do things that we cannot do. Bats can avoid objects in absolute darkness at impressive speeds. Birds are equipped with a sixth sense for the Earth's magnetic field. Some animals have the ability to camouflage, to change the color of their skin. Many animals have night vision. Most animals can see, sniff and hear things that we cannot. And flies can fly, and land upside down on the ceiling. These brains are more powerful than ours in performing these tasks. We already built machines that can perform tasks that we cannot perform. For example, we cannot do what a light bulb does. One of the most influential inventions of all times is the clock, invented one thousand years ago, which can do something that no human being can do: keep time. I am not sure what they mean when they say that the Singularity will be a super-human intelligence: what is the difference between non-human and super-human? There already are so many kinds of intelligence that can do things that we cannot do. Nothing scary about it.

Number 4: I am more concerned about the future of human intelligence than about the future of machine intelligence. The Turing Test was asking "when can machines be said to be as intelligent as humans?" I always joke that this "Turing point" can be achieved in 2 ways: 1. Making machines smarter, or 2. Making humans dumber! If machines get just a little smarter while humans get a lot dumber, then, yes, we will have machines that are a lot smarter than humans. So that's the danger: not that we create machines that are too intelligent, but that we create people who are too dumb. People make tools that make people obsolete, redundant and dumb. In fact, the success of many high-tech projects depends not on making smarter technology but on making dumber users. "They" increasingly expect us to behave like machines in order to interact efficiently with machines: we have to speak a "machine language" to phone customer support, automatic teller machines, gas pumps, etc. In most phone and web transactions the first question you are asked is a number (account #, frequent flyer #...) because you are talking to a machine. The machine performs its task because YOU spoke the machine's language (numbers), not because the machine spoke your language. Rules and regulations (driving a car, eating at restaurants, crossing a street) increasingly turn us into machines that must follow simple sequential steps in order to get what we need. I am afraid that we talk about Artificial Intelligence while humans are moving a lot closer towards machines than machines are moving towards humans.

And finally, #5: there are philosophical objections to the Turing Test. Who is supposed to be the judge of the test? 33% of the members of the Royal Society jury were fooled by the machine in 2014. What if we replaced the Royal Society jury with a group of mountain villagers? Is it possible that the result of the test simply tells us that people who hang out at the Royal Society are not very smart? And what does it tell us that 33% of the jurors thought the human was a machine? What can we conclude from the fact that a human failed the Turing Test (s/he was mistaken for a machine by 33% of the jurors)? If a machine fails the test (i.e. the jury thinks the machine is a machine), then we are supposed to conclude that the machine is not intelligent; but what are we supposed to conclude if a human fails the test (if the jury thinks that the human is a machine)? That humans are not intelligent? I think we need a better way to measure the intelligence of a machine. The whole discussion on the Singularity is really very vague.

Artificial Intelligence/ Take 3

Narnia: Can machines really think? Can machines have emotions?


Those are philosophical questions. If you can do everything that i can do, i assume you can think, but i have no way to prove it. I cannot get into your brain and verify that you are really thinking. You could be a zombie who behaves like me but actually has no feelings or emotions. I can never be sure that there are other people who "think" besides me. If you tell me that you think, i must believe that you think. Well, we can build a robot that will say "I think", "I am happy", "I feel sorry for her", etc. How can you verify whether this machine is really thinking and feeling emotions or not? IBM is already programming Watson to recognize people's emotions. Emotional A.I. is coming. We will have machines that try to understand our emotions and will act accordingly. It is actually not that difficult to program machines like this. It has very few practical applications, so this "emotional" A.I. has not evolved as much as face recognition or speech recognition. In 2016 Apple acquired San Diego-based Emotient, founded in 2012 and specialized in emotion detection based on facial expressions.

Some people assume that neural networks simulate the brain so well that machines will soon "think" the same way we think, but that assumption is based on a gross misunderstanding of how the brain works. First of all, we still know very little about the human brain. We can't repair even the most basic of brain diseases. It will take decades or maybe centuries to fully understand how the brain works. So we only have very superficial models of the brain structure. Secondly, the neural networks that we have today are rough approximations of those superficial models. For example, a neural network has only one type of "neurotransmitter", only one type of communication between neurons, whereas the human brain has 52 types of neurotransmitters (and maybe even more). Neural networks assume that the neuron is a simple zero-one switch, but neuroscientists have discovered a very complex structure inside the neuron. Today's machines are very far from simulating a human brain, and we are very far from understanding how the human brain works, so i think that we are very far from the day when we can have a machine equipped with the equivalent of a human brain. When that day comes (probably very far in the future), you can ask me this question again...

Narnia: Is there something that humans can do all the time and that machines will never be able to do?


One of the most influential books for me has been a little book written in 1942 by US philosopher Susanne Langer and titled "Philosophy in a New Key". Langer's thesis was that humans are symbolic animals. We create vast systems of symbols that seem to go counter to the "survival of the fittest" rule. Ritual and magic, which have been widespread in all human cultures, are symbolic activities that, from an animal's point of view, are hopelessly senseless: an animal would never dance around a fire the way a tribe dances around a fire to make something happen. When animals want sex, they just have sex. Humans hold elaborate weddings, which used to be structured in a long series of steps to be performed in front of hundreds of guests. We cannot help it: our minds create huge systems of symbols all the time. Some of them are really funny: think of the tea ceremony in Japan. Why not just drink the tea? Why make it so complicated to drink a liquid? A religion is the ultimate system of symbols. I am sure that processing symbols serves a purpose, but, in human beings, it seems to constitute an end in itself: we create systems of symbols for the sake of creating systems of symbols. The complexity of a traditional wedding doesn't seem to serve any practical purpose. It is way more than what is needed. Even human language is too complicated to be simply a communication tool. Language is obviously much more than communication. Machine language is simple and unambiguous: do this, do that, don't do this, don't do that. We humans give lengthy speeches that sometimes are confusing and ambiguous. Geoffrey Miller, one of the great evolutionary psychologists of our times, in "The Mating Mind" (2000) speculates that language could be a form of sexual display and that communication is only a secondary use of language. He compares human language to the colorful tail of the peacock. Human language is unique to humans the same way that the peacock's tail is unique to peacocks.
It is pointless to try and teach language to a chimpanzee the same way that it is pointless to expect a child to grow a colorful tail. Langer thinks that ritual and magic are spontaneous activities, the by-products of the human mind's propensity for transforming everything into symbols. The propensity for symbolic processing can grow forever, even to the point that it becomes no longer useful and even harmful. There actually is an advantage in "conceiving" a thing as a symbol: at the physical level no two people see the same thing (each brain is slightly different) but all people form the same symbol of the same thing. If we simply exchange a pixel map of what we saw, we would not find any two identical matches; but what we exchange is the concepts we formed of the respective pixel maps, and those are likely to be identical if the thing is the same. The reason that language became the primary form of communication is that, as Bertrand Russell originally noted, speech is the most economical way of rapidly producing many symbols via bodily movement. We recognize two situations as alike not because they provide identical sensory input but because they belong to the same symbol. The great linguist Edward Sapir also thought that language was not originally for communication. The Danish linguist Otto Jespersen argued that singing and dancing came first. Communication is a by-product of symbolization. Ritual, myths and music are other examples of systems of symbols that humans create.

The modern mind, however, looks down on many of these systems of symbols. Our societies are increasingly based on "rational" rules and regulations that get rid of those "useless" and "expensive" rituals. Life is increasingly programmed to be efficient. Children are sent to school according to a program. Then they are expected to find a job. Even entertainment is highly regulated. This is what i call the "robotic" mind: a mind that has to obey rational rules. Think of the difference between the traditional wedding (in India it could last 3 days) and the secular wedding that takes place in a government office and consists in simply signing a contract. We are genetically programmed to be "symbolic minds" (minds that indulge in rituals and legends), but somehow we increasingly like societies that create "robotic minds".

Now you may begin to understand why i told you that maybe A.I. abandoned the knowledge-based approach too soon. Knowledge-based A.I. was all about systems of symbols: that's how knowledge is represented, that's what knowledge is. If you "know" something, it means that you created symbols about it. The A.I. that prevails today is about the "robotic mind", not the "symbolic mind". The "deep learning" networks are good at recognizing and performing tasks, not at creating complex systems of symbols. That's why i used the term "robotic mind": the robots that we are designing emulate the robotic mind, not the symbolic mind. A robot that has low battery does not start dancing around the fire or praying to supernatural beings.

Narnia: but why is the symbolic mind so important?


First of all, that's what we are. It's like asking "Why are eyes so important?" A human being whose eyes don't work well is not considered blessed. He is taken to a hospital. If your mind stops creating symbols, you are not human anymore. Secondly, those systems of symbols define our values. Some things are more important than others in life. Respecting your neighbor or an elderly person is more important than buying food or parking a car. Our "symbolic mind" tells us to be polite and to perform good deeds. Morality comes naturally to a "symbolic mind". On the contrary, the "robotic mind" simply obeys rules and regulations: if there is no rule telling children to respect their parents, a robotic mind will not do it. The A.I. that we are conceiving and the robots that we are designing will have no moral values.

Ultimately, what is the purpose of having machines? I think the answer should be "to make us happier". Right now we are at the stage where we interpret "intelligent" as "useful": the more useful a machine is, the more intelligent it is. But being useful is not the same as making us happy. What is it that machines could do for us that would make us happier? This is not an easy question to answer. We have often interpreted happiness in a material way, with the result that we got less happy. Suicide rates tend to be high in places like Japan and Scandinavia that provide a high living standard to their citizens, and tend to be very low in poor countries. What is it that makes people truly happy? When i travel in poor African countries, i am surrounded by people who smile and laugh all the time. When i walk in Western cities, hardly anybody smiles. The reason for this apparent contradiction is that we often confuse goods with values. Many of the greatest philosophers, including Jesus and Buddha, warned that material wealth does not translate into happiness. That's what the great systems of symbols provide: a path to happiness. That's why it is dangerous to get rid of the "symbolic mind"; and machines that speed up this process then become truly dangerous.

When people ask me about immortality, i remind them that the longest living bodies on the planet have no brain: bacteria and trees. Are they happy? Would you like to be a tree?

(Note: Actually, Narnia would like to be a tree, but i don't think she represents the average reader).

Artificial Intelligence/ Take 4

Narnia: If you think that super-human intelligence like the one depicted in "Ex Machina" or "Terminator" is still 1,000 years away, what do you think is the future of AI? Andrew Ng shares the same opinion: he thinks "deep learning" is still very far from true AI and the Singularity. His idea of the future of AI is very optimistic: robots will be smarter, but humans will also be smarter. AI will be our intelligent assistant or partner. It will help us make better decisions and be more efficient; the future will be the age of the "intelligent partner". What do you think?


A.I. is a useful technology, just like many that came before: the wheel, the boat, the clock, the steam engine, television, the GPS, etc... All of them allowed us to do new things. The project of A.I. is not to produce an "intelligence" like the human one, but it is producing very useful things and will keep producing useful things. Ng is right. Technology has always been a partner, not a replacement, for humans. Every new technology "creates" better jobs for humans who have to work with that technology. Doug Engelbart, one of the most influential people in Silicon Valley (the SRI researcher who invented the mouse and who organized "the mother of all demos"), used to talk about "augmented intelligence", not "artificial intelligence". Machines "augment" our intelligence. That's why i am concerned about the people who use technology to become dumber, not more intelligent. The smartphone allows you to do many things faster and better. It is silly that people waste this potential to just check what their WeChat friends are doing.

Narnia: You give lots of examples of animal abilities as non-human intelligence, and you also mention clocks and other simple machines. What they have in common is that they are below human intelligence: no matter how fast a lion can run or how far a dog can smell, humans can control them.


Sure? Humans think that they can control a bat. The truth is that "control" usually means "kill". Yes, we can kill a bat. I am not sure that this means that we are more intelligent. Is a lion more intelligent than a gazelle just because the lion can kill the gazelle? Humans control clocks? Sure? I think that clocks control us. It is the clock that decides when you wake up, when you go to work and when you go home. A human set the clock, but you can also say that the clock "demands" to be set by a human. Clocks tell some humans "set the alarm". Those humans set the alarm. Then the clock's alarm tells other humans what to do. There is a tradition of eccentric philosophers who think that objects control us, not vice versa, and that objects are evolving to become better and better at controlling the world. Jean Baudrillard is my favorite.

Narnia: "The super-intelligent machines could control humans and even kill humans, which is worth worrying about."


All technology can kill humans. Atomic energy can kill humans. Bad food kills humans. Cigarettes kill humans. Cars kill millions of humans. Each of us can weigh the advantages and disadvantages of using a technology that enslaves us and may kill us. I am in favor of atomic energy even after the Fukushima disaster, because it can help produce cleaner energy. Of course, it could kill thousands of people. I wouldn't say that i am scared of atomic energy. I am interested in technology to make sure that disaster does not happen. And i am interested in technology to make sure that we can minimize and clean up the disaster if it happens.

Narnia: "A gorilla could never be smarter than a human being because they don't have the same brain that we have, but AI is copying the human brain."


Same answer. You assume that our brain is better than the brain of all animals. Each animal has a brain that is good at something, and many animals have brains that are good at things that we are bad at. And we are very bad at some things. For example, some humans become suicide bombers in the name of gods and kill dozens of people, thinking they will go to a place called "paradise": is this better than being a gorilla? I don't think gorillas are as stupid as humans. We decided that our brain is better than theirs because we can kill them. That's the thing that we are definitely better at: killing animals and destroying the environment. No animal is as good as humans at killing other animals (including other humans) and at destroying the environment. If AI copies the human brain, it will produce machines that are very good at killing and at destroying. So i actually hope that AI does NOT copy the human brain but builds a better one: the "best of brains", a brain that learns from every animal its best behavior, a brain that doesn't kill and doesn't destroy. I do want a machine that can fly in the dark like a bat and that can recognize smells like a dog. I am not sure that i want a machine that becomes a suicide bomber in the name of a god. In many cases i would prefer a machine that behaves like a gorilla over a machine that behaves like a human. Hopefully, AI will give us machines that make the world safer, cleaner and nicer; not machines that repeat the mistakes made by humans.

Narnia: What is your advice for the younger generations? What should young people study to prepare themselves for the future of machines?


It is always easier to imagine which jobs will disappear than which jobs will be created. Nobody in 1950 imagined that some day millions of people would be software engineers, and that the salary of a software engineer would be much higher than the salary of a factory worker. Nobody in 1950 imagined that we would be discussing the Internet of Things, Virtual Reality, etc. So nobody today can imagine the jobs that will exist 50 years from now, or even 20 years from now. And this applies even to "old" technologies: the gym instructor did not exist 50 years ago, but today every city has a gym. Even the yoga instructor is a new job that didn't exist 50 years ago, if you live in a Western country. Not many people predicted that new jobs would be created by the fact that people live longer lives and want to live healthier lives. Today we have fitness experts and health care providers in all sorts of specializations, and obviously the ones who take care of elderly people have the brightest future. It is not only technology that destroys and creates jobs. Change in society causes change in the jobs that are needed. In some cases it is not difficult to guess: the cashier, the insurance underwriter, the realtor, the travel agent and the restaurant waiter are examples of jobs that can disappear the same way that the bookstore and the photography lab have disappeared from many towns. But in each of these cases the person who loses a job can start doing something that the machine cannot do well.

There are two general pieces of advice that we can give to younger people (and also to the older people who are afraid of losing their jobs): knowledge and context. Machines can store a lot of information, but they are really bad at turning that information into knowledge. That's why the machine cannot have a conversation about who will be the next president of the USA, something that every person in the USA is doing during a presidential campaign. People have knowledge. They don't have a lot of information (in fact, some of them are very ignorant) but they have knowledge. They know the problems of the country, they know what other people complain about, they know what the previous politicians did, they know a lot of things about politics and elections. Knowledge is not information. The list of names of all the presidents of the USA from Washington to Obama is information, not knowledge. Knowledge is that Roosevelt presided over a Great Depression, and what that means. Knowledge is that George W Bush started two wars, and what that means. Here is an example of the difference that knowledge makes. Machines are getting better and better at translating from German to English because there are so many books translated from German to English. Machines learn the statistics of those translations. Machines learn that "Ich bin" is usually translated "I am". The machines will get better and better at this. But what happens if tomorrow we discover a new language? Suppose we discover a number of books in Mongolia that are written in a language never seen before. What does the machine do? Absolutely nothing. What does the human expert do? She uses her knowledge to decipher the language. Of course the human expert will use computers to do statistical analyses of the terms. Of course the human expert will use computers to compare this new language to the existing ones.
But computers cannot do what this expert does: she is trying to find out the logic behind the writing, she is using her "knowledge" of what a language is. The machine can help the expert with fast computations, but the expert has the knowledge that makes it possible to decipher a new language. A translating machine doesn't even know what a language is. A translating machine is simply a statistical tool that guesses that a certain string of characters should be translated into another string of characters.
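The point about a translating machine being "simply a statistical tool" can be sketched with a toy phrase table. This is a minimal, hypothetical illustration (all phrases and counts are invented, and real machine translation is far more sophisticated): the machine merely counts which English phrase most often accompanied a German phrase in its training data, and it has nothing to say about a phrase it has never seen.

```python
# Toy sketch of a "translating machine" as pure statistics (not a real
# MT system; the parallel corpus below is invented for illustration).
from collections import Counter, defaultdict

# Hypothetical parallel corpus: (German phrase, English phrase) pairs.
parallel_corpus = [
    ("ich bin", "I am"),
    ("ich bin", "I am"),
    ("ich bin", "I'm"),
    ("du bist", "you are"),
]

# Count co-occurrences, e.g. phrase_table["ich bin"] == Counter({"I am": 2, "I'm": 1}).
phrase_table = defaultdict(Counter)
for german, english in parallel_corpus:
    phrase_table[german][english] += 1

def translate(phrase):
    """Return the most frequently observed translation, or None if the
    phrase was never seen: the statistics offer no fallback 'knowledge'."""
    if phrase not in phrase_table:
        return None  # a never-before-seen language leaves the machine helpless
    return phrase_table[phrase].most_common(1)[0][0]

print(translate("ich bin"))  # the most frequent pairing, "I am", wins
print(translate("xanadu"))   # unseen input: None, nothing the counts can do
```

The sketch makes the contrast concrete: the human expert facing an undeciphered language reasons about what a language is, while a machine of this kind can only look up frequencies it has already accumulated.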

Secondly, people understand the context. If i ask you "Where is the library?", you may answer "The library is closed" or "The library doesn't have the magazine you want to read" or "The library is very crowded at this hour". This is not actually an answer to that question, but you know the context of the conversation and therefore you can guess important facts that are not asked by my question. The machines are trying to catch up: they display the address and then also the hours of operation. Waze knows the traffic on the road. And so on. But humans can still be far ahead of machines in understanding the context. We can listen to a person talking for six hours and turn those six hours into a context. A machine can do it only for a few sentences, then it gets lost in too much context.

If you are simply performing your job like a machine, you will be replaced by a machine. If you are acquiring knowledge and using common sense to understand the context, you will be promoted when a machine replaces you, because you can do something that the machine cannot do. And, when the machine arrives, you can use the machine to get dumber, or you can use the machine to get smarter. You can use the smartphone for WeChat or you can use the smartphone to acquire knowledge about your field of work.

Think of the simplest case in which a human is needed and the human is paid a lot of money: when the machine fails. If the machine breaks down (or it cannot operate because the building lost electricity), a human needs to take over. That human being is very valuable. If a machine takes your job, you want to become that person, the person who knows what to do when the machine fails.

This interview was complemented with these interviews in A.I.:

Stuart Russell, Artificial Intelligence at UC Berkeley

A.I.: Nell Watson of Singularity University

This interview was complemented with these interviews in Robotics:

Oussama Khatib, head of the Stanford Robotics Lab

Andra Keay, Managing Director of Silicon Valley Robotics

Morgan Quigley, designer of the Robot Operating System at Stanford and cofounder of the Open Source Robotics Foundation

Melonee Wise, founder of Fetch Robotics, formerly of Willow Garage

Pieter Abbeel, Professor of Robotics at UC Berkeley

Shohei Hido, Chief Research Officer of Preferred Networks America

See also "Intelligence is not Artificial"
Back to the Table of Contents

Piero Scaruffi | Contact