Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

The Dangers of Machine Intelligence: Machine Credibility

The world has indeed changed: these days humans have more faith in machines than in gods.

GPS mapping and navigation software is not completely reliable when you drive on secondary mountain roads. When my hiking group is heading to the mountains, we have to turn on the most popular "navigator" because some of my friends insist on using it even if there is someone in the car who knows the route very well. They will stop the car if the navigation system stops working. And they tend to defend the service even when faced with overwhelming evidence that it took us to the wrong place or via a ridiculous route.

In September 2013 i posted on Facebook that YouTube was returning an ad about (sic) pooping girls when i looked for "Gandhi videos". An incredible number of people wrote back that the ad was based on my search history. I replied that i was not logged into YouTube, Gmail or any other product. A friend (who has been in the software industry all his life) then wrote "It doesn't matter, Google knows". It was pointless to try and explain that if you are not logged in, the software (whether Google, Bing or anything else) does not know who is doing the search (it could be a guest of mine using my computer, or it could be someone who just moved into my house and is now using the same IP address that i used to have). And it was pointless to swear that i had never searched for pooping girls! (For the last week or so i had been doing research to compile a timeline of modern India.) Anyway, the point is not that i was innocent, but that an incredible number of people were adamant that the software knew that i was the one doing that search. People believe that the software knows everything that you do. It reminded me of the Catholic priest in elementary school: "God knows!"

Maybe we're going down the same path. People will believe that software can perform miracles when in fact most software has bugs that make it incredibly stupid.

Maybe we are witnessing what happened in ancient times with the birth of religions. (Next came the burning at the stake of the heretics, like me, who refused to believe.)

The faith that an ordinary user places in a digital gadget wildly exceeds the faith that its very creators place in it.

If i make a mistake just once giving directions, i lose credibility for a long time; if the navigation system makes a mistake, most users will simply assume it was an occasional glitch and will keep trusting it. The tolerance for mistakes seems to be a lot higher when it comes to machines.

People tend to believe machines more than they believe humans, and, surprisingly, seem to trust machine-mediated opinions more than first-hand opinions from an expert. For example, they will trust the opinions expressed on websites like Amazon or Yelp more than they will trust the opinion of the world's experts on books and restaurants. They believe their navigation system more than they believe someone who has spent her entire life in the neighborhood.

The evidence (e.g. political elections) shows that we are a lot less smart than we think, and that we can easily be fooled by humans. When we use a computer, we seem to become even more gullible. Think of how successful "spam" is, or even of how successful the ads posted by your favorite search engine and social media are. If we were smarter, those search engines and social media would rapidly go out of business. They thrive because millions of people click on those links.

The more "intelligent" software becomes, the more likely that people trust it. Unfortunately, at the same time the more "intelligent" it becomes, the more capable of harming people it will be. It doesn't have to be "intentionally" evil: it can just be a software bug, one of the many that software engineers routinely leave behind as they roll out new software releases that most of us never asked for.

Imagine a machine that broadcasts false news, for example that an epidemic is spreading around New York, killing people at every corner. No matter what the most reputable reporters write, people would start fleeing New York. Panic would rapidly spread from city to city, amplified by the very behavior of the millions of panicking citizens (and, presumably, by all the other machines that analyze, process and broadcast the data fed by that one machine).

In June 2016 Baidu published the news that an Indian woman gave birth to 11 babies at once. Those of you who are old enough will remember this story. It was false the first time it came out in 2011 and it is still false today, but it keeps being repeated on websites throughout the world. The Baidu spider simply scours the web for interesting news and has no way to find out whether the news is correct or not. An investigative reporter, or for that matter any intelligent being with 20 minutes to spare, can easily find out that the news was fabricated in 2011 (in Zambia, apparently). The scary thing is not that the spiders are dumb enough to believe all sorts of scams; the scary thing is that this becomes news on Baidu, the main source of news in China. Millions of Chinese people are now convinced that (quote) "A Woman Gave Birth to 11 Babies at a Time in India".
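To make that mechanism concrete, here is a minimal hypothetical sketch (in Python) of a news aggregator that, like the spider described above, republishes whatever click-worthy headlines it scrapes, with no step that checks whether a story is true. The feed URL, the keyword list and the publish function are made-up placeholders, not anything from Baidu's actual system.

    # Hypothetical sketch of a credulous news aggregator (placeholder names, not any real system).
    # It republishes every "interesting" headline it finds; note the complete
    # absence of a fact-checking step, which is the point of the example.
    import feedparser  # third-party RSS/Atom parsing library

    FEEDS = ["https://example.com/viral-news.rss"]  # placeholder feed URL

    def looks_interesting(title: str) -> bool:
        # Crude engagement heuristic: sensational keywords attract clicks.
        keywords = ("gave birth", "shocking", "miracle", "record")
        return any(k in title.lower() for k in keywords)

    def publish(title: str, link: str) -> None:
        # Placeholder for pushing the story to millions of readers.
        print(f"PUBLISHED: {title} ({link})")

    def aggregate_and_publish() -> None:
        for url in FEEDS:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                # No source checking, no cross-referencing, no notion of truth:
                # whatever looks click-worthy goes straight to the front page.
                if looks_interesting(entry.get("title", "")):
                    publish(entry["title"], entry.get("link", ""))

    if __name__ == "__main__":
        aggregate_and_publish()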

In 2016 i had originally signed up for health care insurance through Covered California, a marketplace set up under the Affordable Care Act (better known as "Obamacare"). In February 2016 I realized that it was not worth it and called the insurance company, Anthem, to terminate my plan. Because of the way automatic payments are set up, Anthem had already received my payment for March. Anthem did the right thing: it refunded that payment to me. However, Covered California was paying $528 in monthly "subsidies" for my insurance, and, unbeknownst to me, Anthem collected the subsidies for March and, apparently, never refunded them to Covered California. I was not insured in March, but at the end of the year Covered California sent me a 1095 form (required to file taxes) that showed those subsidies for March. I knew that, when filing taxes, this would be a problem because in March I did not qualify for those subsidies, so I called Covered California to explain that my coverage with Anthem had been terminated in February. Covered California told me that, basically, they only trusted what their computer received from Anthem's computer. I called Anthem and, after hours of discussions, I managed to prove to them that I did request termination in February and that they did refund me in February (I still have the stub of the refund cheque), and therefore that I was not insured in March. But they were powerless to change what their computer tells Covered California: their system just didn't contemplate this case. After more phone calls right and left, I was left with no option: I included that 1095 form in my tax return and I paid back $528 to Covered California for the March subsidies, i.e. for coverage that I never had; otherwise I would have automatically been sued by their computer and possibly gone to jail. The only way to solve this problem would have been to track down the software engineer who programmed the Anthem system and ask him to write a procedure to properly handle the termination-and-refund case when subsidies are involved. Too complicated, right? That's the real problem: machine absolutism. We do what the machine has been programmed to do even when the program has a blatant mistake.

Drone strikes seem to enjoy the tacit support of the majority of citizens in the USA. That tacit support arises not only from military calculations (a drone strike reduces the need to deploy foot soldiers in dangerous places) but also from the belief that drone strikes are accurate and will mainly kill terrorists. However, drone strikes that the USA routinely hails as having killed terrorists are often reported by local media and eyewitnesses in Pakistan, Afghanistan, Yemen and so on as having killed many harmless civilians, including children. People who believe that machines are intelligent are more likely to support drone strikes. Those who believe that machines are still very dumb are much less likely to support them. The latter (including me) believe that the odds of killing innocents are colossal because machines are so dumb and so likely to make awful mistakes (just as the odds are almost 100% that the next release of your favorite operating system has a bug). If everybody were fully aware of how inaccurate these machines are, i doubt that drone programs would exist for much longer.

In other words, i am not so much afraid of machine intelligence as of human gullibility.
