(These are excerpts from my book "Intelligence is not Artificial")
Life with Intelligent Machines
Our daily lives increasingly depend on the work of algorithms instead of fellow humans.
Banks have always been ahead in automation, so let me start with some banking examples.
In the old days of the coin-operated phone, i found myself in a foreign country with no access to my bank account: it had been locked because the algorithm had detected "unusual activity". I called the emergency number from a public phone on a sidewalk and had a brief conversation with a bank representative, who had to verify my identity by asking me all sorts of personal questions. I had to shout the answers in front of all the local people who were gathering around me. The whole town learned my personal data. The representative was obviously not interested in the security of my bank account, because he had just compromised it by asking me to shout all that information: he was simply executing an algorithm. Without executing that algorithm he would not have been able to help me unlock my bank account. Now the account was unlocked but, of course, that's precisely when it should have been locked, because its security was completely compromised.
When i called my bank to inquire about a money transfer, the bank's automated phone system asked me for my account number, my ATM card's secret code, my social security number and my mother's maiden name before informing me that the office was currently closed. Because the bank's system was now using voice recognition technology, i had to repeat the numbers countless times before it understood them correctly, and some of those numbers have 16 digits.
One day i helped a neighbor who seemed to be having a panic attack. She was shouting "help! help! operator! operator!" at the bank's automated system, which kept asking her to pick among 9 options and kept responding with a simple: "Let's try again. Which of these 9 options best describes why you are calling today?" And kept repeating the 9 options. It wasn't an emergency, but she had just lost her patience and wanted to speak to a person. When we finally found the route that led to "You want to talk to an operator. Is that correct?", she shouted a heartfelt "Yes!!!!" But the system, implacable, said: "Before i transfer you, are you interested in hearing about a special offer..." When she finally managed to speak to a human being, the human being began by telling her that the phone call was recorded for "quality assurance". That is obviously false: nobody at the bank will ever care that she had to spend 20 minutes shouting "help!" at a machine that kept repeating 9 irrelevant options to her.
When i had problems using my credit card outside the USA, i logged into my account and sent an angry message to customer support. Minutes later i saw that there was already a reply: the reply was saying "We are sorry that you decided to cancel your credit card". That is the exact opposite of what i needed. I replied with truly angry words to this message and this time it took 24 hours to get an answer: a customer service representative apologized for the misunderstanding that was due not to the stupidity of an employee but to the stupidity of a software "bot" that misunderstood my message.
I applied for "global entry", a program that speeds up immigration procedures for US citizens. I was accepted and a card was delivered to my home. I had a question about validating the card and i found a well-hidden place on their website where one can submit a question. A few minutes later i got a reply by email that informed me of something totally irrelevant to my case: rules for crossing overland from Mexico in a private car. The reply had clearly been assembled by another software "bot" that didn't understand my simple question.
My friend Ania had an embarrassing car accident: the handbrake of her car failed while the car was parked uphill and the car rolled out and got damaged (luckily it didn't injure anybody and didn't hit any other car). When she filed the report with the car insurance, the agent asked her the usual questions that include whether she was wearing a seatbelt at the moment of the accident. Ania politely replied that she was not in the car when the car rolled down the hill. The agent politely repeated the question: "Were you wearing the seatbelt?" He also politely informed her that the machine only accepted "yes" or "no" as the answer, and that a "no" would certainly cause her premium to skyrocket. Ania protested that this was ridiculous, but eventually she had to lie and answer "yes" in order to avoid the punishment for not wearing seatbelts when not in the car.
A European friend, who resides in San Francisco, got married in China and his wife applied for a spouse visa at the Shanghai consulate of that country. The Shanghai consulate told her that her husband needed to register the marriage at the San Francisco consulate, otherwise the computer wouldn't accept that she was married to someone who didn't show up as married in the database; and it had to be done in San Francisco because that's where her husband resided. The San Francisco consulate told the couple that they had to provide a translation of the Chinese wedding certificate notarized by... the Shanghai consulate! Basically, the Shanghai consulate had to tell the San Francisco consulate "yes they are married" so that the San Francisco consulate could tell the Shanghai consulate "yes they are married". They did so and then both computers were happy: one computer reported that the husband was a married man, and the other computer gladly granted the spouse visa to his wife. Nobody seemed to find it odd that the Shanghai consulate actually had that information from the beginning.
Years ago, purchasing an airline ticket used to require a visit to the airline office or to a travel agent. Now you can visit one of the websites that sell tickets for all airlines. When these websites started, they delivered a great simplification to us. But now these websites have become so fragile that they work only if you have the right computer, the right operating system and the right browser. And they decide which one is the "right" one. Imagine a shop that you can enter only if you are wearing clothes approved by the shop owner. Even if you have the right browser, you may not have the most recent release, in which case you have to download and install it. And if your old computer cannot run the new release, oh well, you just need to buy a new computer. When you finally have all the hardware and software that is accepted by the airline's website, you can enjoy the frustration of a slow website with a cluttered (albeit colorful) user interface. If you are a member of their frequent flyer program, you'll also have to log in, and this may involve a password and secret questions. Make three mistakes and your account gets automatically locked. When you finally manage to log in, often you end up buying a more expensive ticket than needed because you are exhausted.
When my preferred airline introduced a new level of security in their login process, i was asked to pick three "secret" questions from a list of about 20. None of those 20 applied to me: i was not married (therefore i didn't have a place where i met my wife), i don't have children (therefore i never hired a babysitter), i certainly don't remember the name of my first elementary school, i never had a favorite color, and so forth. Nonetheless, i could not have logged into my account without first picking three secret questions. So now i have an imaginary wife whom i met in Greece and the imaginary babysitter who took care of my first child was named Olga (and i won't tell you the third silly question).
We all had close encounters with stupid algorithms. The scary fact is that increasingly a) there is no human to talk to in order to override the algorithm, and b) there is often no human who actually knows or has the power to override it!
One day in China i couldn't log into my PayPal account because PayPal's website insisted on asking me for my pet's name (i never had a pet in my life, and i don't remember ever setting up a security question about a pet that i never had) or on sending a text message to my phone (which was in California). As of 2017, PayPal offers no way to contact customer support electronically without logging in: if you cannot log in, you cannot email them that you cannot log in. Your only option is to make a phone call (their way of discouraging customers from using customer support). I waited till midnight and then called PayPal in the USA (different time zone). The PayPal representative couldn't find a way to let me access my PayPal account. I asked him to simply transfer my balance to my bank account, the bank account that PayPal has on record. Answer: only the account owner can do that, and the only way to prove that i was the account owner was to log in, precisely what i could not do. I asked him to escalate the problem to his supervisor. His supervisor told me the same thing: "nothing we can do". I threatened to close my account, publicize the problem on social media, sue PayPal, etc. I tried to reason about the situation: if i never set up a security question about a pet, then someone else must have done it, and that's a security breach, correct? This whole ordeal was about the security of my account, wasn't it? If security was the primary goal here, shouldn't PayPal immediately transfer my money out of my PayPal account into some kind of safe account? To no avail: no human being had the power to override the algorithm that blocked access to my account. At the end of a 40-minute conversation, the representative literally told me to "try again after 24 hours". What if it still doesn't work? "Try again after another 24 hours"! Note the word: "try".
Nobody at PayPal could tell what the algorithm would do during that 24-hour period, but they hoped that it would stop asking me for the security question about a pet that i never had.
All these interactions with increasingly "smarter" machines share the same characteristic: throughout the operation you are expected to reply like a machine (a stupid machine) to all the questions. Any attempt to behave like an intelligent being results in the transaction getting canceled.
Automated procedures (algorithms) were invented to get rid of unskilled workers, whose repetitive work can easily be replaced by software. But now dealing with these "smart" algorithms requires higher, not lower, skills. Whenever something goes wrong, or you simply need an explanation, it is not trivial for the human representative who answers your call to figure out why the algorithm did what it did. It took almost one hour for an insurance agent to figure out why their automated system had sent me a bill for $212 even though i had paid the premium in full. In fact, he never found out where the number 212 came from; he just figured out that the algorithm generated the bill one day before another algorithm recorded my payment and that, for mysterious reasons, the physical letter was sent out ten days later. But my premium has never been $212, nor has the monthly installment. If he really wanted to understand why his algorithm charged me that specific amount, he would have had to spend hours and maybe days studying the algorithm. Luckily, it was relatively easy to simply cancel the bill, and neither of us was too eager to spend more time on this issue.
In 2015 i was trying to get an exemption from the mandate to buy health insurance (at my age the cost is prohibitive), an exemption that is allowed under the law. Unfortunately, the bureaucracy was not (yet?) set up to deal with this exception: i eventually had to pay the penalty for not having health insurance even though every person who looked into my case agreed that i was entitled to the exemption. Think of it, and there are probably many cases in which the algorithm failed because your case was an "exception". Exceptions are no longer allowed.
You can also expect that "intelligent" algorithms will fail you precisely when you have an emergency. The reason is that they are trained to detect and then block "unusual activity". An antispam program started flagging all my emails to my wife as "550 High probability of spam" when she was in China and i was in California and we were frantically exchanging messages about an emergency. The "intelligent" software correctly detected unusual activity. It was unusual indeed: every emergency is "unusual", by definition. Then the algorithm decided to block that unusual activity, i.e. all my emails to my wife. In the middle of an emergency, i was not able to communicate with my wife by email. (You'd think that "intelligent" software would figure out that you are not spamming if you email someone from whom you have received thousands of emails over the years, but the "intelligence" is not trained to use common sense, it is trained to use real-time data).
Something similar happens to you when you are abroad and start using your credit card or ATM card frantically because some emergency has occurred: your bank's algorithm reacts to your emergency by blocking your card.
What "intelligent" algorithms want is that everybody behaves the same way all the time. Anything slightly different is cause for concern. Think of it, and that is actually what human beings do: if your patterns of behavior suddenly change in a very visible manner, your neighbors may suspect that you are up to something illegal; except, of course, that you can simply tell a human being what is going on ("my mother is sick" or "my son lost his wallet" or "we are remodeling the bathroom") and a human being, even a not particularly "intelligent" one, can understand the implication of that fact on your patterns of behavior.
Alas, that's precisely what an "intelligent" algorithm cannot understand.
Even more discouraging was the vocabulary used by the tech-support engineer when the antispam software failed me: he called it "a false positive", not "a bug", despite the fact that "High probability of spam" is factually false since the probability of spam is zero when i message someone who has messaged me countless times.
It will get worse: soon, we won't even be able to speak with a human engineer. Another "intelligent" algorithm (i.e. another incredibly stupid algorithm) will receive our complaints and calmly explain to us that we have to stop emailing our wives or stop using email altogether or maybe just drop dead.
In fact, the best business plan of our days is probably about "what will poor citizens need in order to cope with the incredibly stupid machines that they are being forced to adopt?" For example, if people are really forced to buy self-driving cars, which obviously don't work, what services will the not-so-proud owners of those vehicles need? Probably the call center for driving: make a phone call and a human tele-operator will take control of the car and drive it for you, so that the car can finally get around the pillow that someone dropped in the middle of the street, or so that the car can finally go through a crowd of pedestrians crossing while their traffic light is red (obviously we will all start doing that after we realize that the cars are programmed to always stop for pedestrians).
In 2017 i heard Li Jiang, the director of the Robotics and Future Education Initiative at Stanford University, talk about the way children interact with talking robots such as Mattel's chat-friendly Barbie (which debuted at the New York Toy Fair in 2015) and Musio (made by a Santa Monica-based startup called AKA), or with conversational agents such as Apple's Siri or Amazon's Alexa. Children are very good at testing the limits of algorithms, and quickly reach a point at which the algorithm consistently fails to understand their questions. Then the children get very rude with the device. After a while, the children have indirectly learned to always be rude to the device. Then they transfer that rude behavior to humans. When i heard this story, i realized that the exact same thing is happening to us adults: we get so used to cursing at our dumb machines that we soon start doing it to fellow humans too. We start treating humans like (dumb) algorithms that deserve no respect for their work.
Then again, when you are frantically trying to finish something and your assistant becomes hostile because it is time for him or her to go home, or because your emails and texts are getting less and less courteous due to stress and lack of time (with some of the stress being caused by that very assistant), you do regret not having replaced the human assistant with an algorithm that works 24 hours a day, 7 days a week, doesn't get sick, doesn't get upset, doesn't get hungry, and doesn't have to watch a movie with friends!
You are actually surrounded by more than an army of algorithms: you are surrounded by hierarchies of algorithms. If the algorithm does not provide you with the service that you need, you can press a key or say something that will route you to "customer service". But that is increasingly another algorithm. Even if you manage to bypass this algorithm and speak with a human being, increasingly you are asked to provide feedback about your experience, and that survey is run by another algorithm and analyzed by another algorithm. In the end, the experience itself, i.e. the way algorithms behave toward you, is designed by algorithms.
When a business or government agency tells us that they are using an "intelligent" algorithm, they should also tell us what the algorithm is intelligent "in". Nobody is intelligent in everything. One can be very smart at playing chess and very dumb at saving money. Einstein was intelligent in physics, but probably not in stocks or golf. Let's say that a bank announces a new "intelligent" algorithm: is the algorithm "intelligent" in helping you or in helping the bank? It does make a difference.
There is worldwide competition to build "smart cities". Each time a new "smart city" is announced, it is presented as even "smarter" than previous generations because it is controlled by even smarter algorithms. A smart city is about efficiency. It should really be called an "efficient city". Buildings, utilities, streets, cars and so on are tightly integrated so that the life of the city is optimized. But citizens should ask: "what exactly gets optimized? efficient for what?" Efficiency is a dangerous concept when applied to human lives. The most efficient thing you can do is to die. You are going to die anyway. Staying alive is postponing the inevitable. What is the purpose of postponing it? What do you think is so important about your life that it justifies taking away resources from others? You are wasting energy, using services, causing pollution, etc. In the name of efficiency, you should die. We should all die. The designers of smart cities often forget that a city is not only made of buildings and streets: there are also people.
"The trouble with modern theories of behaviorism is not that they are wrong but that they could become true" (Hannah Arendt, 1958)