Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

The Moral Issue: Who's Responsible for a Machine's Action?

During the 2000s, drones and robotic warfare stepped out of science-fiction movies and into reality. According to the Bureau of Investigative Journalism, an independent nonprofit organization founded by David and Elaine Potter in 2010, US drones have killed between 2,500 and 4,000 people in at least seven countries (Afghanistan, Pakistan, Syria, Iraq, Yemen, Libya and Somalia). About 1,000 of them were civilians, and about 200 were children.

These weapons represent the ultimate example of how machines can relieve us of the sense of guilt. If I accidentally kill three children, I will feel guilty for the rest of my life and perhaps commit suicide. But who feels guilty if the three children are killed by mistake by a drone that was programmed 5,000 kilometers away by a team using Google Maps, information provided by Pakistani sources and Artificial Intelligence software, in a strike authorized by a general or by the president in person? The beauty of delegating tasks to machines is that we decouple the action from the perpetrator. We dilute the responsibility so much that it becomes easier to "pull the trigger" than not to pull it. What if the mistake was due to malfunctioning software? Will the software engineer feel guilty? She may never even learn that there was a "bug" in her piece of software; and, if she does, she may never realize that the bug caused the death of three children.

This process of divorcing the killing from the killer is not new. It started at least in World War I with the first aerial bombings (a practice later immortalized by Pablo Picasso, when it still sounded horrible, in his painting "Guernica"), and it happened precisely because humans were using machines (the airplanes) to drop bombs on invisible civilians instead of throwing grenades or firing guns at visible enemies. The killer will never know nor see the people he killed.

What applies to warfare applies to everything else. The use of machines to carry out an action basically relieves the machine's designers and its operators of real responsibility for that action.

The same concept can be applied, for example, to surgery: if an operation performed by a machine fails and the patient dies, who is to blame? The team that controlled the machine? The company that built the machine? The doctor who prescribed the use of that specific machine? I suspect that none of them will feel particularly guilty. There will simply be a counter that mechanically adds one to the statistics of failed procedures. "Oops: you are dead." That will be society's reaction to a terrible incident.

You don't need to think of armed drones to visualize the problem. Think of a fast-food chain. You order at a counter, then you move down the counter to pay at the cash register, and then you hang out by the pick-up area. Eventually some kid will bring you the food that you ordered. If what you get is not what you ordered, it is natural to complain to the kid who delivered it; but he does not feel guilty (correctly so) and his main concern is to continue his job of serving the other customers who are waiting for their food. In theory, you could go back to the ordering counter, but that would mean either standing in line again or upsetting the people who are in line. You could summon the manager, who was not even present when the incident happened, and blame him for the lousy service. The manager would certainly apologize (it is his job), but even the manager would be unable to pinpoint who is responsible for the mistake (the kid who took the order? the chef? the pen that wasn't writing properly?).

In fact, many businesses and government agencies neatly separate you from the chain of responsibility so that you will not be able to have an argument with a specific person. When something goes wrong and you get upset, each person will reply "I just did my job". You can blame the system in its totality, but in most cases nobody within that system is guilty or gets punished. And still you feel that the system let you down, that you are the victim of unfair treatment.

This manner of decoupling the service from the people who serve you has become so pervasive that younger generations take it for granted that often you won't get what you ordered.

The decoupling of action and responsibility via a machine is becoming pervasive now that ordinary people use machines all the time. Increasingly, people shift responsibility for their failures to the machines that they are using. People who are late for an appointment, for example, routinely blame their gadgets: "The navigator sent me to the wrong address", or "The online maps are confusing", or "My phone's battery died". In all of these cases the implicit assumption is that you are not responsible, the machine is. The fact that you decided to use a navigator (instead of asking local people for directions), or that you decided to use those online maps (instead of the official government maps), or that you forgot to recharge your phone doesn't seem to matter anymore. It is taken for granted that your life depends on machines that are supposed to do the job for you and, if they don't, it is not your fault.

There are many other ethical issues that are not obvious. Being a writer who is bombarded with copyright issues all the time, I have a favorite. Let us imagine a future in which someone can create an exact replica of any person. The replica is just a machine, although it looks and feels and behaves exactly like the original person. You are a pretty girl and a man is obsessed with you. That man goes online and purchases a replica of you. The replica is delivered by mail. He opens the package, enters an activation code and the replica starts behaving exactly like you would. Nonetheless, the replica is, technically and legally speaking, just a toy. The manufacturer guarantees that this toy has no feelings or emotions; it simply simulates the behavior that your feelings and emotions would cause. Then this man proceeds to abuse that replica of you and later "kills" it. It is a toy bought from a toy store, so it is perfectly legal to do anything the buyer wants to do with it, even to rape it and even to kill it. I think you get the point: we have laws that protect this very sentence that you are reading from being plagiarized and my statements from being distorted, but no law protects a full replica of us.

Back to our robots capable of critical missions: since they are becoming easier and cheaper to deploy, they are likely to be used more and more often to carry out mission-critical tasks. Easy, cheap and efficient: no moral doubts, no falling asleep, no double-crossing. The temptation to use machines instead of humans in more and more fields will be too strong to resist.

The program of neural-network research is increasingly a program of building a silicon copy of the human brain, which will then pilot a body to perform human-level tasks. Hidden behind this program is an unspoken goal of human nature: to deprive other human beings of their rights and make them work for us. The neural network that achieves full parity with the human brain will, ultimately, be a human being without human rights. We can do anything we want to a machine and to the software that the machine is running, whereas we have laws that limit what we can do to other humans. When we have a machine that is fully equivalent to a human being, we will be able to satisfy our secret desire to use human beings without having to worry about their rights.

I wonder whether it is technology that drives this process of de-responsibilization, or whether it is the desire to be relieved of moral responsibility that drives the adoption of new technology. I wonder whether society is aiming for the technology that minimizes our responsibility, instead of aiming for the technology that maximizes our effectiveness and our accountability.
