Intelligence is not Artificial

by piero scaruffi

(Copyright © 2018 Piero Scaruffi)

(These are excerpts from my book "Intelligence is not Artificial")

The Ethics of Algorithms

Ethics, too, can be programmed into the vast algorithmic bureaucracy. Of course, we first need to choose among the many variants of ethics. John Stuart Mill's "utilitarianism" states that humankind should strive for the greatest possible happiness for the greatest number of people. Utilitarianism cannot, per se, decide whether war or peace is better, whether lying or telling the truth is better, and so on: it all depends on the consequences.
At the other extreme, Immanuel Kant believed in universal moral laws that hold regardless of the consequences, and therefore he considered lying always morally wrong.
Several centuries earlier, Thomas Aquinas had worked out a hierarchy of lies, all of them wrong but to different degrees: malicious lies, joking lies, helpful lies. Today's algorithmic bureaucracies have no concept of helpful or joking lies: they don't admit a lie as a legitimate answer. In the future, as they become more "intelligent", they will probably lean towards Mill's utilitarianism of maximizing benefits for the maximum number of people, maybe with the provision that the happiness of rich and powerful people (and advertisers?) should count more. But a utilitarian calculus depends on which consequences one counts and how far ahead one looks. The genetic consequences of medicine, for example, could be a weakening of the human gene pool, because medicine allows the "unfit" to reproduce; saving one life today could harm many lives in the future. If medicine is going to cause a catastrophic genetic decline, is it morally good?
The vast algorithmic bureaucracy will likely adopt an extreme form of rationalism and turn everything into numbers and formulas.
The Australian philosopher Peter Singer, Princeton University's first professor of bioethics, advocated a strictly rational approach to action in his book "Practical Ethics" (1979), even to the extent of viewing euthanasia and infanticide as sometimes justifiable. Singer calculated the ethical value of an action simply based on its outcome: if killing a child will make, in the long run, a family happier, why not? That's where Nazi supremacism and enlightened liberalism meet.
But this approach does have a rational advantage. For example, would you risk your life to benefit the people who will live in the year 3000? Most likely not: your life is important to support your family now, and you don't really care for the people of the year 3000. You have no emotional attachment to them, even though some of them will be your direct descendants. This is irrational behavior. What is the point of caring for your children if you don't care for their children's children's children's children's children's ... children? You presumably want your children to be happy with their children, and their children to be happy with their children, and so forth: don't you? Then why stop before the year 3000? The vast algorithmic bureaucracy, instead, can be programmed to reward actions that maximize the benefits to humankind no matter how far in the future they accrue. So, if survival of the species is the ultimate goal, why not use an algorithm to calculate the moral value of your actions and remove all emotional attachments? Why not calculate the benefits and costs to the future of the human race before deciding to save you from a disease?
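To make the contrast concrete, here is a minimal sketch in Python, with hypothetical function names and made-up benefit numbers, of the two ways of scoring an action: a human-style score that discounts benefits and harms the further in the future they fall, and a bureaucracy-style score that weighs every affected person equally, no matter when they live.

    # Hypothetical sketch: each impact is (benefit to one person, years from now),
    # e.g. curing a disease today at some small genetic cost to future generations.
    impacts = [(+10, 0), (-1, 200), (-1, 400), (-1, 600), (-1, 800), (-1, 1000)]

    def human_score(impacts, discount_per_year=0.05):
        # Humans discount the future: distant descendants barely count.
        return sum(benefit / ((1 + discount_per_year) ** years)
                   for benefit, years in impacts)

    def bureaucracy_score(impacts):
        # The algorithmic bureaucracy could weigh everyone equally,
        # no matter how far in the future they live.
        return sum(benefit for benefit, _ in impacts)

    print(round(human_score(impacts), 2))   # ~10.0: the distant harms vanish
    print(bureaucracy_score(impacts))       # 5: the distant harms still count

The numbers are arbitrary; the point is only that dropping the discount factor is what makes the algorithm "care" about the year 3000 in a way that people rarely do.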
Peter Singer's ethics is reminiscent of Adolf Hitler's, minus the racism. Hitler was inspired by evolutionary ethics to pursue the utopian project of biologically improving the human race. Richard Weikart's "Hitler's Ethic" (2009) makes the case that, by his own rational lights, Hitler wasn't a monster but a very moral person for whom the ultimate goal (improving the human race) justified scientific genocide. Of course, the specific genocide that he picked was not really going to improve the race: blame it on the algorithm!
I consider Singer one of the greatest philosophers of our age, so don't take this as a criticism of him personally. In a later book, "The Expanding Circle" (1981), Singer studied the parallel development of reasoning and ethics, and argued that the human species inevitably tends towards a more and more universal morality: the circle of altruism expands from the family to the tribe to the nation to the race and even to other species (as in today's animal-rights movement).
Whatever its original evolutionary advantage, the faculty of reasoning that evolution has equipped us with now works against our narrow self-interest by pushing us to develop a more and more universal moral code. Leon Festinger, in "A Theory of Cognitive Dissonance" (1957), introduced the notion that we, mathematical beings, instinctively try to remove inconsistencies, and that this drive keeps moving us towards a more objective, logic-based point of view.
From this point of view one could conceive a different kind of "ethical" algorithm, one designed to maximize not the benefits for the greatest number but equal treatment for the greatest number. This algorithm would continue the human race's evolution towards a universal ethics, with the big advantage that such an algorithm would not be biased towards siblings, friends or neighbors.
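As a minimal illustration (again in Python, with invented policies and benefit numbers), the difference between the two kinds of algorithm can be reduced to the selection rule: the utilitarian rule picks the option with the greatest total benefit, while the egalitarian rule picks the option whose worst-treated person fares best; since the individuals are anonymous entries in a list, neither rule can favor one's siblings, friends or neighbors.

    # Hypothetical sketch: each policy lists the benefit it gives each
    # (anonymous) person, so neither rule can play favorites.
    policies = {
        "A": [9, 9, 9, 0],   # great for three people, nothing for the fourth
        "B": [6, 6, 6, 6],   # the same modest benefit for everyone
    }

    def utilitarian_rule(policies):
        # Mill: pick the policy with the greatest total benefit.
        return max(policies, key=lambda name: sum(policies[name]))

    def egalitarian_rule(policies):
        # Equal treatment: pick the policy whose worst-off person fares best.
        return max(policies, key=lambda name: min(policies[name]))

    print(utilitarian_rule(policies))  # "A": a total of 27 beats a total of 24
    print(egalitarian_rule(policies))  # "B": its worst-off person gets 6, not 0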
After all, technology is the way of dealing with the world so that we don't have to deal with it.
