Peter Norvig and Stuart Russell:

Artificial Intelligence: A Modern Approach (2009)

(Copyright © 2012 Piero Scaruffi)
This has been an immensely popular book since it first came out in the 1990s. The 2009 edition is vastly updated and augmented. There is no argument that this is the best book around for creating programs that perform "intelligent" tasks such as reasoning, planning, recognizing, learning, etc. Below is a bit of a rant about the history and state of A.I. in general.

What is noticeable is the shift towards massive computation to solve problems. Artificial Intelligence was born in an era of slow, expensive and cumbersome computers: theorists were forced to be... theorists. They could not rely on computers to calculate every possible move in chess or every possible outcome of a situation. Therefore A.I. theorists came up with ever more creative ways to solve complex, uncertain problems on the slow, expensive and cumbersome computers of the time. Fast forward to the 21st century and A.I. theorists don't need much of a theory: they can use an arsenal of superfast computers that crunch numbers until they find a reasonable solution. Is that what the human brain does? The question seems to have become irrelevant: as long as we get an answer, why bother with how the computer got it? Hence your favorite search engine can answer a question that the smartest "expert system" couldn't even understand. The search engine simply scans the whole Web until it finds a page that contains the topic, and chances are you get a useful answer to your query. Is that intelligence?
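The brute-force retrieval described above can be caricatured in a few lines. This is a deliberately naive sketch of "scan every page for the topic" (the page data is invented for illustration; real search engines use precomputed inverted indexes and ranking, not a linear scan):

```python
# Naive brute-force retrieval: scan every page's text for the query string.
# The pages dictionary and URLs below are made up for illustration.
def brute_force_search(pages, query):
    """Return the URLs of every page whose text contains the query."""
    query = query.lower()
    return [url for url, text in pages.items() if query in text.lower()]

pages = {
    "example.org/cats": "Cats are small domesticated felines.",
    "example.org/dogs": "Dogs are loyal companions.",
}
print(brute_force_search(pages, "cats"))  # -> ['example.org/cats']
```

No model of meaning is involved: the program neither understands the question nor the answer, which is precisely the author's point.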

When it came out in the mid-1990s, this book's novel take on Artificial Intelligence was its very definition: "rational action", i.e. the best possible action in a given situation.
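The book's "rational action" idea — choose the action that maximizes expected performance given what the agent knows — can be sketched in a few lines. This is a toy illustration, not code from the book; the action names, probabilities and utilities are invented:

```python
# Toy rational agent: pick the action with the highest expected utility.
# Each action maps to a list of (probability, utility) outcomes.
# All numbers here are invented for illustration.
def rational_action(actions):
    """Return the name of the action maximizing expected utility."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "go_left":  [(0.8, 10), (0.2, -5)],   # expected utility = 7.0
    "go_right": [(0.5, 20), (0.5, -10)],  # expected utility = 5.0
}
print(rational_action(actions))  # -> go_left
```

The definition is agnostic about *how* the expected utilities are computed — which is exactly what lets both elegant theories and brute-force number-crunching count as "rational agents".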

In 2012 a multimillion-dollar combined Google/Stanford research team, led by Stanford computer science professor Andrew Ng and Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections and let it loose on the Internet to learn from millions of YouTube videos how to recognize cats. My comment was and is: Brute force. Absolutely no conceptual innovation (i.e. no "intelligence"), just lots and lots of brute force. That is the paradigm that dominates computer science these days, and Norvig (who coincidentally teaches at the same university and works for the same corporation as Ng) has simply written a handbook on how to program "brute force".

I have to wonder if the slow and cumbersome computers of the 1960s were a gift to the scientific community, because they forced computer scientists to come up with creative models instead of just letting machines crunch numbers until they find a solution. Basically, we are supposed to be impressed that 16,000 of the fastest computers in the world took a few years to learn to recognize a cat, something that a toddler with a still under-developed brain can do in a split second. I would be happy if the 16,000 computers could just simulate the 302-neuron brain of the roundworm, with no more than 5000 synapses, which nonetheless can recognize a lot of very interesting things with incredible accuracy and speed.

Brute force is not intelligence. I am an experienced mountaineer. If an avalanche buries and kills me, it doesn't mean that the avalanche is a smarter mountaineer than I am, but simply that it is a lot stronger.

What leading scientists such as Norvig and Ng are doing is simply not A.I. It is good old-fashioned computational mathematics, which has been around since at least Fourier. This book can be viewed, essentially, as a manual on how to program fast computers equipped with lots of memory to solve problems with strong probabilistic aspects. It is a discipline tightly related to modern statistical mathematics.

If it is not A.I., then why call it A.I.? Because it achieves what A.I. was trying to achieve. It may be misleading, though: when I travel from Europe to the USA, I am achieving what Columbus achieved, but I don't call it "sailing". A textbook or a course titled "Computational Math - A Modern Approach" would get a lot less attention than one titled "Artificial Intelligence".

That said, if your job is to program computers to solve complex probabilistic problems, this book is your Bible.
