Intelligence is not Artificial

by piero scaruffi

(Copyright © 2018 Piero Scaruffi)

(These are excerpts from my book "Intelligence is not Artificial")

Intermezzo: The GPU and the Spring of A.I.

Neural networks had fallen out of favor again in the 1990s. In fact, in 2000 the organizers of the Neural Information Processing Systems (NIPS) conference discouraged the submission of papers on neural networks while encouraging instead papers on support vector machines (SVMs).

A.I. was rescued by videogames. Videogames require very fast processing of images. An image is made of millions of pixels, and processing one pixel at a time is far too slow. Therefore the industry invented the GPU, the graphics processing unit, which processes all the pixels in parallel. It turns out that neural networks work in a similar way: the nodes of a layer behave like the pixels of an image, and should ideally be processed in parallel. Technically speaking, the weights connecting two layers of a neural network form a matrix, and both running and training the network boil down to a series of matrix multiplications.

In 2001 David McAllister's student Scott Larsen at the University of North Carolina implemented matrix multiplication on a GPU. They thus turned the GPU, originally invented for rendering images, into a fast numerical calculator, i.e. a scientific instrument. In 2005 Patrice Simard's team at Microsoft pioneered the use of GPUs specifically for machine learning. In 2007 Nvidia, the most famous GPU manufacturer, released the programming language CUDA (whose name originally stood for Compute Unified Device Architecture), an extension of the C programming language, to help programmers implement matrix multiplications on its GPUs. It was perfect timing, because Hinton and Bengio had just figured out the math to train "deep" neural networks. In 2009 Hinton's students Abdelrahman Mohamed and George Dahl at the University of Toronto seemed to show that deep belief nets were better than hidden Markov models at speech recognition ("Deep Belief Networks Using Discriminative Features for Phone Recognition", 2011), but in reality they showed that amazing results could be achieved using GPUs (in their case an Nvidia Tesla S1070). In 2010 Dan Ciresan of Schmidhuber's team at IDSIA built a nine-layer neural net on an Nvidia GTX 280 graphics processor.
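A minimal sketch of what this looks like in practice, under the assumption of a generic row-major layout and with all names and sizes purely illustrative (this is not code from any of the systems mentioned above): a CUDA kernel computes the product of two matrices by assigning one output element to each GPU thread, so that thousands of elements are computed at the same time. In a neural network, one matrix can be a batch of activations and the other the weight matrix of a layer.

  // Naive CUDA kernel: each thread computes one element of C = A * B,
  // where A is (m x k), B is (k x n) and C is (m x n), stored row-major.
  __global__ void matmul(const float *A, const float *B, float *C,
                         int m, int k, int n)
  {
      int row = blockIdx.y * blockDim.y + threadIdx.y;
      int col = blockIdx.x * blockDim.x + threadIdx.x;
      if (row < m && col < n) {
          float sum = 0.0f;
          for (int i = 0; i < k; i++)
              sum += A[row * k + i] * B[i * n + col];
          C[row * n + col] = sum;
      }
  }

  // A typical launch uses a two-dimensional grid of thread blocks, e.g.:
  //   dim3 block(16, 16);
  //   dim3 grid((n + 15) / 16, (m + 15) / 16);
  //   matmul<<<grid, block>>>(A, B, C, m, k, n);

Production libraries such as Nvidia's cuBLAS use far more sophisticated kernels, but the principle is the same: a computation that a CPU would perform element by element becomes massively parallel on the GPU.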

The GPU made multi-layered neural networks possible.
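To see concretely why, note that a forward pass through several stacked layers is just a chain of matrix multiplications, each followed by a cheap elementwise nonlinearity. The following host-side sketch reuses the hypothetical matmul kernel above and omits biases for brevity; it is only an illustration under those assumptions, not the method of any of the systems discussed.

  // Elementwise rectifier (ReLU) applied after each layer.
  __global__ void relu(float *x, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n && x[i] < 0.0f) x[i] = 0.0f;
  }

  // Forward pass of a deep network: activations[0] is the input batch
  // (batch x dims[0]); weights[l] is the (dims[l] x dims[l+1]) weight
  // matrix of layer l; all arrays already reside in GPU memory.
  void forward(float **activations, float **weights, const int *dims,
               int num_layers, int batch)
  {
      for (int l = 0; l < num_layers; l++) {
          dim3 block(16, 16);
          dim3 grid((dims[l + 1] + 15) / 16, (batch + 15) / 16);
          matmul<<<grid, block>>>(activations[l], weights[l],
                                  activations[l + 1],
                                  batch, dims[l], dims[l + 1]);
          int n = batch * dims[l + 1];
          relu<<<(n + 255) / 256, 256>>>(activations[l + 1], n);
      }
  }

Training by backpropagation runs essentially the same kind of matrix arithmetic in the reverse direction, which is why the whole procedure, not just the forward pass, benefits from the GPU.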
