Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi


(These are excerpts from, and additions to, my book "Intelligence is not Artificial")


The First Failure: Machine Translation

The discipline of Machine Translation actually predates Artificial Intelligence. Warren Weaver, then director of the Natural Sciences Division of the Rockefeller Foundation, had chaired the Applied Mathematics Panel of the US Office of Scientific Research and Development during World War II and still exerted some influence on government agencies. He had been impressed by the success of cryptography in deciphering enemy codes by using machines to analyze the frequency of letter patterns. In 1946 he discussed with the British computer pioneer Andrew Booth the possibility of using the same technique to translate languages. In March 1947 he mentioned the idea in a letter to Norbert Wiener, and finally, in July 1949, while in New Mexico, Weaver sent a memorandum simply titled "Translation" to about 30 influential colleagues, proposing to use the newly invented digital electronic computer for language translation. As he wrote: "It is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the Chinese code". That memorandum started research in machine translation as an evolution of cryptography.

Meanwhile, Harry Huskey at UCLA was building one of the earliest computers, the SWAC, and decided to use it for machine translation. In May 1949 the New York Times ran the first article ever on the new discipline, describing the SWAC as "a new type of electric brain calculating machine capable not only of performing complex mathematical problems but even of translating foreign languages".

Weaver's memorandum set several labs in motion. For example, Abraham Kaplan at the RAND Corporation published the first paper on resolving ambiguity ("An Experimental Study of Ambiguity and Context", 1950). More importantly, MIT appointed the Israeli philosopher Yehoshua Bar-Hillel to lead research on machine translation. Bar-Hillel toured all the labs in 1951 and in 1952 organized the first International Conference on Machine Translation at MIT.

In 1954 Leon Dostert's team at Georgetown University and Cuthbert Hurd's team at IBM demonstrated a machine-translation system running on an IBM 701 computer, using a vocabulary of 250 words and six rules of syntax. It was one of the first non-numerical applications of the digital computer. For example, the IBM 701 translated "Myezhdunarodnoye ponyimanyiye yavlyayetsya vazhnim faktorom v ryeshyenyiyi polyityichyeskix voprosov" into "International understanding constitutes an important factor in decision of political questions". (In 2016 Google Translate rendered it as "International understanding is an important factor in the decision of political issues".)
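
In spirit, such a system boils down to a bilingual glossary plus a handful of rearrangement and insertion rules. Here is a minimal sketch in Python of how that works; the glossary and the single insertion rule are invented for illustration and are not the actual 250-word vocabulary or the six Georgetown-IBM rules:

  # Toy sketch of 1950s-style dictionary-plus-rules translation.
  # The glossary and the insertion rule are illustrative inventions,
  # not the actual vocabulary or rules of the 1954 Georgetown-IBM demo.
  GLOSSARY = {
      "myezhdunarodnoye": "international",
      "ponyimanyiye": "understanding",
      "yavlyayetsya": "constitutes",
      "vazhnim": "important",
      "faktorom": "factor",
  }

  def translate(sentence: str) -> str:
      # Rule 1: word-by-word dictionary lookup (unknown words pass through).
      words = [GLOSSARY.get(w.lower(), w) for w in sentence.split()]
      # Rule 2: insert an English article after "constitutes", a crude stand-in
      # for the insertion rules early systems needed (Russian has no articles).
      out = []
      for i, w in enumerate(words):
          out.append(w)
          if w == "constitutes" and i + 1 < len(words):
              out.append("an" if words[i + 1][0] in "aeiou" else "a")
      return " ".join(out)

  print(translate("Myezhdunarodnoye ponyimanyiye yavlyayetsya vazhnim faktorom"))
  # -> international understanding constitutes an important factor

The brittleness of this approach, once the vocabulary grows beyond a demo, is exactly what the following decade would expose.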

William Locke and Andrew Donald Booth collected the field's early papers in "Machine Translation of Languages", published in 1955, one year before the first Artificial Intelligence conference.

Noam Chomsky's "Syntactic Structures" (1957) galvanized the field because it showed an elegant way to formalize language as a "code".

Less appreciated was a fact emphasized by Chomsky's teacher, the linguist Zellig Harris at the University of Pennsylvania: words that occur in similar contexts tend to have similar meanings. His article "Distributional Structure" (1954), in turn based on ideas already published by the structural linguists Edward Sapir at Yale University and Leonard Bloomfield at the University of Chicago, anticipated ideas of deep learning and support vector machines. The British linguist John Rupert Firth quipped: "You shall know a word by the company it keeps" (1957). This was the prehistory of "vector semantics": the idea that the meaning of a word is determined by the distribution of the words that surround it. Trivia: Harris' wife was the Princeton physicist Bruria Kaufman, who was Albert Einstein's main assistant.
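
The distributional idea lends itself to a very simple sketch: represent each word by the counts of the words that co-occur with it within a small window, and compare those count vectors. The toy corpus, the window size and the variable names below are arbitrary illustrations, not anything taken from Harris or Firth:

  # Minimal sketch of vector semantics: a word is represented by the counts
  # of its neighboring words, and words with similar neighbors end up with
  # similar vectors. Corpus and window size are invented for illustration.
  from collections import Counter, defaultdict
  import math

  corpus = [
      "the cat drinks milk",
      "the dog drinks water",
      "the cat chases the dog",
  ]

  window = 2  # neighbors within this distance count as "context"
  contexts = defaultdict(Counter)
  for sentence in corpus:
      words = sentence.split()
      for i, w in enumerate(words):
          for j in range(max(0, i - window), min(len(words), i + window + 1)):
              if j != i:
                  contexts[w][words[j]] += 1

  def cosine(a, b):
      dot = sum(a[k] * b[k] for k in set(a) & set(b))
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  # "cat" and "dog" keep similar company, so their vectors are close;
  # "cat" and "milk" do not.
  print(cosine(contexts["cat"], contexts["dog"]))   # ~0.87
  print(cosine(contexts["cat"], contexts["milk"]))  # ~0.20

Replace the raw counts with dense vectors learned by a neural network and this is, in essence, the "word embedding" idea that deep learning would rediscover half a century later.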

In 1958 Bar-Hillel himself published a "proof" that machine translation is impossible without common-sense knowledge.

Accordingly, in 1959 the philosopher Silvio Ceccato started a project in Italy funded by the US military, and in 1961 published his theory in the book "Linguistic Analysis and Programming for Mechanical Translation". Unfortunately, Ceccato's machine was destroyed in 1965 by communist demonstrators.

Peter Toma started working on machine translation at Caltech in 1956 and in 1958 moved to Georgetown University. In 1964 he demonstrated his Russian-to-English machine-translation software, SYSTRAN.

Philip Stone at Harvard developed the General Inquirer (running on an IBM 7090 in 1961), the archetype of what would later be called "sentiment analysis" in text understanding. But there was little opinionated text available before the explosion of user-generated content on the World Wide Web, so sentiment analysis didn't make much progress until the 2000s. Sentiment analysis had been pioneered by (of all people) the novelist Kurt Vonnegut: his 1946 master's thesis in anthropology, rejected by the University of Chicago, spoke of the "emotional arc of a story".

In 1961 Melvin Maron, a philosopher working at the RAND Corporation, proposed a statistical approach to analyzing language (technically speaking, a "naive Bayes classifier"), an approach that was initially ignored by the linguistic community.
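
Such a classifier picks the category that maximizes the prior probability of the category times the product of the probabilities of each word given that category, treating the words as independent (hence "naive"). A minimal sketch with Laplace smoothing follows; the two categories, the training sentences and the variable names are illustrative, not Maron's own data:

  # Hand-rolled sketch of a naive Bayes text classifier: choose the class
  # that maximizes P(class) * product of P(word|class).
  # The tiny training set is invented for illustration.
  import math
  from collections import Counter, defaultdict

  train = [
      ("politics", "the minister gave a speech on foreign policy"),
      ("politics", "parliament voted on the new policy"),
      ("sports",   "the team won the match with a late goal"),
      ("sports",   "the coach praised the team after the match"),
  ]

  class_counts = Counter()
  word_counts = defaultdict(Counter)
  vocab = set()
  for label, text in train:
      class_counts[label] += 1
      for w in text.split():
          word_counts[label][w] += 1
          vocab.add(w)

  def classify(text: str) -> str:
      best, best_score = None, float("-inf")
      for label in class_counts:
          # log prior of the class
          score = math.log(class_counts[label] / sum(class_counts.values()))
          total = sum(word_counts[label].values())
          for w in text.split():
              # Laplace-smoothed log likelihood under the "naive" independence assumption
              score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
          if score > best_score:
              best, best_score = label, score
      return best

  print(classify("the parliament debated the policy"))  # -> politics
  print(classify("the team scored a goal"))             # -> sports

Crude as the independence assumption is, this kind of classifier later became a standard baseline for text categorization and spam filtering.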

In 1962 IBM demonstrated (at the World's Fair in Seattle) the first speech-recognition device, Shoebox, developed by William Dersch at IBM's San Jose laboratories.

Mortimer Taube, inventor of the most popular indexing and retrieval method for libraries, wrote in his book "Computers and Common Sense" (1961) that something can be automated only after it is formalized. First you turn a process into mathematics, then you can build a machine that performs that process. He argued, however, that formalizing human language makes little sense because a formalized language is a code, not the language that we speak.

The first practical implementations of natural language processing were conversational agents such as Daniel Bobrow's Student (1964), Joseph Weizenbaum's Eliza (1966) and Terry Winograd's SHRDLU (1972), all from MIT, as well as LUNAR (1973), built by William Woods at nearby Bolt Beranek and Newman to answer questions about moon rocks.

Stanford psychiatrist Kenneth Colby developed Parry, and during the International Conference on Computer Communications of October 1972 in Washington, Vint Cerf (who two years later would publish the TCP protocol) staged the first chatbot-to-chatbot conversation ever: Stanford and MIT ran, respectively, Parry and Eliza over the Arpanet (soon to be renamed the Internet).

Alas, in 1966 an advisory committee, the Automatic Language Processing Advisory Committee (ALPAC), featuring linguists from Harvard University, Cornell University, the University of Chicago and the Carnegie Institute of Technology, as well as David Hays of the RAND Corporation, and led by John Pierce of Bell Labs, issued a report titled "Computers in Translation and Linguistics" (the "ALPAC Report") that caused a dramatic reduction in funding for machine-translation programs. (For the record, Pierce was an engineer who in 1946 worked with Claude Shannon and Bernard Oliver on "pulse-code modulation" (PCM), without which we wouldn't have digital audio in computers, who in 1947 supervised the trio that invented the transistor, and who in fact gave the transistor its name.)

"Science progresses one funeral at a time" (Max Planck)


