Intelligence is not Artificial

by piero scaruffi

(Copyright © 2018 Piero Scaruffi)

(These are excerpts from my book "Intelligence is not Artificial")

Boring Footnote: Semantic Analysis

Semantic parsing is different from the generative approach that Chomsky pioneered. Syntactic parsing determines which word is the noun, which is the verb, and so on; i.e., it builds a tree that represents the grammatical structure of the sentence. Semantic parsing instead turns a sentence into a logical representation, for example a formula of first-order predicate logic. The advantage of this approach is that the logical representation lends itself to logical reasoning, i.e. to automated processing by the computer. In 1970 Richard Montague, a former philosophy student of Alfred Tarski, developed at UCLA a formal method for mapping natural language into first-order predicate logic. Mark Steedman at the University of Edinburgh introduced "combinatory categorial grammar", which treats verbs as functions ("Combinatory Grammars and Parasitic Gaps", 1987). Technically speaking, both employed a compositional semantics based on the lambda calculus invented in 1936 by Alonzo Church at Princeton University. Semantic parsing was applied to database queries by John Zelle and Raymond Mooney at the University of Texas, who designed the system CHILL (Constructive Heuristics Induction for Language Learning), based on the learning methods of inductive logic programming ("Learning Semantic Grammars with Constructive Inductive Logic Programming", 1993). In 2005 Luke Zettlemoyer at MIT started developing a Steedman-style learning semantic parser ("Learning to Map Sentences to Logical Form", 2005). These approaches turn an utterance directly into a logical representation.
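
For example, Montague-style compositional semantics assigns each word a function and computes the meaning of a sentence by applying those functions to each other, mirroring the syntax tree. Here is a minimal sketch in Python, using Python's own lambdas in place of Church's lambda calculus; the three-word lexicon and the string-based logical forms are illustrative assumptions, not any actual system:

    # Lexicon: each word denotes a function (toy entries).
    every = lambda noun: lambda verb: f"forall x. ({noun('x')} -> {verb('x')})"
    dog   = lambda x: f"dog({x})"
    barks = lambda x: f"barks({x})"

    # Function application mirrors the parse of "every dog barks":
    print(every(dog)(barks))  # prints: forall x. (dog(x) -> barks(x))

Note that "barks" is itself a function from an entity to a proposition, which is exactly the sense in which combinatory categorial grammar treats verbs as functions.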

Probabilistic logic has been used to represent the meaning of natural language: by Lise Getoor's student Matthias Broecheler at the University of Maryland ("Probabilistic Similarity Logic", 2010); by Raymond Mooney's team at the University of Texas, which merged Montague and Markov via Pedro Domingos' Markov logic networks ("Montague Meets Markov", 2013); and by Tom Mitchell for parsing conversations, i.e. not just one sentence at a time but an entire discourse ("Parsing Natural Language Conversations using Contextual Cues", 2017).
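
To make the Markov-logic idea concrete, here is a minimal sketch in Python under stated assumptions: first-order formulas carry weights, and the probability of a possible "world" is proportional to the exponential of the total weight of the ground formulas that the world satisfies. The two formulas, their weights and the two-person domain are hypothetical; a real system (e.g. Domingos' Alchemy) also performs learning, inference and normalization:

    import math

    people = ["anna", "bob"]

    def satisfied_weight(world):
        """Sum the weights of the ground formulas true in this world."""
        score = 0.0
        for x in people:
            # weight 1.5: Smokes(x) -> Cancer(x)
            if (not world[("Smokes", x)]) or world[("Cancer", x)]:
                score += 1.5
            for y in people:
                # weight 1.1: Friends(x,y) & Smokes(x) -> Smokes(y)
                if not (world[("Friends", x, y)] and world[("Smokes", x)]) \
                        or world[("Smokes", y)]:
                    score += 1.1
        return score

    world = {("Smokes", "anna"): True,  ("Cancer", "anna"): False,
             ("Smokes", "bob"): False,  ("Cancer", "bob"): False,
             ("Friends", "anna", "bob"): True, ("Friends", "bob", "anna"): True,
             ("Friends", "anna", "anna"): False, ("Friends", "bob", "bob"): False}

    # Unnormalized probability of this world; dividing by the sum over
    # all possible worlds would yield the actual probability.
    print(math.exp(satisfied_weight(world)))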

The parser is supposed to learn how to map natural language sentences into logical representations of their meaning. The training data may consist of sentences coupled with lambda-calculus meaning representations, and the parser is expected to build a generalization that will help it generate the logical representation of future sentences. Mooney's students built systems such as KRISP and WASP (2006) that use statistical machine learning to learn grammars. Mark Steedman's student Tom Kwiatkowski at the University of Edinburgh introduced an intermediate representation to learn language-independent grammars ("Inducing Probabilistic CCG Grammars from Logical Form with Higher-order Unification", 2010).
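
To make the training setup concrete, here is the shape of such data, in the style of the GeoQuery corpus used by Zelle and Mooney and later by Zettlemoyer (the predicate names below are illustrative):

    # Sentences paired with lambda-calculus meaning representations.
    training_pairs = [
        ("what states border texas",
         "lambda x. state(x) & borders(x, texas)"),
        ("what is the capital of ohio",
         "lambda x. capital(ohio, x)"),
    ]
    # The learner must generalize so that an unseen sentence such as
    # "what states border ohio" is mapped to
    # "lambda x. state(x) & borders(x, ohio)".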

None of these experiments has been particularly successful. Either our natural language is fundamentally not logical or we still haven't figured out its logic.
