Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw.[1] It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program".[1][a] Logic Theorist proved 38 of the first 52 theorems in chapter two of Alfred North Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them.[3]

History


In 1955, when Newell and Simon began to work on the Logic Theorist, the field of artificial intelligence did not yet exist. Even the term itself ("artificial intelligence") would not be coined until the following summer.[b]

Simon was a political scientist who had already produced classic work in the study of how bureaucracies function as well as developing his theory of bounded rationality (for which he would later win a Nobel Prize). The study of business organizations requires, like artificial intelligence, an insight into the nature of human problem solving and decision making. Simon remembers consulting at RAND Corporation in the early 1950s and seeing a printer typing out a map, using ordinary letters and punctuation as symbols. He realized that a machine that could manipulate symbols could just as well simulate decision making and possibly even the process of human thought.[5][6]

The program that printed the map had been written by Newell, a RAND scientist studying logistics and organization theory. For Newell, the decisive moment was in 1954 when Oliver Selfridge came to RAND to describe his work on pattern matching. Watching the presentation, Newell suddenly understood how the interaction of simple, programmable units could accomplish complex behavior, including the intelligent behavior of human beings. "It all happened in one afternoon," he would later say.[2][7] It was a rare moment of scientific epiphany.

"I had such a sense of clarity that this was a new path, and one I was going to go down. I haven't had that sensation very many times. I'm pretty skeptical, and so I don't normally go off on a toot, but I did on that one. Completely absorbed in it—without existing with the two or three levels consciousness so that you're working, and aware that you're working, and aware of the consequences and implications, the normal mode of thought. No. Completely absorbed for ten to twelve hours."[8]

Newell and Simon began to talk about the possibility of teaching machines to think. Their first project was a program that could prove mathematical theorems like the ones used in Bertrand Russell and Alfred North Whitehead's Principia Mathematica. They enlisted the help of computer programmer Cliff Shaw, also from RAND, to develop the program. (Newell says "Cliff was the genuine computer scientist of the three".[9])

The first version was hand-simulated: they wrote the program onto 3x5 cards and, as Simon recalled:

In January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each one became, in effect, a component of the computer program ... Here was nature imitating art imitating nature.[10]

They succeeded in showing that the program could successfully prove theorems as well as a talented mathematician. Eventually Shaw was able to run the program on the computer at RAND's Santa Monica facility.

In the summer of 1956, John McCarthy, Marvin Minsky, Claude Shannon and Nathan Rochester organized a conference on the subject of what they called "artificial intelligence" (a term coined by McCarthy for the occasion). Newell and Simon proudly presented the group with the Logic Theorist. It was met with a lukewarm reception. Pamela McCorduck writes "the evidence is that nobody save Newell and Simon themselves sensed the long-range significance of what they were doing."[11] Simon confides that "we were probably fairly arrogant about it all"[12] and adds:

They didn't want to hear from us, and we sure didn't want to hear from them: we had something to show them! ... In a way it was ironic because we already had done the first example of what they were after; and second, they didn't pay much attention to it.[13]

Logic Theorist soon proved 38 of the first 52 theorems in chapter 2 of the Principia Mathematica. The proof of theorem 2.85 was actually more elegant than the proof produced laboriously by hand by Russell and Whitehead. Simon was able to show the new proof to Russell himself who "responded with delight".[3] They attempted to publish the new proof in The Journal of Symbolic Logic, but it was rejected on the grounds that a new proof of an elementary mathematical theorem was not notable, apparently overlooking the fact that one of the authors was a computer program.[14][3]

Newell and Simon formed a lasting partnership, founding one of the first AI laboratories at the Carnegie Institute of Technology and developing a series of influential artificial intelligence programs and ideas, including the General Problem Solver, Soar, and their unified theory of cognition.

Architecture


The following is a brief overview of the program's architecture, based on Gugerty (2006).[15]

Logic Theorist is a program that performs logical processes on logical expressions.

Expressions

  • An expression is made of elements.
  • There are two kinds of memories: working and storage.
  • Each working memory contains a single element. Logic Theorist usually uses one to three working memories.
  • Each storage memory is a list representing a full expression or a set of elements. In particular, the storage memories hold all the axioms and previously proven theorems.
  • An expression is an abstract syntax tree, each node being an element with up to 11 attributes.

For example, the logical expression p ∨ q is represented as a tree whose root element represents the connective ∨. Among the attributes of the root element are pointers to the two elements representing the subexpressions p and q.
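
Logic Theorist itself was written in IPL (see below), but as a rough modern illustration, such an expression tree might be sketched as follows. The class, attribute, and variable names here are hypothetical, and only a few of the up-to-eleven attributes are shown:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Element:
        """One node of an expression tree (a hypothetical reconstruction;
        a real Logic Theorist element carried up to 11 attributes)."""
        symbol: str                        # a connective ("or", "implies", "not") or a variable ("p", "q")
        left: Optional["Element"] = None   # pointer to the left sub-expression, if any
        right: Optional["Element"] = None  # pointer to the right sub-expression, if any

    # The expression "p or q": the root element represents the connective,
    # and its attributes point to the elements for the sub-expressions p and q.
    p_or_q = Element("or", Element("p"), Element("q"))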

Processes


There are four kinds of processes, from the lowest to the highest level.

  • instruction: These are similar to assembly code. They may either perform a primitive operation on an expression in working memory, or perform a conditional jump to another instruction. An example is "put the right sub-element of working-memory 1 into working-memory 2".
  • elementary process: These are similar to subroutines: a sequence of instructions that can be called as a unit.
  • method: A sequence of elementary processes. There are just four methods:
    • substitution: given an expression, it attempts to transform it into a proven theorem or axiom by substituting variables and logical connectives.
    • detachment: given an expression b, it attempts to find a proven theorem or axiom of the form a → b′, where b′ yields b after substitution, then attempts to prove a by substitution.
    • chaining forward: given an expression a → c, it attempts to find a proven theorem or axiom of the form a → b, then attempts to prove b → c by substitution.
    • chaining backward: given an expression a → c, it attempts to find a proven theorem or axiom of the form b → c, then attempts to prove a → b by substitution.
  • executive control method: This method applies each of the four methods in turn to each theorem to be proved (a minimal sketch of this loop follows the list).
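
A minimal sketch of that control loop, with the four methods reduced to placeholder functions; the names, signatures, and logic here are illustrative only, not Logic Theorist's actual routines:

    def substitution(goal, proven):
        """Try to transform `goal` into a proven theorem or axiom; return a proof or None."""
        ...  # placeholder, not Logic Theorist's actual routine

    def detachment(goal, proven):
        """Look for a proven 'a implies b' whose b matches `goal`; the new subproblem is a."""
        ...  # placeholder

    def chaining_forward(goal, proven):
        ...  # placeholder

    def chaining_backward(goal, proven):
        ...  # placeholder

    METHODS = [substitution, detachment, chaining_forward, chaining_backward]

    def executive_control(theorems_to_prove, proven):
        """Apply each of the four methods in turn to every theorem still to be proved."""
        for goal in theorems_to_prove:
            for method in METHODS:
                proof = method(goal, proven)
                if proof is not None:   # the method produced a proof
                    proven.append(goal)
                    break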

Logic Theorist's influence on AI


Logic Theorist introduced several concepts that would be central to AI research:

Reasoning as search
Logic Theorist explored a search tree: the root was the initial hypothesis, each branch was a deduction based on the rules of logic. Somewhere in the tree was the goal: the proposition the program intended to prove. The pathway along the branches that led to the goal was a proof – a series of statements, each deduced using the rules of logic, that led from the hypothesis to the proposition to be proved.
Heuristics
Newell and Simon realized that the search tree would grow exponentially and that they needed to "trim" some branches, using "rules of thumb" to determine which pathways were unlikely to lead to a solution. They called these ad hoc rules "heuristics", using a term introduced by George Pólya in his classic book on mathematical proof, How to Solve It. (Newell had taken courses from Pólya at Stanford.)[16] Heuristics would become an important area of research in artificial intelligence, and they remain an important method for taming the combinatorial explosion of exponentially growing searches (a minimal search sketch follows this list).
List processing
To implement Logic Theorist on a computer, the three researchers developed a programming language, IPL, which used the same form of symbolic list processing that would later form the basis of McCarthy's Lisp programming language, an important language still used by AI researchers.[17][18]
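
The following sketch combines the first two ideas: a breadth-first search over the tree of deductions that prunes branches a heuristic judges unpromising. It is an illustration only, not Logic Theorist's actual algorithm; the deduce and promising functions are assumed placeholders supplied by the caller:

    from collections import deque

    def prove(axioms, goal, deduce, promising, max_steps=10_000):
        """Search from the axioms toward `goal`; return the path of deductions (a proof) or None.

        `deduce(expr)` yields the expressions reachable from `expr` in one inference step;
        `promising(expr, goal)` is the heuristic rule of thumb used to trim the search tree.
        """
        frontier = deque((axiom, [axiom]) for axiom in axioms)
        seen = set(axioms)
        for _ in range(max_steps):
            if not frontier:
                break
            expr, path = frontier.popleft()
            if expr == goal:
                return path                                    # the pathway of deductions is the proof
            for nxt in deduce(expr):
                if nxt not in seen and promising(nxt, goal):   # heuristic pruning
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None                                            # no proof found within the step limit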

Philosophical implications


Pamela McCorduck writes that the Logic Theorist was "proof positive that a machine could perform tasks heretofore considered intelligent, creative and uniquely human".[3] And, as such, it represents a milestone in the development of artificial intelligence and our understanding of intelligence in general.

Simon told a graduate class in January 1956, "Over Christmas, Al Newell and I invented a thinking machine,"[19][20] and would write:

[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.[21]

This statement, that machines can have minds just as people do, would later be named "Strong AI" by philosopher John Searle. It remains a serious subject of debate to the present day.

Pamela McCorduck also sees in the Logic Theorist the debut of a new theory of the mind, the information processing model (sometimes called computationalism or cognitivism). She writes that "this view would come to be central to their later work, and in their opinion, as central to understanding mind in the 20th century as Darwin's principle of natural selection had been to understanding biology in the nineteenth century."[22] Newell and Simon would later formalize this proposal as the physical symbol systems hypothesis.

Notes

  1. ^ Logic Theorist is usually considered the first true AI program, although Arthur Samuel's checkers program was released earlier. Christopher Strachey also wrote a checkers program in 1951.[2]
  2. ^ The term "artificial intelligence" was coined by John McCarthy in the proposal for the 1956 Dartmouth Conference. The conference is "generally recognized as the official birthdate of the new science", according to Daniel Crevier.[4]

Citations

  1. ^ a b McCorduck 2004, pp. 123–125, Crevier 1993, pp. 44–46 and Russell & Norvig 2021, p. 17
  2. ^ a b Crevier 1993, p. 44.
  3. ^ a b c d McCorduck 2004, p. 167.
  4. ^ Crevier 1993, pp. 49–50.
  5. ^ Crevier 1993, pp. 41–44.
  6. ^ McCorduck 2004, p. 148.
  7. ^ McCorduck 2004, pp. 157–158.
  8. ^ McCorduck 2004, pp. 158–159.
  9. ^ McCorduck 2004, p. 169.
  10. ^ Crevier 1993, p. 45.
  11. ^ McCorduck 2004, p. 124.
  12. ^ Crevier 1993, p. 48.
  13. ^ Crevier 1993, p. 49.
  14. ^ Crevier 1993, p. 146.
  15. ^ Gugerty, Leo (October 2006). "Newell and Simon's Logic Theorist: Historical Background and Impact on Cognitive Modeling". Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 50 (9): 880–884. doi:10.1177/154193120605000904. ISSN 2169-5067.
  16. ^ Crevier 1993, p. 43.
  17. ^ Crevier 1993, pp. 46–48.
  18. ^ McCorduck 2004, pp. 167–168.
  19. ^ Quoted in McCorduck 2004, p. 138.
  20. ^ "Libraries/UnivArchives: Mind Models Exhibit/Problem Solving Research". shelf1.library.cmu.edu.
  21. ^ Quoted in Crevier 1993, p. 46.
  22. ^ McCorduck 2004, p. 127.

References
