The General Problem Solver
One of the very first attempts at AI came in 1956, when Allen Newell and Herbert A. Simon (Figure 1.3) created a computer program they called the General Problem Solver. This program was designed to solve any problem that could be presented in the form of mathematical formulas.
One of the key parts of the General Problem Solver was what Newell and Simon called the physical symbol system hypothesis (PSSH). They argued that symbols were the key to general intelligence. If you could get a program to connect enough of these symbols, you would have a machine that behaved in a way similar to human intelligence.
Symbols play a big role in how we interact with the world. When we see a stop sign, we know to stop and look for traffic. When we see the word cat, we know that it represents a small furry feline that meows. If we see a chair, we know it’s an object to sit in. When we see a sandwich, we know it’s something to eat, and we may even feel hungry.
Newell and Simon argued that creating enough of these connections would make machines behave more like us. They thought a key part of human reasoning was simply connecting symbols, and that our language, ideas, and concepts were broad groupings of interconnected symbols (Figure 1.4).
Figure 1.4 Interconnected symbols
But not everyone bought into this idea. In 1980, philosopher John Searle argued that merely connecting symbols could not be considered intelligence. To counter the claim that computers think, or at least might someday be able to think, he devised a thought experiment called the Chinese room argument (Figure 1.5).
In this thought experiment, imagine yourself, an English-only speaker, locked in a windowless room with a narrow slot in the door through which you can pass notes. You have a book filled with long lists of statements in Chinese, a floor covered in Chinese characters, and instructions stating that if you're given a certain sequence of Chinese characters, you are to respond with the corresponding statement from the book.
Someone outside the room who speaks fluent Chinese writes a note on a sheet of paper and passes it to you through the slot in the door. You have no idea what it says. You go through the tedious process of searching your book for the statement that corresponds to the sequence of Chinese characters on the note. Using the characters from the floor, you assemble that statement on a sheet of paper and pass it through the slot to the person who gave you the original message.
The native Chinese speaker who passed you the note believes that the two of you are conversing and that you’re intelligent. However, Searle argues that this is far from intelligence because you can’t speak Chinese, and you have no understanding of the notes you’re receiving or sending.
You can try a similar experiment with your smartphone. If you ask Siri or Cortana how she's feeling, she's likely to say she's feeling fine, but that doesn't mean she's actually feeling fine, or feeling anything at all. She doesn't even understand the question. She's simply matching your question against a set of acceptable answers and choosing one.
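To make the idea concrete, here is a minimal sketch in Python of this kind of symbol matching. The questions and canned replies are purely hypothetical; the point is that the program looks up a response without understanding either side of the exchange, much like the person in the Chinese room.

```python
# A minimal, hypothetical sketch of symbol matching: the program maps an
# incoming question (a sequence of symbols) to a canned response, with no
# understanding of what either side means.
canned_responses = {
    "how are you feeling?": "I'm feeling fine, thanks for asking.",
    "what is your name?": "I'm a simple rule-based assistant.",
}

def respond(question: str) -> str:
    # Normalize the input and look it up. Anything unrecognized falls back
    # to a default reply, just as the person in the Chinese room can only
    # act on sequences that appear in the book.
    return canned_responses.get(
        question.strip().lower(),
        "I don't have a response for that.",
    )

print(respond("How are you feeling?"))   # I'm feeling fine, thanks for asking.
print(respond("Do you understand me?"))  # I don't have a response for that.
```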
A key drawback of matching symbols is what's referred to as the combinatorial explosion: the number of possible symbol combinations grows so quickly that matching becomes increasingly difficult. Just imagine the variety of questions people can ask and all the different responses to a single question. In the Chinese room example, you'd need an ever-growing book of possible inputs and outputs, and it would take you longer and longer to find the correct response.
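As a rough, hypothetical illustration, the sketch below counts how many distinct symbol sequences are possible as their length grows, assuming an invented vocabulary of 1,000 symbols; none of these numbers come from the text.

```python
# A rough illustration of the combinatorial explosion. With a vocabulary of
# 1,000 symbols (an invented figure), the number of possible sequences of
# length n is 1,000 ** n, so a lookup book of prepared responses grows far
# faster than anyone could write or search it.
vocabulary_size = 1_000

for length in range(1, 6):
    combinations = vocabulary_size ** length
    print(f"sequences of length {length}: {combinations:,}")
# sequences of length 1: 1,000
# sequences of length 2: 1,000,000
# ...
# sequences of length 5: 1,000,000,000,000,000
```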
Even with these challenges, symbol matching remained the cornerstone of AI for 25 years. However, it was unable to keep up with the growing complexity of AI applications. Early machines had trouble matching all the possibilities, and even when they could, the process took too much time.