IN THE SUMMER of 1956, a small group of computer science pioneers convened at Dartmouth College to discuss a new concept: artificial intelligence. The vision, in the meeting’s proposal, was that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Ultimately, they presented just one operational program, stored on computer punch cards: the Logic Theorist.

Many have called the Logic Theorist the first AI program, though that description was debated then and still is today. The Logic Theorist was designed to mimic human skills, but there's disagreement about whether the invention actually mirrored the human mind and whether a machine really can replicate the insightfulness of our intelligence. Still, science historians view the Logic Theorist as the first program to simulate how humans use reason to solve complex problems, and as one of the first written for a digital computer. It was written in a new language, the Information Processing Language, and coding it meant strategically punching holes in pieces of paper to be fed into a computer. In just a few hours, the Logic Theorist proved 38 of 52 theorems in Principia Mathematica, a foundational text of mathematical reasoning.

The Logic Theorist’s design reflects its historical context and the mind of one of its creators, Herbert Simon, who was not a mathematician but a political scientist, explains Ekaterina Babintseva, a historian of science and technology at Purdue University. Simon was interested in how organizations could enhance rational decision-making. Artificial systems, he believed, could help people make more sensible choices. 

“The type of intelligence the Logic Theorist really emulated was the intelligence of an institution,” Babintseva says. “It’s bureaucratic intelligence.” 

But Simon also thought there was something fundamentally similar between human minds and computers, in that he viewed them both as information-processing systems, says Stephanie Dick, a historian and assistant professor at Simon Fraser University. While consulting at the RAND Corporation, a nonprofit research institute, Simon met computer scientist and psychologist Allen Newell, who became his closest collaborator. Inspired by mathematician George Pólya, who taught heuristic problem-solving, the pair aimed to replicate Pólya's discovery-oriented approach to reasoning in a machine.

This stab at human reasoning was written into a program for JOHNNIAC, an early computer built by RAND. The Logic Theorist proved Principia’s mathematical theorems through what its creators claimed was heuristic deductive methodology: It worked backward, making minor substitutions to possible answers until it reached a conclusion equivalent to what had already been proven. Before this, computer programs mainly solved problems by following linear step-by-step instructions. 
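The backward-working method described above can be sketched in modern code. This is only an illustrative toy, not the original IPL program: the "axioms" and rewrite rules below are invented placeholders, and the search simply substitutes subexpressions until the goal matches something already proven.

```python
from collections import deque

# Hypothetical stand-ins for already-proven statements and rewrite rules;
# Principia's actual axioms and the Logic Theorist's rules were far richer.
KNOWN = {"p or not p"}
REWRITES = [
    ("not not p", "p"),              # double negation
    ("p implies q", "not p or q"),   # definition of implication
]

def prove(goal, max_steps=100):
    """Search backward from `goal`, substituting until a known theorem appears."""
    frontier = deque([(goal, [goal])])
    seen = {goal}
    while frontier and max_steps > 0:
        max_steps -= 1
        expr, path = frontier.popleft()
        if expr in KNOWN:
            return path  # the chain of substitutions ending in a known result
        for old, new in REWRITES:
            if old in expr:
                candidate = expr.replace(old, new, 1)
                if candidate not in seen:
                    seen.add(candidate)
                    frontier.append((candidate, path + [candidate]))
    return None  # no proof found within the step budget

print(prove("not not p or not p"))
```

The key contrast with earlier programs is in the loop: instead of executing one fixed sequence of instructions, the search tries transformations and keeps only the paths that lead somewhere, which is roughly what "working backward by substitution" means here.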

The Logic Theorist was a breakthrough, says Babintseva, because it was the first program in symbolic AI, which uses symbols or concepts, rather than data, to train AI to think like a person. It was the predominant approach to artificial intelligence until the 1990s, she explains. More recently, researchers have revived another approach considered at the 1950s Dartmouth conference: mimicking our physical brains through machine-learning algorithms and neural networks, rather than simulating how we reason. Combining both methods is viewed by some engineers as the next phase of AI development.  

The Logic Theorist’s contemporary critics argued that it didn’t actually channel heuristic thinking, which includes guesswork and shortcuts, and instead showed precise trial-and-error problem-solving. In other words, it could approximate the workings of the human mind but not the spontaneity of its thoughts. The debate over whether this kind of program can ever match our brainpower continues. “Artificial intelligence is really a moving target,” Babintseva says, “and many computer scientists would tell you that artificial intelligence doesn’t exist.”
