In 1955, in a small room at Dartmouth College, four scientists proposed that if 10 researchers dedicated a summer to building machines that could learn, they could make a considerable dent in a new field, which they called “artificial intelligence.”
Marvin Minsky, who passed away at age 88 on Sunday evening, was one of those four researchers. In the 60 years since then, the industries of computer science and machine learning have grown tremendously.
Minsky proceeded to found MIT’s Artificial Intelligence Laboratory, which is still a hub of A.I. research. There, he is remembered not only for his life’s achievements and diversions, which span building the first neural network simulator and advising on the movie 2001: A Space Odyssey, but also for his vision of what artificial intelligence should be.
Joscha Bach, an MIT researcher who worked with Minsky over the last year, wrote to Popular Science that Minsky was a great thinker not only in computer science and mathematics, but in how we understand the mind. Minsky’s research was a computational application of a theory of the human mind.
“Marvin talked in riddles that made perfect sense, were always profound and often so funny that you would find yourself laughing days later. His genius was so self-evident that it defined ‘awesome,’” said Nicholas Negroponte, co-founder of MIT’s Media Lab, in a statement by the university.
Even in Popular Science, Minsky’s sharp sense of humor shone through. In a June 1995 article about fellow MIT roboticist Rodney Brooks’ project to build a humanoid robot, Minsky hijacks the story, saying, “I’ll help out whenever they have an interesting idea. That could happen aaa-ny day now.” (Minsky did not end up working directly on the project.)
But even as a central figure in the field of A.I. research, Minsky often criticized the choice of problems that many modern researchers have been tackling.
“My impression is that the last ten years has shown very little growth in artificial intelligence. It’s been mostly attempting to improve systems that aren’t very good, and haven’t improved much in two decades. The 1950s and 60s were wonderful. Something new every week,” Minsky told MIT Technology Review in October.
“We have to get rid of the big companies and go back to giving support to individuals who have new ideas, because the attempt to commercialize the existing things haven’t worked very well. Big companies and bad ideas don’t mix very well,” he said, chuckling.
Despite modern artificial intelligence models’ ability to solve simple problems, Minsky is right that most of the underlying ways we tackle these problems haven’t changed. In fact, the neural networks used by Facebook and Google today are larger, more complex versions of the system he created in 1954.
Minsky was awarded the A.M. Turing Award, the highest accolade in computer science, in 1969, along with a great number of other honors listed by MIT. He also wrote a number of formative books on intelligence and machine learning, including Perceptrons, a mathematical analysis of early artificial neural networks, The Society of Mind, a model of human intelligence, and The Emotion Machine, a review of and commentary on popular theories of intelligence.
Minsky’s legacy is one intricately intertwined with humanity’s quest to replicate our own minds, our very method of thinking. And although it’s uncertain whether we will ever reach that benchmark, we press on along the path that Minsky laid before us.
He certainly thought it was possible.
“How long this takes will depend on how many people we have working on the right problems,” he said.