Even before Stephen Wolfram took the stage, he evoked the largest applause of the conference so far. As the creator of Mathematica and Wolfram Alpha, and author of A New Kind of Science, Wolfram stands almost as tall as Kurzweil himself in the eyes of the audience. His pronouncements carry more weight than most of the conference’s other speakers, which is why I felt relieved when Wolfram dismissed worries about our extinction at the hands of sentient robots and instead focused on a very different vision of the role AI will play in our future.
Wolfram belongs to a camp of mathematicians who believe that fundamental programs, like the rules that generate fractals or the Fibonacci sequence, underlie all the behavior of our universe, as well as many phenomena that could never exist under our physics. Unlike mathematical equations, many of which humans derived simply to explain the narrow set of observations made over the last few thousand years, the computations Wolfram identifies as embedded in reality exist independently of our observation.
He calls the total set of all possible programs the “Computational Universe”. By running mathematical experiments, examining the natural world and decoding the behavior of reality, mathematicians and scientists explore this universe, uncovering programs new to humanity, but not new to the universe.
In this intellectual construct, humans don’t write new programs (like, say, a method for generating random numbers), they merely uncover programs that have always existed.
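Wolfram's canonical example of such a "found" program is his Rule 30 cellular automaton: three trivial neighbor rules that, starting from a single cell, produce output irregular enough that its center column has reportedly served as a basis for Mathematica's random number generation. A minimal Python sketch of the idea (illustrative only, not code from the talk):

```python
def rule30_center_column(steps):
    """Evolve Rule 30 from a single live cell and return the center
    column, the part Wolfram points to as a source of random-looking bits."""
    cells = {0: 1}  # sparse row: position -> 0 or 1
    bits = []
    for t in range(steps):
        bits.append(cells.get(0, 0))
        new = {}
        # the pattern can grow by at most one cell per side per step
        for x in range(-t - 1, t + 2):
            left = cells.get(x - 1, 0)
            center = cells.get(x, 0)
            right = cells.get(x + 1, 0)
            # Rule 30: new cell = left XOR (center OR right)
            new[x] = left ^ (center | right)
        cells = new
    return bits

# First bits of the famously irregular center column
print(rule30_center_column(16))
```

Nothing about those three lines of update logic was "designed" to produce randomness, which is exactly Wolfram's point: the behavior was there to be discovered, not built.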
“Mathematicians are more like photographers than painters,” said Wolfram. “They frame observations, rather than building things up one brush stroke at a time.”
These programs, the majority of which remain undiscovered, provide the raw material for new computer software and new technology. Wolfram likened these programs to minerals like crystals and magnetic metals. They always existed in the Earth, but only recently did humans begin extracting them and integrating them into their technology.
Wolfram described a world where scientists and mathematicians mine the Computational Universe for new programs, with their experiments serving as the test wells for this new, intellectual resource. Until, of course, we reach the limits of our feeble biological brains.
Just as the mining industry had to switch from pickaxe-swinging Cornishmen to steam-powered digging machines, so too will the computational wildcatters design computers to mine the universe for programs beyond human understanding.
This perfectly mirrors Salamon’s description of the rise of AI from earlier. And Wolfram agrees, identifying the switch to computerized computational mining as the catalyst for the emergence of artificial intelligence.
However, unlike most of the other speakers, Wolfram isn’t really concerned that this AI will immediately threaten our extinction. After all, the program exists only to find new knowledge. How would killing us help with that goal? In Wolfram’s vision of the future, artificial intelligence is the ultimate nerd, staying inside and studying all the time rather than going out, getting drunk and causing mayhem.
Not everyone at the conference bought this idea of a benign artificial intelligence, and people hounded Wolfram after his talk, demanding to know what he’s doing to ensure that the AI he’s helping create won’t kill us all. In the face of such extreme paranoia, I began to wonder whether this pervasive fear of AI-led extermination reflects human intelligence’s own inability to imagine a consciousness without the aggressive drive to destroy that humanity itself displays, rather than a genuine, logical fear that technology may outpace our ability to control it.
Or maybe it’s both, and our pervasive fear is the best example of why we should be so worried.
Next up, the Don himself.