Even before Stephen Wolfram took the stage, he drew the largest applause of the conference so far. As the creator of Mathematica and Wolfram Alpha, and author of A New Kind of Science, Wolfram stands almost as tall as Kurzweil himself in the eyes of the audience. His pronouncements carry more weight than those of most of the conference's other speakers, which is why I felt relieved when Wolfram dismissed worries about our extinction at the hands of sentient robots and instead focused on a very different concept of the role AI will play in our future.
Since I got here, I've been wondering what exactly the Singularity's going to look like. How are we going to create artificial intelligence, and when we do, how are we going to integrate ourselves with this advanced technology? Luckily, NYU philosopher David Chalmers was there to break it all down.
Ray Kurzweil's concept of the Singularity rests on two axioms: that computers will become more intelligent than humans, and that humans and computers will merge, allowing us access to that increased thinking power. So it only makes sense to begin the conference with discussions of those two fundamental concepts. No one disputed the emergence of intelligence beyond our own, but they did give me plenty of reasons to worry about how that process might take place.
Ray Kurzweil wasn't like the other nice, Jewish boys he grew up with in Queens. While they were putting baseball cards in the spokes of their bikes, Ray was writing computer programs and shaking hands with the President. Now, those other kids from the neighborhood are doctors and lawyers, and Kurzweil is a techno-prophet whose book, The Singularity Is Near: When Humans Transcend Biology, changed our discourse on technology with its bold predictions about the coming merger between man and machine.
The long-awaited robot-led holocaust may happen any day now. That seems to be the finding of a secret conference of the world's top computer scientists, roboticists, and artificial intelligence researchers. The clandestine meeting focused on advancements in robotics and how they could quickly spiral out of human control. This includes the danger that robots could autonomously kill humans -- a danger that conference participants believe may already exist.