While I undoubtedly learned a lot at the Singularity Summit, the conference's greatest benefit was the questions it didn't answer. Unresolved issues regarding the Singularity have provided a lot of philosophical grist for my admittedly limited intellectual mill, and working through those problems has been as exciting as any talk I saw at the Summit.
To wrap up our coverage of the Singularity Summit, I'm going to count down my ten most vexing unanswered questions about Kurzweil's theoretical baby, the eventual merger of human and artificial intelligence. I'm interested to hear any opinions, questions or (hopefully) answers you all have about any or all of these still-unexplained facets of our future.
The Singularity Summit drew a wide range of people from around the globe. There were technology companies hoping to spread brand recognition, quasi-spiritual sojourners looking for a new clue to the secret of immortality, and serious academics interested in the cutting edge of artificial intelligence.
We asked them whether they were looking forward to the Singularity's hypothesized robot takeover.
Welcome to the main event.
At the end of a day filled with many interesting, thought-provoking talks (and a few that gave me some much-needed sleep), the audience at the Singularity Summit 2009 sat content but exhausted. After all, contemplating the future of humanity really takes it out of you.
Then came Kurzweil. He's the man everyone came to see, and they greeted him appropriately. After the standing ovation died down, the auditorium reached its quietest point yet, as the collected skeptics, crazies and disciples waited to hear from the first prophet of the Singularity.
Even before Stephen Wolfram took the stage, he drew the largest applause of the conference so far. As the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science, Wolfram stands almost as tall as Kurzweil himself in the eyes of the audience. His pronouncements carry more weight than those of most of the conference's other speakers, which is why I felt relieved when Wolfram dismissed worries about our extinction at the hands of sentient robots and instead focused on a very different concept of what role AI will play in our future.
Since I got here, I've been wondering what exactly the Singularity's going to look like. How are we going to create artificial intelligence, and when we do, how are we going to integrate ourselves with this advanced technology? Luckily, NYU philosopher David Chalmers was there to break it all down.
Ray Kurzweil's concept of the Singularity rests on two axioms: that computers will become more intelligent than humans, and that humans and computers will merge, allowing us access to that increased thinking power. So it only makes sense to begin the conference with discussions of those two fundamental concepts. No one disputed the emergence of intelligence beyond our own, but they did give me plenty of reasons to worry about how that process might take place.
Ray Kurzweil wasn't like the other nice, Jewish boys he grew up with in Queens. While they were putting baseball cards in the spokes of their bikes, Ray was writing computer programs and shaking hands with the President. Now, those other kids from the neighborhood are doctors and lawyers, and Kurzweil is a techno-prophet whose book, The Singularity Is Near: When Humans Transcend Biology, changed our discourse on technology with its bold predictions about the coming merger between man and machine.
When first-person-shooter video games first hit the market, the computer-controlled bot characters deployed in multiplayer matches to fill out the ranks ran around like the Keystone Cops. Now, the bots do a bit better, but not nearly well enough for the people behind the BotPrize.
Ask anyone who's ever talked back to their GPS navigation system: Product developers are pretty good at using technology to humanize inanimate objects. But how would you like it if your car responded to your presence -- lighting up with delight or panting like a pet dog? What if, more helpfully, it recognized your touch on the steering wheel, and queued up your favorite MP3s and set your seating position just the way you liked it?
With the development of killer drones, it seems like everyone is worrying about killer robots. Now, as if that wasn't bad enough, we need to start worrying about lying, cheating robots as well.
In an experiment run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, robots that were designed to cooperate in searching out a beneficial resource and avoiding a poisonous one learned to lie to each other in an attempt to hoard the resource. Picture a robo-Treasure of the Sierra Madre.