Singularity Summit 2009: Open The Pod Bay Door, HAL

Ray Kurzweil’s concept of the Singularity rests on two axioms: that computers will become more intelligent than humans, and that humans and computers will merge, allowing us access to that increased thinking power. So it only made sense to begin the conference with discussions of those two fundamental concepts. No one disputed that intelligence beyond our own will emerge, but the speakers gave me plenty of reasons to worry about how that emergence might take place.

According to Anna Salamon, a former NASA researcher who now works for the Singularity Institute for Artificial Intelligence, which hosts the conference, artificial intelligence greater than our own is both inevitable and dangerous. Salamon argued that biological brains have finite intellectual capacity. Just as a goldfish can’t appreciate opera and a cat can’t learn quantum mechanics, so too will humans soon confront problems beyond the comprehension of our slimy, mortal brains.

She believes we will create supercomputers to solve those problems for us. Just as relatively weak human muscles can work together to build stronger lifting machines like cranes, relatively stupid human brains can design vastly more powerful computer minds. Unfortunately, Salamon worries that if humans and AI have divergent goals, we could find ourselves competing with the AI for the resources needed to achieve those goals. And when you compete with something vastly smarter than yourself, you lose. She stressed that ensuring humanity and AI share the same goals requires a level of care and responsibility greater than even our stewardship of nuclear weapons technology.

To head off the Skynet takeover, Salamon advocates starting now to ensure that positive, human-assisting missions get hardwired into the basic architecture of artificial intelligence.

But according to philosopher Anders Sandberg, the nature of artificial intelligence development may complicate the embedding of those fail-safes. Sandberg believes that engineers will have to base their first attempts at AI on the only existing example of natural intelligence: the human brain.

And if the first artificial intelligence has to take the form of a human brain, it has to take the form of a particular human brain. Sandberg noted that the first artificial brain, as a copy of a specific human brain, would necessarily contain elements of the personality of the test subject it copied, traits that could become locked into all artificial intelligence as the initial AI software proliferates.

Based on my experience with people who volunteer for scientific tests, this means the first artificial intelligence will most likely have the personality of a half-stoned, cash-strapped college student. So if both Salamon and Sandberg prove right, I think avoiding destruction at the hands of artificial intelligence could mean convincing a computer hardwired for a love of Asher Roth, keg stands, and pornography to concentrate on helping mankind.

Take home message: as long as we keep letting our robot overlord beat us at beer pong, we just might make it out of the Singularity alive.

And remember to check back soon for more Singularity Summit 2009 updates.