Singularity Summit 2009: Just How’s This Thing Gonna Work, Anyways?
Since I got here, I’ve been wondering what exactly the Singularity’s going to look like. How are we going to create artificial intelligence, and when we do, how are we going to integrate ourselves with this advanced technology? Luckily, NYU philosopher David Chalmers was there to break it all down.
Contradicting this morning’s talks, and sidestepping the problem of personality quirks carried over from a copied brain, Chalmers rejected brain emulation as the path to superintelligent AI. Instead, Chalmers thinks we will have to evolve artificial intelligence by planting computer programs in a simulated environment and selecting the smartest bots. Basically, set up the Matrix, but with only artificial inhabitants. The process may take a while, but he stressed that human intelligence, itself a product of evolution, serves as proof of concept.
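The evolve-and-select loop Chalmers describes is essentially a genetic algorithm. A toy sketch of the idea, with bit strings standing in for agents and a made-up fitness score standing in for "intelligence" (every name and number here is illustrative, not anything from the talk):

```python
import random

GENOME_LEN = 20   # each "agent" is just a bit string
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # Placeholder "intelligence" score: count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve():
    random.seed(0)  # fixed seed so the run is repeatable
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the "smartest" half, refill with mutated copies of survivors.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(fitness(g) for g in pop)

print(evolve())
```

Because the survivors are carried over unchanged each generation, the best score never decreases, which is the "selecting the smartest bots" step; the open question Chalmers glosses over is what the real fitness function for intelligence would even be.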
To ensure that the resulting AI adheres to pro-human values, we would have to set up a “leak-proof” world where we control what goes in, and can prevent the artificial consciousnesses from becoming aware of us too early and escaping (essentially, no red pill). We could then adjust this world to favor pro-human traits while steering anti-human tendencies toward extinction.
As Chalmers sees it, the second the artificial personalities become as smart as us, they will emulate our leap by creating AI even smarter than themselves inside their simulated world. Essentially, they will undergo their own, digital Singularity.
This will start a chain reaction that quickly leads to a digital intelligence far greater than anything we ever imagined. Unless, of course, the first AI more intelligent than us uses that additional foresight to realize creating intelligence greater than itself is a bad idea, and cuts off the entire process.
But assuming that AI does manage to get smarter than us, we will have to either integrate with it, coexist with it as a lower form of life, or pull the plug (which may or may not be genocide). Since mass murder is generally frowned upon, and no one wants to be pets to a machine, Chalmers sees integration as the only way to go. Of course, no higher intelligence would willingly integrate with a lower one (humans don’t forsake language and bipedalism to live with wolves), so Chalmers said we’ll have to computerize ourselves to meld with the AI system.
To sustain consciousness, he advocates physically replacing one neuron at a time with a digital equivalent, while the person is awake, so as to retain continuity of personality.
What Chalmers did not address, however, is whether the AI would want to meld with a warmongering, greedy, sex-obsessed inferior intelligence like ourselves. If the AI is really that much smarter than us, it might be more like George Wallace than the Borg, insisting on human segregation now, tomorrow and forever rather than total assimilation. Prejudiced computers lead right back to Salamon’s prediction of extinction at the hands of our creations.
Can’t we all just get along?