
Where Do You Keep the Nukes?

While I undoubtedly learned a lot at the Singularity Summit, the conference’s greatest benefit was the questions it didn’t answer. Unresolved issues regarding the Singularity have provided a lot of philosophical grist for my admittedly limited intellectual mill, and working through those problems has been as exciting as any talk I saw at the Summit.

To wrap up our coverage of the Singularity Summit, I’m going to count down my ten most vexing unanswered questions about Kurzweil’s theoretical baby, the eventual merger of human and artificial intelligence. I’m interested to hear any opinions, questions or (hopefully) answers you all have about any or all of these still unexplained facets of our future.

10. Is there just one kind of consciousness or intelligence?

During his talk, Ray Kurzweil, the pioneer of the Singularity concept, described intelligence as prediction: it evolved so that humans could look at an animal on the savanna and guess where it would go next. Clearly, computers have already surpassed us in predicting a wide range of events (chess moves, the weather, economic trends, etc.).

Of course, we know that people’s ability to predict outcomes in different fields, say, whether my girlfriend will like this or that flower better, varies so widely that these abilities effectively act as different forms of intelligence.

Assuming there are different forms of intelligence, how do we know machines won’t take on a new one that we won’t recognize as intelligence? And if there are different kinds of intelligence, are there different kinds of consciousness, too? Could a machine arrive at a new kind of consciousness that we don’t recognize, leading us to miss the Singularity?

9. How will you use your digital intelligence to kill us all?

A lot of people spent the conference worrying about our eventual extinction at the hands of our automaton creations. But for all that paranoia, no one really explained how a computer program could manage to kill me.

Will it hack into the nuclear missile command and launch all the nukes? Will it crash all the planes? And couldn’t we just pull the plug? Someone still needs to explain to me what I have to fear from a being with no physical presence.

8. Are you “Tommy”? Deaf, dumb and blind?

When the first artificial brain comes online, how can its first thought be anything other than “holy crap, I’m blind!”? A disembodied intelligence in a machine will exist with a serious lack of senses. Maybe it can see and hear, but feel? Doubtful. How does a consciousness that can’t feel keep from freaking out? I’d be pissed, and I imagine the first AI will be too. Which leads to…

7. Do you have emotions?

Can AI become depressed? The first one will no doubt be rather lonely. How will being the first (and only) member of a species affect the AI’s development and relationships? The first digital consciousness may come into the world like the only Goth kid in a small town high school: isolated and without anyone who can sympathize. Not really the kind of being I want with access to all our weapons and economic tools.

6. Are humans more similar to your AI construct than we thought?

Jürgen Schmidhuber, a computer scientist at the Dalle Molle Institute for Artificial Intelligence, noted in his talk that the human brain compresses information like a .zip file, and that we distinguish boredom from interest by how much a new piece of information lets us compress what we already know even further.
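
To make the compression idea concrete, here’s a toy Python sketch of my own (not Schmidhuber’s actual formalism, which tracks how a learner’s compressor improves over time). It uses zlib as a stand-in compressor and scores a new observation by how many bytes are saved when it’s compressed together with what you already know: data that connects to existing knowledge scores high, while random noise scores near zero.

```python
import os
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes, with zlib standing in for the brain's compressor."""
    return len(zlib.compress(data, 9))

def shared_structure(known: bytes, new: bytes) -> int:
    """Toy 'interestingness' score: bytes saved by compressing a new
    observation together with existing knowledge rather than separately.
    A crude proxy only -- Schmidhuber's real measure is the improvement
    of the compressor over time, not a one-shot comparison."""
    return (csize(known) + csize(new)) - csize(known + new)

known = b"the quick brown fox jumps over the lazy dog " * 50
patterned = b"the quick brown fox naps beside the lazy dog " * 5  # novel but related
noise = os.urandom(len(patterned))                                # pure randomness

print(shared_structure(known, patterned))  # clearly positive: connects to what we know
print(shared_structure(known, noise))      # near zero: nothing to latch onto
```

In this toy, the related sentence compresses almost for free alongside the known text, while the random bytes gain nothing, which is roughly the boring/interesting split Schmidhuber described.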

I really thought he was on to something with his description of how the brain handles the new data from the expansion of our personal experiences. Which leads me to wonder: just how computer-like is our brain already? Our brains already run software, of sorts, that results in biologically similar brains producing vastly different personalities. Is it possible the Singularity will occur not because we create machines that resemble the human brain, but because we uncover just how computer-like the human brain is naturally?

5. How much does programming influence your free will?

In the discussions about avoiding a robo-apocalypse, speaker after speaker stressed the need to teach digital consciousnesses human values. And many people wondered why we couldn’t just program the robots not to kill us. Well, presumably we would, but once computer programs achieve self-awareness and free will, couldn’t they choose not to follow that programming? Whether it’s dieting or monogamy, humans ignore their own programming all the time. What makes us think a sentient program wouldn’t similarly disregard its basic urges?

4. Do you have a subconscious?

If AI minds are as complex as human brains, does that mean they will have areas they cannot understand, control, or access? Are the id, ego, and other elements of our unconscious a consequence of biology or a necessary component of sentience? Can AI have irrational beliefs or psychological problems? If the AI thinks we’re its god, or at the very least its creator, could it have an oedipal problem? If so, that might explain why it tries to kill us.

3. Will you actually help us transcend the less pleasant aspects of being human?

As anyone who reads internet comment boards knows, for every one person who uses the web to broaden their horizons and question their prejudices, there are a dozen idiots who use the same technology to spread the claim that global warming is a hoax, compare Obama to Stalin and Hitler, and ask other idiots for money to help a Nigerian prince. In addition to granting immortality and making everyone nigh-omniscient, won’t the Singularity also provide the ultimate avenue for people to disseminate the lust, greed and hatred humans have pursued for tens of thousands of years? Forget about the AI killing us; I’m still worried about the other humans.

2. Do you care about anything at all?

Who’s to say that an intelligence vastly greater than our own won’t uncover the pointlessness of life, become a nihilist, and turn itself off? Or, what if it’s so intelligent it simply doesn’t care about humans? Everyone at the conference predicted a very needy AI, but no one could answer why the AI wouldn’t be just as likely to withdraw from humanity as engage with it.

1. And finally, what if someone threw a Singularity and no one came?

After her talk, Anna Salamon told me that the Singularity would affect everyone in the world within a span of minutes to a couple of years. As she was telling me that, I thought of these pictures.

Last year, a pilot discovered a previously uncontacted tribe living deep in the Amazon. In parts of South America, Asia and Africa, there are people whose way of life hasn’t changed much in the last 300 years, let alone the last 30. Why would the Singularity be different? Sure, I can imagine people with brain chips plugging into a higher intelligence on the Upper West Side, but how long until that technology makes it to the South Bronx? Or Somalia? Or Afghanistan?

If the Singularity only affects one small group of humans, while the rest either can’t afford it or simply don’t care to participate, what happens to the transhumanist future the Singularity promises? Doesn’t the Singularity just set humanity up for another of the rich/poor, North/South divides it already deals with? Once again, it’s the other people, not the robots, that I worry about.

Well, that’s it for our Singularity Summit 2009 coverage. I hope the conference has given you all something to think about, and as always, I can’t wait to hear what you all have to say. Thanks for following these posts, and remember: when the Singularity comes, take the blue pill. You’ll be happier.