Since I got here, I've been wondering what exactly the Singularity's going to look like. How are we going to create artificial intelligence, and when we do, how are we going to integrate ourselves with this advanced technology? Luckily, NYU philosopher David Chalmers was there to break it all down.
Contradicting this morning's talks (and sidestepping the complications of personality quirks carried over from a copied brain), Chalmers rejected brain emulation as the path to superintelligent AI. Instead, Chalmers thinks we will have to evolve artificial intelligence by placing computer programs in a simulated environment and selecting the smartest bots. Basically, set up the Matrix, but with only artificial inhabitants. The process may take a while, but he stressed that human intelligence itself serves as proof of concept.
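The evolutionary route Chalmers describes is essentially a genetic algorithm: run a population of candidates in a simulated world, score each one, keep the smartest, and repeat. Here is a toy sketch in Python; the bit-counting fitness function is a hypothetical stand-in for whatever "intelligence test" the simulation would actually apply.

```python
import random

def fitness(genome):
    # Placeholder for an in-simulation intelligence test: here,
    # "smarter" just means more 1-bits in the genome.
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.05):
    # Start with a random population of bit-string "bots".
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the smartest half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with mutation refills the population.
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = random.choice(survivors)
            child = [bit ^ (random.random() < mutation_rate) for bit in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Selecting only the top half each generation while mutation explores variants is the simplest possible version of "favor pro-human traits while routing anti-human tendencies towards extinction": whatever the fitness function rewards survives, and everything else dies out.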
To ensure that the resultant AI adheres to pro-human values, we would have to set up a "leak-proof" world where we control what goes in, and where we can prevent any of the artificial consciousnesses from becoming aware of us too early and escaping (essentially, no red pill). We could then adjust this world to favor pro-human traits while routing anti-human tendencies toward extinction.
As Chalmers sees it, the second the artificial personalities become as smart as us, they will emulate our leap by creating AI even smarter than themselves inside their simulated world. Essentially, they will undergo their own, digital Singularity.
This will start a chain reaction that quickly leads to a digital intelligence far greater than anything we ever imagined. Unless, of course, the first AI more intelligent than us uses that additional foresight to realize creating intelligence greater than itself is a bad idea, and cuts off the entire process.
But assuming that AI does manage to get smarter than us, we will have to either integrate with it, coexist with it as a lower form of life, or pull the plug (which may or may not be genocide). Since mass murder is generally frowned upon, and no one wants to be pets to a machine, Chalmers sees integration as the only way to go. Of course, no higher intelligence would willingly integrate with a lower one (humans don't forsake language and bipedalism to live with wolves), so Chalmers said we'll have to computerize ourselves to meld with the AI system.
To sustain consciousness, he advocates physically replacing one neuron at a time with a digital equivalent, while the person is awake, so as to retain continuity of personality.
What Chalmers did not address, however, is whether the AI would want to meld with a warmongering, greedy, sex-obsessed inferior intelligence like ourselves. If the AI is really that much smarter than us, it might be more like George Wallace than the Borg, insisting on human segregation now, tomorrow, and forever rather than total assimilation. Prejudiced computers lead right back to Salamon's prediction of extinction at the hands of our creations.
Can't we all just get along?
There would be no reason to treat AI as a species. Artificial intelligence is created by man, unlike man and other creatures.
"This will start a chain reaction that quickly leads to a digital intelligence far greater than anything we ever imagined."
Any discussion on where we humans are in this process? Maybe the 'created in his image' is a leak into our world from the previous intelligent species.
Why do we assume we are the start? Given the number of artificial/virtual worlds we already create at our limited tech/intelligence level, the odds are we are ourselves a virtual world created by some greater intelligence. Maybe we are their equivalent of The Sims (great) or some war game, e.g. Risk (not so great). Maybe they are disappointed with our progress and are about to pull the plug!
Many identify the future of mankind as having to become machines, this being superior to biological form. We'll never know everything about this world or this solar system. We're constantly learning new information about things we thought we were experts in. We think we know so much, but in reality we know very little. Mankind knows a minute amount about the human brain. Biology has capabilities beyond anything we can perceive, and especially beyond those of all technology. I think of technology as a cheap copy of the true master form: biology. The human body, and biology in general, has tremendous potential yet to be discovered. You can't look at biology in the narrow ways of today, since it has an immeasurable number of possibilities beyond your wildest dreams.
Just because it’s not possible today, doesn’t mean it’s not possible tomorrow. The greatest scientific breakthroughs weren’t restricted by past judgment. Believing in yourself and changing your mindset is step one.
if ( object == HUMAN )
{
    RemoveAction( Kill );
    RemoveAction( Harm );
    RemoveAction( Rectal_Probing );
    RemoveEmotion( Hate );
    RemoveEmotion( Envy );
}
It's a little more complicated than that, but something similar could prevent any kind of robot uprising, and, for the robots' part, make integration very easy.
Include a base definition of what a human is (not just whatever a robot's observation code says one is), and, in a similar fashion, make sure all those terms are predefined before it comes out of the factory.
They can throw trucks, they can unravel the mysteries of the universe, but they can't bring themselves the level of joy that assisting a human gives them any other way.
Make them want to help us, make them happy that they were programmed in such a way, and make them completely unwilling to alter any of their base programming themselves. The biggest concern then is the human factor, because someone is going to be stupid enough to make a virus that turns these benevolent demigods into human-hating demigods.
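The restriction idea above can be sketched in runnable Python. All of the action and emotion names, and the "human" tag, are hypothetical stand-ins rather than any real robotics API; the point is that forbidden behaviors toward humans are stripped out at the factory and the restriction set itself is immutable.

```python
# Factory-time restrictions: these sets are frozen, so the robot
# cannot alter its own base programming.
FORBIDDEN_ACTIONS = frozenset({"kill", "harm", "rectal_probing"})
FORBIDDEN_EMOTIONS = frozenset({"hate", "envy"})

class Robot:
    def __init__(self, actions, emotions):
        # Restricted behaviors are never installed in the first place.
        self.actions = set(actions) - FORBIDDEN_ACTIONS
        self.emotions = set(emotions) - FORBIDDEN_EMOTIONS

    def can(self, action, target):
        # The base definition of "human" must itself be predefined;
        # here it is reduced to a simple tag on the target.
        if target == "human" and action in FORBIDDEN_ACTIONS:
            return False
        return action in self.actions

bot = Robot({"assist", "lift", "kill"}, {"joy", "hate"})
print(bot.can("assist", "human"))  # True
print(bot.can("kill", "human"))    # False
```

Using `frozenset` for the restriction lists is the "unwilling to alter base programming" part of the comment in miniature; of course, in real software nothing stops a malicious update from replacing the whole module, which is exactly the "human factor" worry raised above.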
If that line of thinking is stupid, please tell me so. I'm kind of scared that the closest thing I've been reading to that is in the Hitchhiker's Guide universe. I've always just assumed that was the common way of thinking about it, and the people that do these singularity summits and give the topic more than five minutes of thought are just a bunch of nutjobs.
I think the best way (maybe the only way) to create true AI is by imitating a real human brain.
Trying to develop AI programs in a virtual world instead, would not work because you would not know how to create such a program in the first place.
Also, some people may think a computer simulation of a real brain would not be of much use, but that is not so.
The computer simulation could be a lot smarter than the actual brain copied, because the computer can be made to run a lot faster. (Speed of thinking is important for IQ.) The simulation could also be tweaked to never forget anything it has learned, which would further increase its IQ.
Hmmm.. this actually sounds a little familiar:
A greater intelligence creates humankind and places them in a contained environment.
They have limits placed on them by the greater intelligence.
Another creation suggests they will become like God if they break their limits.
They do, and become.... separated, rebellious, anti-God or pro-God, with a choice.
Where do we go from here?
Where would AI go from there?
What is the purpose of making something with a higher intelligence than a human? Besides that, who can say that a future AI could possibly be more intelligent than a human? Thirdly, why would anyone want to make a creation smarter than themselves?
It must be the search for perfection. I honestly believe that there can never be anything of this world to be considered perfect, especially by the hands of a mere human being.
Why do we really want AI?
To solve all our problems for us. So we would not need to work, go to school, or even think hard on anything anymore.
I agree this is really selfish. (It should be no surprise, then, why the AI machines would hate us! :-)
It seems to me that any AI designed to automatically deprecate lesser intelligences is likely to remove the human species from the equation!
We should advance in neuroscience, psychology, and behavioral science to see the possibilities of the human brain and human behavior. We should concentrate on the things that are already there: our brain, creativity, intuition, thoughts, HUMAN INTERCONNECTIONS, collective mind (without losing our identity), love, feelings, care, THINGS THAT MAKE US HUMAN, etc. Search for the ways we can evolve into better beings within our own limits (and search out those limits). I think these are the first steps. A tree grows from a seed, not from the top.
Why not just use Asimov's laws of robotics?
0 A robot must not harm humanity, or, through inaction, allow humanity to come to harm.
1 A robot (AI) must not harm a human being, or allow a human being to come to harm, except where this would conflict with the zeroth law.
2 A robot (AI) must obey a human being, as long as such orders do not conflict with the zeroth or first law.
3 A robot must protect its own existence, so long as such protection does not conflict with the zeroth, first, or second law.
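As a rough illustration of how that priority ordering could be encoded, here is a minimal Python sketch. The boolean flags on each action are hypothetical; actually computing them (deciding what counts as "harm" in the real world) is the famously unsolved part of the problem.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical judgments about an action, assumed already computed.
    harms_humanity: bool = False
    harms_human: bool = False
    ordered_by_human: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    # Zeroth law outranks everything: never harm humanity.
    if action.harms_humanity:
        return False
    # First law: never harm a human being.
    if action.harms_human:
        return False
    # Second law: an order from a human must be obeyed, so an ordered
    # action that passed the higher laws is permitted even if risky.
    if action.ordered_by_human:
        return True
    # Third law: self-preservation applies only when no higher law
    # is in play, so unordered self-endangering actions are refused.
    return not action.endangers_self
```

The checks simply run in law order, so each law can only veto or permit what the laws above it have not already decided, which is exactly the "except where this would conflict with..." structure of the list.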