A neuroscience blogger recently uncovered this BBC special on the prospects of brain-machine mergers, and what they might mean for the future.
The video special itself isn't brand new, but, since it's all about what's going to happen in a few decades, the ideas are still fresh, and the comments and insights from leading minds like Duke University's Miguel Nicolelis, MIT's Seth Lloyd and, of course, AI pioneer Ray Kurzweil, are fascinating. The piece, which intercuts short profiles of these leading thinkers with strange, dreamy scenes of little kids in a forest, is centered on the merger of man with machine, and how it might affect our world in a few generations' time. (Apparently, we'll still have trees. And jump ropes.) One of the experts says, "I believe that my children's children will be able to . . . download their thoughts, store their memories, interface with machines." Another believes they will be caught up in a technologically driven war, "and they may even be destroyed by it."
Don't worry, though, there's good news, too. And while the video is much longer than your standard YouTube clip, it's worth the watch.
It kind of reminds me of how people saw artificial intelligence way back in the '60s. We've managed some very basic AIs, so what, maybe 1980 before they can outthink humans, right? Sentience can't be so hard. So now we have machines that can read general trends in someone's brain and use those as information. Finding out what each individual signal from each of the billions of neurons in the human brain means should be easy, right? 2029 sounds realistic, doesn't it?
I've read a study that found that individual neurons are more than on/off switches, and that just reading the signals isn't the end of it. In that study, they found that the recognition of a person came down to a single neuron, regardless of the type of recognition: picture, drawing, sound, etc., all came down to one single neuron. So making a computer that can read your memories would require reading the contents of a neuron, not simply the signals it gives off.
For that matter, we're reaching the limit of Moore's law with our current technology for computing. "Shrinking" computer chips isn't a magical concept; they literally get smaller, and there's a limit before they're too small to actually work. You can't have a wire half an atom thick, for example. Theoretically, quantum computers would let us pass this barrier for a time, but theoretically you could also use a wormhole to travel to another planet. That doesn't mean it'll happen in ten years.
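To put a rough number on that limit, here's a back-of-envelope sketch in Python. The process-node size, atom size, and two-year halving cadence are all my own assumed figures, not anything from the video, so treat the output as an order-of-magnitude guess:

```python
import math

# Assumed figures for illustration only
feature_nm = 45.0      # assumed current process node, in nanometers
atom_nm = 0.2          # rough diameter of a silicon atom
halving_years = 2.0    # Moore's-law-style cadence: features halve every ~2 years

# How many halvings until one feature is a single atom wide?
halvings = math.log2(feature_nm / atom_nm)
print(f"~{halvings:.1f} halvings, i.e. roughly {halvings * halving_years:.0f} years")
# -> ~7.8 halvings, roughly 16 years: the atomic wall isn't centuries away
```

Under those assumptions the wall is a decade or two out, which is why the question of what comes after silicon matters so much to any date like 2029.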
You point out two very common reactions to Kurzweil. He gets this often, so he ends up responding to the same stuff over and over.
You can read his book if you're interested, but I'll respond to your comments real quick.
1. 2029. That's about 20 years from now. Isn't everything great going to occur about 20 years from now?
Kurzweil didn't just pick this number out of thin air, and he has consistently stuck to it for over a decade. It is based on the exponential growth in processing power that you alluded to. He does account for the fact that neurons are not simply binary on/off switches, and he goes orders of magnitude beyond that in his estimate of where we'd need to be for human-level intelligence.
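Just to illustrate why those extra orders of magnitude don't move the date much, here's a minimal sketch; the brain-ops figure, today's throughput, and the doubling time are all assumptions for illustration, not Kurzweil's exact numbers:

```python
import math

# Assumed figures for illustration only
brain_ops = 1e16        # assumed ops/sec needed for functional brain emulation
current_ops = 1e15      # assumed throughput of today's top supercomputers
doubling_years = 1.5    # assumed doubling time for compute per dollar

doublings = math.log2(brain_ops / current_ops)
print(f"{doublings:.1f} doublings -> ~{doublings * doubling_years:.0f} years out")

# Even if the brain turns out to be 1000x harder (neurons as rich analog
# devices rather than switches), exponential growth only adds ~10 doublings:
extra = math.log2(1000)
print(f"1000x harder adds only ~{extra * doubling_years:.0f} more years")
```

That's the core of his argument: under exponential growth, being off by three orders of magnitude shifts the date by about fifteen years, not by centuries.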
2. Moore's law is about to run out.
True, Moore's law (which deals specifically with integrated circuits) will run out. But before Moore's law we already had exponential growth in computing power, even with vacuum tubes and punch cards. When one paradigm dies out, another takes its place. While it's true this may not happen again, I can't see it as the most likely scenario that we suddenly stop our technological progress because we can't fit more transistors onto a chip.
There are so many other possible Moore's law successors out there besides quantum computing: carbon nanotubes, graphene, spintronics, photonics, building chips in three dimensions instead of just two, etc.
I don't know if we'll have the computational power of the human brain by 2029 or not, but I just thought I'd respond.
Well, both those points hinge on the assumption that a handy new technology will step in when we can't shrink circuits anymore (which I've heard might happen in as little as four years, although it's possible someone will figure out a way around that). Alright, this has more or less happened in the past, and it probably will in the future, but it's not going to happen on a schedule, so putting a date on it is a bit arrogant. Plus, there's no guarantee that the new technology will progress as quickly, or that there won't be problems with it; we've been out of the vacuum-tube era for a while, and maybe it'll take fifty years for another technology to be as commercially viable as the transistor is. Saying that this will happen eventually is all well and good. I just think anyone counting on it being close to possible in 20 years is going to be disappointed, like the people in the '60s who thought AI was 20 years away and are still disappointed.
I mean, the two examples of brain-interface technology here aren't nearly as inspiring as they sound.* Being able to recognize a signal designed for output is a far cry from decoding internal processes (say, displaying a game on a monitor vs. running the actual game), and the thing with the rat was neurological sleight of hand. They found where two nerves ended and rewarded the rat for following instructions. You could do that with two wires to the actual whiskers and something that dropped food pellets or injected shots of morphine (if you wanted to mimic that experiment more closely). Kinda cool, but it's not even the same sort of technology you'd need to download judo into someone's head.
*Not inspiring for brain-downloading technology, but pretty inspiring for prostheses. I'm definitely for that machine that can talk for people who're paralyzed.