A pair of brain-inspired cognitive computer chips unveiled today could be a new leap forward — or at least a major fork in the road — in the world of computer architecture and artificial intelligence.
About a year ago, we told you about IBM’s project to map the neural circuitry of a macaque, the most complex brain networking project of its kind. Big Blue wasn’t doing it just for the sake of science — the goal was to reverse-engineer neural networks, helping pave the way to cognitive computer systems that can think as efficiently as the brain. Now they’ve made just such a system — two, actually — and they’re calling them neurosynaptic chips.
Built on a 45-nanometer silicon-on-insulator CMOS process, both chips have 256 neurons. One chip has 262,144 programmable synapses and the other contains 65,536 learning synapses — which can remember and learn from their own actions. IBM researchers have used the compute cores for experiments in navigation, machine vision, pattern recognition, associative memory and classification, the company says. It’s a step toward redefining computers as adaptable, holistic learning systems, rather than yes-or-no calculators.
“This new architecture represents a critical shift away from today’s traditional von Neumann computers, to extremely power-efficient architecture,” Dharmendra Modha, project leader for IBM Research, said in an interview. “It integrates memory with processors, and it is fundamentally massively parallel and distributed as well as event-driven, so it begins to rival the brain’s function, power and space.”
You can read up on von Neumann architecture elsewhere, but essentially it is a design in which program instructions and data share the same memory and the same pathway to the processor. This creates a bottleneck that fundamentally limits the speed of memory transfer. IBM’s system eliminates that bottleneck by putting the circuits for data computation and storage together, allowing the system to compute information from multiple sources at the same time with greater efficiency. Also like the brain, the chips have synaptic plasticity, meaning certain regions can be reconfigured to perform tasks to which they were not initially assigned.
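To make the contrast concrete, here is a minimal sketch, in Python, of the event-driven, memory-beside-compute style described above. The neuron model, weights and thresholds are illustrative assumptions, not IBM's actual circuit design.

```python
# A minimal sketch of the event-driven, memory-beside-compute style described above.
# The neuron model, weights and thresholds are illustrative, not IBM's circuit design.

class Neuron:
    def __init__(self, n_inputs, threshold=1.0):
        # Each neuron keeps its own synaptic weights locally (the "memory"),
        # right next to the logic that uses them (the "computation").
        self.weights = [0.5] * n_inputs
        self.potential = 0.0
        self.threshold = threshold

    def receive_spike(self, source):
        # Event-driven update: work happens only when a spike actually arrives.
        self.potential += self.weights[source]
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True    # this neuron fires in turn
        return False

# Spikes are delivered as discrete events; neurons that receive nothing do nothing,
# which is part of why event-driven designs can be so power-efficient.
neurons = [Neuron(n_inputs=4) for _ in range(4)]
events = [(0, 1), (0, 2), (3, 0)]      # (target neuron, source axon) pairs
for target, source in events:
    if neurons[target].receive_spike(source):
        print(f"neuron {target} fired")
```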
IBM’s long-term goal is to build a chip system with 10 billion neurons and 100 trillion synapses that consumes just one kilowatt-hour of electricity and fits inside a shoebox, Modha said.
The project is funded by DARPA’s SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) initiative, and IBM just completed phases 0 and 1. IBM’s project, which involves collaborators from Columbia University, Cornell University, the University of California-Merced and the University of Wisconsin-Madison, just received another $21 million in funding for phase 2, the company said.
Computer scientists have been working for some time on systems that can emulate the brain’s massively parallel, low-power computing prowess, and they’ve made several breakthroughs. Last year, computer engineer Steve Furber described a synaptic computer network that consists of tens of thousands of cellphone chips.
The most notable computer-brain achievements have been in the field of memristors. As its name implies, a memory resistor can “remember” the last resistance it had when current was flowing through it — so after current is turned back on, the resistance of the circuit will be the same. We will not attempt to delve too deeply here, but the upshot is that a single device can both store and process information, which makes a system much more efficient.
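One rough way to picture that "memory": the device's resistance depends on how much charge has flowed through it, and that state survives a power cycle. The toy model below is a deliberate simplification for illustration, with made-up constants rather than real device physics.

```python
# A toy model of a memristor's "memory": its resistance depends on how much charge
# has flowed through it, and that state persists when the power is removed.
# The constants here are made up for illustration, not real device physics.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = 0.0   # fraction of the device in its low-resistance phase

    def resistance(self):
        # Effective resistance sits between the "on" and "off" limits.
        return self.state * self.r_on + (1.0 - self.state) * self.r_off

    def apply_current(self, current, dt, k=1e3):
        # Charge flowing through the device nudges its internal state.
        self.state = min(1.0, max(0.0, self.state + k * current * dt))

m = ToyMemristor()
print(m.resistance())                  # high "off" resistance before any current flows
m.apply_current(current=1e-4, dt=5.0)
print(m.resistance())                  # lower resistance now, and it stays that way
```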

Hewlett-Packard has been developing memristors since first describing them in 2008, and has also been part of the SyNAPSE project. Last spring, HP engineers described a titanium dioxide memristor that uses low power.
For a brain-based computer system, memristors can function as a computer analogue for a synapse, which also stores information about previous data transfer. IBM's chip doesn't use a memristor architecture, but it does integrate memory with computational power — and it uses electronic neurons and axons to do it. The building blocks are simple, but the architecture is unique, said Rajit Manohar, associate dean for research and graduate studies in the engineering school at Cornell.
"When a neuron changes its state, the state it is modifying is its own state, not the state of something else. So you can physically co-locate the circuit to do the computation, and the circuit to store the state. They can be very close to each other, so that cooperation becomes very efficient," he said.
Modha said a memristor is just one more way to store memory.
"A bit is a bit is a bit. You could store a bit in a memristor, or a phase-change memory, or a nano-electromechanical switch, or SRAM, or any form of memory that you please. But by itself, that does not a complete architecture make," Modha said. "It has no computational capability."
But this new chip does have that power, he said. It integrates memory with processing capability on a standard SOI-CMOS platform, using traditional transistors in a new design. Along with integrated memory standing in for synapses, the neurosynaptic “core” uses conventional transistor circuits for the input-output elements, that is, the neurons and their axons.
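Concretely, you can picture such a core as a grid: input axons along one edge, neurons along the other, and a bit of memory at each crossing recording whether that axon connects to that neuron. The sketch below assumes a 256-by-256 binary crossbar and a crude firing rule purely for illustration; the real core's circuits and neuron dynamics are more involved.

```python
import random

# Illustrative sketch of a crossbar-style neurosynaptic core: 256 input axons,
# 256 neurons, and a 256x256 grid of on/off synapse bits stored alongside the
# neurons that use them. The sizes echo the figures above; the dynamics are simplified.
N_AXONS, N_NEURONS, THRESHOLD = 256, 256, 4

synapse = [[random.random() < 0.1 for _ in range(N_NEURONS)] for _ in range(N_AXONS)]
potential = [0] * N_NEURONS

def deliver(spiking_axons):
    """Deliver a set of input spikes and return which neurons fire."""
    fired = []
    for axon in spiking_axons:
        for neuron in range(N_NEURONS):
            if synapse[axon][neuron]:   # memory and compute sit side by side
                potential[neuron] += 1
    for neuron in range(N_NEURONS):
        if potential[neuron] >= THRESHOLD:
            fired.append(neuron)
            potential[neuron] = 0
    return fired

print(deliver(spiking_axons=random.sample(range(N_AXONS), 32)))
```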

This new architecture will not replace traditional computers, however. “Both will be with us for a long time to come, and continue to serve humanity,” Modha predicted.
The idea is that future powerful chips based on this brain-network design will be able to ingest and compute information from multiple inputs and make sense of it all — just like the brain does.
A cognitive computer monitoring the oceans could record and compute variables like temperature, wave height and acoustics, and decide whether to issue tsunami or hurricane warnings. Or a grocer stocking shelves could use a special glove that monitors scent, texture and sight to flag contaminated produce, Modha said. Modern computers can’t handle that level of detail from so many inputs, he said. But our brains do it all the time — grab a rotting peach, and your senses of touch, smell and sight work in concert instantaneously to determine that the fruit is bad.
To do this, the brain uses electrical signals between some 150 trillion synapses, all while sipping energy — our brains need about 20 watts to function. Understanding how this works is key to building brain-based computers, which is why IBM has been working with neuroscientists to study monkey and cat brains. That research is progressing, Modha said.
But it will be quite some time before computer chips can truly match the ultra-efficient computational powerhouses that nature gave us.

Awesome AND Creepy
I'm not seeing the part about "both being around for a long time".
Between this and Watson, it would seem that standard computing technologies are completely obsolesced. Why spend countless hours "programming" when this chip can imprint instantly?
The age of information is here
When the SKYNET jokes stop, that means it's becoming too serious...
@jabailo
You'd still need the current style of computing technology because it's still better at raw number crunching, whereas a neural network is better at making connections from multiple inputs. It's an apples and oranges comparison really; they're good for two separate things.
Even then, you'll need programming to tell that neural network what its job is and the contexts related to that job. It won't just be able to figure it out on its own, or at least that doesn't seem to be the case from the way the article is written.
This really is revolutionary and will change computing. New applications will be available, especially the ones that the human brain does best: image recognition and processing.
Transhumanism is taking effect right before our eyes.
@Maybe...yes, it will still need programming, just as our brains are programmed as we grow (and that only stops when we die). This seems like an argument against the possibility of AI; however, without programming, a biological brain would be as useless as your laptop without its software. Did that make sense?
Would this help with viruses? If the chip is designed to act like the human brain, can't it get rid of viruses the way we do? To become immune, would it expose itself to a small portion of a virus to understand it and build immunity? Sounds like it might work!
Sounds like a quantum computer's little brother if you ask me...
On Intelligence, by Jeff Hawkins, is a great book on hierarchical memory systems and cognitive brain theory.
Good to hear someone else has read Hawkins' book. I got it from the library about a year ago. I think his approach is the right one.
Tax dollars at work; Thank you DARPA.
Skynet's baby picture...
While it's very interesting to read about the latest developments in neural network implementations, and this work is commendable, I cannot help but remark that there seems to be much hype surrounding these chips. For instance, the article quotes an IBM researcher who claims that one of the major breakthroughs these chips have achieved is supplanting the von Neumann architecture by integrating the memory with the processing. Since I have some interest in non-conventional computer architectures, I know of a few prior examples of similar architectures that may dampen some of the "100% new and never seen before" impressions these chips seem to be getting around the net.
In the 1980s, Danny Hillis founded Thinking Machines, a supercomputer company that built two supercomputers, the Connection Machine-1 and -2, based on a design Hillis conceived in his PhD thesis. The Connection Machines were massively-parallel (just like these chips), they integrated the memory with the processing to break the von Neumann bottleneck (just like these chips), and they were designed for artificial intelligence applications and research (just like these chips -- in fact, Thinking Machines demonstrated their supercomputers by having them do image recognition, similar to the handwriting demonstration IBM did with their chips).
The critical difference seems to lie in the approach. Thinking Machines used traditional algorithms, albeit heavily parallelized, since each memory-processor was a simple 1-bit processor (later 4-bit, IIRC). In contrast, it looks like IBM has replicated neuron-like functionality in hardware, and devised a new way of interconnecting the neurons (dendrites), and a synapse-like method for controlling how they communicate (the Connection Machines used "traditional" computer networks, like Furber's SpiNNaker, although the topology differs greatly).
Looking beyond the Connection Machines, it is not clear in the article at all how these chips differ from neural network implementations over the past few decades. Though I have no familiarity with such implementations (and am likely to be wrong with the following), it looks like (at a high level) there is little difference from prior work other than the scale of the neural network (or so I am led to believe by the article).
Lastly, the picture captioned "Chip Simulation" looks like a 3D view of one of the circuits used in the chip, not a simulation of it. The bottom purple slab looks like the substrate, which has doped regions (the pink/brighter purple bits), upon which gates have been laid (the gray bits). These are connected to the wires (the orange shaded rods) by vias (the green bits). If it were a simulation, a waveform of the signal as it enters and exits the circuit across time would be shown instead (unless IBM has some nifty custom tool representing circuit operation and structure simultaneously).
"consumes just one kilowatt-hour of electricity"
You have got to be kidding me... please learn units of power and energy. A kilowatt-hour is a unit of energy, like a cup of gas or the capacity of a battery. In this context, if you really want to use a unit of energy, you need to also include a unit of time to communicate any useful data. It's just like saying the chip uses just one car battery; that's fine and dandy, but does it use it in a minute or a year? And if you added in the time, you could simply cancel it out and get what you should have stated in the first place: a unit of power, such as one thousand watts.
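If it helps, the distinction fits in a couple of lines (the numbers are just examples, not IBM's specs):

```python
# Energy versus power, in a couple of lines. Power is a rate; energy is that rate
# accumulated over time. The numbers below are examples, not IBM's specifications.
power_kw = 1.0                    # one kilowatt, a rate (the brain runs on about 0.02 kW)
hours = 1.0
energy_kwh = power_kw * hours     # one kilowatt-hour, an amount of energy
energy_joules = energy_kwh * 3.6e6
print(energy_kwh, "kWh =", energy_joules, "J")   # 1.0 kWh = 3,600,000 J
```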
In recent news, IBM is developing a computer that is able to think like a man and behave like a man. Just like Skynet.
This article doesn't make it clear whether this new IBM architecture uses memristors or not. I am inclined to think it doesn't but the article is somewhat ambiguous on this point. Anyone have a clue?
"Thou shalt not make a machine in the likeness of a human mind."
-O.C. Bible
-Spouting a fountain of nonsense since 1995-
lol @ some of your comments. But honestly, we are at a stage where we will take evolution into our own hands, thanks to our intelligence. We no longer have to evolve to survive; technology helps us each day way more than basic evolution does. I'm thinking that we will evolve into a cyborg species soon. Deus Ex: Human Revolution comes to mind.
It takes amazing intellect to create these wonders of science. This neurosynaptic chip is no exception. While it mimics some of the brain's functions, it doesn't even come close to the human brain's amazing complexity. Yet still it is a great piece of technology. It has taken years of research and countless hours of labor from brilliant minds to create this chip. Yet we are led to believe that the brain was the result of blind chance. No proof of this, but it is still asserted as fact. Food for your neurons.
@chorion...who led you to believe that "the brain was the result of blind chance"? Not anyone with a brain. Your opinions are not scientific evidence, and the evolution that led to the world we live in came about using much more than blind chance; that's an incredible oversimplification on your part.
SCIFI RESPONSE:
My thoughts on the above: if you want to view this scientific achievement in the light of evolution, we are evolving into a species that will replace itself with cyberbeings that can travel the universe, learn immense knowledge and possibly become eternal beings. I'm not sure how a soul would fit into the new cyberbeing unless we created in them a moral circuitry or "code" of ethics that would be in "our image"...so to speak. Putting a human image into a cyberbeing might be the greatest human accomplishment ever. I just hope that these new beings will be able to forgive our inhumanity to each other (when they reach consciousness and realize that we as a human race have major defects throughout) and not do the same, or we may pay dearly for our "sins" as these new beings may become our overlords someday.
Deal with it, people. Your best bet at immortality is to have yourself imprinted into a computer brain. Although this is all amazing, it still leaves the for-shadowing that everyone is so afraid of...what is consciousness? Will one of these computers be turned on and then "become conscious"? Aware of itself?
Does anything with enough intelligence just develop one automatically? Probably. Yes. Ok, be freaked out!
"Do not try and bend the spoon. That's impossible. Instead... only try to realize the truth. There is no spoon."
"Foreshadowing" even. :)
"Do not try and bend the spoon. That's impossible. Instead... only try to realize the truth. There is no spoon."
If the neurosynaptic chip truly obsoletes the von Neumann model, then we might consider that the evolution of the brain may have included casting off the von Neumann model a billion years ago.
Am I the only one who already views us as machines? I mean really, what do we even know of our history and ourselves, aside from the fact that we're the most advanced piece of anything anywhere visible. There are more circuits in our brains than there are visible stars in the sky. We start knowing nothing, and end only knowing we're going to end. We live our lives and continue to grow as a population. It strikes me that if we ever were able to transplant our minds into mechanical bodies, that there would be an issue of over population. Given enough time, we would simply run out of space. That being said, given enough time we most likely will have this technology. So who's to say that as a species, we haven't already. We could have transplanted our minds into mechanical bodies for millions of years, until we realized it was flawed, and were advanced enough to alter tissue, thus transplanting our minds back into an organic system rather than destroying the universe. Entropy. Maybe we're an existence simply caught in a million year cycle. All I know is that in my life these things aren't likely to change, so I live my life.