IBM and 3M are collaborating on a new kind of semiconductor glue that will bind together future generations of 3-D semiconductor chips. The idea is to create an adhesive that holds things tightly together while simultaneously conducting heat away and insulating electrically.
In other words, it doesn't sound easy. But a material like this is necessary if companies like IBM are going to move beyond stacking a few layers of silicon and get down to the business of stacking 100-chip towers that will power the devices of the future.
3-D semiconductors are basically multi-layered chips that stack computing power, networking, and memory into one neat system-on-a-chip. Right now companies like IBM can stack a handful of chips, but what they want are silicon towers. That means they need some kind of mortar with a unique combination of properties to hold everything together. That's what 3M and IBM are striving for: an adhesive that could coat entire silicon wafers, holding them tightly together while still dissipating heat away from heat-sensitive components like logic circuits.
And they want it by 2013, about the same time the first generation of smaller 3-D processors is expected to hit the market in mobile devices. If they get it right, they predict they could leapfrog today's processor technology, creating a silicon "brick" 1,000 times faster than today's fastest microprocessors.
Gee, that's why my computer keeps "sticking" on certain instructions.
Seriously, though, I'm skeptical about whether such a material can passively dissipate heat that well. It seems to me that they would do better to design a MEMS-based or perhaps even microfluidic cooling system that actively dissipates the heat of these 100 stacked chips.
I think the idea is that you can take a chip that was made on, say, a 65nm process, make the same chip at 22nm (so much less heat at the same speed), and then stack those cooler chips. Individually they're not as powerful as a chip designed with 22nm in mind could be, but collectively, even a stack of 100 slower ones is still miles ahead of a single 2D processor. Just imagine how much parallel processing could be done with a brick...
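A quick back-of-envelope calculation makes that trade-off concrete. All the figures in this Python sketch are made-up assumptions (neither the article nor the comment gives any numbers), and it assumes the workload parallelizes perfectly across the stack:

```python
# Back-of-envelope comparison (all figures are hypothetical):
# one aggressive 2D chip versus a stack of slower, cooler dies,
# assuming the workload parallelizes perfectly across the stack.

single_chip_ops = 100e9   # ops/sec, hypothetical fast 2D design
stacked_die_ops = 40e9    # ops/sec per die when run cooler and slower
num_dies = 100            # dies in the "brick"

stack_ops = stacked_die_ops * num_dies
print(f"2D chip: {single_chip_ops / 1e9:.0f} Gops/s")
print(f"Stack:   {stack_ops / 1e9:.0f} Gops/s "
      f"({stack_ops / single_chip_ops:.0f}x the single chip)")
```

Of course, real workloads rarely parallelize perfectly, which is exactly the objection raised further down the thread.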
Personally, I think a good idea for cooling these bricks would be to integrate these power chips: http://www.micropower-global.com/technology/
These chips were originally developed by Eneco; the IP was later bought by MicroPower Global. The tiny chips can either convert heat into electrical current, or accept an electrical current to cool one side. You wouldn't even need fans or liquid cooling with these integrated into the system, just a nice airtight chamber to prevent condensation on the chilled parts ;)
This is going to dramatically increase the cost of processors... not every die they make is functional and up to spec, so if they stack up 10 "chips" and the third one develops an error, the whole stack gets tossed.
Wouldn't they be tested for errors before being stacked?
I think everyone is getting carried away with parallel processing. While a few cores are nice for those apps that can and do take advantage of them, most applications are really not designed for it, so adding 100 threads vs. 10 threads is not going to increase speed for most personal computers right now (the quick Amdahl's law sketch after this comment makes the math concrete). We could tighten things up and have block-component computers, though. For instance, a small box where you put dice-sized blocks in to do a variety of different tasks, and make really small modular computers.
Then again, I always thought thumb drives becoming so cheap would bring back cartridge-like games on thumb drives and allow you to store save games on the drive (true plug and play). I have yet to see this, even though I have seen diagnostic OS sticks come out on thumb drives, so who really knows where the future might lead.
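The parallelism skepticism above lines up with Amdahl's law, which caps the speedup from extra cores by the fraction of a program that can actually run in parallel. A minimal Python sketch; the 50% parallel fraction is an illustrative assumption for a typical desktop app, not a figure from the thread:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the program that can run in parallel across n cores.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.5  # assume half the app parallelizes (illustrative)
for n in (1, 10, 100):
    print(f"{n:>3} cores -> {amdahl_speedup(p, n):.2f}x speedup")
# 10 cores -> 1.82x, 100 cores -> 1.98x: going from 10 to 100
# threads barely helps when p is this low, as the comment argues.
```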
@sacridias You're focusing too much on single-application targeting and forgetting that your computer's OS is doing a ton of things at once and can almost always utilize more cores.
Setting aside that there is a huge drive toward cloud computing right now, and seeing that this tech is being developed by companies known for their server processors and supercomputers rather than workstation-level computers, it's safe to assume that the extra multitasking capability will be fully utilized in large-scale computers of the kind that already have thousands of multi-core processors today.
Of course they would be tested before stacking, but as they are stacked they have to be interconnected, and if a connection fails to bond at some point, it wouldn't be worth disassembling the whole stack, so they would just pitch it... hell, maybe they would just stack 20 with an estimated failure rate of one or two cores expected and route around the faulty ones...
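That route-around strategy can be quantified with a little probability. In the Python sketch below, the 98% per-die survival rate through bonding is an invented assumption used only to show the shape of the math:

```python
# Stack-yield sketch (hypothetical numbers): if each die survives
# bonding with probability q, an all-or-nothing stack of n dies
# yields q**n, while building in spare dies and routing around
# failures recovers most of that loss.

from math import comb

def yield_with_spares(q: float, total: int, needed: int) -> float:
    """Probability that at least `needed` of `total` dies survive."""
    return sum(comb(total, k) * q**k * (1 - q)**(total - k)
               for k in range(needed, total + 1))

q = 0.98  # assumed per-die survival through bonding (illustrative)

print(f"all 20 dies must work:         {q**20:.3f}")                          # ~0.668
print(f"stack 20, tolerate 2 failures: {yield_with_spares(q, 20, 18):.3f}")   # ~0.993
```

Even modest over-provisioning turns a marginal yield into a very good one, which is why the "route around the faulty cores" idea is plausible.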