Silicon is Slow

Illustration by Christoph Niemann

Ike Chuang holds a pencil-thin test tube containing a bright orange solution of a billion billion molecules, the core of each one a combination of five fluorine and two carbon atoms. He slides the test tube into the chamber of a modified nuclear magnetic resonance (NMR) machine that looks like an enormous pressure cooker. Inside the machine, the sample is surrounded by radio frequency coils attached to amplifiers and signal generators “like those inside a cellphone, only much larger,” Chuang says.

Chuang types “GA” (go ahead) on a keyboard. With a bell-like sound the radio waves wash over the test tube, and the nuclei of the carbon and fluorine atoms begin to spin and, as they precess about their axes, perform calculations. The computation, which finds the prime factors of the number 15, takes less than a second, and Chuang repeats the experiment 35 more times, averaging the results to control for errors.

Factoring 15 is a problem fit for grade school students and cheap calculators, but it’s not the size or speed of the calculation, merely the fact of it, that matters in this case. Chuang’s seven-“qubit” quantum computer, at the moment the most powerful one ever built, provides concrete evidence of a proposition that scientists just a few years ago thought unworkable: that the properties of atoms at the quantum level can reliably be exploited for the brains of a working computer. Indeed, the work of Chuang and others suggests that quantum machines may one day be capable of massively parallel computing, in which billions of calculations happen at once, a feat that will never be possible with silicon chips.
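
The algorithm behind such factoring demonstrations is Shor’s algorithm, whose only quantum ingredient is a step that finds the period of a simple arithmetic sequence; everything else is ordinary math. For the curious, here is a rough classical sketch of that arithmetic (a mimic, not the NMR experiment itself), with the period found by brute force rather than by quantum interference; the base 7 is a conventional choice for factoring 15:

    from math import gcd

    def find_period(a, n):
        # Smallest r > 0 with a**r mod n == 1. Done here by brute force;
        # the quantum machine finds r through interference among superposed spins.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor_with_period(n, a):
        # Classical skeleton of Shor's algorithm: period finding plus a gcd step.
        r = find_period(a, n)
        if r % 2:
            return None                     # need an even period; try another base
        y = pow(a, r // 2, n)
        p, q = gcd(y - 1, n), gcd(y + 1, n)
        return (p, q) if 1 < p < n and p * q == n else None

    print(factor_with_period(15, 7))        # prints (3, 5)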

“We want to go beyond the normal,” says Chuang, who is now an associate professor at MIT’s Center for Bits and Atoms, though he performed his seminal quantum computer experiments at IBM’s Almaden Research Center in San Jose, California. “We want to shrink computing to a scale where it can be done in ways no one’s ever thought of.”

Chuang does not work alone. Dozens of research teams around the world are pouring hundreds of millions of dollars into proving that computing at very small scales holds unique promise. They’re experimenting with carbon nanotubes, strands of DNA, and spinning nuclei. What they seek are computational devices that leapfrog over problems inherent in the “classical” chip-based computer, problems that have to do not only with size but with the serial nature of chip operation, in which one job follows the next in lock step. No matter how quickly a silicon chip completes each task, the sequential nature of its operation limits its power. (As the physicist Richard Feynman famously said, “the inside of a computer is as dumb as hell, but it goes like mad!” With certain problems, going like mad isn’t good enough.)

What uses might nanoscale computing be put to? We probably won’t need massively parallel processing for our PDAs or cellphones anytime soon, but the limits of classical computing are already being felt in, for example, the field of data encryption. Encryption is not only a national security prerequisite but also a critical foundation of Internet commerce and data exchange. The ability of future computers to perform simultaneous calculations on a massive scale could help break, or protect, seemingly unbreakable codes.

In biochemical research, non-chip-based computers could potentially process massive amounts of data simultaneously, seeking critical genetic patterns that might lead to new drugs. Nanocomputers may also hold promise for managing vast databases, solving complex problems such as long-range weather forecasting, and-because they can theoretically be integrated into nanomachines-monitoring or even repairing our bodies at the cellular level. All this remains highly speculative, of course, because nanocomputing research is at the pressure cooker stage.

The Size Barrier
Computers have shrunk so much, and become so madly fast, that one might wonder why they couldn’t just keep shrinking. The first general-purpose computer, the Electronic Numerical Integrator and Computer (ENIAC), took up an entire room at the University of Pennsylvania (see “More Brawn Than Brains”). It weighed 30 tons and employed more than 17,000 vacuum tubes. When scientists turned it on, parts of Philadelphia went dark. The ENIAC ran at a now-paltry 20,000 cycles per second, about the computing power found in an electronic greeting card that plays a silly song when opened.

The ENIAC and its descendants are basically a collection of binary on/off switches that precisely modify information. A bit, the most basic unit of information, is expressed within a circuit by a voltage: high voltage means the bit has a value of 1; low voltage means it equals 0. These bits flow through simple logic gates constructed from switches, which together perform the tasks that the computer is directed to do. The ENIAC’s vacuum tube switches became obsolete in 1947 with the advent of transistors, solid-state devices that remain the fundamental component of integrated circuits and microprocessors. With each new generation, switches have grown smaller, enabling engineers to fit more of them into the same space, but their essential function hasn’t varied.
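
To make the switch-and-gate picture concrete, here is a toy sketch, tied to no particular hardware, that wires the familiar logic gates out of a single NAND “switch” and combines them into a one-bit adder, the sort of building block any computer, whether made of tubes, transistors, or molecules, must reproduce:

    def nand(a, b):
        # One switch pattern: the output is 0 only when both inputs are 1.
        return 0 if (a and b) else 1

    # Every other gate can be wired from NANDs alone.
    def not_(a):     return nand(a, a)
    def and_(a, b):  return not_(nand(a, b))
    def or_(a, b):   return nand(not_(a), not_(b))
    def xor_(a, b):  return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        # Adds two bits, returning (sum, carry).
        return xor_(a, b), and_(a, b)

    print(half_adder(1, 1))   # prints (0, 1): one plus one is binary 10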

As circuits shrink, electrons can make more trips around the chip, distributing more binary code and handling more tasks. Today’s Pentium IV processor is the size of a dime, and sends electrons zipping around its 55 million transistors at 2 gigahertz, or 2 billion times per second. In 10 years the average silicon chip will likely contain a billion or more transistors and run at speeds exceeding 25 billion cycles per second. Already, exotic, high-performance chips, such as one made from silicon germanium that was recently announced by IBM, can exceed speeds of 100 gigahertz.

But there’s a limit to how small these circuits can get. Chips are made by using ultraviolet light to etch circuit patterns onto a photoresistant surface. By using light with progressively shorter wavelengths, chipmakers have been able to make tinier and tinier circuits. Eventually, though, the wavelengths needed to cut the circuit will be so small that lenses and air molecules will absorb the light before it’s able to carve a pattern. At that point, in about 10 to 15 years, silicon circuits will stop shrinking. Moreover, the cost of revamping the chip fabrication process to utilize light at ever-shorter wavelengths is increasingly high. Ultimately, science will need to turn to other kinds of computer systems. “We’re surrounded by computation,” says Neil Gershenfeld, who directs the Center for Bits and Atoms at MIT. Breakthroughs may happen, he says, “if you ask nature how it solves a problem. A computer can be a tube of chloroform.”

“Is there something beyond silicon?” asks Tom Theis, director of physical sciences at IBM Research. “We’ve been on a continuous search for that for decades.”

Silicon Substitutes

One tactic is to avoid the size limitation of silicon without abandoning the comfort of classical computer circuitry. Molecular electronics, or moletronics, which involves building circuits from carbon and other elements, mimics traditional computing architecture while potentially speeding it up immeasurably. Theoretically, it will be possible to design machines that capitalize on both silicon and molecular circuits, possibly before the decade is out.

Last August, researchers at IBM’s Watson Labs in Yorktown Heights, New York, built the first working logic gate made from a single molecule. Using carbon nanotubes, arrangements of atoms that resemble rolls of chicken wire, scientists created a circuit just 10 atoms wide, or 1/500th the size of a silicon circuit. Then, in October, Bell Labs scientists Hendrik Schon, Zhenan Bao, and Hong Meng designed a molecular transistor even tinier than a nanotube, one that’s one-millionth the size of a grain of sand. Schon and colleagues sandwiched a thiol molecule, a compound of carbon, hydrogen, and sulfur, between two gold electrodes, then used the thiol to control the flow of electricity through it. What’s important about this nanocircuit is not merely its size. In a discovery that baffles even its creators, the molecule also acts as a powerful signal amplifier, an essential part of a transistor that boosts the electronic signal (or gain). “We were amazed to be able to [operate] at low voltage and achieve such high gain,” says Schon. “It was a very pleasant surprise.”

If molecules can do double duty as both transistors and amplifiers, then logic gates-and, by extension, an entire chip-could be made not only smaller but also more cheaply, says Stan Williams, director of quantum science research for Hewlett-Packard: “The results are quite stunning and extremely puzzling. If this proves true, it has the possibility of outperforming the best silicon can do.”

Williams and fellow HP researcher Phil Kuekes are on the threshold of marrying moletronics with silicon technology. Last July, they were awarded a patent for a method they devised that allows molecule-size circuits to communicate with traditional semiconductors; by 2005, they and a team of researchers at UCLA expect to produce a 16-kilobit memory circuit. In 10 to 15 years, Williams says, pure moletronic circuits will begin to replace traditional chips in devices like handheld computers. Perhaps the greatest impact will be in biomedical implants: tiny computers could be inserted into the human body to, for example, measure insulin levels or warn of an impending heart attack. Much research is being done on the mechanics of cells and on cellular information exchange at the DNA level, and at some point, tiny machines will know how to talk to cells in cell language.

Where it will all end is, however, conjecture: “We’re quite a long way from being able to augment human physical or mental capabilities just by plugging something into our bodies,” Williams says.

The Double Helix as Computer
Beyond silicon and moletronics lie redefinitions of computing that are much stranger and harder to conceive. One method utilizes DNA. There’s logic to this: DNA is nature’s extraordinarily efficient data storage and delivery mechanism for life processes, and the familiar four-base double helix structure encodes enormous amounts of information at the molecular level. DNA combines in consistently predictable ways, and 10 trillion strands can fit in a teaspoon. By turning each of these strands into a type of “processor,” scientists foresee building a nanocomputer that performs trillions of calculations at the same time.

In 1994, University of Southern California professor Leonard Adleman set the stage by using DNA to solve the Hamiltonian path problem, a close cousin of the traveling salesman problem. The task is to find a route through a number of cities that visits each city exactly once. When only a few cities are involved, you can solve the problem with a pencil and paper. As the number of cities grows, the number of potential routes a conventional computer must try in sequential fashion increases exponentially. To get the answer quickly, you’d have to divvy up the question among a large number of computers working in parallel. Or you could, as Adleman did, solve the puzzle by letting a few teaspoons of DNA generate all possible solutions simultaneously.
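
To see why the sequential approach bogs down, consider a rough brute-force sketch of the same search (the little map of one-way roads below is invented for illustration): it grinds through every possible ordering of cities, exactly the one-at-a-time labor that Adleman’s test tube sidesteps by forming all candidate routes simultaneously.

    from itertools import permutations

    # An invented map of one-way roads between four cities.
    roads = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")}
    cities = ["A", "B", "C", "D"]

    def hamiltonian_paths(cities, roads):
        # Try every ordering of cities, keeping those joined by roads end to end.
        found = []
        for route in permutations(cities):        # n! orderings to grind through
            if all((route[i], route[i + 1]) in roads for i in range(len(route) - 1)):
                found.append(route)
        return found

    print(hamiltonian_paths(cities, roads))
    # Four cities mean 24 orderings; twenty cities mean about 2.4 * 10**18.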

Adleman illustrated the methodology for performing a massively parallel chemical reaction, with each possible answer provided in the form of a strand of DNA code, then spent a week sorting the incorrect strands from the correct answer. Computing with DNA may sound odd, but it’s a logical product of the biochemical research that has enabled scientists to decode, manipulate, and synthesize the genetic material of plants and animals.

In March of this year, Adleman and his colleagues at USC reported taking DNA computing a giant step further, solving what they say may be the largest problem ever tackled by a nonelectronic device. In this experiment, Adleman sought to compile a viable guest list for a group of demanding partygoers, each of whom imposes specific demands: I’ll come only if so-and-so is snubbed but my good friend so-and-so is invited. To accommodate the demands of 20 such finicky attendees, more than a million combinations of guests must be considered. After four days of chemical reactions and code sifting, during which nucleic acids representing individual partygoers attracted and repelled each other, Adleman’s DNA computer produced the master party list. This wasn’t the first time USC researchers had attempted to unravel the party puzzle with a DNA computer, but earlier efforts had involved at most nine guests. Taking into account the preferences of 20 people, which required calculations by trillions of snippets of DNA molecules, was an immeasurably more imposing experiment.
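
A rough sketch of the same search on a conventional machine shows the scale (the three guest rules below are invented; Adleman’s actual constraints differed): the program must march through all 1,048,576 possible guest lists, one at a time, while the DNA reaction explores them in parallel.

    from itertools import product

    GUESTS = 20   # 2**20, just over a million, possible guest lists

    # Invented rules in the article's spirit: guest g attends only if
    # guest `yes` is invited and guest `no` is snubbed.
    rules = [(0, 3, 7), (3, 5, 1), (5, 0, 9)]

    def acceptable(invited):
        # A guest list works if every invited finicky guest's demands are met.
        return all(not invited[g] or (invited[yes] and not invited[no])
                   for g, yes, no in rules)

    workable = sum(1 for combo in product([False, True], repeat=GUESTS)
                   if acceptable(combo))
    print(workable, "of", 2 ** GUESTS, "possible lists satisfy the rules")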

Other researchers have suggested that DNA computing based on reliable, robust DNA-based structures, rather than strands floating in solution, could be even more powerful. One of the more successful efforts is at Duke University, where computer scientists John Reif and Thom LaBean are working with so-called DNA tiles-strands of nucleic acids woven into interlocking structures that, by virtue of their interaction with other tiles, form simple logic circuits. LaBean and Reif are testing their concept at the simplest binary level: Do the combined tiles reliably behave like basic logic gates? Their initial tests prove a tile-based DNA computer actually works. Connect a sufficient number of such logic gates and the result could be a supercomputer no larger than a teardrop.

Huge barriers must be overcome before even a small full-fledged DNA computer can be designed. For one thing, says Reif, the more complex the DNA structure, the more likely it is to make errors that can produce faulty computations. In nature, these errors are mutations, and error correction through constant DNA repair is built into living cells; no such automatic correction exists, however, in DNA computing. Moreover, the parts of the DNA that contain the “answer” have to be extracted and analyzed. Consequently, researchers still need to develop an efficient way to read results. Otherwise the speed of the DNA computation would be offset by the time it takes to actually ascertain the outcome.

“In theory,” says Reif, “you could probably use a DNA computer to do anything a normal computer could do. But in practice, you probably wouldn’t use one for running Microsoft Windows. You’d use it for things you couldn’t otherwise build at the molecular scale.”

Suggestions for DNA computers include the creation of biosensors that could identify pathogens in the environment or detect biochemical events at the cellular level within the body. Another logical use: deploying DNA computers to aid in the enormous data searches undertaken in the hugely expanding field of genetic research.

Quantum Leaps

Quantum computing combines the nanoscale world of molecular circuits with the speed of DNA’s parallel processing-then adds a weirdness all its own.

In a quantum computer, the nuclei of atoms function as so-called qubits, the 1’s and 0’s of binary code. When the spin of a nucleus points “up,” it’s a 0; when it points “down,” it’s a 1. But in quantum computing there’s yet a third possibility: A nucleus can be in a special kind of quantum state that enables it to occupy both positions at once. This phenomenon is called a superposition, and it underlies the enormous potential power of the quantum computer. For if the nucleus can represent 0, 1, or both simultaneously, one qubit can do the work of two ordinary bits; 2 can do the work of 4; 4 can do the work of 16, and so on. Keep ascending the exponential scale and soon a relatively small quantum computer (say, 40 qubits) attains the capacity of a supercomputer. Indeed, conventional computers have difficulty modeling the quantum behavior of even a small number of atoms precisely, because the number of quantum states they must keep track of grows exponentially with each added particle; a quantum computer might be much better suited to studying quantum behavior.
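
The bookkeeping behind that claim is easy to check. To simulate n qubits in superposition, an ordinary computer must store one number, called an amplitude, for every possible pattern of 0s and 1s, so its memory requirement doubles with each added qubit. A short sketch (the 16 bytes per amplitude assumes double-precision complex numbers):

    BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

    for qubits in (1, 2, 4, 10, 20, 40):
        states = 2 ** qubits                  # distinct 0/1 patterns held at once
        bytes_needed = states * BYTES_PER_AMPLITUDE
        print(f"{qubits:2d} qubits -> {states:>18,} amplitudes, "
              f"{bytes_needed:>20,} bytes to simulate classically")
    # Forty qubits already demand about 17.6 trillion bytes: supercomputer territory.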

Here’s the problem: According to the peculiar rules of quantum mechanics, once you observe the state of a nucleus it ceases to be in a superposition and freezes into either a 0 or a 1. The pressure cooker-like machine Chuang worked on at IBM is designed to keep the atoms in a superposition long enough to perform a calculation. “The trick is to create a molecule that can stay in a quantum state for an incredibly long time-in this case, 1.5 seconds,” he says. “That’s eons for something quantum. In life you don’t normally get to see one thing sitting in two places at the same time.”

Chuang has helped design four quantum computers, each more sophisticated than its predecessor. Last fall, his 7-qubit machine was the first to implement an important algorithm for finding the prime factors of a number. Factoring is critical to encryption; Fred Chong, an associate professor of computer science at UC Davis, estimates that today’s fastest computer would require billions of years to factor a 300-digit encryption key as it laboriously tried one possibility after another; a quantum computer could crack the code in about 30 hours.
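
The arithmetic behind that estimate can be roughed out. Trial division, the most naive attack, only needs to test divisors up to the square root of the key, but for a 300-digit number that is still about 1 followed by 150 zeros worth of candidates; real code-breakers use far cleverer mathematics, which is how the figure comes down to mere billions of years, yet the one-at-a-time flavor is the same. In the sketch below, the rate of a trillion trials per second is an assumed round number, not Chong’s:

    DIGITS = 300
    TRIALS_PER_SECOND = 10 ** 12        # assumed: a trillion divisions per second
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    candidates = 10 ** (DIGITS // 2)    # trial division only needs to reach sqrt(N)
    years = candidates // (TRIALS_PER_SECOND * SECONDS_PER_YEAR)
    print(f"about 10**{DIGITS // 2} candidate divisors to try,")
    print(f"roughly 10**{len(str(years)) - 1} years of one-at-a-time work")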

But that would require a quantum computer with hundreds of thousands of qubits-and even the most optimistic researchers say that such systems are at least 15 years away. In fact, Chuang doesn’t envision getting much beyond 10 to 20 qubits with his current scheme, because the magnetic signals that measure the direction of the spins-and determine whether the qubit is 1, 0, or both-grow fainter as the number of qubits increases. So researchers are exploring other technologies, such as encasing qubits in solid-state “cages” and reading them with lasers. “Nobody in this field understands what’s going on,” admits Chuang with a laugh. “Quantum physics defies our intuition.”

Many scientists still believe that nanocomputers will be limited to highly specialized applications such as cryptography or database searching. There’s precedent for such conservatism: The room-size ENIAC was originally built to calculate the trajectory of artillery shells, and at around the same time IBM chairman Thomas Watson made his famous remark about there being a world market for perhaps five computers. IBM and its successors have drummed up a few more uses for computers since then; nanocomputers are likely to experience a similar fate. If the computer revolution of the past half-century is any indication, as the machines continue to get smaller, the possibilities will only grow.

Looking Back: More Brawn Than Brains
Popular Science’s reporting on computers began in the 1940s, with the first room-size behemoths such as the Mark I and II, ENIAC, and MIT’s 100-ton “electro-mechanical differential analyzer.” At right is the “intricate, bedspring-like maze of wiring” from Mark II, built in the Harvard Computation Lab. Early coverage enthused over speeds that desktop computers now easily surpass. In January 1946 we said of MIT’s monster computer:

“In a few minutes, or in a few hours at most, this giant calculating machine provides answers to complex problems it would have taken trained men weeks to solve.

“Able to tackle three problems at a time, with as many as 18 variables in any one, the calculator contains 200 miles of wire, 2,000 electronic tubes, several thousand relays, and about 150 motors. Yet, despite its apparent vast complexity, one man can operate it.”