IBM’s massive ‘Kookaburra’ quantum processor might land in 2025
Take a closer look at IBM’s ambitious goal to make quantum computing more powerful and more practical.
Today’s classical supercomputers can do a lot. But because their calculations are limited to binary states of 0 or 1, they can struggle with enormously complex problems such as natural science simulations. This is where quantum computers, which can represent information as 0, 1, or a superposition of both at the same time, might have an advantage.
Last year, IBM debuted a 127-qubit computing chip and a structure called the IBM Quantum System Two, intended to house components like the chandelier cryostat, wiring, and electronics for these bigger chips down the line. These developments edged IBM ahead of other big tech companies like Google and Microsoft in the race to build the most powerful quantum computer. Today, the company is laying out its three-year plan to reach beyond 4,000 qubits by 2025 with a processor it is calling “Kookaburra.” Here’s how it is planning to get there.
To scale up its qubit-processing abilities, IBM will develop both the hardware and the software for its quantum chips. First to come is a new processor called Heron that boasts 133 qubits. In addition to having more qubits, the Heron chip has a different design from its predecessor, Eagle. “It actually allows us to get a much larger fraction of functioning 2-qubit gates. It’s using a new architecture called tunable couplers,” says Jerry Chow, director of quantum hardware system development at IBM Quantum.
“Along with this plan for this new processor for Heron, we want to be able to have multiple Herons that are all addressable via one control architecture,” he adds. “We want to be able to have classical communication linked across these chips and processors as we’re building them out.”
Better gate-level control
Before you can understand what a qubit is, you need to understand what a bit is, and what a gate is, too. On classical computers, information is encoded as binary bits (0 or 1). Transistors are switches that control the flow of electrons. Each transistor is connected to several electrodes, including a gate electrode. Changing the electrical charge on the gate electrode controls whether the transistor is on (state 1) or off (state 0). Physical changes to these states allow computers to encode information. Logic gates are made up of a specific arrangement of transistors. Many transistors together make up an integrated circuit, which can store chunks of data. These circuits are all interconnected on the surface of a chip.
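The idea that logic gates are built from transistor switches can be sketched in a few lines of plain Python. This is an idealized model (each transistor is treated as a perfect on/off switch, not real circuit design): a NAND gate is just transistors in series, and other gates can be composed from NAND alone.

```python
# Idealized model: the gate electrode's charge (1 or 0) decides whether
# current flows. A NAND gate is two such switches in series, so its output
# is pulled low only when both inputs are on.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# NAND is universal: every other logic gate can be composed from it.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Truth tables for the composed gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b))
```

Chaining enough of these gates together is, conceptually, all a classical chip does.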
Qubits work differently from bits, and quantum gates work differently from classical gates. Unlike classical bits, which can only have a value of 1 or 0, under the right conditions, qubits can stay in a wave-like quantum superposition state, which represents a sphere of all possible configurations—0, 1, or both at the same time. The amount of information a quantum system can represent scales exponentially with each added qubit, unlike bits, which scale linearly. Firing microwave photons at qubit-specific frequencies allows researchers to control their behavior, which can be to hold, change, or read out units of quantum information.
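Superposition can be made concrete with a tiny state-vector simulation in plain Python (this is an illustrative sketch, not IBM's software stack). A qubit is a pair of complex amplitudes; the Hadamard gate rotates the definite state 0 into an equal superposition, and the squared amplitudes give the measurement probabilities:

```python
import math

# A qubit's state is a pair of complex amplitudes (alpha, beta) for the
# outcomes 0 and 1; measuring yields 0 with probability |alpha|^2 and
# 1 with probability |beta|^2.
zero = (1.0 + 0j, 0.0 + 0j)  # definite state 0, like a classical bit

def hadamard(state):
    """Apply the Hadamard gate, which rotates state 0 into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

plus = hadamard(zero)
p0 = abs(plus[0]) ** 2
p1 = abs(plus[1]) ** 2
print(p0, p1)  # each is 0.5: the qubit is "both" 0 and 1 until measured

# The state space doubles with each added qubit: n qubits need 2**n
# amplitudes, which is why the capacity scales exponentially, and why
# classically simulating large quantum systems gets hard so quickly.
print(2 ** 10, 2 ** 50)
```

The exponential blow-up in the last two lines is exactly what makes quantum hardware interesting and classical simulation of it painful.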
Unfortunately, qubits are quite fragile: They are heat-sensitive, unstable, and error-prone. When qubits talk to each other or to the wiring in their environment, they can lose their quantum properties, making calculations less accurate. When describing how long qubits can stay in their superposition states, experts refer to their “coherence time.” The coherence time, together with how long each gate takes, sets the limit on how large a quantum calculation you can run with a set of qubits.
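As a back-of-the-envelope illustration of that limit (the numbers below are made up for the example, not IBM's published figures), the coherence time divided by the gate duration caps how many sequential gates fit into one calculation:

```python
# Illustrative figures only: a rough gate budget set by coherence.
coherence_time_us = 100.0  # how long the qubits stay coherent, in microseconds
gate_time_us = 0.1         # duration of one 2-qubit gate, in microseconds

# Roughly how many gates can run back-to-back before coherence runs out.
max_gates = coherence_time_us / gate_time_us
print(int(max_gates))
```

With these toy numbers, the budget is about 1,000 sequential gates; longer coherence times or faster gates both buy deeper circuits.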
“The way that we’ve been designing our current processors, Falcon, Hummingbird, Eagle, have been using fixed coupling between qubits, and we’ve been using a microwave-based 2-qubit cross-resonance gate,” says Chow. In those cases, they were using different frequencies to talk to the corresponding qubit. Now, they’re adding “individualized magnetic field controls for the couplers between the qubits,” Chow says, which allows them to turn on qubit interactions with the varying microwave frequencies.
Multiple, connected quantum processors
Classical computers have cores, which are groupings of transistors that can run multiple tasks in parallel. You can envision it as having multiple checkout registers open at a supermarket instead of having everyone line up for one. CPUs with multiple cores (or with multi-threading) can split a big task into smaller pieces that are fed to the different cores for processing.
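That split-and-feed pattern looks like this in Python's standard library (a simplified sketch; a thread pool stands in for the cores, though CPU-bound Python code would typically use `ProcessPoolExecutor` so the chunks really run on separate cores):

```python
from concurrent.futures import ThreadPoolExecutor

# Split one big job (summing a long list) into chunks handled by a pool of
# workers, the way a multi-core CPU splits a task across its cores.

def partial_sum(chunk):
    # The small piece of work each "core" handles independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Cut the task into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk goes to a separate checkout register; combine at the end.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # same answer as summing serially
```

The final combining step matters: the partial results are only useful once they are stitched back together, which is also the spirit of the circuit-knitting idea described next.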
Now, IBM wants to apply this concept to quantum computing as well, through a technique called circuit knitting. This “effectively takes large quantum circuits, finds ways to break them down into smaller, more digestible quantum circuits, which can be almost parallelly run across a number of processors,” Chow explains. “With this classical parallelization, it increases the types of problems and capabilities that we’re able to address.” Parallelization could also be useful for decreasing error rates.
This design offshoot is separate from the development of Osprey or Condor, which are on track to hit 433 and 1,121 qubits, respectively, in the next few years. “But we also want to have some modularity built-in that will allow us to scale even further. At some level, just the amount of the number of qubits that we’re going to be able to pack into a single chip will start to become limited,” says Chow. “We’re testing some of those boundaries with Osprey and with Condor currently.”
With Heron, the idea is for engineers to test ways to establish quantum links across multiple quantum chips. “We’re exploring what we call these modular couplers that will allow us to effectively have multiple chips that are connected together,” Chow says. This will create what is essentially a larger, quantum-coherent processor made up of three individual chips based on the same underlying quantum processor. To this end, IBM hopes to couple three chips into a 408-qubit system, called Crossbill, in 2024.
To scale even more, IBM is also working on long-range couplers that can connect clusters of quantum processors through a meter-long cryogenic cable (superconducting qubits need to be kept very cold). “We’re calling this the inter-quantum communication link,” says Chow, and it can extend quantum coherent connections within the shared cryogenic environment.
Combining parallelization, chip-to-chip connections, and long-range coupling is what could enable IBM to achieve its 2025 goal of a 4,158-qubit system: Kookaburra.
Combining classical computing with quantum computing
Going quantum doesn’t mean redesigning an entire computer from the ground up. Much of the quantum system runs on classical computing infrastructure. “The way that we typically have our systems is you have your quantum processor inside the refrigerator and you’re constantly talking to it with the classical infrastructure,” Chow says. “The classical infrastructure is generating these microwave pulses, generating the read-outs. When you program a circuit it just turns into this orchestration of gates, operations that go to the chips.”
But instead of driving only quantum processors, one controller could also feed into classical processors, like CPUs and GPUs, which would be connected in parallel to the quantum chip, though not in any quantum way. That setup could run threaded applications that use both classical and quantum computing power.
“The quantum processor is providing a different resource from a GPU or a super large CPU,” says Chow. “But overall, the whole thing is going to be something that feels like a supercomputer that is still orchestrated together.”
In IBM’s vision of the future of computation, machines will have components that can run quantum circuits on the quantum hardware. However, this component will be stitched together with classical memory and classical infrastructure. This type of hybrid structure can be used for problems like molecular simulations, which use a hybrid quantum-classical algorithm called the variational quantum eigensolver.
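The hybrid loop behind the variational quantum eigensolver can be sketched on a toy problem in plain Python. In this deliberately simplified, made-up example, the quantum processor's job (estimating an energy for a trial state) is replaced by an exact formula, while a classical optimizer tunes the circuit parameter, which is exactly the division of labor VQE uses:

```python
import math

# Toy VQE: find the lowest energy of a single-qubit Hamiltonian whose
# energy landscape, for a one-parameter trial state, works out to cos(theta).
# On real hardware the quantum chip would *estimate* this value by running
# the parameterized circuit and sampling; here we compute it exactly.

def energy(theta: float) -> float:
    # Stand-in for the quantum processor's expectation-value estimate.
    return math.cos(theta)

def minimize(step=0.1, iters=200):
    # The classical half of the loop: simple finite-difference gradient descent.
    theta = 0.5  # arbitrary starting guess for the circuit parameter
    for _ in range(iters):
        grad = (energy(theta + 1e-5) - energy(theta - 1e-5)) / 2e-5
        theta -= step * grad
    return theta, energy(theta)

theta, e = minimize()
print(round(e, 4))  # approaches -1, the true minimum of cos(theta)
```

Each iteration is one round trip of the hybrid structure the article describes: the quantum side evaluates a candidate, the classical side decides where to look next.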
When IBM’s first quantum computer was launched onto the cloud in 2016, it came with an assembly language, called OpenQASM, which has been used to build up programs. This coming year, IBM will integrate “dynamic circuits,” which can measure qubits and process classical information concurrently, into its OpenQASM 3 library. This capability also hinges on hardware improvements: better control electronics and better real-time messaging between the control side of the circuit and the measurement side. It can allow for more error correction and parity checks.
The basic coding for these types of operations will form primitives, the basic computational elements of an algorithm, all of which will be part of IBM’s Qiskit Runtime platform, a computing service and programming model for quantum calculations. Qiskit contains different levels of assembly languages for kernel developers, who may need to work close to the code and the hardware, as well as an API in the Qiskit stack that lets algorithm developers work serverlessly.
“At this higher level for algorithm developers, you don’t need to care about running it on any particular backend when you have this cloud environment where you can access the CPUs, GPUs, and QPUs, all orchestrated together,” Chow says. “It allows us to use the classical resources in concert with our quantum resources to handle some of the larger quantum circuit problems—ones that might be pushing on things like quantum advantage.”