IBM Breaks 100-Qubit QPU Barrier, Marks Milestones on Ambitious Roadmap

F-22Raptor

The highlights reel of IBM’s steady progress in quantum computing was on full display at the company’s 2021 Quantum Summit, held last month while most of the HPC community was wrapped up in SC21. Underpinned by six milestones met this year, IBM has declared that 2023 will be the year, broadly speaking, when its systems deliver quantum advantage and quantum computing takes its early place as a powerful tool on the HPC landscape.

At present, the advances being made throughout the quantum computing community are impressive, accelerating perhaps beyond the expectations of many observers. In that context, IBM has long been the 800-pound gorilla of the quantum computing world, digging into virtually every aspect of the technology, its use cases, and customer/developer engagements. IBM, of course, is focused on semiconductor-based, superconducting qubit technology, and the jury is out on which of the many qubit technologies will prevail. Likely, it won’t be just one.

Last year, IBM laid out a detailed quantum roadmap with milestones around hardware, software, and system infrastructure. At this year’s IBM Quantum Summit, Jay Gambetta, IBM fellow and vice president of quantum computing, along with a few colleagues, delivered a report card and a glimpse into IBM’s future plans. He highlighted six milestones – not least the recent launch of IBM’s 127-qubit quantum processor, Eagle, and plans for IBM System Two, a new complete infrastructure that will supplant System One.

Look over the IBM roadmap shown below. In many ways, it encompasses the challenges and aspirations shared by everyone in the quantum community.


While fault-tolerant quantum computing remains distant, the practical use of quantum computing on noisy intermediate-scale quantum (NISQ) computers seems closer than many expected. We are starting to see early quantum-based applications emerge – mostly around random number generation (see HPCwire articles on Quantinuum and Zapata, both of which are working to leverage quantum-generated random numbers).

Before digging into the tech talk, it’s worth noting how IBM expects the commercial landscape to emerge (figure below). Working with the Boston Consulting Group, IBM presented a rough roadmap for commercial applications. “IBM’s roadmap is not just concrete. It’s also ambitious,” said Matt Langione, principal and North America head of deep tech, BCG, at the IBM Summit. “We think the technical capabilities [IBM has] outlined today will help create $3 billion in value for end users during the period described.”

He cited portfolio optimization in financial services as an example. Efforts to scale up classical computing-based optimizers “struggle with non-continuous, non-convex functions, things like interest rate yield curves, trading logs, buy-in thresholds, and transaction costs,” said Langione. Quantum optimizers could overcome those challenges and “improve trading strategies by as much as 25 basis points with great fidelity at four nines by 2024 with [quantum] runtimes that integrate classical resources and have error mitigation built in. We believe this is the sort of capability that could be in trader workflows [around] 2025,” he said.

He also singled out mesh optimizers for computational fluid dynamics used in aerospace and automotive design which have similar constraints. He predicted, “In the next three years, quantum computers could start powering past limits that constrain surface size and accuracy.” Look over BCG/IBM’s market projection shown below.


Quantum computing has no shortage of big plans. IBM is betting that by laying out a clear vision and meeting its milestones, it will entice broader buy-in from the wait-and-see community as well as within the quantum community. Here are brief summaries of the six topics reviewed by Gambetta and colleagues. IBM has posted a video of the talk, which in just over 30 minutes does a good, succinct job of reviewing IBM’s progress and plans.

  1. Breaking the 100-Qubit Barrier
IBM starts the formal counting of its current quantum processor portfolio with the 2019 introduction of the 27-qubit Falcon processor, which debuted IBM’s heavy-hexagonal qubit layout; IBM has been refining this design since. Hummingbird debuted in 2020 with 65 qubits. Eagle, just launched at the 2021 Summit, has 127 qubits. The qubit count has roughly doubled (or better) with each new processor. Next up is Osprey, due in 2022, which will have 433 qubits.

Jerry Chow, director of quantum hardware system development at IBM, explained the lineage this way: “With Falcon, our challenge was reliable yield. We met that challenge with a novel Josephson junction tuning process, combined with our collision-reducing heavy hexagonal lattice. With Hummingbird, we implemented a large-ratio multiplexed readout, allowing us to bring down the total cryogenic infrastructure needed for qubit state readout by a factor of eight. This reduced the raw amount of componentry needed.”

“Eagle [was] born out of a necessity to scale up the way that we do our device packaging so we can bring signals to and from our superconducting qubits in a more efficient way. Our work to achieve this relied heavily upon IBM’s experience with CMOS technology. It’s actually two chips.”

For Eagle, “The Josephson junction-based qubits sit on one chip, which is attached to a separate interposer chip through bump bonds. This interposer chip provides connections to the qubits through packaging techniques which are common throughout the CMOS world. These include things like substrate vias and a buried wiring layer, which is completely novel for this technology. The presence of the buried layer provides flexibility in terms of routing the signals and layout of the device,” said Chow.


IBM says Eagle is the most advanced quantum computing chip ever built, the world’s first quantum processor over 100 qubits. Chow said, “Let me stress this isn’t just a processor we fabricated, but a full working system that is running quantum circuits today.” He said Eagle will be widely available by the end of the year, which presumably means now-ish.

Looking at the impact of Eagle, IBM isn’t shy: “The increased qubit count will allow users to explore problems at a new level of complexity when undertaking experiments and running applications, such as optimizing machine learning or modeling new molecules and materials for use in areas spanning from the energy industry to the drug discovery process. ‘Eagle’ is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate. In fact, the number of classical bits necessary to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today.”
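A quick back-of-envelope check of that claim, assuming the commonly cited figure of roughly 7 × 10^27 atoms per human body (our assumption, not a number IBM gave):

```python
# Rough check: 2^127 basis-state amplitudes vs. atoms in ~7.5 billion people.
# Assumes ~7e27 atoms per person, a commonly cited estimate.
amplitudes = 2 ** 127
atoms = 7 * 10 ** 27 * 7_500_000_000
print(f"{amplitudes:.2e} amplitudes vs {atoms:.2e} atoms")  # 1.70e+38 vs 5.25e+37
print(amplitudes > atoms)                                   # True
```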

Osprey, due next year, will have 433 qubits as noted earlier and, said Chow, will introduce “the next generation of scalable input/output that can deliver signals from room temperature to cryogenic temperatures.”


  2. Overcoming the Gate Error Barrier
Measuring quality in quantum computing can be tricky. Key measures such as coherence time and gate fidelity are adversely affected by many factors, usually lumped together as system and environmental noise. Taming these influences is why most quantum processors are housed in big dilution refrigerators. IBM developed a benchmark metric, Quantum Volume (QV), which has various performance attributes baked in, and QV has been fairly widely used in the quantum community. IBM has achieved a QV of 128 on some of its systems. Honeywell (now Quantinuum) also reported achieving QV 128 on its trapped-ion device.
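For context, Quantum Volume as IBM defines it is pegged to the largest “square” random circuit (equal width and depth) a machine can run successfully, so a QV of 128 corresponds to passing at seven qubits:

```latex
% Quantum Volume: QV = 2^n for the largest n at which width-n, depth-n random
% circuits pass the heavy-output test (heavy-output probability > 2/3).
\mathrm{QV} = 2^{n}, \qquad \mathrm{QV} = 128 \;\Rightarrow\; n = \log_2 128 = 7
```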

At the IBM Quantum Summit, Matthias Steffen, IBM fellow and chief quantum architect, reviewed progress on extending coherence times and improving gate fidelity.

“We’ve had a breakthrough with our new Falcon r8 processors. We have succeeded in improving our T1 times (spin-lattice relaxation) dramatically from about 0.1 milliseconds to 0.3 milliseconds. This breakthrough is not limited to a one-off chip (i.e., good yield). It has now been repeated several times. In fact, some of our clients may have noticed [it on] the device map showing up for IBM Peekskill recently,” said Steffen. “This is just the start. We have tested several research test devices and we’re now measuring 0.6 milliseconds, closing in on reliably crossing the one-millisecond barrier.”
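For reference (textbook physics rather than anything from the talk), T1 characterizes exponential energy relaxation, so tripling it from 0.1 ms to 0.3 ms triples the time budget for running gates before a qubit decays:

```latex
% Spin-lattice (energy) relaxation: probability that a qubit prepared in |1>
% remains in |1> after time t.
P_{|1\rangle}(t) = e^{-t/T_1}
```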

“We also had a breakthrough this year with improved gate fidelities. You can see these improvements (figure below) color coded by device family. Our Falcon r4 devices generally achieved gate errors near 0.5 × 10^-3. Our Falcon r5 devices, which also include faster readout, are about a third better. In fact, many of our recent demonstrations came from this r5 device family. Finally, in gold, you see some of our latest test devices, which include Falcon r8 with the improved coherence times.”

“You also see measured fidelity for other devices, including our very recently [developed] Falcon r10 [on which] we have measured a two-qubit gate breaking the 0.001 error-per-gate plane,” said Steffen.

IBM is touting the 0.001 gate error rate, which corresponds to more than 1,000 gates per error, as reaching three nines (99.9 percent) gate fidelity, and a major milestone.


  3. Mainstreaming Falcon r5
Currently, the Falcon architecture is IBM’s workhorse. As explained by IBM, the portfolio of accessible QPUs includes core and exploratory chips: “Our users have access to the exploratory devices, but those devices are not online all the time. Premium users get access to both core and exploratory systems.”

IBM says there are three metrics that characterize system performance – quality, speed, and scale – and recently issued a white paper defining what’s meant by that. Speed is a core element and is defined as ‘primitive circuit layer operations per second’. IBM calls this CLOPS (catchy), roughly analogous to FLOPS in classical computing parlance.

“There’s no getting away from it,” said Katie Pizzolato, IBM director, quantum theory & applications systems. “Useful quantum computing requires running lots of circuits. Most applications require running at least a billion. If it takes my system more than five milliseconds to run a circuit, it’s simple math, a billion circuits will take you 58 days; that’s not useful quantum computing.”
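Her “simple math” checks out. A quick sanity check, assuming fully serial execution at 5 ms per circuit:

```python
# Back-of-envelope for Pizzolato's claim: a billion circuits at 5 ms each,
# run serially, take roughly 58 days.
circuits = 1_000_000_000
seconds_per_circuit = 0.005
days = circuits * seconds_per_circuit / 86_400  # 86,400 seconds per day
print(f"{days:.1f} days")                       # -> 57.9 days
```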

At the lowest level, QPU speed is driven by the underlying architecture. “This is one of the reasons we chose superconducting qubits. In these systems, we can easily couple the qubits to the resonators in the processors. This gives us fast gates, fast resets and fast readout, the fundamentals for speed,” said Pizzolato.

“Take the Falcon r5 processor, for example, [which] is a huge upgrade over the Falcon r4. With the r5, we integrated new components into the processor that have an eight-times-faster measurement rate than the r4, without any effect on coherence. This brings the measurement time down to a few hundred nanoseconds, compared to a few microseconds. Add this to other improvements we’ve made to gate time, and you have a major step forward with the Falcon r5,” she said.

IBM is now officially labeling Falcon r5 a core system, a step up from exploratory. “We’re making sure that Falcon r5 is up and running with high reliability. We are confident that the r5, which has faster readout, can be maintained with high availability, so it is now labeled as a core system,” she said.

Pizzolato didn’t give a specific CLOPS number for Falcon r5, but in another talk given to the Society of HPC Professionals in early December, IBM’s Scott Crowder (VP and CTO, quantum) showed a slide indicating 4.3 CLOPS for IBM (though he didn’t specify which QPU) versus 45 CLOPS for trapped ion.


  4. IBM Systems All Support Qiskit Runtime
In May, IBM rolled out a beta version of Qiskit Runtime, which it says is “a new architecture offered by IBM Quantum that streamlines computations requiring many iterations.” The idea is to leverage classical systems to accelerate access to QPUs not unlike the way CPUs manage access to GPUs in classical computing. Qiskit Runtime is now supported by all IBM QPUs.

“We created Qiskit Runtime to be the container platform for executing classical codes in an environment that has very fast access to quantum hardware,” said Pizzolato. “[It] completely changes the use model for quantum hardware. It allows users to submit programs of circuits, rather than simply circuits, to IBM’s quantum datacenters. This approach gives us a 120-fold improvement. A program like VQE (variational quantum eigensolver), which used to take our users 45 days to run, can now be done in nine hours.”

IBM contends that these advances combined with the 127-qubit Eagle processor mean, “no one really needs to use a simulator anymore.”

Here’s the Qiskit Runtime description from the IBM website: “Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and maybe classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters.”
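To make that concrete, here is a minimal sketch of invoking a runtime primitive using the qiskit-ibm-runtime Python package as it later shipped; class names and signatures have shifted across versions, so treat it as illustrative rather than the exact 2021-era API, and note the backend name is a placeholder:

```python
# Illustrative only: submit circuits through Qiskit Runtime's Sampler primitive
# instead of shipping circuits to the device one at a time.
from qiskit import QuantumCircuit
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Sampler

service = QiskitRuntimeService()  # reads saved IBM Quantum credentials

# A two-qubit Bell-state circuit as the workload
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()

# A session keeps the classical-quantum iteration loop close to the hardware.
with Session(service=service, backend="ibmq_qasm_simulator") as session:
    sampler = Sampler(session=session)
    result = sampler.run(bell).result()
    print(result.quasi_dists[0])  # quasi-probability for each measured bitstring
```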

  5. Serverless Quantum Introduction
Qiskit Runtime, says IBM, is part of a broader effort to bring classical and quantum resources closer together via the cloud and to create serverless quantum computing. This would be a big step in abstracting away many obstacles now faced by developers.

“Qiskit Runtime involves squeezing more performance from our QPU at the circuit level by combining it with classical resources to remove latency and increase efficiency. We call this classical with a little c,” said Sarah Sheldon, an IBM Research staff member. “We’ve also discovered we can use classical resources to accelerate progress towards quantum advantage and get us there earlier.”

“To do this, we use something we call classical with a capital C. These capabilities will be both at the kernel and algorithm levels. We see them as a set of tools allowing users to trade off quantum and classical resources to optimize the overall performance of an application. At the kernel level, this will be achieved using libraries of circuits for sampling, time evolution, and more. But at the algorithm level, we see a future where we’re offering pre-built Qiskit Runtimes in conjunction with classical integration libraries. We call this circuit knitting,” said Sheldon.

Broadly, circuit knitting is a technique that decomposes a large quantum circuit, with more qubits and larger gate depth, into multiple smaller quantum circuits with fewer qubits and smaller gate depth; it then combines the outcomes together in classical post-processing. “This allows us to simulate much larger systems than ever before. We can also knit together circuits along an edge where a high level of noise or crosstalk would be present. This lets us simulate quantum systems with higher levels of accuracy,” said Sheldon.


IBM reported having demonstrated circuit knitting by simulating the ground state of a water molecule using only five qubits with a specific technique, ‘entanglement forging,’ which knits circuits across weakly entangled halves. With circuit knitting, says IBM, users can boost the scale of the problem tackled or increase the quality of the result by trading off speed with these tools.
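As a toy illustration of the classical recombination step (our simplification, not IBM’s entanglement-forging algorithm): in the simplest case of an unentangled cut, the expectation value of a product observable factorizes, so two small simulations can be knit together classically instead of simulating the joint system.

```python
# Toy circuit-knitting idea: for a product state across the cut,
# <A (x) B> = <A> * <B>, so each half can be evaluated separately
# and the results combined classically.
import numpy as np

def expval(state, op):
    """Expectation value <state|op|state>."""
    return float(np.real(state.conj() @ op @ state))

# Two single-qubit "halves" of a hypothetical cut circuit
a = np.array([1.0, 1.0]) / np.sqrt(2)  # |+> state
b = np.array([1.0, 0.0])               # |0> state
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Knit: evaluate each half separately, then multiply classically
knit = expval(a, X) * expval(b, Z)

# Reference: simulate the joint (larger) system directly
full = expval(np.kron(a, b), np.kron(X, Z))

assert np.isclose(knit, full)
print(knit, full)  # 1.0 1.0
```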

The new capabilities are being bundled into IBM Code Engine on the IBM cloud. Code Engine, combined with lower-level tools, will deliver serverless computing, says IBM. Pizzolato walked through an example: “The first step is to define the problem. In this case, we’re using VQE. Secondly, we use Lithops, a Python multicloud distributed computing framework, to execute the code. Inside this function, we open a communication channel to the Qiskit Runtime and run the Estimator program.”

“As an example, for the classical computation, we use the simultaneous perturbation stochastic approximation (SPSA) algorithm. This is just an example; you could put anything here. So now the user can just sit back and enjoy the results. As quantum is increasingly adopted by developers, quantum serverless enables developers to just focus on their code without getting dragged into configuring classical resources,” she said.
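Lithops is a real open-source framework, and the pattern described might look roughly like the sketch below; the VQE trial function is a stand-in of ours, since a real version would open a Qiskit Runtime channel inside the worker:

```python
# Minimal sketch (not IBM's demo code) of the serverless pattern: Lithops fans
# work out to cloud functions, and each worker would call Qiskit Runtime.
import lithops

def run_vqe_trial(params):
    # Placeholder: a real worker would invoke the Estimator program on
    # Qiskit Runtime with these variational parameters and return the energy.
    return sum(p * p for p in params)  # fake "energy" so the sketch runs anywhere

if __name__ == "__main__":
    trials = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.5]]
    fexec = lithops.FunctionExecutor()  # uses whatever cloud backend you configured
    fexec.map(run_vqe_trial, trials)    # dispatch the trials serverlessly
    print(min(fexec.get_result()))      # keep the best (lowest) energy
```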


  6. Early Plans for System Two
IBM’s final announcement was that it is “closing the chapter on” IBM Quantum System One, its fully enclosed quantum computer infrastructure, which debuted in 2019. Chow said System One would be able to handle Eagle, but that IBM was partnering with Finnish company Bluefors to develop System Two, its next generation cryogenic infrastructure.

“We are actively working on an entirely new set of technologies from novel high-density, cryogenic microwave flex cables to a new generation of FPGA based high-bandwidth, integrated control electronics,” said Chow.

Bluefors introduced its newest cryogenic platform, Kide, which will be the basis for IBM System Two.


“We call it Kide because in Finnish, Kide means snowflake or crystal, which represents the hexagonal, crystal-like geometry of the platform that enables unprecedented expandability and access,” said Russell Lake of Bluefors. “Even when we create a larger platform, we maintain the same user accessibility as with a smaller system. This is crucial as advanced quantum hardware scales up. We optimize cooling power by separating the cooling for the quantum processor from the operational heat loads. In addition, the six-fold symmetry of the Kide platform means that systems can be joined and clustered to enable vastly expanded quantum hardware configurations.”

“The modular nature of IBM Quantum System Two will be the cornerstone of the future quantum datacenters,” said Gambetta. Presumably, the 433-qubit Osprey processor will be housed in a version of the new System Two infrastructure.

There was a lot to absorb in the IBM presentation, and IBM was naturally attempting to put its best foot forward. Practically speaking, many companies are working on the various quantum computing aspects discussed by IBM, but few are tackling all of them. For this reason, IBM’s report serves as an interesting overview of progress throughout the quantum community generally.

Reaching quantum advantage in 2023, even if for only a few applications, would be a big deal.

https://www.hpcwire.com/2021/12/13/...arrier-marks-milestones-on-ambitious-roadmap/
 