In this interview, AZoQuantum speaks with Quantinuum about a key milestone in quantum computing: demonstrating a fully fault-tolerant universal gate set. Central to this breakthrough are magic states, special resources required to perform non-Clifford gates. These gates are critical for universal quantum computation but have long been a challenge to implement reliably.
Can you walk us through the significance of demonstrating a fully fault-tolerant universal gate set with repeatable error correction?
To perform useful quantum computations, quantum information needs to be protected in an error-correcting code. Once that is done, the hard part is to manipulate that information, i.e., to apply gates. For a given code, some of these gates are naturally fault-tolerant and can be implemented easily. But there’s a theorem (the Eastin–Knill theorem) that says at least one gate in a universal set will always be harder than the rest. Often, but not always, this is a “non-Clifford” gate, which can be performed by consuming resources known as “magic states.” To perform these non-Clifford gates accurately, these magic states need to have high fidelities.
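To make the role of magic states concrete, here is a minimal sketch (plain numpy for illustration, not Quantinuum's stack) of one standard form of the T-gate teleportation gadget: a magic state |A> = T|+> is consumed, together with only Clifford operations (a CNOT, a Z-basis measurement, and a conditional S fix-up), to enact the non-Clifford T gate.

```python
# Minimal illustrative sketch: applying a T gate by consuming a magic state
# |A> = T|+> using only Clifford operations. Not code from the paper.
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j]).astype(complex)                    # Clifford phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])                # non-Clifford target
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

magic = T @ H @ np.array([1, 0], dtype=complex)         # |A> = T|+>

def inject_T(psi):
    """Teleport a T gate onto `psi` using one magic state and Cliffords."""
    state = CNOT @ np.kron(psi, magic)                  # data qubit = control
    amps = state.reshape(2, 2)                          # index: [data, ancilla]
    p1 = np.linalg.norm(amps[:, 1]) ** 2                # Pr(ancilla reads 1)
    if rng.random() < p1:
        out = S @ amps[:, 1]                            # outcome 1: S fix-up
    else:
        out = amps[:, 0]                                # outcome 0: done
    return out / np.linalg.norm(out)

psi = rng.normal(size=2) + 1j * rng.normal(size=2)      # random input state
psi /= np.linalg.norm(psi)
overlap = abs(np.vdot(T @ psi, inject_T(psi)))          # 1.0 up to global phase
print(f"|<T psi | gadget output>| = {overlap:.6f}")
```

The point of the construction is that every operation in the gadget is Clifford; the "magic" (the non-Clifford content) lives entirely in the resource state, which is why the fidelity of that state caps the fidelity of the resulting gate.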
In the “breaking-even with magic” paper, we demonstrated a new scheme for creating magic states. This scheme produced magic states with very high fidelities, high enough that we were able to “break even” on a non-Clifford gate (i.e., perform it at a much higher fidelity than if we didn’t use a quantum error detection code). In fact, the gate we broke even on was implemented using a universal, fault-tolerant gate set for this code. This shows that the magic states produced were of high enough fidelity to actually be useful, and it provides a general experimental demonstration that QEC really can lead to much lower error rates on a full set of gates.

What specific technical hurdles did this achievement overcome that had previously prevented progress toward truly scalable systems?
The overhead of producing magic states has historically been quite high, especially for the original approach of magic state distillation. So bringing this overhead down is critical for reducing the overall footprint of QEC in large algorithms. Furthermore, a lot of the work in this field had been done using the surface code and related topological codes like the color code. That left promising codes, which can encode larger numbers of logical qubits in fewer physical qubits, at a comparative disadvantage. Part of the reason for this work, then, was to bring the overheads for magic state production in a family of these high-encoding-rate codes down to levels comparable to or lower than those for producing magic states in the surface code.
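For a sense of the scale involved, here is a back-of-envelope calculation using the classic 15-to-1 distillation protocol, whose output error is suppressed roughly as 35·p³ per round. The target and input error rates below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope sketch of 15-to-1 magic state distillation overhead.
# The 35 * p**3 suppression rule for the 15-to-1 protocol is standard;
# the target and input error rates are illustrative assumptions.

def distill_rounds(p_in, p_target):
    """Rounds of 15-to-1 distillation to reach p_target, plus raw-state cost."""
    p, rounds = p_in, 0
    while p > p_target:
        p = 35 * p ** 3          # one round of 15-to-1 distillation
        rounds += 1
    return rounds, 15 ** rounds, p

for p_in in (1e-2, 1e-3, 1e-4):
    rounds, raw_states, p_out = distill_rounds(p_in, p_target=1e-10)
    print(f"p_in={p_in:.0e}: {rounds} round(s), "
          f"{raw_states} raw states per output, p_out~{p_out:.1e}")
```

Even in this crude model, each extra round multiplies the raw-state cost by 15, which is why protocols that produce good magic states with fewer (or cheaper) rounds matter so much for the overall footprint.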
The gate set you demonstrated meets the full criteria for universal fault tolerance. Could you explain how the integration of logical Clifford and T gates, as well as state preparation and measurement, ensures this universality in practice?
This is a great question because it highlights that gates aren’t the only things that need to be fault-tolerant to perform arbitrary circuits at the logical level. State preparation and measurement (SPAM) must also be fault-tolerant, as well as Quantum Error Correction (QEC, or in our case Quantum Error Detection, QED) cycles and whatever else you do with your code. For this code, the Clifford gates could all be implemented easily (i.e., transversally). The challenging gate was the “T” gate, for which our magic state protocol was designed. But to benchmark these circuits, we needed fault-tolerant state prep, measurement, and QED gadgets as well. This was partly because we wanted to implement everything fault-tolerantly from the start. But just as importantly, if any of these components hadn’t been fault-tolerant, the errors they introduced would have been so significant that benchmarking the fault-tolerant gates would have been nearly impossible: the signal-to-noise ratio would have been too low. By making everything fault-tolerant, including the preparation of computational input states for the gates, we were rewarded with very low error rates overall. Besides being a satisfying demonstration that fault-tolerance is crucial and practical, this allowed us to safely conclude that our logical gates were outperforming our physical ones, even without separating out the different sources of error from the gate errors in our logical circuits (although we did separate these out for the physical circuits).
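The signal-to-noise point can be seen with a tiny worked example. All rates below are invented purely for exposition; they are not the paper's numbers.

```python
# Illustrative arithmetic only; all rates below are invented for exposition.
e_gate = 1e-4                      # logical gate error we want to measure

for e_spam in (1e-3, 1e-5):        # non-fault-tolerant vs fault-tolerant SPAM
    e_total = 2 * e_spam + e_gate  # crude sum: prep + measure + gate
    signal_fraction = e_gate / e_total
    print(f"SPAM error {e_spam:.0e}: gate is {signal_fraction:.0%} "
          f"of the observed error budget")
```

With sloppy SPAM, the gate error is a few percent of what you observe and is essentially unmeasurable; with fault-tolerant SPAM, it dominates the budget and can be resolved cleanly.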
What were the core engineering or algorithmic innovations that made this level of precision and repeatability feasible?
Quantinuum's System Model H1-1 quantum processor is a remarkably accurate machine. In QEC terms, the low physical error rate of H1-1, combined with the high threshold of this protocol, put us well below threshold. This led to a surprisingly low logical error rate for these magic states and non-Clifford gates.
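"Well below threshold" has a quantitative payoff: in the textbook scaling form, the logical error rate falls as a power of p/p_th that grows with the code distance, so a modest gap between the physical error rate and the threshold buys orders of magnitude. The sketch below uses that generic form with invented constants, not the actual parameters of this protocol.

```python
# Generic below-threshold scaling sketch: p_L ~ A * (p / p_th)^((d + 1) // 2).
# The constants here are invented for illustration, not taken from the paper.

def logical_error(p, p_th, d, A=0.1):
    """Heuristic logical error rate for a distance-d code."""
    return A * (p / p_th) ** ((d + 1) // 2)

p_th = 1e-2                            # assumed threshold
for p in (5e-3, 1e-3):                 # near threshold vs well below it
    for d in (3, 5, 7):
        print(f"p={p:.0e}, d={d}: p_L ~ {logical_error(p, p_th, d):.1e}")
```

Near threshold, increasing the distance barely helps; well below it, each increment of distance suppresses the logical error by a further order of magnitude.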
What benchmarking methods did you use to quantify this performance, and how did you ensure those comparisons were robust and fair?
There were two key quantities that we benchmarked. The first was the infidelity of the logical magic states themselves. Even though these infidelities were perhaps lower than in any other demonstration, we did not attempt to compare them to those of physical magic states on H1-1, since the infidelity of state prep on that machine is lower still (something like 1 part per hundred thousand).
Still, benchmarking the fidelity of logical magic states posed an interesting technical challenge. The infidelity was low enough that repeatedly measuring a single state didn’t provide enough resolution to detect how much the results deviated from those of an ideal magic state. To get around that issue, I had the idea to prepare two magic states in separate code blocks and use one to rotate the other back to the |0> state, in such a way that the percentage of the time the second state ended up not being |0> was the combined infidelity of the two magic states. Interestingly, while we were writing our paper, a paper called “Efficient benchmarking of a logical magic state” appeared on arXiv arguing that you really want to use two magic states, not one, for benchmarking. In fact, they proved, in a certain sense, that using a single magic state to measure is suboptimal. The authors of the other magic state paper run on Quantinuum hardware also ended up doing something similar, although we didn’t communicate about this and came up with our approaches independently.
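The two-state trick can be mocked up end to end. Below is a hypothetical Monte Carlo sketch in plain numpy: one noisy magic state is consumed by a T-injection gadget acting on the other, a Clifford frame rotation maps the ideal output to |0>, and the rate of non-|0> outcomes tracks the combined infidelity of the pair. The depolarizing noise model and the error rates are invented stand-ins, not the paper's protocol details.

```python
# Hypothetical Monte Carlo sketch of the two-magic-state benchmark (plain
# numpy; depolarizing noise and rates are invented, not the paper's model).
import numpy as np

rng = np.random.default_rng(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j]).astype(complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

MAGIC = T @ H @ np.array([1, 0], dtype=complex)          # |A> = T|+>

def noisy_magic(eps):
    """Ideal |A>, hit by a uniformly random Pauli with probability eps."""
    if rng.random() < eps:
        return [X, Y, Z][rng.integers(3)] @ MAGIC
    return MAGIC

def benchmark_shot(eps1, eps2):
    """One shot: rotate magic state 1 back to |0> by consuming magic state 2."""
    state = CNOT @ np.kron(noisy_magic(eps1), noisy_magic(eps2))
    amps = state.reshape(2, 2)                           # [tested, consumed]
    p1 = np.linalg.norm(amps[:, 1]) ** 2
    if rng.random() < p1:                                # ancilla reads 1:
        out = S @ amps[:, 1]                             #   Clifford fix-up
    else:
        out = amps[:, 0]
    out = out / np.linalg.norm(out)
    out = H @ (S.conj().T @ out)                         # ideal T|A> -> |0>
    return abs(out[1]) ** 2                              # Pr(not |0>)

eps1 = eps2 = 2e-3
shots = 100_000
fail = np.mean([benchmark_shot(eps1, eps2) for _ in range(shots)])
print(f"non-|0> rate ~ {fail:.2e}  (tracks combined infidelity of the pair)")
```

Because an error on either copy shows up in the final measurement, the deviation being estimated is twice as large per shot as when measuring a single state, which is the intuition behind the two-state approach.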
The other key quantity we benchmarked was the fidelity of a logical two-qubit non-Clifford gate, the controlled-Hadamard. For this gate, we did compare against the physical version, as we expected to break even on it. There were two interesting problems here. The first was that, on most input states, the controlled-Hadamard produces non-stabilizer states, which, like the magic states, are hard to measure efficiently. To solve this problem, we evaluated both the physical and logical controlled-Hadamard on a set of 12 carefully chosen input states for which the output was efficiently measurable. This yielded both lower and upper bounds on the fidelities of both gates and allowed us to show that the best possible fidelity of the physical gate was lower than the worst possible fidelity of the logical one. In obtaining these bounds, we also factored out the measurement errors in benchmarking the output of the physical controlled-Hadamard, so that what remained was the error coming from the gate itself.
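As a side note on why the controlled-Hadamard is non-Clifford in the first place: a Clifford gate must map every Pauli string to another Pauli string under conjugation, and CH fails this test. A quick, self-contained numerical check (illustrative only, not from the paper):

```python
# Check that the controlled-Hadamard (CH) is non-Clifford: find a Pauli
# string whose conjugation by CH is not (a phase times) a Pauli string.
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
PAULIS = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

# Controlled-Hadamard: apply H to qubit 1 when qubit 0 is |1>.
CH = np.block([[I, np.zeros((2, 2))], [np.zeros((2, 2)), H]])

def as_pauli_string(U, tol=1e-9):
    """Return the Pauli label if U is a phase times a Pauli tensor, else None."""
    for (a, A), (b, B) in product(PAULIS.items(), repeat=2):
        P = np.kron(A, B)
        phase = np.trace(P.conj().T @ U) / 4       # overlap; |phase|=1 iff match
        if abs(abs(phase) - 1) < tol and np.allclose(U, phase * P, atol=tol):
            return a + b
    return None

for (a, A), (b, B) in product(PAULIS.items(), repeat=2):
    img = CH @ np.kron(A, B) @ CH.conj().T
    if as_pauli_string(img) is None:
        print(f"CH maps Pauli {a}{b} outside the Pauli group -> non-Clifford")
        break
```

For example, CH conjugates I⊗X into |0><0|⊗X + |1><1|⊗Z, which is not a Pauli string; this is exactly why CH cannot be built from Cliffords alone and why its outputs are generally non-stabilizer states.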
Many quantum computing platforms struggle with leakage and crosstalk in multi-qubit systems. How did your system architecture—particularly with your use of trapped-ion technology—contribute to the fidelity and stability required for fault-tolerance?
In the Quantum Charge-Coupled Device (QCCD) architecture, the ability to physically separate the ions keeps cross-talk low enough that we could achieve high fidelities by focusing only on correlated errors from qubits being gated together, without needing to account for other forms of cross-talk. Similarly, leakage was a small enough part of the error budget that we didn’t need to consider it. In fact, at the fidelities we were considering, performing our leakage detection gadget would have added more noise than it caught. However, there are many places in the protocols in our magic state paper where leakage is naturally caught, and even corrected, if it occurs.
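The leakage trade-off is simple cost-benefit arithmetic: if the detection gadget itself injects error at a rate comparable to or above the leakage it would catch, running it is a net loss. The rates below are invented for illustration, not measured H1-1 numbers.

```python
# Invented rates for illustration only (not measured H1-1 numbers).
p_leak = 1e-5            # chance a qubit has leaked at this point
p_gadget = 5e-5          # error the detection gadget itself injects

net = p_leak - p_gadget  # crude net error removed by running the gadget
verdict = "helps" if net > 0 else "hurts"
print(f"net error change {net:+.1e}: running the gadget {verdict} "
      f"at these fidelities")
```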
What are the next major technical challenges you anticipate in delivering scalable, universal fault-tolerant quantum computers by 2029?
Continuing to build larger machines, of course. In addition, these magic states are designed to be used in an “exotic” family of concatenated codes. Although this protocol improves the efficiency of those codes, there may yet be better constructions using even more efficient codes. So, we are racing to find even better protocols.
Looking ahead, how do you foresee this fault-tolerant gate set impacting near-term applications, and will it enable useful quantum algorithms or simulations even before full scalability is reached?
I’m hopeful that this protocol, or something similar, will enable some interesting quantum simulations at lower overheads than previously projected. These applications may still require a significant number of physical and logical qubits, but still far fewer than what may be needed for Shor’s algorithm, for instance. In particular, this protocol can be conveniently used in codes of medium size and intermediate distance, making it a good candidate for such applications.
About the Speaker

Shival Dasu received his bachelor's degree in mathematics, with a minor in computer science, from the California Institute of Technology. He received his PhD in mathematics from Indiana University, where he specialized in harmonic analysis and combinatorics. He now works on quantum error correction at Quantinuum.
Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.