Overview
Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for a full characterization of the errors (called quantum process tomography) grows exponentially with the number of qubits, whereas randomized benchmarking scales efficiently.

History
Randomized benchmarking was proposed in ''Scalable noise estimation with random unitary operators'', where it was shown that long sequences of quantum gates sampled ''uniformly'' at random from the Haar measure on the group SU(''d'') lead to an exponential decay at a rate uniquely fixed by the error model. The authors also showed, under the assumption of gate-independent errors, that the measured decay rate is directly related to an important figure of merit, the average gate fidelity, and is independent of the choice of initial state, of any errors in the initial state, and of the specific random sequences of quantum gates. This protocol applies for arbitrary dimension ''d'' and an arbitrary number ''n'' of qubits, where ''d'' = 2<sup>''n''</sup>.

The SU(''d'') RB protocol had two important limitations that were overcome in a modified protocol proposed by Dankert ''et al.'', who suggested sampling the gate operations uniformly at random from any unitary two-design, such as the Clifford group. They proved that this produces the same exponential decay rate as the random SU(''d'') version of the protocol proposed by Emerson ''et al.'' This follows from the observation that a random sequence of gates is equivalent to an independent sequence of twirls under that group, as first conjectured and later proven (illustrated in the sketch at the end of this section). This Clifford-group approach to randomized benchmarking is now the standard method for assessing error rates in quantum computers. A variation of this protocol was proposed by NIST in 2008 for the first experimental implementation of an RB-type protocol for single-qubit gates. However, the sampling of random gates in the NIST protocol was later proven not to reproduce any unitary two-design. The NIST RB protocol was nonetheless later shown to also produce an exponential fidelity decay, albeit at a rate that depends on non-invariant features of the error model.

In recent years a rigorous theoretical framework has been developed for Clifford-group RB protocols showing that they work reliably under very broad experimental conditions. In 2011 and 2012, Magesan ''et al.'' proved that the exponential decay rate is fully robust to arbitrary state-preparation and measurement (SPAM) errors. They also proved a connection between the average gate fidelity and the diamond-norm error metric that is relevant to the fault-tolerance threshold, and provided evidence that the observed decay remains exponential and related to the average gate fidelity even if the error model varies across the gate operations (so-called gate-dependent errors), which is the experimentally realistic situation. In 2018, Wallman and Dugas ''et al.'' showed that, despite previously raised concerns, even under very strong gate-dependent errors the standard RB protocol produces an exponential decay at a rate that precisely measures the average gate fidelity of the experimentally relevant errors. The results of Wallman in particular proved that the RB error rate is so robust to gate-dependent error models that it provides an extremely sensitive tool for detecting non-Markovian errors: under a standard RB experiment, only non-Markovian errors (including time-dependent Markovian errors) can produce a statistically significant deviation from an exponential decay.
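The decay measured in these protocols is conventionally fitted to the model ''F''(''m'') = ''A p''<sup>''m''</sup> + ''B'', where the constants ''A'' and ''B'' absorb SPAM errors and ''p'' is the decay parameter, related to the average error rate per gate by ''r'' = (''d'' − 1)(1 − ''p'')/''d''. The following is a minimal sketch of this fitting step using NumPy and SciPy; the sequence lengths and survival probabilities are hypothetical placeholder data, not results from any experiment.

<syntaxhighlight lang="python">
# Minimal sketch of the standard RB fitting step (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard RB decay model: A and B absorb SPAM errors, p is the decay."""
    return A * p**m + B

# Hypothetical single-qubit data: random-Clifford sequence lengths and the
# survival probability averaged over many random sequences at each length.
lengths = np.array([1, 5, 10, 20, 50, 100, 200])
survival = np.array([0.99, 0.97, 0.94, 0.89, 0.76, 0.63, 0.52])

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])

d = 2                      # Hilbert-space dimension, d = 2**n with n = 1 qubit
r = (d - 1) * (1 - p) / d  # average error rate per Clifford
print(f"p = {p:.4f}, average gate fidelity = {1 - r:.4f}")
</syntaxhighlight>

Because ''A'' and ''B'' are fitted independently of ''p'', SPAM errors shift and rescale the curve without changing the extracted decay rate, which is the robustness property proved by Magesan ''et al.''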
The standard RB protocol was first implemented for single-qubit gate operations in 2012 at Yale on a superconducting qubit. A variation of this standard protocol that is defined only for single-qubit operations was implemented by NIST in 2008 on a trapped ion. The first implementation of the standard RB protocol for two-qubit gates was performed in 2012 at NIST on a system of two trapped ions.
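To illustrate the twirling argument behind the Clifford-group protocol, the sketch below (a self-contained illustration under simplifying assumptions, not code from any of the cited works) generates the 24-element single-qubit Clifford group from the generators ''H'' and ''S'', twirls a hypothetical coherent over-rotation error over the group, and prints the Pauli transfer matrix of the result. The twirled channel collapses to a depolarizing channel, diag(1, ''p'', ''p'', ''p''), whose single parameter ''p'' is exactly what an RB experiment estimates.

<syntaxhighlight lang="python">
# Sketch: Clifford twirling turns an arbitrary error into a depolarizing channel.
import numpy as np

# Single-qubit Paulis and the Clifford generators H and S
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
paulis = [I2, X, Y, Z]

def ptm(channel):
    """Pauli transfer matrix R_ij = (1/2) Tr[P_i channel(P_j)]."""
    return np.array([[0.5 * np.real(np.trace(Pi @ channel(Pj)))
                      for Pj in paulis] for Pi in paulis])

def key(U):
    # The PTM of a unitary is phase-invariant and, for Cliffords, has exactly
    # integer entries, so it serves as a canonical label for deduplication.
    R = ptm(lambda rho: U @ rho @ U.conj().T)
    return np.round(R).astype(int).tobytes()

# Breadth-first generation of the 24-element single-qubit Clifford group
# (modulo global phase) from the generators H and S.
group = {key(I2): I2}
frontier = [I2]
while frontier:
    nxt = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            k = key(V)
            if k not in group:
                group[k] = V
                nxt.append(V)
    frontier = nxt
cliffords = list(group.values())
assert len(cliffords) == 24

# A hypothetical coherent error: a small over-rotation about the X axis.
theta = 0.1
U_err = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
error = lambda rho: U_err @ rho @ U_err.conj().T

# Twirl: average C† error(C rho C†) C over the Clifford group.
def twirled(rho):
    return sum(C.conj().T @ error(C @ rho @ C.conj().T) @ C
               for C in cliffords) / len(cliffords)

# The twirled channel's PTM is diag(1, p, p, p): a depolarizing channel.
print(np.round(ptm(twirled), 6))
</syntaxhighlight>

Replacing the Clifford average by a Haar average over SU(2) yields the same depolarizing channel, which is the sense in which the Clifford group acts as a unitary two-design.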