12.5 Approximating generic unitaries is hard, but…

Now that we understand approximations of unitary operators, we can revisit the question of universality that we touched upon in Sections 2.13 and 3.5. Recall that we call a finite set G of gates universal if any n-qubit unitary operator can be approximated (up to an overall phase) to arbitrary accuracy by some unitary constructed using only gates from G (and we then call the gates in G elementary). In other words, G is universal if, for any unitary U acting on n qubits and for any \varepsilon>0, there exist U_1,\ldots,U_d\in G such that \widetilde{U}\coloneqq U_d\cdots U_1 satisfies \|\widetilde{U}-e^{i\varphi}U\|\leqslant\varepsilon for some phase \varphi.
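If we ever want to check such an approximation numerically, the phase-invariant error \min_\varphi\|\widetilde{U}-e^{i\varphi}U\| can be estimated by brute force. Here is a minimal sketch in Python with NumPy; the helper name and the grid of sampled phases are illustrative choices, not anything fixed by the text above.

    import numpy as np

    def phase_invariant_distance(U_tilde, U, phase_samples=2000):
        """Estimate min over phi of || U_tilde - exp(i*phi) U || in the spectral norm
        by sweeping a grid of global phases (fine for small examples)."""
        phis = np.linspace(0.0, 2.0 * np.pi, phase_samples, endpoint=False)
        return min(np.linalg.norm(U_tilde - np.exp(1j * phi) * U, ord=2) for phi in phis)

    # Example: T = diag(1, e^{i pi/4}) and the z-rotation diag(e^{-i pi/8}, e^{i pi/8})
    # differ only by a global phase, so their phase-invariant distance is (nearly) zero.
    T = np.diag([1.0, np.exp(1j * np.pi / 4)])
    Rz = np.diag([np.exp(-1j * np.pi / 8), np.exp(1j * np.pi / 8)])
    print(phase_invariant_distance(T, Rz))  # small, limited only by the phase grid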

For example, each of the following sets of gates is universal:

  • \{H,\texttt{c-}S\}
  • \{H,T,\texttt{c-NOT}\}
  • \{H,S,\texttt{Toff}\}

where S and T are the \pi/4- and \pi/8-phase gates (Section 2.6), \texttt{c-}S is the controlled S gate, and \texttt{Toff} is the Toffoli gate (Exercise 9.12.14).
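As a small sanity check on the first of these sets, note that \texttt{c-NOT} can be built exactly from H and \texttt{c-}S: squaring \texttt{c-}S gives the controlled-Z gate, and conjugating the target qubit by Hadamards turns controlled-Z into \texttt{c-NOT}. A numerical verification (in Python, using the usual matrix conventions with the control on the first qubit) might look as follows.

    import numpy as np

    I2 = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    cS = np.diag([1, 1, 1, 1j])                      # controlled-S, control on qubit one
    cNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # (1 (x) H) (c-S)^2 (1 (x) H) = (1 (x) H) c-Z (1 (x) H) = c-NOT
    lhs = np.kron(I2, H) @ (cS @ cS) @ np.kron(I2, H)
    print(np.allclose(lhs, cNOT))  # True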

But now we can be a bit more precise with the question that the notion of universality is trying to answer: given a universal set of gates, how hard is it to approximate any desired unitary transformation with accuracy \varepsilon? That is, how many gates do we need?

The answer is: a lot. In fact, it is exponential in the number of qubits: most unitary transformations require large quantum circuits of elementary gates. We can show this by a counting argument (along with a healthy dose of geometric intuition).

Consider a universal set of gates G consisting of g gates, where each gate acts on no more than k qubits. How many circuits (acting on n qubits) can we construct using t gates from this set? We have g\binom{n}{k} choices¹ for the first gate, since there are g gates and \binom{n}{k} ways to place one so that it acts on k out of n qubits. The same holds for all subsequent gates, and so we can build no more than \left( g\binom{n}{k} \right)^t circuits of size t from G. What is important is that g\binom{n}{k} is polynomial in n (since g and k are fixed constants), so we will write this upper bound as (\mathrm{poly}(n))^t. In more geometric language, we have shown that, with t gates, we can generate at most (\mathrm{poly}(n))^t points in the space U(N) of unitary transformations on n qubits, where N=2^n. Now imagine drawing a ball of radius \varepsilon (in the operator norm) centred at each of these points. We want these balls to cover the entire unitary group U(N), since this then says that any unitary is within distance \varepsilon of a circuit built from t gates in G. We will not get into the details of the geometry of U(N), but simply use the fact that a ball of radius \varepsilon in U(N) has volume proportional to \varepsilon^{N^2}, whereas the volume of U(N) itself is proportional to C^{N^2} for some fixed constant C. So we need \varepsilon^{N^2}(\mathrm{poly}(n))^t \geqslant C^{N^2}, which (taking logarithms and rearranging) requires that t \geqslant 2^{2n}\frac{\log(C/\varepsilon)}{\log(\mathrm{poly}(n))}. In words, the required number of gates scales exponentially in n but only logarithmically in 1/\varepsilon.
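To get a feel for the numbers, we can simply evaluate this lower bound for a few small cases. In the sketch below (Python) the constants g=3, k=2 and C=2 are illustrative placeholders, since the counting argument only requires that they are fixed.

    import math

    def gate_count_lower_bound(n, eps, g=3, k=2, C=2.0):
        """Evaluate t >= 2^(2n) * log(C/eps) / log(g * binom(n, k)).
        The constants g, k, C are placeholders; the argument only needs them fixed."""
        poly_n = g * math.comb(n, k)
        return 2 ** (2 * n) * math.log(C / eps) / math.log(poly_n)

    for n in (2, 5, 10, 20):
        print(n, f"{gate_count_lower_bound(n, eps=1e-2):.3e}")
    # The bound explodes with n (through the 2^(2n) factor) but grows only
    # logarithmically as eps shrinks.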

When we add qubits, the space of possible unitary operations grows very rapidly, and we have to work exponentially hard if we want to approximate the resulting unitaries with some prescribed precision. If, however, we fix the number of qubits and instead ask for better and better approximations, then things are much easier, since we only have to work logarithmically hard.

The snag is that this counting argument does not give us any hints as to how we can actually build such approximations. A more constructive approach is to pick a set of universal gates and play with them, building more and more complex circuits. There is an important theorem in this direction that tells us that it does not matter much which particular universal set of gates we choose to start with.

The Solovay–Kitaev Theorem. Choose any two universal sets of gates that are weakly closed under inverses (that is, the inverse of any gate in the set can be constructed exactly as a finite sequence of gates in the set, even if it is not itself an elementary gate). Then any t-gate circuit built from one set of gates can be implemented to precision \varepsilon using a t\cdot\mathrm{poly}(\log(t/\varepsilon))-gate circuit built from the other set. Furthermore, there is an efficient classical algorithm for finding this circuit.
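The proof is recursive, and in standard presentations of the Solovay–Kitaev algorithm its cost is governed by two recurrences: each level of recursion shrinks the error roughly as \varepsilon_k \approx c\,\varepsilon_{k-1}^{3/2} while multiplying the circuit length by a constant factor (five in the usual construction). The toy iteration below (Python; the constants c, \varepsilon_0 and \ell_0 are illustrative assumptions) shows why this gives circuit lengths that are only polylogarithmic in 1/\varepsilon.

    import math

    # Toy illustration of the usual Solovay-Kitaev cost analysis (not the algorithm
    # itself): error shrinks as eps_k ~ c * eps_{k-1}**1.5 per level, while the
    # circuit length grows by a factor of 5.  The constants below are illustrative.
    c, eps, length = 4.0, 1e-2, 10
    for level in range(1, 8):
        eps = c * eps ** 1.5
        length *= 5
        print(f"level {level}: eps ~ {eps:.2e}, length ~ {length}")

    # Accuracy improves doubly exponentially in the level while the length grows
    # only (singly) exponentially, so length ~ polylog(1/eps), with exponent
    # log(5)/log(3/2):
    print(math.log(5) / math.log(1.5))  # ~ 3.97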

Since errors accumulate at most linearly, it suffices to approximate each gate from one set to accuracy \varepsilon/t, which can be achieved using a \mathrm{poly}(\log(t/\varepsilon))-gate circuit built from the other set. So we can efficiently convert (constructively, via an efficient classical algorithm) between universal sets of gates with overhead \mathrm{poly}(\log(1/\varepsilon)), i.e. \log^c(1/\varepsilon) for some constant c. For all practical purposes, we want to minimise c, but the counting argument above shows that the best possible exponent is c=1, so the real question is whether we can get close to this lower bound. In general, we do not know. However, for some universal sets of gates we have nearly optimal constructions. For example, the set \{H,T\} can be used to approximate arbitrary single-qubit unitaries to accuracy \varepsilon using only \log(1/\varepsilon) many gates, instead of \mathrm{poly}(\log(1/\varepsilon)), and the circuits achieving this improved overhead can be efficiently constructed (for example, by the Matsumoto–Amano construction).
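The claim that errors accumulate at most linearly is just the triangle inequality combined with the fact that the operator norm is unchanged by multiplication with unitaries: \|U_t\cdots U_1 - V_t\cdots V_1\|\leqslant\sum_i\|U_i-V_i\|. A quick numerical check of this bound (Python with NumPy and SciPy; the random perturbation model is purely for illustration) is sketched below.

    import numpy as np
    from scipy.linalg import expm
    from scipy.stats import unitary_group

    np.random.seed(0)
    t, delta = 20, 1e-3

    Us, Vs = [], []
    for _ in range(t):
        U = unitary_group.rvs(4)                     # a random two-qubit unitary
        A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
        Hgen = delta * (A + A.conj().T) / 2          # small Hermitian generator
        Us.append(U)
        Vs.append(U @ expm(1j * Hgen))               # a nearby unitary "approximation"

    def product(ops):
        out = np.eye(4, dtype=complex)
        for op in ops:
            out = op @ out
        return out

    total_error = np.linalg.norm(product(Us) - product(Vs), ord=2)
    sum_of_errors = sum(np.linalg.norm(U - V, ord=2) for U, V in zip(Us, Vs))
    print(total_error <= sum_of_errors)  # True: errors add up at most linearly
    print(total_error, sum_of_errors)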


  1. Counting arguments nearly always use binomial coefficient notation: \binom{a}{b}\coloneqq\frac{a!}{b!(a-b)!}.↩︎