1.10 Remarks and exercises

1.10.1 A historical remark

Back in 1926, Max Born simply postulated the connection between amplitudes and probabilities, but did not get it quite right on his first attempt. In the original paper^{20} proposing the probability interpretation of the state vector (wavefunction) he wrote:

… If one translates this result into terms of particles only one interpretation is possible. \Theta_{\eta,\tau,m}(\alpha,\beta,\gamma) [the wavefunction for the particular problem he is considering] gives the probability^* for the electron arriving from the z direction to be thrown out into the direction designated by the angles \alpha,\beta,\gamma

^* Addition in proof: More careful considerations show that the probability is proportional to the square of the quantity \Theta_{\eta,\tau,m}(\alpha,\beta,\gamma).

1.10.2 Modifying the Born rule

Suppose that we modified the Born rule, so that probabilities were given by the absolute values of amplitudes raised to the power p (for some p not necessarily equal to 2). Then admissible physical evolutions would still have to preserve the normalisation of probability: mathematically speaking, they would have to be isometries of the p-norm.

Recall that the p-norm of a vector v, with components v_1, v_2,\ldots, v_n, is defined as \left(|v_1|^p + |v_2|^p + \ldots + |v_n|^p\right)^{1/p}. It is clear that any permutation of the vector components and any multiplication by phase factors (i.e. unit complex numbers) will leave the p-norm unchanged. It turns out that these complex permutations are the only isometries, except for one special case! For p=2, the isometries are the unitary operations, which form a continuous group; in all other cases we are restricted to discrete permutations. We do not have to go into the details of the proof, since the essence of this result can be seen in Figure 1.7.

Figure 1.7: The unit spheres in the p-norm for p=1,2,42,\infty.

In particular, probability-preserving operations must map the unit sphere to itself. As we can see in Figure 1.7, the 2-norm is special because of its rotational invariance — the probability measure picks out no preferred basis in the space of state vectors. Moreover, it respects unitary operations and does not restrict them in any way. If the admissible physical evolutions were restricted to discrete symmetries, e.g. permutations, then there would be no continuity, and no concept of “time” as we know it.
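
If you would like to see the geometry of Figure 1.7 for yourself, here is a minimal sketch that draws the two-dimensional unit “spheres” of the p-norm for a few values of p. It is written in Python and assumes numpy and matplotlib are available; none of this is part of the text.

```python
import numpy as np
import matplotlib.pyplot as plt

# Draw the set of points (x, y) with |x|^p + |y|^p = 1;
# for p = infinity this becomes max(|x|, |y|) = 1.
theta = np.linspace(0, 2 * np.pi, 1000)

for p in [1, 2, 42, np.inf]:
    x, y = np.cos(theta), np.sin(theta)
    if np.isinf(p):
        norm = np.maximum(np.abs(x), np.abs(y))
    else:
        norm = (np.abs(x) ** p + np.abs(y) ** p) ** (1 / p)
    plt.plot(x / norm, y / norm, label=f"p = {p}")

plt.gca().set_aspect("equal")
plt.legend()
plt.title("Unit spheres in the p-norm")
plt.show()
```

Only the p=2 sphere is rotationally invariant; the others have distinguished directions, which is the geometric intuition behind the claim above.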

1.10.3 Complex numbers

Complex numbers have many applications in physics; however, it was not until the advent of quantum theory that their ubiquitous and fundamental role in the description of the actual physical world became so evident. Even today, their profound link with probabilities remains rather mysterious. Mathematically speaking, the set of complex numbers is a field. This is an important algebraic structure used in almost all branches of mathematics. You do not have to know much about algebraic fields to follow these lectures, but still, you should know the basics. Look them up.

  1. The set of rational numbers and the set of real numbers are both fields, but the set of integers is not. Why?
  2. What does it mean to say that the field of complex numbers is algebraically closed?
  3. Evaluate each of the following quantities: 1+e^{-i\pi}, \quad |1+i|, \quad (1+i)^{42}, \quad \sqrt{i}, \quad 2^i, \quad i^i.
  4. Here is a simple proof that +1=-1: 1=\sqrt{1}=\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}=i^2=-1. What is wrong with it?

1.10.4 Many computational paths

A quantum computer starts calculations in some initial state, then follows n different computational paths which lead to the final output. The computational paths are followed with probability amplitudes \frac{1}{\sqrt n}e^{i k \varphi}, where \varphi is a fixed angle 0\leqslant \varphi <2\pi and k=0,1,\ldots,n-1. Show that the probability of generating the output is^{21} \frac{1}{n}\left\vert \frac{1-e^{i n\varphi}}{1-e^{i\varphi}} \right\vert^2 = \frac{1}{n} \frac{\sin^2 (n\frac{\varphi}{2})}{\sin^2 (\frac{\varphi}{2})} for 0<\varphi<2\pi, and n for \varphi=0. Plot the probability as a function of \varphi.
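
To produce the requested plot, and to check the closed form against a direct sum of the amplitudes as stated above, here is a minimal sketch. Python with numpy and matplotlib is assumed, and the choice n = 8 is purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 8  # illustrative number of computational paths
phi = np.linspace(1e-6, 2 * np.pi - 1e-6, 1000)

# Direct sum of the n amplitudes (1/sqrt(n)) * exp(i k phi), then the squared modulus.
amplitude = sum(np.exp(1j * k * phi) for k in range(n)) / np.sqrt(n)
prob_direct = np.abs(amplitude) ** 2

# Closed form: (1/n) * sin^2(n phi / 2) / sin^2(phi / 2).
prob_closed = np.sin(n * phi / 2) ** 2 / (n * np.sin(phi / 2) ** 2)

plt.plot(phi, prob_direct, label="direct sum of amplitudes")
plt.plot(phi, prob_closed, "--", label="closed form")
plt.xlabel(r"$\varphi$")
plt.ylabel("probability of generating the output")
plt.legend()
plt.show()
```

The two curves coincide, and the plot shows the sharp constructive-interference peak near \varphi = 0.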

1.10.5 Distant photon emitters

Imagine two distant stars, A and B, that emit identical photons. If you point a single detector towards them you will register a click every now and then, but you never know which star the photon came from. Now prepare two detectors and point them towards the stars. Assume the photons arrive with the probability amplitudes specified in Figure 1.8. Every now and then you will register a coincidence: the two detectors will fire.

  1. Calculate the probability of a coincidence.
  2. Now, assume that z\approx \frac{1}{r}e^{i\frac{2r\pi}{\lambda}}, where r is the distance between detectors and the stars. How can we use this to measure r?

Figure 1.8: Two photon detectors pointing at two stars, with the probabilities of detection labelling the arrows.

1.10.6 Physics against logic?

Now that we have poked our heads into the quantum world, let us see how quantum interference challenges conventional logic and leads to qualitatively different computations. Consider the following task (which we will return to a few more times in later chapters): design a logic gate that operates on a single bit such that, when it is followed by another, identical, logic gate, the output is always the negation of the input. Let us call this logic gate the square root of \texttt{NOT}, or \sqrt{\texttt{NOT}}. A simple check, such as an attempt to construct a truth table, should persuade you that there is no such operation in logic. It may seem reasonable to argue that since there is no such operation in logic, \sqrt{\texttt{NOT}} is impossible. Think again!

Figure 1.9 shows a simple computation, two identical computational steps performed on two states labelled as 0 and 1, i.e. on one bit. An interplay of constructive and destructive interference makes some transitions impossible and the result is the logical \texttt{NOT}. Thus, quantum theory declares, the square root of \texttt{NOT} is possible. And it does exist! Experimental physicists routinely construct this and many other “impossible” gates in their laboratories. They are the building blocks of a quantum computer. Quantum theory explains the behaviour of \sqrt{\texttt{NOT}}, hence, reassured by the physical experiments that corroborate this theory, logicians are now entitled to propose a new logical operation \sqrt{\texttt{NOT}}. Why? Because a faithful physical model for it exists in nature.

Write a 2\times 2 matrix which describes the \sqrt{\texttt{NOT}} operation. Is there just one such matrix? Suppose you are given a supply of Hadamard and phase gates with tuneable phase settings. How would you construct the \sqrt{\texttt{NOT}} gate?
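
Once you have written down a candidate matrix you can check it numerically. Here is a minimal sketch in Python (numpy assumed); the matrix used below is just one well-known candidate, not necessarily the one you will find, and the exercise asks whether it is unique.

```python
import numpy as np

# One well-known candidate for sqrt(NOT); the exercise asks whether it is unique.
sqrt_not = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])

NOT = np.array([[0, 1],
                [1, 0]])

# Applying the gate twice should give NOT ...
assert np.allclose(sqrt_not @ sqrt_not, NOT)
# ... and the gate itself must be unitary, so that probabilities are preserved.
assert np.allclose(sqrt_not @ sqrt_not.conj().T, np.eye(2))
print("sqrt(NOT) candidate verified")
```

The second assertion checks unitarity, which any physically admissible gate must satisfy.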


Figure 1.9: A computation that, when repeated, gives exactly \texttt{NOT}. An unlabelled line means that it has probability 1, and the lack of a line corresponds to having probability 0.

1.10.7 Quantum bomb tester

You have been drafted by the government to help in the demining effort in a former war-zone.^{22} In particular, retreating forces have left very sensitive bombs in some of the sealed rooms. The bombs are configured such that if even one photon of light is absorbed by the fuse (i.e. if someone looks into the room), the bomb will go off. Each room has an input and output port which can be hooked up to external devices. An empty room will let light go from the input to the output ports unaffected, whilst a room with a bomb will explode if light is shone into the input port and the bomb absorbs even just one photon — see Figure 1.10.


Figure 1.10: Left — the passage of a photon through an empty room. Right — the passage of a photon through a room containing a bomb.

Your task is to find a way of determining whether a room has a bomb in it without blowing it up, so that specialised (limited and expensive) equipment can be devoted to defusing that particular room. You would like to know with certainty whether a particular room has a bomb in it.

  1. To start with, consider the setup in Figure 1.11, where the input and output ports are hooked up in the lower arm of a Mach–Zehnder interferometer.^{23}
    1. Assume an empty room. Send a photon to input port |0\rangle. Which detector, at the output port, will register the photon?
    2. Now assume that the room does contain a bomb. Again, send a photon to input port |0\rangle. Which detector will register the photon and with which probability?
    3. Design a scheme that allows you — at least some of the time — to decide whether a room has a bomb in it without blowing it up. If you iterate the procedure, what is its overall success rate for the detection of a bomb without blowing it up?

Figure 1.11: The Mach–Zehnder interferometer hooked up to the bomb-testing room.

  2. Assume that the two beam splitters in the interferometer are different. Say the first beam splitter reflects incoming light with probability r and transmits with probability t=1-r, and the second one transmits with probability r and reflects with probability t. Would the new setup improve the overall success rate of the detection of a bomb without blowing it up? (A numerical sketch that lets you explore both this and the previous setup appears at the end of this exercise.)

  3. There exists a scheme, involving many beam splitters and something called the quantum Zeno effect, such that the success rate for detecting a bomb without blowing it up approaches 100%. Try to work it out, or find a solution on the internet.
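
Here is a minimal numerical sketch of the single-room setup from parts 1 and 2 above, written in Python (numpy assumed). It uses an assumed beam-splitter convention, with reflection amplitude i\sqrt{r} and transmission amplitude \sqrt{1-r}, which may differ from the convention adopted in Chapter 3, so the labelling of the two detectors below is not authoritative; the room is taken to sit in the lower arm.

```python
import numpy as np

def beam_splitter(r):
    """Beam splitter that reflects with probability r (assumed convention)."""
    t = 1 - r
    return np.array([[np.sqrt(t), 1j * np.sqrt(r)],
                     [1j * np.sqrt(r), np.sqrt(t)]])

def run(r, bomb):
    """First beam splitter reflects with probability r, the second with 1 - r."""
    state = beam_splitter(r) @ np.array([1.0, 0.0])  # photon enters input port |0>
    p_explode = 0.0
    if bomb:
        p_explode = abs(state[1]) ** 2  # photon absorbed in the lower arm: bang
        state[1] = 0.0                  # otherwise it is certainly in the upper arm
        # (the state is left un-normalised so the numbers below are joint probabilities)
    state = beam_splitter(1 - r) @ state  # second beam splitter, r and t swapped
    return p_explode, abs(state[0]) ** 2, abs(state[1]) ** 2

for r in (0.5, 0.3):
    for bomb in (False, True):
        p_explode, p0, p1 = run(r, bomb)
        print(f"r={r}, bomb={bomb}: explode={p_explode:.2f}, D0={p0:.2f}, D1={p1:.2f}")
```

Running it with r = 0.5 reproduces the statistics of the basic setup; varying r lets you explore the asymmetric setup of part 2.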

1.10.8 More time, more memory

A quantum machine has N perfectly distinguishable configurations. What is the maximum number of computational paths connecting a specific input with a specific output after k steps of the machine?

Suppose you are using your laptop to add together amplitudes pertaining to each of the paths. As k and N increase you may need more time and more memory to complete the task. How do the execution time and the memory requirements grow with k and N? In particular, will you need more time, or more memory, or both?
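
To get a feel for the trade-off, here is a minimal sketch in Python (numpy assumed) of a toy model, not taken from the text, in which one step of the machine is an N\times N matrix U of transition amplitudes. It computes the same input-to-output amplitude in two ways: by summing over every computational path explicitly, and by multiplying the step matrix k times.

```python
import itertools
import numpy as np

def amplitude_by_paths(U, k, inp, out):
    """Sum over all N^(k-1) paths explicitly: very little memory, exponential time."""
    N = U.shape[0]
    total = 0j
    for mid in itertools.product(range(N), repeat=k - 1):
        path = (inp, *mid, out)
        amp = 1 + 0j
        for a, b in zip(path, path[1:]):
            amp *= U[b, a]  # amplitude of one step of this path
        total += amp
    return total

def amplitude_by_matrices(U, k, inp, out):
    """Multiply the step matrix k times: memory ~ N^2, time polynomial in N and k."""
    return np.linalg.matrix_power(U, k)[out, inp]

rng = np.random.default_rng(0)
N, k = 3, 5
U = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # arbitrary toy amplitudes

a1 = amplitude_by_paths(U, k, 0, 1)
a2 = amplitude_by_matrices(U, k, 0, 1)
print(a1, a2, np.isclose(a1, a2))
```

Comparing the two methods makes the time/memory trade-off in the exercise concrete.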

1.10.9 Quantum Turing machines

The classical theory of computation is essentially the theory of the universal Turing machine — the most popular mathematical model of classical computation. Its significance rests on the fact that, given a large but finite amount of time, the universal Turing machine is capable of any computation that can be done by any modern classical digital computer, no matter how powerful. The concept of a Turing machine may be modified to incorporate quantum computation, but we will not follow this path. It is much easier to explain the essence of quantum computation by talking about quantum logic gates and quantum Boolean networks, or circuits. The two approaches are computationally equivalent, even though certain theoretical concepts, e.g. in computational complexity, are easier to formulate precisely using the Turing machine model. The main advantage of quantum circuits is that they relate far more directly to proposed experimental realisations of quantum computation.

1.10.10 Polynomial = good; exponential = bad

In computational complexity the basic distinction is between polynomial and exponential algorithms. Polynomial growth is good and exponential growth is bad, especially if you have to pay for it. There is an old story about the legendary inventor of chess who asked the Persian king to be paid in grains of rice: one grain on the first square of the chess board, two on the second, four on the third, and so on, doubling on each of the 64 squares until the board was full. The last square alone would then hold 2^{63}=9,223,372,036,854,775,808 grains of rice, more than has ever been harvested on planet Earth, to which we must add the grains on all the previous squares, making the total about twice as large. If we placed that many grains in an unbroken line, we would reach the nearest star Alpha Centauri, our closest celestial neighbour beyond the solar system, about 4.4 light-years away.^{24} The moral of the story: if whatever you do requires an exponential use of resources, you are in trouble.

1.10.11 Big O

In order to make qualitative distinctions between how different functions grow we will often use the asymptotic big-O notation. For example, suppose an algorithm running on input of size n takes an^2+bn+c elementary steps, for some positive constants a, b and c. These constants depend mainly on the details of the implementation and the choice of elementary steps. What we really care about is that, for large n, the whole expression is dominated by its quadratic term. We then say that the running time of this algorithm grows as n^2, and we write it as O(n^2), ignoring the less significant terms and the constant coefficients.

More precisely, let f(n) and g(n) be functions from positive integers to positive reals. You may think of f(n) and g(n) as the running times of two algorithms on inputs of size n. We say that f=O(g),^{25} which means that f grows no faster than g, if there is a constant c>0 such that f(n)\leqslant c g(n) for all sufficiently large values of n. Essentially, f=O(g) is a very loose analogue of f \leqslant g. In addition to the big-O notation, computer scientists often use \Omega for lower bounds: f=\Omega (g) means g=O(f). Again, this is a very loose analogue of f \geqslant g.
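
For instance (an illustrative example, not taken from the text): 3n^2+10n \leqslant 4n^2 whenever n\geqslant 10, so 3n^2+10n = O(n^2), with the constant c=4 doing the job.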

  1. When we say that f(n)=O(\log n), why don’t we have to specify the base of the logarithm?

  2. Let f(n)=5n^3+1000n+50. Is f(n)=O(n^3), or O(n^4), or both?

  3. Which of the following statements are true?

    1. n^k=O(2^n) for any constant k
    2. n!=O(n^n)
    3. if f_1=O(g) and f_2=O(g), then f_1+f_2=O(g).

1.10.12 Imperfect prime tester

There exists a randomised algorithm which tests whether a given number N is prime.^{26} The algorithm always returns \texttt{yes} when N is prime, and the probability it returns \texttt{yes} when N is not prime is \epsilon, where \epsilon is never greater than a half (independently, each time you run the algorithm). You run this algorithm (for the same N) r times and each time the algorithm returns \texttt{yes}. What is the probability that N is not prime?

1.10.13 Imperfect decision maker

Suppose a randomised algorithm solves a decision problem, returning \texttt{yes} or \texttt{no} answers. It gets the answer wrong with a probability not greater than \frac12-\delta, where \delta>0 is a constant.^{27} If we are willing to accept a probability of error no larger than \epsilon, then it suffices to run the computation r times, where r=O(\log 1/\epsilon).

  1. If we perform this computation r times, how many possible sequences of outcomes are there?
  2. Give a bound on the probability of any particular sequence with w wrong answers.
  3. If we look at the set of r outcomes, we will determine the final outcome by performing a majority vote. This can only go wrong if w>r/2. Give an upper bound on the probability of any single sequence that would lead us to the wrong conclusion.
  4. Using the bound 1-x\leqslant e^{-x}, conclude that the probability of our coming to the wrong conclusion is upper bounded by e^{-2r\delta^2}.
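
If you want to sanity-check the final bound numerically, here is a minimal sketch (Python, numpy assumed). It simulates a worst-case decision maker that errs with probability exactly \frac12-\delta and compares the empirical failure rate of the majority vote with e^{-2r\delta^2}.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, trials = 0.1, 200_000

for r in (11, 51, 101):
    # Each run consists of r independent answers, each wrong with probability 1/2 - delta.
    wrong = rng.random((trials, r)) < 0.5 - delta
    # The majority vote fails when more than half of the answers are wrong.
    failures = (wrong.sum(axis=1) > r / 2).mean()
    print(f"r = {r:3d}: empirical failure rate = {failures:.4f}, "
          f"bound exp(-2 r delta^2) = {np.exp(-2 * r * delta**2):.4f}")
```

The empirical failure rate stays below the bound, and both decrease rapidly with r.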

  20. Max Born, “Zur Quantenmechanik der Stoßvorgänge”, Zeitschrift für Physik 37 (1926), 863–867.

  21. 1+z+z^2+\ldots + z^n= \frac{1-z^{n+1}}{1-z} for z\neq 1.

  22. This is a slightly modified version of a bomb-testing problem described by Avshalom Elitzur and Lev Vaidman in “Quantum mechanical interaction-free measurements”, Foundations of Physics 23 (1993), 987–997.

  23. Read about Mach–Zehnder interferometers in Chapter 3.

  24. One light year (the distance that light travels through a vacuum in one year) is 9.4607\times10^{15} metres.

  25. f=O(g) is pronounced as “f is big-oh of g”.

  26. Primality used to be given as the classic example of a problem in \texttt{BPP} but not known to be in \texttt{P}. However, in 2002 a deterministic polynomial-time test for primality was proposed by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. Thus, since 2002, primality has been in \texttt{P}.

  27. This result is known as the Chernoff bound.