8.2 Statistical mixtures
Let us start with probability distributions over state vectors.
Suppose Alice prepares a quantum system in the state $|\psi\rangle$ and hands it over to Bob, who subsequently measures observable $M$. The average value of Bob's measurement is
\[
\langle M\rangle = \langle\psi|M|\psi\rangle = \operatorname{tr}\big(M\,|\psi\rangle\langle\psi|\big).
\]
This way of expressing the average value makes a clear separation between the contributions from the state preparation and from the choice of the measurement. We have two operators inside the trace: $|\psi\rangle\langle\psi|$, which describes the state that was prepared, and $M$, which describes the measurement that was chosen.
Now, suppose Alice prepares the quantum system in one of the (normalised, but not necessarily orthogonal) states $|\psi_1\rangle,\ldots,|\psi_m\rangle$, choosing the state $|\psi_i\rangle$ with probability $p_i$. The average value of Bob's measurement is then
\[
\langle M\rangle = \sum_i p_i\,\langle\psi_i|M|\psi_i\rangle = \operatorname{tr}\Big(M\sum_i p_i\,|\psi_i\rangle\langle\psi_i|\Big) = \operatorname{tr}(M\rho),
\]
where $\rho \coloneqq \sum_i p_i\,|\psi_i\rangle\langle\psi_i|$ is the density operator describing this statistical mixture of states.
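As a small numerical sketch of the two equivalent expressions for the average value, $\sum_i p_i\langle\psi_i|M|\psi_i\rangle$ versus $\operatorname{tr}(M\rho)$. The observable, states, and probabilities below are illustrative choices, not taken from the text:

```python
import numpy as np

# Two (normalised, not orthogonal) preparation states as column vectors.
ket0 = np.array([[1.0], [0.0]])                  # |0>
ketplus = np.array([[1.0], [1.0]]) / np.sqrt(2)  # |+>

M = np.array([[1.0, 0.0], [0.0, -1.0]])          # observable (here: Pauli Z)

states = [ket0, ketplus]
probs = [0.3, 0.7]                               # hypothetical preparation probabilities

# Average computed state-by-state: sum_i p_i <psi_i|M|psi_i>
avg_ensemble = sum(p * (psi.conj().T @ M @ psi).item()
                   for p, psi in zip(probs, states))

# The same average from the density operator: tr(M rho),
# with rho = sum_i p_i |psi_i><psi_i|
rho = sum(p * (psi @ psi.conj().T) for p, psi in zip(probs, states))
avg_density = np.trace(M @ rho)

print(np.isclose(avg_ensemble, avg_density))  # the two expressions agree
```

Note that all information about the preparation that matters for the average ends up packaged into the single matrix `rho`.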
It is important to note that a mixture of states is very different from a superposition of states: a superposition always yields a definite state vector, whereas a mixture does not, and so must be described by a density operator.
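A minimal numerical illustration of this difference, comparing the equal superposition $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ with an equal statistical mixture of $|0\rangle$ and $|1\rangle$; the purity $\operatorname{tr}(\rho^2)$, used below as a diagnostic, is a standard quantity not introduced in the text:

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# Superposition: a definite state vector, hence a pure-state density matrix.
ketplus = (ket0 + ket1) / np.sqrt(2)
rho_pure = ketplus @ ketplus.conj().T   # [[1/2, 1/2], [1/2, 1/2]]

# Equal mixture of |0> and |1>: no definite state vector.
rho_mixed = 0.5 * (ket0 @ ket0.conj().T) + 0.5 * (ket1 @ ket1.conj().T)
# rho_mixed = [[1/2, 0], [0, 1/2]]

# The off-diagonal entries differ, and so does the purity tr(rho^2):
print(np.trace(rho_pure @ rho_pure).item())    # 1.0 for a pure state
print(np.trace(rho_mixed @ rho_mixed).item())  # 0.5 for this mixed state
```

The two density matrices are genuinely different, so the superposition and the mixture lead to different measurement statistics.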
Let’s be extra clear about this distinction between superpositions and statistical mixtures.
If Alice had prepared the system in a superposition of the states $|\psi_i\rangle$, then she would have handed Bob a single, definite state vector. In a statistical mixture, by contrast, Bob does not know which of the states $|\psi_i\rangle$ Alice actually prepared. What Bob does know is the ensemble of states $|\psi_i\rangle$ together with their probabilities $p_i$, and this knowledge is summarised by the density operator $\rho = \sum_i p_i\,|\psi_i\rangle\langle\psi_i|$. Once we have $\rho$, we can compute the average value of any observable, so the density operator captures everything about the preparation that is relevant for statistical predictions. To see how different preparations can lead to the same density operator, consider the following three scenarios.
- Alice flips a fair coin. If the result is heads then she prepares the qubit in the state $|0\rangle$, and if the result is tails then she prepares the qubit in the state $|1\rangle$. She gives Bob the qubit without revealing the result of the coin-flip. Bob's knowledge of the qubit is described by the density matrix
\[
\frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}.
\]

- Alice flips a fair coin. If the result is heads then she prepares the qubit in the state $|+\rangle \coloneqq \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, and if the result is tails then she prepares the qubit in the state $|-\rangle \coloneqq \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$. Bob's knowledge of the qubit is now described by the density matrix
\[
\begin{aligned}
\frac{1}{2}|+\rangle\langle+| + \frac{1}{2}|-\rangle\langle-|
&= \frac{1}{2}\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}
 + \frac{1}{2}\begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix} \\
&= \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}.
\end{aligned}
\]

- Alice flips a fair coin, having already picked an arbitrary pair of orthonormal states $|u_1\rangle$ and $|u_2\rangle$. If the result is heads then she prepares the qubit in the state $|u_1\rangle$, and if the result is tails then she prepares the qubit in the state $|u_2\rangle$. Since any two orthonormal states of a qubit form a complete basis, the mixture $\frac{1}{2}|u_1\rangle\langle u_1| + \frac{1}{2}|u_2\rangle\langle u_2|$ gives $\frac{1}{2}\mathbf{1}$.
As you can see, these three different preparations yield precisely the same density matrix and are thus statistically indistinguishable. In general, two different mixtures can be distinguished (in a statistical, experimental sense) if and only if they yield different density matrices. In fact, finding the optimal way of distinguishing quantum states with different density operators is still an active area of research.
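The three preparations above can be checked numerically; the particular orthonormal pair $|u_1\rangle, |u_2\rangle$ below is an arbitrary illustrative choice (a rotation of the computational basis):

```python
import numpy as np

def proj(psi):
    """Projector |psi><psi| for a normalised column vector."""
    return psi @ psi.conj().T

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
ketm = (ket0 - ket1) / np.sqrt(2)   # |->

# An arbitrary orthonormal pair |u1>, |u2> (rotation by 0.3 radians).
t = 0.3
u1 = np.array([[np.cos(t)], [np.sin(t)]])
u2 = np.array([[-np.sin(t)], [np.cos(t)]])

rho1 = 0.5 * proj(ket0) + 0.5 * proj(ket1)
rho2 = 0.5 * proj(ketp) + 0.5 * proj(ketm)
rho3 = 0.5 * proj(u1) + 0.5 * proj(u2)

# All three mixtures equal (1/2) * identity, and so no measurement
# statistics can tell the three preparations apart.
print(np.allclose(rho1, 0.5 * np.eye(2)),
      np.allclose(rho2, rho1),
      np.allclose(rho3, rho1))   # True True True
```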
Footnotes:

- For brevity, we often simply say "probability distribution" to mean "a finite set of non-negative real numbers $p_k$ such that $\sum_k p_k = 1$".
- If $M$ is one of the orthogonal projectors $P_k$ describing the measurement, then the average $\langle P_k\rangle$ is the probability of the outcome $k$ associated with this projector.
- A pure state can be seen as a special case of a mixed state, in which all but one of the probabilities $p_i$ equal zero. So by talking about mixed states, we're still able to talk about everything that we've already seen up to this point.
- This description is not one that we have seen before: it's not a linear combination of kets, but instead a linear combination of projectors!