3.6 Some quantum dynamics

We will finish this chapter with a short aside on some more fundamental quantum theory. Although this isn’t our main focus — we will happily black box this material away, safe in the knowledge that some scientists in a lab somewhere have already packaged everything up into nice quantum logic gates that we can use — it is a nice opportunity to talk about other aspects of the subject that might be of interest.

The time evolution of a quantum state is a unitary process which is generated by a Hermitian operator called the Hamiltonian, which we denote by \hat{H}. The Hamiltonian contains a complete specification of all interactions within the system under consideration — in general, it may change over time. In an isolated system, the state vector |\psi(t)\rangle changes smoothly in time according to the Schrödinger equation: \frac{\mathrm{d}}{\mathrm{d}t} |\psi(t)\rangle =-\frac{i}{\hbar} \hat{H}|\psi(t)\rangle. In the same way that Newton’s second law describes certain future behaviour of a classical system given some initial knowledge, Schrödinger’s equation describes the future behaviour of a quantum system given some initial knowledge.
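The Schrödinger equation is just a (matrix-valued) first-order ODE, so we can integrate it numerically. Here is a minimal sketch (assuming NumPy is available, working in units where \hbar=1, and using the Pauli-X matrix as a toy Hamiltonian): we step the equation forward with a standard fourth-order Runge–Kutta scheme and check that the evolution preserves the norm of the state, as a unitary process should.

```python
import numpy as np

hbar = 1.0  # work in units where the reduced Planck constant is 1
H = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)  # a toy Hamiltonian: the Pauli-X matrix

def rhs(psi):
    """Right-hand side of the Schrodinger equation: d|psi>/dt = -(i/hbar) H |psi>."""
    return (-1j / hbar) * (H @ psi)

def evolve(psi, t, steps=1000):
    """Integrate the Schrodinger equation with classical fourth-order Runge-Kutta."""
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5 * dt * k1)
        k3 = rhs(psi + 0.5 * dt * k2)
        k4 = rhs(psi + dt * k3)
        psi = psi + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return psi

psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the state |0>
t = 1.0
psi_t = evolve(psi0, t)

# Unitarity: the norm of the state is preserved by the evolution.
print(abs(np.linalg.norm(psi_t) - 1.0) < 1e-8)
# For H = X the exact answer is |<0|psi(t)>|^2 = cos^2(t).
print(abs(abs(psi_t[0]) ** 2 - np.cos(t) ** 2) < 1e-8)
```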

The first approach towards classical mechanics that you might meet is the Newtonian framework, where we talk about the equations that are satisfied by forces. It is Newton’s second law that we usually apply the most in order to describe the behaviour of classical systems, and it is usually stated as \mathbf{F}=m\mathbf{a}, where m is mass and \mathbf{a} is acceleration. But really the notion of “force” is not a fundamental one — a slightly more instructive way of writing Newton’s second law for a system whose mass can change over time is as \mathbf{F}=\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}, where \mathbf{p}=m\mathbf{v} is (linear) momentum: the product of mass (a scalar) with velocity (a vector).

Instead of talking about forces within a system, we can instead describe things entirely in terms of either position and velocity (where the latter is just the time derivative of the former) — using coordinates (\mathbf{q},\dot{\mathbf{q}}), where \mathbf{q} (confusingly) stands for “position”, and we write \dot{\mathbf{q}} to mean \frac{\mathrm{d}}{\mathrm{d}t}\mathbf{q} — or position and momentum — using coordinates (\mathbf{q},\mathbf{p}), where (again, confusingly) \mathbf{p} stands for momentum (maybe it’s like “pneumatic”, and we should call it “pmomentum”).

If we take either of these two approaches, then we have a suitable replacement for Newton’s second law:

  1. The first approach results in Lagrangian mechanics, where we have some function \mathcal{L}(t,\mathbf{q}(t),\dot{\mathbf{q}}(t)) called the Lagrangian, and study the Euler–Lagrange equations \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial\mathcal{L}}{\partial \dot{\mathbf{q}}}\right) = \frac{\partial\mathcal{L}}{\partial\mathbf{q}} which are second-order differential equations.
  2. The second approach results in Hamiltonian mechanics, where we have some function \mathcal{H}(t,\mathbf{p}(t),\mathbf{q}(t)) called the Hamiltonian, and study Hamilton’s equations \begin{aligned} \frac{\mathrm{d}\mathbf{q}}{\mathrm{d}t} &= \frac{\partial\mathcal{H}}{\partial\mathbf{p}} \\\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} &= -\frac{\partial\mathcal{H}}{\partial\mathbf{q}} \end{aligned} which is a pair of first-order differential equations.
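To make Hamilton’s equations concrete, here is a minimal numerical sketch (assuming NumPy, and taking a unit-mass harmonic oscillator as the example system): we integrate \dot{q}=\partial\mathcal{H}/\partial p and \dot{p}=-\partial\mathcal{H}/\partial q with a simple leapfrog scheme, and check that the trajectory returns to its starting point after one full period while conserving the energy \mathcal{H}.

```python
import numpy as np

# Hamiltonian of a unit-mass harmonic oscillator: H(q, p) = p^2/2 + q^2/2,
# so Hamilton's equations read dq/dt = dH/dp = p and dp/dt = -dH/dq = -q.
def hamilton_step(q, p, dt):
    """One leapfrog step: a simple symplectic integrator for Hamilton's equations."""
    p = p - 0.5 * dt * q  # half kick: dp/dt = -q
    q = q + dt * p        # drift:     dq/dt = p
    p = p - 0.5 * dt * q  # half kick
    return q, p

q, p = 1.0, 0.0  # initial condition: released from rest at q = 1
dt, steps = 2 * np.pi / 10000, 10000
for _ in range(steps):
    q, p = hamilton_step(q, p, dt)

# One full period later the exact solution q(t) = cos(t) returns to q = 1, p = 0.
print(abs(q - 1.0) < 1e-4, abs(p) < 1e-4)
# The energy H = (p^2 + q^2)/2 is (almost exactly) conserved along the way.
print(abs(0.5 * (p ** 2 + q ** 2) - 0.5) < 1e-6)
```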

These two important functions, the Lagrangian and the Hamiltonian, are built from the energies of the system: the former is the difference of the kinetic and potential energies; the latter is their sum, i.e. the total energy.

There are many situations where one framework is more useful than the other, but in quantum physics we normally find the Hamiltonian approach easier to work with than the Lagrangian one, since momentum is a conserved quantity, whereas velocity is not. In fact, the Hamiltonian approach is hidden all over the place: the position and momentum operators in quantum physics are truly fundamental, and will show up again when we talk about uncertainty principles in Section 4.6.

Here \hbar is a very (very) small number known as the reduced Planck constant. Physicists often pick a unit system such that \hbar is equal to 1, to make calculations simpler. But in SI units, 2\pi\hbar — the Planck constant h itself — is exactly¹ equal to 6.62607015\times10^{-34} joules per hertz.

As a historical note, the Planck constant has its roots right in the very birth of quantum physics, since it shows up in the equation for the energy of a photon. More generally, in 1923 de Broglie postulated that the product of the momentum and wavelength of any particle would be 2\pi\hbar. Even before this, it turned up in 1905 when Einstein stated his support for Planck’s idea that light is not just a wave, but simultaneously consists of tiny packets of energy, called quanta (whence the name quantum physics!), which we now call photons.² We will see the Planck constant turn up again when we talk about uncertainty principles in Section 4.6.

Back to quantum dynamics. For time-independent Hamiltonians \hat{H}(t)=\hat{H}, the formal solution of the Schrödinger equation is given by |\psi(t)\rangle = e^{-\frac{i}{\hbar}\hat{H}t}|\psi(0)\rangle. Note that the solution thus obtained is separable: it is the product of the purely time-dependent operator U(t)=e^{-\frac{i}{\hbar}\hat{H}t} with the time-independent state |\psi(0)\rangle. If |\psi(0)\rangle is an eigenstate of \hat{H} with energy E, then U(t) acts on it simply as the phase factor e^{-\frac{i}{\hbar}Et}, and we know that a global phase does not affect the resulting probabilities, since |e^{-\frac{i}{\hbar}Et}|^2=1. This means that the probability of any measurement outcome is constant throughout time — we call such a state stationary, or refer to it as a standing wave.
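A minimal numerical sketch of this closed-form solution (assuming NumPy, with \hbar=1 and a randomly generated Hermitian matrix standing in for \hat{H}): since \hat{H} is Hermitian, we can build e^{-i\hat{H}t} from its spectral decomposition, and we can check both that the result is unitary and that an eigenstate of \hat{H} only picks up a global phase, leaving all outcome probabilities unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = A + A.conj().T  # a random 3x3 Hermitian "Hamiltonian" (hbar = 1)

def U(t):
    """Time-evolution operator exp(-iHt), built from the spectral decomposition of H."""
    E, V = np.linalg.eigh(H)  # eigenvalues (energies) and orthonormal eigenvectors
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

t = 0.7
# U(t) is unitary ...
print(np.allclose(U(t).conj().T @ U(t), np.eye(3)))

# ... and an eigenstate of H is stationary: it only acquires the phase e^{-iEt},
# so the outcome probabilities |psi_i|^2 do not change in time.
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]
psi_t = U(t) @ psi0
print(np.allclose(psi_t, np.exp(-1j * E[0] * t) * psi0))
print(np.allclose(np.abs(psi_t) ** 2, np.abs(psi0) ** 2))
```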

We will not delve into a proper study of the Schrödinger equation — this is the subject of entire books already, and deserves a lengthy treatment — but it is nice to mention at least one worked example (although we will skip almost all of the details!), since its applications are commonplace in day-to-day life.

In the time-independent case, the Schrödinger equation can simply be written as \hat{H}|\psi\rangle=E|\psi\rangle, where E is the total energy of the system. When written like this, we can sneak a glimpse at what the Hamiltonian is really all about: it is some operator whose eigenstates are solutions of the Schrödinger equation, and whose eigenvalues are the corresponding energy levels.

One particularly instructive situation to consider is that of a particle in a box: we have some 1-dimensional region of space in which a particle is free to move around, but outside of this finite segment there is infinite potential energy, restricting the particle from moving beyond this region. It turns out that, in this case, the Hamiltonian (inside the box) is given by \hat{H} = -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2} and the general solution to the resulting Schrödinger equation is \psi(x) = C\sin(kx)+D\cos(kx); the boundary conditions (the wavefunction must vanish at the walls of the box) then force D=0 and k=n\pi/L for some positive integer n, where L is the length of the potential-free region. This implies (after some algebra) that the energy E=E_n of the solution with k=n\pi/L is equal to E_n = \frac{(2\pi n\hbar)^2}{8mL^2}.
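We can see this discrete spectrum appear numerically. The sketch below (assuming NumPy, in units where \hbar=m=L=1) discretises the box Hamiltonian as a finite-difference matrix with the wavefunction pinned to zero at the walls, and compares its lowest eigenvalues against the exact spectrum E_n = n^2\pi^2\hbar^2/(2mL^2), which is the same quantity as (2\pi n\hbar)^2/(8mL^2).

```python
import numpy as np

# Work in units where hbar = m = 1, with a box of length L = 1.
L, N = 1.0, 500
dx = L / (N + 1)

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 on the interior grid points,
# with Dirichlet boundary conditions (psi vanishes at the walls of the box).
main = np.full(N, 2.0)
off = np.full(N - 1, -1.0)
H = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / (2 * dx ** 2)

energies = np.linalg.eigvalsh(H)

# Compare with the exact spectrum E_n = (n pi)^2 / 2 for hbar = m = L = 1.
exact = np.array([(n * np.pi) ** 2 / 2 for n in range(1, 4)])
print(np.round(energies[:3], 3))
print(np.allclose(energies[:3], exact, rtol=1e-3))
```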

What is truly distinctive and important about quantum physics is not really this specific fraction, but the fact that the possible energy levels of the system are purely discrete — energy cannot take just any real value, as is the case in the classical world, but only values within some discrete set \{E_1,E_2,\ldots\}.

But what are the applications of this particle in a box? Well, this phenomenon of a system having a discrete energy spectrum when confined to a small enough space is known as quantum confinement, and quantum well lasers are laser diodes whose active region is small enough for this confinement to occur. Such lasers are arguably the most important component of fiber optic communications, which form the underlying foundations of the internet itself.

Before moving on to understand the relevance of this to what we have already been discussing, let us take a moment to see why we might have expected to stumble across such a solution as e^{-\frac{i}{\hbar}\hat{H}t} (or, from the opposite point of view, how we could derive the Schrödinger equation). We start with state vectors, which we want to evolve according to transition operators — we have already justified why we should think about representing these transitions by matrices (namely because matrices simply package up all the multiplication and addition in the “right” way). But now we want these evolutions to be continuous, whatever that might formally mean.

For a start, this means that we want not only to be able to multiply the matrices that represent these transitions, but also to do the inverse: take any transition and “chop it up” into smaller time chunks, viewing any evolution T as a sequence T_nT_{n-1}\ldots T_1 of evolutions T_i that take place on a shorter time scale. Immediately, then, we want to be able to consider roots (square roots, cube roots, and so on) of our matrices, which means that they must at the very least have complex entries.

But let us try to take this continuity requirement a bit more seriously: say that any transition T is parametrised by a real parameter t, which we will think of as “time”. It makes sense to ask for T(t+t')=T(t)T(t') for any t,t'\in\mathbb{R}, and to say that “at time 0, things are exactly how we found them”, i.e. T(0)=\mathbf{1}. But we know how to solve for such requirements: take T(t)=\exp(tX), where X is an arbitrary complex matrix! This also solves the problem of wanting to take roots, since T(t)^{\frac{1}{n}}=T(t/n), and T(t)^{-1}=T(-t).
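The properties claimed for T(t)=\exp(tX) can be checked directly. Here is a small sketch (assuming NumPy and SciPy, with a randomly generated complex matrix X) verifying the composition law T(t+t')=T(t)T(t'), the identity T(0)=\mathbf{1}, and the “chopping up” into roots and inverses.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # an arbitrary complex matrix

def T(t):
    """The one-parameter family T(t) = exp(tX)."""
    return expm(t * X)

s, t = 0.3, 0.5
# Composition in time is matrix multiplication: T(t + s) = T(t) T(s) ...
print(np.allclose(T(t + s), T(t) @ T(s)))
# ... and at time 0, things are exactly how we found them: T(0) = 1.
print(np.allclose(T(0), np.eye(3)))
# "Chopping an evolution into n chunks" gives an n-th root: T(t/n)^n = T(t).
n = 5
print(np.allclose(np.linalg.matrix_power(T(t / n), n), T(t)))
# Inverses come for free too: T(-t) = T(t)^{-1}.
print(np.allclose(T(-t) @ T(t), np.eye(3)))
```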

Next, as we’ve already mentioned, complex matrices have a polar form — analogous to how any z\in\mathbb{C} can be written as z=re^{i\varphi}, we can write any complex matrix Z as Z=RU, where R is positive semi-definite and U is unitary. In this decomposition, just as for the polar decomposition z=re^{i\varphi}, the R corresponds to “stretching” and the U corresponds to “rotation”. But we don’t want to have to worry about convergence issues, and the idea of “exponential stretching” sounds like it might give us some problems, so let us just consider Z=RU with R=\mathbf{1}, i.e. just unitary matrices. And if we want T(t) to be unitary, then it suffices to take X to be anti-Hermitian.

In summary, from just asking for our evolutions to be continuous and not have any convergence issues, we end up with the conclusion that we are interested in evolutions described by exponentials of anti-Hermitian matrices, i.e. U(t)=\exp(itX) for some Hermitian matrix X.

This correspondence between so-called one-parameter unitary groups — families (U_t)_{t\in\mathbb{R}} of unitary operators (satisfying some analytic property) — and Hermitian operators, given by U_t=e^{itA}, is known as Stone’s theorem (on one-parameter unitary groups).

For example, if we consider the translation operators T_t, which are defined by T_t(\psi)(x)=\psi(x+t), then we have the corresponding Hermitian operator -i\frac{\mathrm{d}}{\mathrm{d}x}, which is known (for good reason) as the momentum operator. In fancy words, this says that 1-dimensional motion is infinitesimally generated by momentum.
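We can watch momentum generate translations numerically. In Fourier space the momentum operator -i\frac{\mathrm{d}}{\mathrm{d}x} acts as multiplication by the wavenumber k, so exponentiating it — i.e. multiplying each Fourier mode by e^{ikt} — should translate a function by t. A minimal sketch (assuming NumPy, on a periodic grid, with a Gaussian bump as the test function):

```python
import numpy as np

# Sample a Gaussian bump on a periodic grid.
N, width = 1024, 20.0
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-((x - 2.0) ** 2))  # a bump centred at x = 2

# The momentum operator -i d/dx acts as multiplication by k in Fourier space,
# so exp(it(-i d/dx) * i) = exp(t d/dx) acts as multiplication by exp(i k t):
t = 3.0
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(1j * k * t))

# exp(t d/dx) psi(x) = psi(x + t): the bump, formerly at x = 2, is now at x = 2 - t = -1.
peak = x[np.argmax(np.abs(psi_shifted))]
print(peak)
```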

Now, to relate this to the earlier parts of this chapter, we note that the Hamiltonian of a qubit can always be written in the form H = E_0\mathbf{1}+\omega(\vec{n}\cdot\vec{\sigma}), where \vec{n} is a unit vector. Ignoring the global phase coming from the E_0\mathbf{1} term (and working in units where \hbar=1), and using the fact that (\vec{n}\cdot\vec{\sigma})^2=\mathbf{1}, we obtain \begin{aligned} U(t) &= e^{-i\omega t \vec n\cdot\vec\sigma} \\&= (\cos\omega t)\mathbf{1}- (i\sin\omega t)\vec{n}\cdot\vec{\sigma} \end{aligned} which is a rotation with angular frequency \omega about the axis defined by the unit vector \vec n.
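This cosine–sine splitting of the exponential is easy to verify numerically. A small sketch (assuming NumPy, with an arbitrarily chosen unit vector \vec{n}): we check that (\vec{n}\cdot\vec{\sigma})^2=\mathbf{1}, then compare the matrix exponential e^{-i\omega t\,\vec{n}\cdot\vec{\sigma}} (built from the spectral decomposition of the Hermitian matrix \vec{n}\cdot\vec{\sigma}) against the closed form.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # an arbitrary unit vector
ndots = n[0] * sx + n[1] * sy + n[2] * sz   # n . sigma

# (n.sigma)^2 = 1, so the exponential series splits into cos and sin parts.
print(np.allclose(ndots @ ndots, np.eye(2)))

omega, t = 1.3, 0.4
# Build exp(-i omega t n.sigma) from the spectral decomposition of n.sigma ...
E, V = np.linalg.eigh(ndots)
U = V @ np.diag(np.exp(-1j * omega * t * E)) @ V.conj().T
# ... and compare it with the closed form cos(omega t) 1 - i sin(omega t) n.sigma.
closed = np.cos(omega * t) * np.eye(2) - 1j * np.sin(omega * t) * ndots
print(np.allclose(U, closed))
```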


  1. The kilogram is now defined in SI in terms of the Planck constant, the speed of light, and the atomic transition frequency of caesium-133.↩︎

  2. The whole history of quantum physics, arguably starting with the black-body problem, accounting for the Rayleigh–Jeans law, and leading on to the discovery of the photoelectric effect, is a wonderful story, but one that we do not have the space to tell here.↩︎