## 0.2 Euclidean vectors and vector spaces

We assume that you are familiar with Euclidean vectors — those arrow-like geometric objects used to represent physical quantities, such as trajectories, velocities, or forces. You know that any two velocities can be added to yield a third, and that multiplying a “velocity vector” by a real number gives another “velocity vector”. So a linear combination of vectors is another vector: if v and w are vectors, and \lambda and \mu are numbers (rational, real, or complex, for example), then \lambda v+\mu w is another vector. Mathematicians have simply taken these properties and defined vectors as anything that we can add and multiply by numbers, as long as everything behaves in a nice enough way. This is basically what the Italian mathematician Giuseppe Peano (1858–1932) did in a chapter of his 1888 book with an impressive title: Calcolo geometrico secondo l’Ausdehnungslehre di H. Grassmann preceduto dalle operazioni della logica deduttiva. Following Peano, we define a vector space as a mathematical structure in which the notion of linear combination “makes sense”.

More formally, a complex vector space is a set V such that, given any two vectors a and b (that is, any two elements of V) and any two complex numbers \alpha and \beta, we can form the linear combination \alpha a+\beta b, which is also a vector in V. There are certain “nice properties” that vector spaces must satisfy. Addition of vectors must be commutative and associative, with an identity (the zero vector, often written as \mathbf{0}) and an inverse -v for each v. Multiplication by complex numbers must obey the two distributive laws: (\alpha+\beta)v = \alpha v+\beta v and \alpha (v+w) = \alpha v+\alpha w.
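As a quick sanity check (a sketch of ours, not part of the definition), we can verify these identities numerically for vectors in \mathbb{C}^3 using NumPy; the variable names below are illustrative.

```python
# Numerically checking the vector-space axioms for C^3.
# v, w, alpha, beta are arbitrary illustrative choices, not from the text.
import numpy as np

v = np.array([1 + 2j, 3 - 1j, 0.5j])
w = np.array([-2j, 1 + 1j, 4.0])
alpha, beta = 2 - 1j, 0.5 + 3j

# Commutativity and associativity of addition
assert np.allclose(v + w, w + v)
assert np.allclose((v + w) + v, v + (w + v))

# The two distributive laws
assert np.allclose((alpha + beta) * v, alpha * v + beta * v)
assert np.allclose(alpha * (v + w), alpha * v + alpha * w)

# The zero vector is the additive identity, and -v is the inverse of v
zero = np.zeros(3)
assert np.allclose(v + zero, v)
assert np.allclose(v + (-v), zero)
```

Of course, passing these checks for particular vectors proves nothing in general — the axioms hold for \mathbb{C}^n because addition and scalar multiplication are defined component-wise from the arithmetic of complex numbers.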

A more succinct way of defining a vector space is as an abelian group endowed with a scalar action of a field. This showcases vector spaces as a particularly well-behaved instance of a more general object: modules over a ring.

A subspace of V is any subset of V which is closed under vector addition and multiplication by complex numbers. Here we start using the Dirac bra-ket notation and write vectors in a somewhat fancy way as |\text{label}\rangle, where the “label” is anything that serves to specify what the vector is. For example, |\uparrow\rangle and |\downarrow\rangle may refer to an electron with spin up or down along some prescribed direction, and |0\rangle and |1\rangle may describe a quantum bit holding either logical 0 or 1. As a perhaps more familiar example, the set of binary strings of length n is a vector space over the field \mathbb{Z}/2\mathbb{Z} of integers mod 2; in the case n=2 we can write down all the vectors of this space in this notation: |00\rangle, |01\rangle, |10\rangle, |11\rangle, where e.g. |10\rangle+|11\rangle=|01\rangle (addition is taken component-wise, mod 2). These are often called ket vectors, or simply kets. (We will deal with “bras” in a moment.)
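This tiny vector space is easy to play with in code. The sketch below (our own modeling choice, not from the text) represents kets as tuples of bits, so that addition over \mathbb{Z}/2\mathbb{Z} is component-wise XOR.

```python
# Bit-string vectors of length 2 over Z/2Z, with addition as component-wise XOR.
def ket_add(a, b):
    """Add two bit-string vectors over Z/2Z (component-wise, mod 2)."""
    return tuple(x ^ y for x, y in zip(a, b))

ket = {"00": (0, 0), "01": (0, 1), "10": (1, 0), "11": (1, 1)}

# |10> + |11> = |01>, as in the text
assert ket_add(ket["10"], ket["11"]) == ket["01"]

# |00> is the zero vector, and over Z/2Z every vector is its own inverse
assert ket_add(ket["01"], ket["00"]) == ket["01"]
assert ket_add(ket["11"], ket["11"]) == ket["00"]
```

Note that the only available scalars are 0 and 1, so scalar multiplication is trivial here; the interesting structure is entirely in the mod-2 addition.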

A basis in V is a collection of vectors |e_1\rangle,|e_2\rangle,\ldots,|e_n\rangle such that every vector |v\rangle in V can be written (in exactly one way) as a linear combination of the basis vectors: |v\rangle=\sum_{i=1}^n v_i|e_i\rangle. The number of elements in a basis is called the dimension of V.¹ The most common, and prototypical, n-dimensional complex vector space (and the space which we will be using most of the time) is the space of ordered n-tuples of complex numbers, usually written as column vectors:

|a\rangle = \begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}

with a basis given by the column vectors |e_i\rangle that have a 1 in the i-th row and 0 in every other row:

|e_1\rangle = \begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix} \qquad |e_2\rangle = \begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix} \qquad\ldots\qquad |e_n\rangle = \begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix}

Addition of vectors is done component-wise, so that

\left(\sum_{i=1}^n v_i|e_i\rangle\right)+\left(\sum_{i=1}^n w_i|e_i\rangle\right) = \sum_{i=1}^n (v_i+w_i)|e_i\rangle

or, in column vectors,

\begin{gathered} |v\rangle = \begin{bmatrix}v_1\\v_2\\\vdots\\v_n\end{bmatrix} \qquad |w\rangle = \begin{bmatrix}w_1\\w_2\\\vdots\\w_n\end{bmatrix} \\\alpha|v\rangle+\beta|w\rangle = \begin{bmatrix}\alpha v_1+\beta w_1\\\alpha v_2+\beta w_2\\\vdots\\\alpha v_n+\beta w_n\end{bmatrix} \end{gathered}
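The column-vector picture above translates directly into NumPy; the sketch below (our own illustration, with arbitrary numerical values) expands a vector in the standard basis and forms a linear combination component-wise.

```python
# Column vectors in C^3: basis expansion and component-wise linear combination.
import numpy as np

v = np.array([[1 + 1j], [2], [0]], dtype=complex)   # |v>
w = np.array([[0], [1j], [3 - 2j]], dtype=complex)  # |w>
alpha, beta = 2, 1j

# The standard basis: columns of the identity matrix are |e_1>, |e_2>, |e_3>
e = np.eye(3, dtype=complex)

# Expanding |v> = sum_i v_i |e_i> recovers |v>
expansion = sum(v[i, 0] * e[:, [i]] for i in range(3))
assert np.allclose(expansion, v)

# alpha|v> + beta|w> is computed component-wise
combo = alpha * v + beta * w
assert np.allclose(combo, np.array([[2 + 2j], [3], [2 + 3j]]))
```

The second assertion just spells out the arithmetic: for instance, the middle component is 2·2 + i·i = 4 − 1 = 3.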

Throughout the course we will deal only with finite-dimensional vector spaces. This is sufficient for all our purposes, and it lets us avoid many mathematical subtleties associated with infinite-dimensional spaces, for which we would need the tools of functional analysis.

Formally, whenever we say n-dimensional Euclidean space, we mean the real vector space \mathbb{R}^n.

1. Showing that this definition is independent of the basis that we choose is a “fun” linear algebra exercise.↩︎