## Why qubits, subsystems, and entanglement?

One question that is rather natural to ask at this point is the following:

If entanglement is so fragile and difficult to control, then why bother?
Why not perform your computations in one single physical system that has as many quantum states as a register of qubits has labels?
We could then label these quantum states in the same way as we normally label the states of the qubits, and give *them* computational meaning.

This suggestion, although possible, gives a very inefficient way of representing data, known as the **unary encoding**.
For serious computations, we *need* subsystems.
Here is why.

Suppose you have n physical objects, and each object has k distinguishable states.
If you can access each object *separately* and put it into any of the k states, then, with only n operations, you can prepare any of the k^{n} different configurations of the combined system.
Without any loss of generality, let us take k=2 and refer to each object of this type as a **physical bit**.
We label the two states of a physical bit as 0 and 1.
So any collection of n physical bits can be prepared in 2^{n} different configurations, which can be used to store up to 2^{n} numbers (or binary strings, or messages, or however you want to interpret these things).
In order to represent numbers from 0 to N-1 we just have to choose n such that 2^n\geqslant N.
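The choice of n is a one-line calculation. A minimal Python sketch (the function name `bits_needed` is my own, for illustration):

```python
import math

def bits_needed(N):
    """Smallest n with 2**n >= N, i.e. n = ceil(log2(N))."""
    return max(1, math.ceil(math.log2(N)))

# 10 physical bits suffice to label N = 1000 numbers, since 2**10 = 1024 >= 1000
n = bits_needed(1000)
print(n, 2 ** n)
```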

Suppose the two states in the physical bit are separated by the energy difference \Delta_E>0, i.e. that it costs \Delta_E units of energy to switch a physical bit from one state to the other.
Then a preparation of any particular configuration will cost no more than E=n \Delta_E=\lceil\log_2 N\rceil\,\Delta_E units of energy.

In contrast, if we choose to encode N configurations into one chunk of matter, say, into the first N energy states of a single harmonic oscillator with the same energy separation \Delta_E between states, then, in the worst case (i.e. going from the ground state 0 to the most excited state N-1) one has to use E=(N-1)\Delta_E\approx N\Delta_E units of energy.
For large N this gives an exponential gap in the energy expenditure between the binary encoding using physical bits, and the unary encoding using energy levels of harmonic oscillators: (\log_2 N)\Delta_E vs N\Delta_E.
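This exponential gap is easy to tabulate. A short Python sketch under the assumptions above (one bit flip per physical bit in the binary case, a worst-case climb of the whole ladder in the unary case; the function names are illustrative):

```python
import math

DELTA_E = 1.0  # energy quantum, in arbitrary units

def binary_energy(N, dE=DELTA_E):
    # worst case: flip each of the ceil(log2 N) physical bits once
    return math.ceil(math.log2(N)) * dE

def unary_energy(N, dE=DELTA_E):
    # worst case: climb ~N levels from the ground state to the top
    return N * dE

for N in (16, 1024, 2 ** 20):
    print(f"N={N:8d}  binary: {binary_energy(N):5.0f}  unary: {unary_energy(N):8.0f}")
```

For N = 2^{20} the binary cost is 20 units against roughly a million for the unary encoding.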

Of course, you might try to switch to a different choice of realisation for the unary encoding, such as a quantum system that has a finite spread in the energy spectrum.
For example, by operating on the energy states of the hydrogen atom, you can encode any number from 0 to N-1, and you are guaranteed not to spend more than E_{\mathrm{max}}= 13.6\,\mathrm{eV} (otherwise the atom is ionised).
The snag is that, in this case, some of the electronic states will be separated by an energy difference of the order of E_{\mathrm{max}}/N, and to drive the system selectively from one state to another one has to tune into the frequency E_{\mathrm{max}}/(\hbar N), which requires a sufficiently long **wave packet** in order for the frequency to be well defined; consequently, the interaction time is of order N(\hbar/E_{\mathrm{max}}).
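The scale of this time cost can be estimated numerically. A minimal sketch, assuming the order-of-magnitude formula t \sim N\hbar/E_{\mathrm{max}} from above (the function name is my own):

```python
HBAR_EVS = 6.582e-16   # reduced Planck constant, in eV * s
E_MAX = 13.6           # hydrogen ionisation energy, in eV

def interaction_time(N):
    """Order-of-magnitude wave-packet duration t ~ N * hbar / E_max."""
    return N * HBAR_EVS / E_MAX

# the required pulse length grows linearly with N
for N in (10 ** 3, 10 ** 6, 10 ** 9):
    print(f"N={N:10d}  t ~ {interaction_time(N):.1e} s")
```

Even for N of order a billion the times are still short in absolute terms, but the point is the *scaling*: the unary time cost grows linearly in N, i.e. exponentially in the number of bits of the message.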

That is, we spend *less energy*, but the trade-off is that we have to spend *more time*.

It turns out that whichever way we try to represent the number N in the unary encoding (i.e. using N different states of a single chunk of matter), we end up depleting our physical resources (such as energy or time, or even space) at a much greater rate than in the case when we use subsystems.
This plausibility argument indicates that, for efficient processing of information, the system must be divided into subsystems — for example, into physical bits.