I'm having trouble understanding the so-called "digitization of errors" argument in QEC.
Suppose I want to encode my logical qubit into $n$ physical qubits to do error correction. I will use some encoder - let's call it $E$ - and note that the encoder cannot depend on the input state. After encoding, the physical qubits are in the state $E(\vert\psi\rangle)$.
Suppose the physical qubits are subjected to some noise so that their new state is $E(\vert\psi'\rangle)$, where $\vert\psi'\rangle \approx_\delta \vert\psi\rangle$ (meaning the two states are within trace distance $\delta$ of each other) for small $\delta$.
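To make the closeness concrete, here is a minimal numerical sketch (just numpy, with a single qubit and a small rotation standing in for the perturbation - these specifics are my own illustration, not part of the argument) of two states that are $\delta$-close in trace distance:

```python
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) * ||rho - sigma||_1, computed from the eigenvalues
    # of the Hermitian difference rho - sigma.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

theta = 1e-3                                          # small perturbation angle
psi = np.array([1.0, 0.0])                            # |psi> = |0>
psi_prime = np.array([np.cos(theta), np.sin(theta)])  # |psi'> slightly rotated

rho = np.outer(psi, psi.conj())
rho_prime = np.outer(psi_prime, psi_prime.conj())

delta = trace_distance(rho, rho_prime)
print(delta)  # ~ sin(theta) ~ 1e-3, so |psi'> is delta-close to |psi>
```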
A decoder cannot distinguish between these two states, since they can be arbitrarily close in trace distance as $\delta\rightarrow 0$. So the decoder will output $\vert\psi\rangle$ in both of the following cases:
- The input state was $\vert\psi'\rangle$ and there was no noise.
- The input state was $\vert\psi\rangle$ and there was some noise.
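Here is a sketch of why this worries me, using the 3-qubit repetition encoder $\vert 0\rangle \mapsto \vert 000\rangle$, $\vert 1\rangle \mapsto \vert 111\rangle$ as a stand-in for $E$ (my choice of example, not essential to the argument): since the encoder is an isometry, it preserves trace distance, so $E(\vert\psi\rangle)$ and $E(\vert\psi'\rangle)$ are exactly as hard to tell apart as $\vert\psi\rangle$ and $\vert\psi'\rangle$ were.

```python
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def encode(psi):
    # Repetition-code encoder E: a|0> + b|1>  ->  a|000> + b|111>
    out = np.zeros(8, dtype=complex)
    out[0] = psi[0]  # amplitude of |000>
    out[7] = psi[1]  # amplitude of |111>
    return out

theta = 1e-3
psi = np.array([1.0, 0.0])
psi_prime = np.array([np.cos(theta), np.sin(theta)])

dm = lambda v: np.outer(v, v.conj())  # density matrix of a pure state
d_logical = trace_distance(dm(psi), dm(psi_prime))
d_encoded = trace_distance(dm(encode(psi)), dm(encode(psi_prime)))
print(d_logical, d_encoded)  # equal: encoding does not make the cases more distinguishable
```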
I suspect the answer has to do with quantum computing never using states that are too close to each other. Indeed, if the final state after a computation were either $\vert\psi\rangle$ or $\vert\psi'\rangle$, and these are close in trace distance, then the measurement statistics in the two cases would also be very similar.
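For instance (a rough sketch of this last point, again just numpy with a computational-basis measurement of my own choosing): the classical total-variation distance between the outcome distributions of $\vert\psi\rangle$ and $\vert\psi'\rangle$ is bounded by their trace distance, so for small $\delta$ the statistics are nearly identical.

```python
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

theta = 1e-3
psi = np.array([np.cos(0.3), np.sin(0.3)])                        # some |psi>
psi_prime = np.array([np.cos(0.3 + theta), np.sin(0.3 + theta)])  # nearby |psi'>

p = np.abs(psi) ** 2              # Born-rule probabilities for |psi>
p_prime = np.abs(psi_prime) ** 2  # ... and for |psi'>

tv = 0.5 * np.sum(np.abs(p - p_prime))
td = trace_distance(np.outer(psi, psi.conj()),
                    np.outer(psi_prime, psi_prime.conj()))
print(tv, td)  # tv <= td ~ delta: the measurement statistics barely differ
```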
Can someone elucidate exactly what assumptions are made about the states during a quantum computation, how distinguishable they need to be, and how that allows us to use error-correction techniques?