arXiv:1210.1782v1 [quant-ph] 5 Oct 2012

Revisiting foundations for the fault-tolerant quantum computation

M.I. Dyakonov
Laboratoire Charles Coulomb, Université Montpellier II, CNRS, France

The hopes for scalable quantum computing rely on the "threshold theorem": once the error per qubit per gate is below a certain value, the methods of quantum error correction allow indefinitely long quantum computations. The proof is based on a number of assumptions, which are supposed to be satisfied exactly, like axioms, e.g. zero undesired interactions between qubits, etc. However, in the physical world no continuous quantity can be exactly zero; it can only be more or less small. Thus the "error per qubit per gate" threshold must be complemented by the precision with which each assumption must be fulfilled. This issue has never been addressed. In the absence of this crucial information, the prospects of scalable quantum computing remain uncertain.

The idea of quantum computing is to store information in the values of the $2^N$ complex amplitudes describing the wavefunction of $N$ two-level systems (qubits), and to process this information by applying unitary transformations (quantum gates) that change these amplitudes in a precise and controlled manner [1]. The value of $N$ needed to have a useful machine is estimated as $10^3$ or more. Note that even $2^{1000} \sim 10^{300}$ is much, much greater than the number of protons in the Universe.
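To put these numbers in perspective, here is a minimal Python sketch (an illustration added here, not taken from the paper; the helper apply_1q_gate is hypothetical): the state of $N$ qubits is a vector of $2^N$ complex amplitudes, and even a single one-qubit gate is a unitary acting on all of them. Already at $N = 50$ such a vector (about 16 PiB in double precision) exceeds any ordinary computer's memory; at $N = 1000$ it could not be stored by any classical means.

```python
import numpy as np

# The state of N qubits is a vector of 2^N complex amplitudes;
# a single one-qubit gate updates all 2^N of them.
N = 3                                    # 10^3 qubits would need ~10^300 amplitudes
state = np.zeros(2**N, dtype=complex)
state[0] = 1.0                           # the |000...0> state

# Hadamard gate, to be applied to one qubit of the N-qubit register
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q_gate(state, gate, k, n):
    """Apply a 2x2 gate to qubit k (0 = leftmost) of an n-qubit state."""
    psi = state.reshape([2] * n)                     # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [k]))   # contract gate with axis k
    psi = np.moveaxis(psi, 0, k)                     # restore qubit ordering
    return psi.reshape(2**n)

state = apply_1q_gate(state, H, 0, N)
print(state)                             # amplitudes of (|000> + |100>)/sqrt(2)
```

The reshape-per-qubit trick is standard in state-vector simulators; the point here is only the $2^N$ scaling of the set of continuous amplitudes that must be kept accurate.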
Since the qubits are always subject to various types of noise, and the gates cannot be perfect, it is widely recognized that large-scale, i.e. useful, quantum computation is impossible without implementing error correction. This means that the $10^{300}$ continuously changing quantum amplitudes of the grand wavefunction describing the state of the computer must closely follow the desired evolution imposed by the quantum algorithm. The random drift of these amplitudes caused by noise, gate inaccuracies, unwanted interactions, etc., should be efficiently suppressed.

Taking into account that all possible manipulations with qubits are inexact, it is not at all obvious that error correction can be done, even in principle, in an analog machine whose state is described by at least $10^{300}$ continuous variables. Nevertheless, it is generally believed (see, for example, [2]) that the prescriptions for fault-tolerant quantum computation [3–6] using the technique of error correction by encoding [7, 8] and concatenation (recursive encoding) give a solution to this problem. By active intervention, errors caused by noise and gate inaccuracies can be detected and corrected during the computation. The so-called "threshold theorem" [9–11] says that, once the error per qubit per gate is below a certain value, estimated as $10^{-6}$–$10^{-4}$, indefinitely long quantum computation becomes feasible.

Thus, the theorists claim that the problem of quantum error correction is resolved, at least in principle, so that physicists and engineers need only do more hard work in finding good candidates for qubits and approaching the accuracy required by the threshold theorem [12, 13].

However, as was clearly stated in the original work (but largely ignored later, especially in presentations to the general public; Ref. [13] is one example), the mathematical proof of the threshold theorem is founded on a number of assumptions (axioms):

1. Qubits can be prepared in the |00000...00〉 state. New qubits can be prepared on demand in the state |0〉;
2. The noise in qubits, gates, and measurements is uncorrelated in space and time;
3. No undesired action of gates on other qubits;
4. No systematic errors in gates, measurements, and qubit preparation;
5. No undesired interaction between qubits;
6. No "leakage" errors;
7. Massive parallelism: gates and measurements are applied simultaneously to many qubits;

and some others.

While the threshold theorem is a truly remarkable mathematical achievement, one would expect that the underlying assumptions, considered as axioms, would undergo close scrutiny to verify that they can be reasonably approached in the physical world. Moreover, the term "reasonably approached" should have been clarified by indicating with what precision each assumption should be fulfilled. So far, this has never been done (assumption 2 being an exception [14, 15]), if we do not count the rather naive responses provided in the early days of quantum error correction [16–18].

It is quite normal for a theory to disregard small effects whose role can be considered negligible. But not when one specifically deals with errors and error correction. A method for correcting some errors on the assumption that other (unavoidable) errors are non-existent is not acceptable, because it uses fictitious ideal elements as a kind of gold standard [19].

Below are some trivial observations regarding the manipulation and measurement of continuous variables. Suppose that we want to know the direction of a classical vector, like a compass needle.

First, we never know exactly what our coordinate system is. We choose the x, y, z axes related to some physical objects, with the z axis directed, say, towards the Polar Star; however, neither this direction nor the angles between our axes can be defined with infinite precision.
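The compass-needle point can be made quantitative with a toy sketch (again an added illustration, not the paper's; the angle eps is an assumed stand-in for our ignorance of the true axes): however precisely we read out the components, the inferred direction inherits an error of the order of the frame misalignment itself.

```python
import numpy as np

def rotation_z(angle):
    """Rotation about the z axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

needle = np.array([1.0, 0.0, 0.0])    # the "true" needle direction

eps = 1e-3                            # unknown misalignment of our axes (rad)
measured = rotation_z(eps) @ needle   # components we actually read out

# Angle between the inferred and the true direction: of order eps,
# no matter how precise the component readout is.
error = np.arccos(np.clip(measured @ needle, -1.0, 1.0))
print(f"direction error: {error:.2e} rad")
```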