A Note on the Complexity of Reliability in Neural Networks

Piotr Berman ∗†   Ian Parberry ∗‡   Georg Schnitger ∗†

∗ Research sponsored by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under grant number AFOSR 87-0400.
† Research supported by NSF grant CCR-8805978. Dept. of Computer Science, Penn State Univ., 333 Whitmore Lab., University Park, PA 16803.
‡ Dept. of Computer Sciences, Univ. of North Texas, P.O. Box 13886, Denton, TX 76203–3886.

Abstract

It is shown that in a standard discrete neural network model with small fan-in, tolerance to random malicious faults can be achieved with a log-linear increase in the number of neurons and a constant factor increase in parallel time, provided fan-in can increase arbitrarily. A similar result is obtained for a nonstandard but closely related model with no restriction on fan-in.

1 Introduction

One advantage that biological neural systems have over conventional computers is their ability to perform reliable computations with unreliable hardware. Carver Mead (quoted in [6]) has observed that: “The brain has this wonderful property - you can go through and shoot out every tenth neuron and never miss them”. A plausible interpretation of this observation is that correct computations can be carried out with high probability even when one out of ten neurons is destroyed at random.

We say that a circuit is reliable if it performs correctly with high probability in the presence of random faults. That is, if the neurons are damaged independently with low probability, then with high probability the circuit still computes the same function. We will show that discrete neural networks can be made reliable with a small increase in size and depth (at most a low-degree polynomial factor and a constant factor, respectively).

The results in this paper can be re-expressed in a stochastic model in which the gates in the circuit are noisy in the sense that they fail independently with probability ε. Similar work has been done on the simulation of standard circuits of fan-in 2.
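As a purely illustrative rendering of this stochastic fault model, the Python sketch below simulates a single discrete threshold neuron whose output is replaced by a random bit with probability eps, and estimates by Monte Carlo sampling how often a noisy majority-of-three neuron agrees with its fault-free counterpart. The function names, the choice of MAJ3 as a test function, and the random-bit fault behaviour are assumptions made for illustration, not constructions from this paper.

```python
import random

def threshold_gate(inputs, weights, threshold):
    # A discrete neuron: outputs 1 iff the weighted sum of its inputs
    # reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def noisy_gate(inputs, weights, threshold, eps):
    # The same neuron under the stochastic fault model: with probability
    # eps the neuron fails and outputs an arbitrary (here: random) bit.
    if random.random() < eps:
        return random.randint(0, 1)
    return threshold_gate(inputs, weights, threshold)

def estimate_reliability(eps, trials=100_000):
    # Monte Carlo estimate of Pr[noisy MAJ3 neuron agrees with the
    # fault-free computation] over uniformly random inputs.
    correct = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(3)]
        truth = threshold_gate(x, [1, 1, 1], 2)   # fault-free MAJ3
        if noisy_gate(x, [1, 1, 1], 2, eps) == truth:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    for eps in (0.0, 0.01, 0.1):
        print(f"eps = {eps}: estimated agreement {estimate_reliability(eps):.4f}")
```

Under this fault behaviour a failed neuron still happens to output the correct bit half the time, so one expects an agreement rate of roughly 1 - eps/2, which the simulation confirms.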
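The size-versus-reliability trade-off announced in the abstract can be previewed with a second sketch in the spirit of von Neumann's classical replication-and-voting scheme; this is a standard technique, not necessarily the construction used in this paper. The idea: replicate a noisy computation r times and restore the value with a majority vote. The voting gate itself has fan-in r, which hints at why the results above require fan-in that can grow. All identifiers here are illustrative.

```python
import random

def noisy_copy(bit, eps):
    # A noisy neuron that is supposed to pass its input through, but
    # with probability eps fails and outputs a random bit instead.
    return random.randint(0, 1) if random.random() < eps else bit

def restored(bit, eps, r):
    # Replicate the noisy computation r times and restore by majority
    # vote. The voting gate is assumed fault-free and has fan-in r.
    votes = sum(noisy_copy(bit, eps) for _ in range(r))
    return int(votes > r // 2)

def failure_rate(eps, r, trials=100_000):
    # Monte Carlo estimate of the probability the restored value is wrong.
    errors = sum(restored(1, eps, r) != 1 for _ in range(trials))
    return errors / trials

if __name__ == "__main__":
    for r in (1, 3, 9, 27):
        print(f"r = {r:2d}: failure rate {failure_rate(0.1, r):.4f}")
```

The simulated failure rate drops roughly exponentially as r grows, at the cost of a multiplicative blow-up in the number of gates; the paper's contribution is to bound such blow-ups (log-linear size, constant-factor depth) in the discrete neural network models it studies.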