The Epoch Interpretation of Learning

John G. Carney and Pádraig Cunningham
Department of Computer Science
University of Dublin, Trinity College, Ireland
John.Carney,Padraig.Cunningham@cs.tcd.ie

Abstract

In this paper we propose a simple, alternative interpretation of back-propagation learning. We call this the "epoch interpretation of learning" and show how it can be used to improve the performance of early-stopping techniques for improving generalization in neural networks. Experiments performed on noisy, non-linear foreign exchange rate data demonstrate that networks built using an early-stopping technique based on the epoch interpretation of learning out-perform networks built using a conventional early-stopping technique by 11% on average.

1 Introduction

The ultimate goal of the neural network researcher is to build networks that provide optimal generalization performance. Once a network is trained, we want it to perform well on examples that are not included in the training set. A large variety of techniques described in the literature attempt to achieve this; one of the most popular is early-stopping. The basic idea of early-stopping is to terminate training when some estimate of generalization error begins to increase. If training is not stopped early and continues to convergence, there is a danger that the network will over-fit its training data and will not generalize well to new, unseen examples.

In this paper we build upon the principles of early-stopping to develop an alternative interpretation of learning. We show how this "epoch interpretation of learning" can be used to develop techniques that improve generalization performance more effectively than conventional early-stopping approaches.
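To make the conventional early-stopping baseline concrete, the following is a minimal sketch (our illustration, not the authors' code) of the stopping rule described above: training halts once the validation-set estimate of generalization error stops improving for a fixed number of epochs, and the best epoch seen so far is retained. The function name and the `patience` parameter are our own assumptions for the sketch.

```python
def early_stopping(val_errors, patience=3):
    """Return the epoch with the lowest validation error.

    val_errors: validation error measured after each training epoch,
                used here as the estimate of generalization error.
    patience:   consecutive non-improving epochs tolerated before
                training is terminated (an illustrative choice).
    """
    best_err = float("inf")
    best_epoch = 0
    bad_epochs = 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            # Validation error still falling: keep training.
            best_err, best_epoch = err, epoch
            bad_epochs = 0
        else:
            # Validation error rising: over-fitting may be setting in.
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_epoch

# A validation-error curve that falls and then rises, as when a
# network begins to over-fit its training data:
curve = [0.9, 0.6, 0.45, 0.40, 0.42, 0.47, 0.55, 0.60]
print(early_stopping(curve))  # stops at the minimum, epoch 3
```

In practice the weights from the best epoch are saved and restored when training terminates; the sketch only identifies where the stopping point falls.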
Experiments performed on highly non-linear, noisy foreign exchange rate data provide empirical evidence that using the epoch interpretation of learning can significantly improve the generalization performance of neural networks.