A Random Matrix Theory Approach to the Adaptive Gradient Generalisation Gap

Diego Granziol* (AI Theory, Huawei) diego@robots.ox.ac.uk
Nicholas Baskerville (Bristol University) n.p.baskerville@bristol.ac.uk
Xingchen Wan (Oxford University) xwan@robots.ox.ac.uk
Samuel Albanie (Oxford University) samuel.albanie@robots.ox.ac.uk
Stephen Roberts (Oxford University) sjrob@robots.ox.ac.uk

Abstract

We conjecture that the inherent difference in generalisation between adaptive and non-adaptive gradient methods stems from the increased estimation noise in the flattest directions of the true loss surface. We demonstrate that typical schedules used for adaptive methods (with low numerical stability or damping constants) bias movement towards flat directions relative to sharp directions, effectively amplifying the noise-to-signal ratio and harming generalisation. We further demonstrate that the numerical stability/damping constant used in these methods can be decomposed into a learning rate reduction and a linear shrinkage of the estimated curvature matrix. We then demonstrate significant generalisation improvements from increasing the shrinkage coefficient, closing the generalisation gap entirely in both logistic regression and deep neural network experiments. Finally, we show that other popular modifications to adaptive methods, such as decoupled weight decay and partial adaptivity, can be interpreted as calibrating parameter updates to make better use of sharper, more reliable directions.

1 Introduction

The success of deep neural networks across a wide variety of tasks, from speech recognition to image classification, has drawn wide-ranging interest in their optimisation. Adaptive gradient optimisers, which alter the per-parameter learning rate depending on historical gradient information, lead to significantly faster convergence of the training loss than non-adaptive methods, such as stochastic gradient descent (SGD) with momentum [43].
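To make the per-parameter mechanism concrete, the following is a minimal sketch of an Adam-style update rule [31] in NumPy (function and variable names are ours, for illustration only). The numerical stability constant `eps` plays the role of the damping constant discussed in the abstract: when it is small, directions with small historical gradient magnitude receive effective learning rates comparable to those of large-gradient directions.

```python
import numpy as np

def adaptive_step(grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update. Per-parameter learning rates are derived
    from exponential moving averages of historical gradient moments;
    `eps` is the numerical stability (damping) constant."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad       # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad**2    # second moment (per-parameter scale)
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    update = -lr * m_hat / (np.sqrt(v_hat) + eps)
    return update, (m, v, t)

# Two coordinates whose gradient magnitudes differ by four orders of
# magnitude receive nearly identical step sizes under the adaptive rule.
grad = np.array([1.0, 1e-4])
update, state = adaptive_step(grad, (np.zeros(2), np.zeros(2), 0))
```

With the default small `eps`, the flat (small-gradient) direction is stepped almost as far as the sharp one; this equalisation of step sizes across directions is the behaviour whose interaction with estimation noise the paper analyses.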
Popular examples include Adam [31], AdaDelta [60] and RMSprop [49]. However, for practical applications the final test set results are more important than the training performance. For many image and language problems of interest, the test set performance of adaptive gradient methods is significantly worse than SGD [54]—a phenomenon that we refer to as the adaptive generalisation gap. As a consequence of this effect, many state-of-the-art models, especially for image classification datasets such as CIFAR [59] and ImageNet [57, 11], are still trained using SGD with momentum. Although less widely used, another class of adaptive

* granziol.me

Preprint. Under review.
arXiv:2011.08181v4 [stat.ML] 26 Jul 2021
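The decomposition claimed in the abstract can be sketched as follows (notation here is ours, for illustration: an estimated curvature or preconditioning matrix $B$, damping constant $\delta$, and learning rate $\alpha$):

```latex
\alpha \left( B + \delta I \right)^{-1}
  \;=\; \underbrace{\frac{\alpha}{1+\delta}}_{\text{reduced learning rate}}
        \left[\, \underbrace{\tfrac{1}{1+\delta}\, B
        + \tfrac{\delta}{1+\delta}\, I}_{\text{linear shrinkage of } B} \,\right]^{-1}
```

The bracketed matrix is a convex combination of $B$ and the identity, i.e. a linear shrinkage of the estimated curvature towards $I$ with coefficient $\delta/(1+\delta)$, while the prefactor $\alpha/(1+\delta)$ is a reduced learning rate; the identity follows by multiplying and dividing by $1+\delta$.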