Neural Decoding with Kernel-based Metric Learning

Austin J. Brockmeier (1), John S. Choi (2), Evan G. Kriminger (1), Joseph T. Francis (2,3), and Jose C. Principe (1)

(1) Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, U.S.A.
(2) Joint Program in Biomedical Engineering, NYU Polytechnic School of Engineering and SUNY Downstate, Brooklyn, NY 11203, U.S.A.
(3) Department of Physiology and Pharmacology, State University of New York Downstate Medical Center, Robert F. Furchgott Center for Neural & Behavioral Science, Brooklyn, NY 11203, U.S.A.

Keywords: Decoding, dependence, kernels, metric learning, spike trains

Abstract

When studying the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus—exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. In particular, neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multi-neuron, or population-based, metrics is lacking. We pose the problem of optimizing multi-neuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats.
We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve non-linear dimensionality reduction methods for exploratory neural analysis.
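The centered alignment measure named in the abstract can be illustrated with a short sketch. The function below follows the standard definition of centered kernel alignment (the normalized Frobenius inner product of two centered Gram matrices); the function name and the use of NumPy are illustrative choices here, not the authors' implementation.

```python
import numpy as np

def centered_alignment(K, L):
    """Centered kernel alignment between two n x n Gram matrices K and L.

    Both matrices are centered with H = I - (1/n) * ones, then compared via
    the Frobenius inner product, normalized to lie in [-1, 1].
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Kc = H @ K @ H
    Lc = H @ L @ H
    # <Kc, Lc>_F / (||Kc||_F * ||Lc||_F)
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))
```

In a decoding setting like the one described, K would be built from a parameterized metric on the neural responses and L from the stimulus labels; maximizing the alignment with respect to the metric parameters tunes the metric to the decoding task. Note that the measure is invariant to rescaling either kernel, so only the relative geometry of the responses matters.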