INSTITUTE OF PHYSICS PUBLISHING JOURNAL OF PHYSICS A: MATHEMATICAL AND GENERAL
J. Phys. A: Math. Gen. 35 (2002) 2379–2394 PII: S0305-4470(02)28638-2
Training the integrate-and-fire model with the
informax principle: I
Jianfeng Feng^{1,3}, Hilary Buxton^1 and Yingchun Deng^2
^1 COGS, Sussex University, Brighton BN1 9QH, UK
^2 Department of Mathematics, Hunan Normal University, Changsha 410081, People's Republic of China
Received 7 September 2001, in final form 31 December 2001
Published 1 March 2002
Online at stacks.iop.org/JPhysA/35/2379
Abstract
Based on the informax principle and the input–output relationship of the
integrate-and-fire (IF) model, learning rules for IF neurons are developed. For
supervised learning with uniform synaptic weights (the theoretically
tractable case), we show that the derived learning rule is stable and that the stable
state is unique. For unsupervised learning, within physiologically reasonable
parameter regions, both long-term potentiation (LTP) and long-term depression
(LTD) can occur when the inhibitory input is weak, but LTD cannot be
observed when the inhibitory input is strong enough. When both LTP and LTD
occur, LTD is observed when the postsynaptic neuron fires faster
than its presynaptic inputs; otherwise LTP is observed, in agreement with recent
experiments. Learning rules for the general case are also studied, and numerical
examples show that the derived learning rule tends to equalize the contributions
of different inputs to the output firing rate.
PACS numbers: 87.18.Sn, 87.19.La, 05.10.Gg, 05.40.-a
1. Introduction
Learning, or synaptic plasticity, is of vital importance for biological systems [1]. In the present
paper, we develop a learning rule that is applicable to engineering problems [12]
and is based upon (biophysical) models of a cell. The learning rule is derived from the
principle of maximizing the mutual information between input and output, which has been
proposed and widely used in artificial neural networks [2, 4, 18]. Owing to recent developments
in single-neuron modelling, we know exactly the input–output relationship of some neuron
models, such as the integrate-and-fire (IF) model [27] and the IF–FHN model [10]. Combining
these two approaches, we are able to develop learning rules that rely on the input–output
relationship of a neuron.
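To illustrate what is meant by an exact input–output relationship, the following sketch simulates a leaky IF neuron under constant drive and compares the simulated interspike interval with the closed-form crossing time. This is only a minimal illustration under assumed parameter values (mu, tau, vth are hypothetical), not the specific model or parameters used in this paper:

```python
import math

def lif_spike_count(mu=2.0, tau=20.0, vth=20.0, t_max=1000.0, dt=0.01):
    """Euler simulation of dv/dt = -v/tau + mu with threshold vth,
    reset to 0 after each spike; returns the number of spikes in t_max."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / tau + mu)  # Euler step of the membrane equation
        if v >= vth:
            v = 0.0                # reset on threshold crossing
            spikes += 1
    return spikes

def lif_isi(mu=2.0, tau=20.0, vth=20.0):
    """Closed-form interspike interval for constant drive:
    V(t) = mu*tau*(1 - exp(-t/tau)) reaches vth at
    T = -tau * ln(1 - vth/(mu*tau)), valid when mu*tau > vth."""
    return -tau * math.log(1.0 - vth / (mu * tau))
```

With the default values, mu*tau = 40 and the analytic interspike interval is -20 ln(0.5) ≈ 13.86 ms, so the simulation produces roughly 72 spikes over 1000 ms; with noisy synaptic input the mean interspike interval is instead given by a first-passage-time (Siegert-type) formula.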
We first consider an ideal case where all synaptic strengths are identical. For supervised
learning, by which we mean that the input and output firing rates of a neuron are fixed,
^3 http://www.cogs.susx.ac.uk/users/jianfeng
0305-4470/02/102379+16$30.00 © 2002 IOP Publishing Ltd Printed in the UK 2379