IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 40, NO. 5, OCTOBER 2010 1387
Learning in Closed-Loop Brain–Machine Interfaces:
Modeling and Experimental Validation
Rodolphe Héliot, Member, IEEE, Karunesh Ganguly, Jessica Jimenez, Member, IEEE, and
Jose M. Carmena, Senior Member, IEEE
Abstract—Closed-loop operation of a brain–machine interface (BMI) relies on the subject's ability to learn an inverse transformation of the plant to be controlled. In this paper, we propose a model of the learning process that takes place during closed-loop BMI operation. We first explore the properties of the model and show that it is able to learn an inverse model of the controlled plant. We then compare the model predictions to experimental neural and behavioral data from nonhuman primates operating a BMI and find close agreement between the model and the experimental data. Applying tools from control theory to this learning model will help in the design of a new generation of neural information decoders that maximize learning speed for BMI users.
Index Terms—Brain–machine interfaces (BMIs), internal
model, macaque monkey, motor learning.
I. INTRODUCTION

A BRAIN–MACHINE interface (BMI) is a direct communication pathway between the brain and an external artificial system. It enables its user to execute voluntary motor actions with an artificial system such as a computer or a robot.
Neural signals recorded from the brain are fed into a decoding
algorithm which transforms them into a motor plan, which is
then streamed to the artificial actuator. A closed control loop is
established via the subject’s visual feedback of the prosthetic
device. Following this approach, recent studies have led to
impressive demonstrations of nonhuman primates and humans
controlling prosthetic devices through a BMI [1]–[10].
Manuscript received August 14, 2009; revised November 1, 2009; accepted
November 9, 2009. Date of publication December 15, 2009; date of current
version September 15, 2010. The work of R. Héliot was supported by the
French “Délégation générale pour l’armement” under Grant PDE 07C0067.
The work of K. Ganguly was supported by the American Heart Association.
The work of J. Jimenez was supported by the Graduate Assistance in Areas
of National Need Fellowship. The work of J. M. Carmena was supported by
the Christopher and Dana Reeve Foundation. This paper was recommended by
Associate Editor J. D. R. Millán.
R. Héliot is with the Department of Electrical Engineering and Computer
Sciences and the Helen Wills Neuroscience Institute, University of California,
Berkeley, CA 94720 USA.
K. Ganguly is with the Neurology and Rehabilitation Services, San Francisco
VA Medical Center, San Francisco, CA 94121 USA. He is also with the
Department of Electrical Engineering and Computer Sciences and the Helen
Wills Neuroscience Institute, University of California, Berkeley, CA 94720
USA, and also with the Department of Neurology, University of California,
San Francisco, CA 94143 USA.
J. Jimenez is with the Department of Electrical Engineering and Computer
Sciences, University of California, Berkeley, CA 94720 USA.
J. M. Carmena is with the Department of Electrical Engineering and
Computer Sciences, the Program in Cognitive Science, and the Helen Wills
Neuroscience Institute, University of California, Berkeley, CA 94720 USA
(e-mail: carmena@eecs.berkeley.edu).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMCB.2009.2036931
Fig. 1. During manual control, the animal physically performs a 2-D center-
out reaching task using the right arm while the neural activity is recorded. Under
brain control, the animal performs a similar center-out task using a computer
cursor under direct neural control through a decoder trained during manual
control.
The decoding algorithm plays a crucial role within the BMI
loop since it transforms the neural activity into a motor plan.
Typically, these decoders have been built from natural arm
movement data, i.e., fitting neural activity to the subject’s
behavior, such as the kinematics of the arm in a Cartesian or
joint space (Fig. 1, left panel). These are commonly known as
‘biomimetic’ decoders because the motor transformation that
they perform is designed to reproduce the biological counter-
part as closely as possible. More recently, it has been shown
that the decoder does not need to be biomimetic for the subject
to be able to control a BMI [9], [10]: An arbitrary motor
transformation can be used. The decoder is then run online to predict
the behavior of the prosthetic device from the neural recordings, a
mode typically referred to as "brain control" (Fig. 1, right panel).
Various types of models have been used as decoders
for BMI experiments, with the most common being the Wiener
filter [1], [3], [10] and the population vector algorithm (PVA)
[2], [5].
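As a concrete illustration of the first of these, the sketch below fits a Wiener-filter decoder by least squares: lagged, binned firing rates of a neural ensemble are linearly mapped to 2-D cursor kinematics. The data here are synthetic and the dimensions (32 units, 10 taps, 2-D velocity output) are illustrative assumptions, not parameters from the experiments in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_units, n_lags = 2000, 32, 10  # time bins, neurons, taps per neuron

# Synthetic binned spike counts standing in for recorded firing rates.
rates = rng.poisson(3.0, size=(n_samples, n_units)).astype(float)

def lagged_design(X, n_lags):
    """Stack the last n_lags bins of every unit into each row, plus a bias column."""
    n = X.shape[0] - n_lags + 1
    cols = [X[i:i + n] for i in range(n_lags)]
    return np.hstack([np.ones((n, 1))] + cols)

Z = lagged_design(rates, n_lags)

# Synthetic 2-D kinematics generated from a hidden linear map plus noise,
# so that a linear decoder is the right model class for this toy problem.
true_w = rng.normal(size=(n_units * n_lags + 1, 2)) * 0.05
kin = Z @ true_w + rng.normal(scale=0.1, size=(Z.shape[0], 2))

# Wiener-filter fit: least-squares solution of Z @ w = kin.
w_hat, *_ = np.linalg.lstsq(Z, kin, rcond=None)
pred = Z @ w_hat
r2 = 1 - np.sum((kin - pred) ** 2) / np.sum((kin - kin.mean(0)) ** 2)
```

In an actual BMI experiment, `rates` and `kin` would come from the manual-control session (Fig. 1, left panel), and the fitted `w_hat` would then be applied online to new neural activity during brain control; the offline fit quality `r2` is exactly the quantity that, as discussed below, need not predict closed-loop performance.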
Good predictive power of the decoder on the manual task
data does not guarantee good performance of the BMI sys-
tem during online closed-loop operation. Indeed, it has been
reported [11] that sophisticated decoding algorithms with very good
offline performance may yield only minor improvements in closed-loop
experiments. This may result from the recorded neurons exhibiting
different firing properties when the subject performs the BMI task
rather than the manual task. BMI control differs from manual control in that feedback
information is scarce, often limited to visual feedback, and the
dynamics of the prosthetic device differ dramatically from the
properties of natural motor control. Ultimately, we are inter-
ested in performance during online closed-loop BMI operation.
Several BMI studies have reported functional changes in
the neurons involved in closed-loop BMI control [2], [3],
[7], [10], [12]. In particular, these studies have shown that
1083-4419/$26.00 © 2009 IEEE