Automatic Symbolic Modelling of
Co-evolutionarily Learned Robot Skills
Agapito Ledezma, Antonio Berlanga and Ricardo Aler
Universidad Carlos III de Madrid, Avda. de la Universidad, 30, 28911, Leganés
(Madrid), Spain
Abstract Evolution-based learning systems have proven to be very
powerful techniques for solving a wide range of tasks, from prediction to
optimization. However, in some cases the learned concepts are unreadable
for humans. This prevents a deep semantic analysis of what has really
been learned by those systems. In this paper we present an alternative
for obtaining symbolic models from subsymbolic learning. In the first stage, a
subsymbolic learning system is applied to a given task. Then, a symbolic
classifier is used to automatically generate the symbolic counterpart
of the subsymbolic model.
We have tested this approach by obtaining a symbolic model of a neural
network. The neural network defines a simple controller for an autonomous
robot. A competitive coevolutionary method has been applied in order to
learn the right weights of the neural network. The results show that the
obtained symbolic model is very accurate at the task of modelling the
subsymbolic system, with the added benefit of readability.
1 Introduction
The use of evolutionary computation (EC) techniques for software development
suffers, in some aspects, from problems analogous to those of other software
development methodologies or paradigms. In particular, in this paper we will
focus on the declarative representation of the evolutionarily generated
descriptions; that is, on how we (humans) interpret the output of EC systems
(their generated knowledge).
In the case of the application we present here, robot control, there are many
types of knowledge that could be acquired by means of EC in order to build
such systems. Examples are the internal model of the robots, models of other
robots, communication strategies, or reasoning heuristics. One way of
automating this task consists of learning those models by applying genetic
algorithms [1], evolution strategies [2], classifier systems [3], or genetic
programming [4]. Another view of this type of task centers on the
representation structure of the output: the systems can generate rules [5],
neural networks [6], etc. When the output is represented in terms of
subsymbolic structures (such as neural networks), it is very difficult to
interpret the results in order to extract general conclusions about the
correctness of the learned knowledge, its possible drawbacks, or possible
improvements.
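The two-stage idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes scikit-learn, uses a small MLP trained by backpropagation as a stand-in for the paper's coevolutionarily learned network, and a decision tree as the symbolic classifier; the sensor names and the toy control task are hypothetical.

```python
# Sketch (hypothetical example, not the paper's system): extract a readable
# symbolic model from a subsymbolic one by training a decision tree on the
# neural network's own input/output behaviour.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical robot inputs: two distance sensors. Toy target behaviour:
# turn left (1) when the left reading is smaller than the right one.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = (X[:, 0] < X[:, 1]).astype(int)

# Stage 1: a subsymbolic controller (an MLP here; the paper instead learns
# the network's weights with competitive coevolution).
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# Stage 2: query the network on fresh inputs and fit a symbolic classifier
# to the network's answers, not to the original labels.
X_query = rng.uniform(0.0, 1.0, size=(500, 2))
y_net = net.predict(X_query)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_query, y_net)

# Fidelity: how closely the readable model mimics the subsymbolic one.
fidelity = (tree.predict(X_query) == y_net).mean()
print(export_text(tree, feature_names=["left_sensor", "right_sensor"]))
print(f"fidelity to the network: {fidelity:.2f}")
```

The printed tree is a human-readable rule set over the sensor inputs, which is exactly the kind of symbolic counterpart the paper aims for; fidelity measures agreement with the network rather than accuracy on the original task.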
J. Mira and A. Prieto (Eds.): IWANN 2001, LNCS 2084, pp. 799-806, 2001.
© Springer-Verlag Berlin Heidelberg 2001