Structured Sound Based Language for Emotional Robotic Communicative
Interaction
Aladdin Ayesh
School of Computer Technology
De Montfort University
The Gateway
Leicester LE1 9BH
email: aayesh@dmu.ac.uk
Abstract— Sound is perhaps the most elementary and yet most common communication vehicle used by humans and animals alike. Similarly, reactive robots that are based on animal intelligence require real-time, simple communication mechanisms to imitate their animal examples. However, emotions are not often considered in developing reactive robots' interaction rules and communication. Therefore, these robots, unlike their natural examples, are not capable of expressing basic emotions such as happiness and excitement, fear and stress, anger or contentment, as is the case with many animals.
I. INTRODUCTION
Emotion modeling is often associated with cognitive
agents that are capable of reasoning about themselves [1].
However, one may argue that emotions are reactive triggers and thus most suited to reactive agents [2], [3]. Taking this viewpoint, we embark on developing a musical language for communication and emotional expression that can be used in reactive agents. The idea is to imitate animals' use of sounds to express themselves and communicate.
Using a musical language allows us to draw on a large literature from musicology and music technology [4], [5], [6]. In fact, [6] provides an interesting sound propagation model for communication between agents. Such work may form a sound-manipulation base for the work presented here, whilst our work provides a high-level representational language. We do not aim, however, to discuss or develop sound recognition and synthesis at this stage. Instead, our aim is to present operators for the synthesis and analysis of musical messages, providing an alternative to speech acts that has simpler formats and yet is emotionally expressive.
The use of a musical language provides a musical syntax and grammar for expressive communication. However, the complexity of that grammar can easily be kept at a simple level that suits reactive robots.
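To make this concrete, the following is a minimal sketch (not the MLEIR grammar itself, which is given later in the paper) of how such a grammar can stay simple enough for a reactive robot: a message is just an emotion prefix followed by a content motif. All emotion names, motifs, and note choices here are invented for illustration.

```python
# Hypothetical grammar for illustration only:
#   message ::= emotion_prefix , motif ;
# Notes are written in scientific pitch notation (e.g. "C5").

EMOTION_PREFIX = {
    "happy":   ["C5", "E5", "G5"],   # rising major triad
    "fearful": ["B4", "Bb4", "A4"],  # falling chromatic line
}

MOTIF = {
    "greeting": ["G4", "G4", "C5"],
    "warning":  ["C3", "C3", "C3"],  # repeated low pulse
}

def compose(emotion, content):
    """Concatenate an emotion prefix and a content motif
    into one flat note sequence for the robot to play."""
    return EMOTION_PREFIX[emotion] + MOTIF[content]
```

With a fixed, flat vocabulary like this, a reactive agent can emit and recognize messages by simple table lookup, with no parsing beyond splitting the prefix from the motif.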
In this paper, the syntax of our proposed Musical Language for Emotionally Interactive Robots (MLEIR) is presented. The paper starts with background in Section II, establishing the biological background and the practical requirements of the language. Section IV provides the syntax of the language. We use both extended BNF and Z notations in developing the language (Sections V and VI). Finally, in Section VII we discuss the implementation of a version of MLEIR on Lego Mindstorms robots.
II. PRELIMINARIES
A. Human Computer Interaction (HCI)
The benefits of using sound in general, and music in particular, as a communication mechanism in robots may be questionable. The first question that may come to mind is: what is the difference between using musical sound and ordinary signaling? The answer is that in signaling, including sound signaling, we encode straightforward messages that are deterministic and exact. Living creatures rarely do so, except perhaps in socially disciplined species such as ants. Even in humans, with their highly developed natural languages, the underlying tonal expression in the voice delivers more than the spoken message. However, human vocal expressions are far more complex and varied, and may prove difficult to analyze and to model.
Another question is: what is the added value of using a musical language? There are several benefits, which may become more apparent if we consider the practical applications of this research in providing alternative mechanisms for human-machine interaction. Music has the advantage of combining the simple sound, which can be used in basic signal communication, with the structure of a language that can be extended and modified. From a human viewpoint, musical tones and sentences are often easy to recognize and remember.
To give examples of applications of the research reported here in the field of HCI, one may propose a musical walking stick for the blind and an educational assistant for children. In the case of the musical walking stick, the stick can read different markings on the ground, producing associated musical themes. It is easier for a blind person to recognize musical sounds than to attend to synthesized linguistic statements. Also, it is easier to generate musical sounds, which means faster processing on relatively simple processing chips.
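The walking-stick scenario above can be sketched as a fixed lookup from detected ground markings to short musical themes. The marking names and themes below are hypothetical, chosen only to illustrate the lightweight processing involved; a real device would tie this table to its sensor vocabulary.

```python
# Hypothetical marking-to-theme table for a musical walking stick.
# Themes are short note sequences in scientific pitch notation.

MARKING_THEMES = {
    "crossing":    ["E5", "E5", "E5"],        # urgent repeated note
    "stairs_up":   ["C4", "E4", "G4", "C5"],  # ascending arpeggio
    "stairs_down": ["C5", "G4", "E4", "C4"],  # descending arpeggio
    "obstacle":    ["F#3", "F#3"],            # low warning
}

def theme_for(marking):
    """Return the theme for a detected marking; silence (empty
    sequence) if the marking is unknown."""
    return MARKING_THEMES.get(marking, [])
```

Because the mapping is a constant-time table lookup with no speech synthesis, it can run on the kind of simple, low-cost processing chip mentioned above.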
The children's educational assistant can take the form of a robotic toy, a handheld device, or be embedded in any simple toy. Again, the fact that musical sounds are associated with the learning material makes the learning process more pleasing to the
The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), Hatfield, UK, September 6-8, 2006