Modelling Spatial Understanding:
Using Knowledge Representation to Enable Spatial Awareness and Symbol Grounding
in a Robotics Platform
Martin Lochner, Charlotte Sennersten, Ahsan Morshed, and Craig Lindley
CSIRO Computational Informatics (CCI), Autonomous Systems (AS)
Commonwealth Scientific and Industrial Research Organization (CSIRO)
Hobart, Tasmania, Australia
Contact: martin.lochner@csiro.au, charlotte.sennersten@csiro.au,
ahsan.morshed@csiro.au, craig.lindley@csiro.au
Abstract—Robotics in the 21st century will progress from
scripted interactions with the physical world, where human
programming input is the bottleneck in the robot’s ability to
sense, think and act, to a point where the robotic system is able
to autonomously generate adaptive representations of its
surroundings, and further, to implement decisions regarding
this environment. A key factor in this development will be the
ability of the robotic platform to understand its physical space.
In this paper, we describe a rationale and framework for
developing spatial understanding in a robotics platform, using
knowledge representation in the form of a hybrid spatial-
ontological model of the physical world. Further, we describe
the proposed CogOnto (cognitive ontology) model, which
enables symbol grounding for a cognitive computing system,
using sensor data gathered from diverse and heterogeneous
sources, associated with humanly crafted symbolic descriptors.
While such a system may be implemented with classical
ontologies, we discuss the advantages of non-hierarchical
modes of knowledge representation, including a conceptual
link between information processing ontologies and
contemporary cognitive models.
Keywords—Human Robot Interaction; Artificial Intelligence; Autonomous Navigation; Knowledge Representation; Symbol Grounding; Spatial Ontology.
I. INTRODUCTION
The process of transitioning away from hard-coded robotics applications that carry out highly pre-determined actions, exemplified by the traditional manufacturing robot, is
already well underway. This paper follows our previous
work [1] in which we describe a methodology for using
ontological data representation to encode 3D spatial
information in robotics applications. With notions such as
cloud robotics [2] entering the zeitgeist, and highly
publicized events such as the Defense Advanced Research
Projects Agency (DARPA) Robotics Challenge (Dec. 19-21,
2013, Miami, FL) bringing public attention to these advances, it is foreseeable that robots will enter the mainstream of human activity, not merely in fringe applications (robotic vacuum cleaners, children's toys) but in key areas such as caring for the aged [3], operating vehicles [4], disaster management [5], and undertaking
autonomous scientific investigation [6].
The hurdles that must be overcome in reaching these
goals, however, are neither few nor small. This can be
plainly seen, for example in the aforementioned 2013
Robotics Challenge, in which simple spatial tasks that are
routine for a human being (open a door, climb a ladder) are
still critically difficult for even the most advanced and
highly funded robotics projects. While the state of the art is
impressive, it is evident that physical robotics hardware is
far in advance of the control systems that are in place to
guide the robot. The challenge is, thus, to develop systems
whereby a robot can perceive a physical space and
understand its position in that space, the components that
exist within the space, and how it can or should interact with
these components in order to achieve implicit or explicit
goals. The task is further complicated by the requirement that
robotic systems be able to operate in outdoor environments
where distributed connections may not be available;
however, describing the development of long-range data
networks for robotic communication is beyond the scope of
this paper.
While there are a number of ways that the problem of
providing a robot with a spatial understanding can be
approached (e.g., neuro-fuzzy reasoning [7], dynamic spatial relations via natural language [8]), it is our proposition that leveraging current advancements in knowledge representation via ontologies [9][10], in combination with an understanding of human spatial-cognitive processing [11][12], and enabled by real-time scene modeling [13], will provide a powerful and accessible
methodology for enabling spatial understanding and
interaction in a mobile robotics platform. As argued by
Sennersten et al. [14], the advantage of using cloud-based repositories of perceptual data annotated with ontology and metadata information is that humanly-tagged examples of sense data (e.g., images) can be exploited to overcome the symbol grounding problem. Symbol grounding refers to the need for symbolic structures to have valid associations with the things in the world that they refer to. Achieving symbol grounding is an ongoing challenge for robotics and other intelligent systems [15].
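As a minimal sketch of this idea, the following Python fragment illustrates how a repository of humanly-tagged sensory exemplars could associate a symbolic label with the raw sense data that grounds it. The names Exemplar, GroundedSymbol, and ExemplarRepository, and their methods, are hypothetical illustrations assumed for this example; they do not represent the CogOnto implementation or any existing system.

    # Minimal sketch (hypothetical names, not the CogOnto implementation):
    # an ontology symbol is "grounded" by the human-tagged sense data
    # associated with it in a (here, in-memory) exemplar repository.

    from dataclasses import dataclass, field

    @dataclass
    class Exemplar:
        """A unit of sense data (e.g., an image) with human-supplied tags."""
        sensor_data: bytes       # raw perceptual data, e.g., JPEG bytes
        tags: frozenset          # humanly-crafted symbolic descriptors

    @dataclass
    class GroundedSymbol:
        """An ontology concept plus the exemplars that ground it."""
        name: str                                      # e.g., "door"
        exemplars: list = field(default_factory=list)  # associated sense data

    class ExemplarRepository:
        """Stand-in for a cloud repository of annotated perceptual data."""

        def __init__(self):
            self._symbols = {}   # symbol name -> GroundedSymbol

        def annotate(self, sensor_data, tags):
            """Attach human tags to sense data, grounding each tagged symbol."""
            exemplar = Exemplar(sensor_data, frozenset(tags))
            for tag in exemplar.tags:
                symbol = self._symbols.setdefault(tag, GroundedSymbol(tag))
                symbol.exemplars.append(exemplar)

        def ground(self, symbol_name):
            """Return the sensory exemplars that give a symbol its referents."""
            symbol = self._symbols.get(symbol_name)
            return list(symbol.exemplars) if symbol else []

    # Usage: a robot queries the repository for perceptual referents of "door".
    repo = ExemplarRepository()
    repo.annotate(b"<image bytes>", {"door", "handle"})
    door_exemplars = repo.ground("door")   # sense data grounding "door"

In such a scheme, a symbol's meaning is constituted by the set of human-annotated perceptual instances attached to it, rather than by definitions internal to the symbol system itself.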
Using cloud-based annotations attached to sensory exemplars takes advantage of the human ability to ground symbols, obviating the need