Bottom-Up/Top-Down Coordination in a Multiagent Visual Sensor Network*
F. Castanedo, M.A. Patricio, J. García and J.M. Molina
University Carlos III of Madrid
Computer Science Department
Applied Artificial Intelligence Group
Avda. Universidad Carlos III 22, 28270-Colmenarejo (Madrid)
{fcastane, mpatrici, jgherrer}@inf.uc3m.es, molina@ia.uc3m.es
Abstract
In this paper, an approach for multi-sensor coordination in a multiagent visual sensor network is presented. A Belief-Desire-Intention (BDI) model of multiagent systems is employed. In this multiagent system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach, in which a fusion agent controls the coordination process. In the bottom-up phase, local track information is sent to the fusion agent. In the top-down stage, feedback messages are sent to those surveillance-sensor agents whose tracking is inconsistent with the global fused track. This feedback allows a surveillance-sensor agent to correct its tracking process. Finally, preliminary experiments with the PETS 2006 database are presented.
1. Introduction
A multiagent visual sensor network is a distributed network of several intelligent software agents with visual capabilities [1]. An intelligent software agent is a computational process that has several characteristics [2]: (1) "reactivity" (allowing agents to perceive and respond to a changing environment), (2) "social ability" (by which agents interact with other agents) and (3) "proactiveness" (through which agents behave in a goal-directed fashion). Wooldridge and Jennings also give a stronger notion of agent, which adds mental components such as beliefs, desires and intentions (BDI). The BDI model is one of the best-known and most studied models of practical reasoning [3]. It is based on a philosophical model of human practical reasoning, originally developed by M. Bratman [4], and reduces the explanation of complex human behavior to a motivational stance [5]. This means that the causes of actions are always related to human desires, ignoring other facets of human motivation to act. Finally, the model consistently uses psychological concepts that closely correspond to the terms humans often use to explain their behavior.

* Funded by projects Ministerio de Fomento (SINPROB), CICYT TEC2005-07186 and CAM MADRINET S-0505/TIC/0255.
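The practical-reasoning cycle of a BDI agent can be sketched as a simple perceive/deliberate/act loop. The class and names below are our own illustrative assumptions, not the implementation described in this paper:

```python
# Minimal sketch of a BDI practical-reasoning loop (illustrative only;
# class and method names are hypothetical, not the paper's system).

class BDIAgent:
    def __init__(self):
        self.beliefs = {}      # the agent's current view of the environment
        self.desires = []      # goals the agent would like to achieve
        self.intentions = []   # goals the agent has committed to

    def perceive(self, percept):
        """Reactivity: update beliefs from a new observation."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Commit to those desires that are achievable under current beliefs."""
        self.intentions = [d for d in self.desires
                           if d.get("requires") in self.beliefs]

    def act(self):
        """Proactiveness: return the actions for the committed intentions."""
        return [i["action"] for i in self.intentions]


agent = BDIAgent()
agent.desires = [{"requires": "target_visible", "action": "track_target"},
                 {"requires": "battery_low", "action": "recharge"}]
agent.perceive({"target_visible": True})
agent.deliberate()
print(agent.act())  # only the achievable desire becomes an intention
```

Note how the motivational stance of the model is reflected directly in the loop: actions are generated only from desires that survive deliberation against current beliefs.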
In a visual sensor network, integrating the results obtained from multiple visual sensors can provide more accurate information than using a single visual sensor [6] [7]. This allows, for example, improved tracking accuracy in a surveillance system. However, data fusion must be performed with care: even though multiple visual sensors provide more information about the same object, that information may be mutually inconsistent. There are many reasons why a visual sensor network may produce inconsistent or wrong information while objects are being tracked. First, the tracked object may be affected by shadows [8], which can be caused by external conditions. Second, external conditions can directly affect the accuracy of the tracking process; for example, changes in illumination or a sudden increase in wind velocity both affect the foreground detector and therefore the global tracking process. A third problem that a multiagent visual sensor network must take into account is partial occlusion of the objects being tracked [1].
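The accuracy claim above can be checked numerically: averaging n independent noisy position measurements reduces the error standard deviation by roughly sqrt(n). The sketch below uses entirely synthetic noise values and is not taken from the paper's experiments:

```python
# Illustrative check that fusing several sensors improves accuracy:
# averaging n independent Gaussian measurements shrinks the RMSE by
# about sqrt(n). All numbers here are synthetic.
import random

random.seed(42)
TRUE_X = 10.0       # true object position (one coordinate, for simplicity)
NOISE_SIGMA = 1.0   # per-sensor measurement noise
TRIALS = 10_000

def rmse(n_sensors):
    """Root-mean-square error of the fused (averaged) estimate."""
    err2 = 0.0
    for _ in range(TRIALS):
        fused = sum(random.gauss(TRUE_X, NOISE_SIGMA)
                    for _ in range(n_sensors)) / n_sensors
        err2 += (fused - TRUE_X) ** 2
    return (err2 / TRIALS) ** 0.5

single = rmse(1)
fused4 = rmse(4)
print(round(single / fused4, 1))  # close to sqrt(4) = 2
```

This gain, of course, presumes the sensors' errors are independent and unbiased; the shadow, illumination and occlusion effects listed above violate exactly that assumption, which is why the inconsistency handling discussed next is needed.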
In our proposed visual sensor system, the data fusion process is carried out by a fusion agent in the multiagent visual sensor network. The fusion agent informs each surveillance-sensor agent that is performing inconsistent tracking. The main objective of our approach is to coordinate the network of visual sensors, with the fusion agent acting as the manager of this coordination process.
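A minimal sketch of this bottom-up/top-down exchange is given below. The message format, the median fusion rule and the distance threshold are our own illustrative assumptions, not the paper's exact protocol:

```python
# Sketch of the bottom-up/top-down coordination loop: surveillance-sensor
# agents report local tracks (bottom-up); the fusion agent fuses them and
# sends corrective feedback to inconsistent agents (top-down).
# Fusion rule and threshold are illustrative assumptions.

class SurveillanceSensorAgent:
    def __init__(self, sensor_id, position):
        self.sensor_id = sensor_id
        self.position = position          # current local track estimate (x, y)

    def report(self):
        """Bottom-up phase: send the local track to the fusion agent."""
        return {"sensor": self.sensor_id, "position": self.position}

    def receive_feedback(self, fused_position):
        """Top-down phase: correct the local track using the fused one."""
        self.position = fused_position


class FusionAgent:
    def __init__(self, threshold=2.0):
        self.threshold = threshold        # distance gate for inconsistency

    def coordinate(self, agents):
        reports = [a.report() for a in agents]              # bottom-up
        xs = sorted(r["position"][0] for r in reports)
        ys = sorted(r["position"][1] for r in reports)
        mid = len(reports) // 2
        fused = (xs[mid], ys[mid])  # coordinate-wise median (odd count assumed)
        for agent in agents:                                 # top-down
            dx = agent.position[0] - fused[0]
            dy = agent.position[1] - fused[1]
            if (dx * dx + dy * dy) ** 0.5 > self.threshold:
                agent.receive_feedback(fused)
        return fused


# Sensor "cam3" has drifted, e.g. because a shadow enlarged its blob.
cams = [SurveillanceSensorAgent("cam1", (10.0, 5.0)),
        SurveillanceSensorAgent("cam2", (10.4, 5.2)),
        SurveillanceSensorAgent("cam3", (16.0, 9.0))]
fused = FusionAgent().coordinate(cams)
print(fused, cams[2].position)  # the drifted cam3 is corrected top-down
```

The median is used here rather than the mean so that a single deviating sensor does not drag the fused estimate toward its own error before the gate is applied.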
This paper focuses on the interactions between several surveillance-sensor agents and their respective fusion agent in order to solve these specific inconsistency problems. In the next section, related work on multiagent systems in visual sensor networks is reviewed and our multiagent approach is presented. Later, we explain the bottom-up/top-down coordination in a multiagent visual sensor network. Then we present experimental results of the proposed method and, finally, the conclusions of this research.
978-1-4244-1696-7/07/$25.00 ©2007 IEEE.