IFAC PapersOnLine 50-1 (2017) 11191–11196
Available online at www.sciencedirect.com (ScienceDirect)
ISSN 2405-8963. © 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control.
doi: 10.1016/j.ifacol.2017.08.1243
Keywords: Underwater simulator; Benchmarking; Underwater intervention; Robotics; Dehazing.
1. INTRODUCTION
Over the last eight years (2009-2016), the IRSLab research group has
been very active in the field of underwater robotic manipulation
through three national research projects: RAUVI (Sanz et al., 2010),
TRITON (Sanz et al., 2013a) and the ongoing MERBOTS, funded by the
Spanish Ministry, as well as FP7-TRIDENT (Sanz et al., 2013b), funded
by the European Commission. All of these projects have been
coordinated among several partners and involve highly complex hardware
and software components. Moreover, they share common objectives,
dealing with underwater intervention systems to be validated at sea.
Consequently, all partners need assurance that their part of the
system and their algorithms will work properly once the complete
system is assembled and tested. To this end, a simulator that allows
researchers to load a model of the whole system, together with a
realistic scenario for testing their algorithms, was considered an
extremely important tool. In addition to the simulator, benchmarking
capabilities help researchers compare different algorithms and better
understand their limitations and robustness, making their improvement
possible.
⋆ This work was partly supported by the Spanish Ministry of Economy
and Competitiveness under grant DPI2014-57746-C3 (MERBOTS Project), by
Universitat Jaume I under grant PID2010-12 and PhD grants
PREDOC/2012/47 and PREDOC/2013/46, and by Generalitat Valenciana under
PhD grant ACIF/2014/298 and grant PROMETEO/2016/066.
Regarding benchmarking in robotics, considerable effort has been made
in recent years. In fact, some recent European projects, such as
FP7-BRICS (Best Practice in Robotics), were devoted to this specific
topic, as in Nowak et al. (2010). Moreover, previous research in this
context, such as DEXMART (2009), makes it clear that comparing results
from different approaches and assessing the quality of the research is
extremely difficult in robotics, and doing so when the robot is
interacting with the real world is even more complicated. Several
definitions of benchmarks have been proposed; in this paper, the one
stated in Dillman (2004) is used: benchmarks are defined as numerical
evaluations of results, with repeatability, independence and
unambiguity as the main properties of these metrics.
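As an illustration of these three properties (this sketch is not part of the described system; the function and parameter names are hypothetical), a benchmark can be reduced to running a trial repeatedly and reporting a mean score together with its spread, so the evaluation is numerical, repeatable and unambiguous:

```python
import statistics

def benchmark(run_trial, n_trials=10):
    """Run a scalar-valued trial repeatedly and summarise its score.

    `run_trial` is any callable returning a single numeric result
    (e.g. mean tracking error in pixels).  Reporting the mean and the
    spread over repeated runs makes the evaluation repeatable and
    unambiguous, in the sense of Dillman (2004).
    """
    scores = [run_trial() for _ in range(n_trials)]
    return {
        "mean": statistics.mean(scores),
        "stdev": statistics.stdev(scores) if n_trials > 1 else 0.0,
        "trials": n_trials,
    }

# Example with a deterministic stand-in for a tracking run:
result = benchmark(lambda: 3.5, n_trials=5)
```

Independence is then a property of the trial itself: each call to `run_trial` must not share state with previous calls.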
In order to simulate the experiments, the UWSim simulator (Prats
et al., 2012) and a benchmarking tool tightly integrated with it were
developed (see figure 1). Moreover, a methodology has been designed
that allows researchers to work under different conditions and to
increase the level of difficulty gradually. This methodology also
helps improve the benchmarking scenarios, making them increasingly
realistic.
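The graded-difficulty methodology can be sketched as a sequence of scenario configurations evaluated in order of increasing severity. The level names, fields and the toy algorithm below are illustrative assumptions, not parameters of UWSim or the benchmarking module:

```python
# Hypothetical difficulty ladder: each level adds simulated
# degradation on top of the controlled water-tank scene.
LEVELS = [
    {"name": "ideal",    "turbidity": 0.0, "current": 0.0},
    {"name": "moderate", "turbidity": 0.3, "current": 0.1},
    {"name": "harsh",    "turbidity": 0.7, "current": 0.3},
]

def evaluate(algorithm, levels=LEVELS):
    """Run `algorithm` (a callable taking a scenario dict and
    returning a score) across all difficulty levels, hardest last."""
    return {lvl["name"]: algorithm(lvl) for lvl in levels}

# A toy tracker whose score degrades linearly with turbidity:
scores = evaluate(lambda scenario: 1.0 - scenario["turbidity"])
```

Comparing how quickly each candidate algorithm's score falls across the ladder is what allows robustness to be judged before field trials.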
One of the main difficulties of autonomous underwater intervention is
the need to interpret a hazardous environment: for instance, detecting
and recognizing objects in degraded images in order to grasp and
manipulate them.
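Image degradation by turbidity is commonly approximated with the same scattering model used in dehazing work. As a minimal sketch (the function name and the `beta`/`veil` values are illustrative assumptions, not parameters from any specific system):

```python
import numpy as np

def add_turbidity(image, depth, beta=1.2, veil=0.8):
    """Degrade a clean image with the scattering model often used to
    approximate underwater turbidity:

        I = J * t + A * (1 - t),   t = exp(-beta * depth)

    image: float array in [0, 1], shape (H, W, 3).
    depth: per-pixel distance map, shape (H, W).
    beta is the attenuation coefficient; veil is the background
    (veiling) light A.
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # transmission map
    return image * t + veil * (1.0 - t)

clean = np.full((4, 4, 3), 0.2)                  # dark synthetic scene
hazy = add_turbidity(clean, np.full((4, 4), 2.0))  # 2 m everywhere
```

At zero distance the image is returned unchanged, and with growing distance every pixel converges to the veiling light, which is exactly the contrast loss a tracker must cope with.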
This is why many works have been presented in the underwater image
processing context; Raimondo and Silvia (2010) offer a review of them.
Although many works
Benchmarking water turbidity effect on tracking algorithms ⋆

J. Pérez*, J. Sales*, A. Peñalver*, J. J. Fernández*, D. Fornas*,
J. C. García*, R. Marín*, P. J. Sanz*

* Computer Science and Engineering Department, University of Jaume-I,
Castellón, Spain (e-mail: japerez@uji.es)

Abstract: Field experiments in underwater robotics research require a
large amount of resources in order to test the system in sea
conditions. Moreover, sea conditions change constantly, making it
impossible to reproduce specific situations. For these reasons,
testing, comparing and evaluating different algorithms under similar
conditions is utopian. To deal with this, a framework that mixes real
experiments with a simulated environment is proposed, allowing
objective comparison of algorithms in a scenario as close as possible
to field experiments. This is achieved by using real sensors in a
controllable environment, such as a water tank, and adding simulated
hostile conditions that are difficult to reproduce in a controlled
environment, such as water turbidity, composing a Hardware In the Loop
(HIL) framework. The framework comprises UWSim, an underwater
simulator, and a benchmarking module able to measure the performance
of external software. This setup is used in a search and recovery use
case to compare different tracking algorithms, predicting the effect
of water turbidity on them. The results allow choosing the best option
without the need for field experiments.