Arab J Sci Eng
https://doi.org/10.1007/s13369-017-2917-0
RESEARCH ARTICLE - COMPUTER ENGINEERING AND COMPUTER SCIENCE
A Vision-Based Real-Time Mobile Robot Controller Design Based on Gaussian Function for Indoor Environment

Emrah Dönmez¹ · Adnan Fatih Kocamaz¹ · Mahmut Dirik¹
Received: 18 March 2017 / Accepted: 24 October 2017
© King Fahd University of Petroleum & Minerals 2017
Abstract In this study, a visual servoing go-to-goal behavior controller is designed to control a differential drive mobile robot toward a static target. The inputs of the controller are based on either a weighted-graph or a triangle-trigonometry kinematic model. The controller is built on the general Gaussian function, adapted to the dynamics of the differential drive mobile robot. The state parameters of the dynamics are obtained by processing images in real time, with the aim of developing an efficient, internal-sensor-independent visual-based control method. The single-head camera captures image frames from the indoor environment, and a real-time tracking process locates the robot and the target in sequential frames. The distances between graph nodes or the angles between edges are assigned as the main control inputs, according to the kinematic model in use. The wheel velocities are then computed for both models using the general Gaussian function. We compare our method with two classical control methods, PID and fuzzy PID; the designed visual-based controller achieves highly accurate control of the mobile robot.
Keywords Visual servoing · Gaussian function ·
Triangle/graph models · Real-time control
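The abstract's central idea, mapping the image-derived distance and heading error to wheel velocities through a Gaussian profile, can be sketched as follows. This is a minimal illustration only: the paper's exact mapping, its parameters (`v_max`, `sigma_d`, `sigma_a`), and the sign convention for the turning term are not given in this excerpt and are assumed here.

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """General Gaussian function, used as a smooth gain profile."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def wheel_velocities(distance, heading_error, v_max=0.3, sigma_d=0.5, sigma_a=0.6):
    """Hypothetical Gaussian mapping from state to differential-drive wheel speeds.

    Forward speed vanishes smoothly as the robot reaches the goal
    (Gaussian peak at distance 0), and the turning component grows as the
    heading error moves away from the Gaussian's peak at zero error.
    """
    forward = v_max * (1.0 - gaussian(distance, mu=0.0, sigma=sigma_d))
    turn = v_max * (1.0 - gaussian(heading_error, mu=0.0, sigma=sigma_a))
    turn = math.copysign(turn, heading_error)  # steer toward the target side
    v_left = forward - turn
    v_right = forward + turn
    return v_left, v_right
```

With this shape, both wheels stop at the goal, drive straight when the heading error is zero, and differ in speed in proportion to how far the heading error lies from the Gaussian peak.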
✉ Emrah Dönmez
emrahdonmez@msn.com

Adnan Fatih Kocamaz
fatih.kocamaz@inonu.edu.tr

Mahmut Dirik
mahmut.dirik@inonu.edu.tr

¹ Department of Computer Engineering, Faculty of Engineering, İnönü University, Malatya, Turkey
1 Introduction
The control task is a challenging issue in robotic applications. A remarkable number of studies focus on internal-sensor-based control with classical methods such as PID, fuzzy control, fuzzy PI, and heuristics [1–5]. The control task is generally carried out using global position and angular heading information within the control procedure [6,7]. These methods depend on data from internal sensors such as accelerometers, gyroscopes, and encoders, and from external sensors such as range sensors, infrared sensors, and thermal cameras. From this sensory information, the angular states are calculated and the parameters of the controllable parts are updated to form the next motion.
Visual-based control methods aim to control a dynamic system by utilizing visual features acquired from images provided by one or multiple cameras [8–11]. In other words, control of a robot can be modeled through a visual perception infrastructure, by applying image-processing techniques to each frame acquired from the imaging device. The aim of all visual-based control is to decrease the error and the motion cost to an admissible level. There are two types of errors in robotic systems: systematic and non-systematic. Systematic errors generally stem from the encoders, sensors, and physical structure of the robot parts; non-systematic errors generally stem from sliding, collisions, falls, and so on. Ultimately, the aim of every robot control method is to compensate for such errors until the given task is accomplished [12]. The main advantages of visual servoing are that it requires less sensor data, it is suitable for controlling multiple robots, internal and external sensors on the robots are generally not needed, and, in terms of scalability, it provides a larger operating area as the number of imaging devices increases. Visual servoing has been implemented in a wide range of robotic studies. In early studies, robotic arm manipulators have