ISSN 0146-4116, Automatic Control and Computer Sciences, 2019, Vol. 53, No. 3, pp. 203–213. © Allerton Press, Inc., 2019.
Adaptive Force-Vision Control of Robot Manipulator
Using Sliding Mode and Fuzzy Logic
N. Djelal a,*, N. Saadia a,**, and A. Ramdane-Cherif b,***
a Laboratory of Robotics, Parallelism and Embedded Systems, University of Science and Technology Houari Boumediene,
USTHB P.O. Box 32 El Alia, Bab Ezzouar, Algiers, 16111 Algeria
b Laboratory LISV, University of Versailles Saint-Quentin-en-Yvelines,
10/12 Avenue de l'Europe, Vélizy, 78140 France
*e-mail: ndjelal@usthb.dz
**e-mail: saadia_nadia@hotmail.com
***e-mail: rca@lisv.uvsq.fr
Received May 28, 2018; revised November 5, 2018; accepted November 8, 2018
Abstract—An adaptive sliding mode controller based on fuzzy logic is proposed to control a robot
manipulator over an unknown surface trajectory using force-vision tracking, considering uncertainties
in the kinematic, dynamic, and camera models. In this work we show that the robot can track the desired
trajectories despite model uncertainties: the sliding mode rejects disturbances and accelerates convergence,
while a nonlinear sliding surface is proposed to regulate the convergence speed in order to eliminate the
overshoot of the system response, thanks to online fuzzy logic adaptation used to generate the equivalent
control. The stability of the system has been validated using Lyapunov criteria. To demonstrate the
performance of the proposed control law, we performed simulations consisting of a series of tests under
various conditions. The obtained results allowed us to validate the robustness of the controller against
payload variations and model uncertainties.
Keywords: force-vision controller, adaptive fuzzy logic, sliding mode control, visual servoing, constrained surface, uncertain kinematics and dynamics
DOI: 10.3103/S0146411619030027
1. INTRODUCTION
Force-vision control of a constrained robot is moving toward mainstream use across many robot applications. When a task requires contact between the robot end-effector and its environment, it is necessary to control both the articular position and the interaction force. Unfortunately, many systems are not amenable to modeling by physical and electrical laws, because such systems may be complex, highly nonlinear, and have variables that are difficult to measure, particularly in the case of interaction with an unknown environment. It is therefore possible to overcome this modeling imprecision by using artificial intelligence techniques to estimate the unknown parameters of a system with unstructured uncertainties.
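As a concrete illustration of controlling both position and interaction force simultaneously, a classical hybrid scheme splits the task-space directions with a diagonal selection matrix: selected directions are position-controlled, complementary directions are force-controlled. The sketch below is purely illustrative with assumed gains and names; it is not the adaptive controller proposed in this paper.

```python
import numpy as np

def hybrid_control(x, x_des, f, f_des, S, kp=50.0, kf=0.3):
    """Illustrative hybrid position/force law in task space.

    S is a diagonal 0/1 selection matrix: directions with S[i,i]=1
    are position-controlled, the complementary (I - S) directions
    are force-controlled. Gains kp, kf are arbitrary example values.
    """
    I = np.eye(len(x))
    u_pos = kp * (x_des - x)      # proportional position term
    u_force = kf * (f_des - f)    # proportional force term
    return S @ u_pos + (I - S) @ u_force

# Example: x and y position-controlled, z (contact normal) force-controlled
S = np.diag([1.0, 1.0, 0.0])
x = np.array([0.1, 0.0, 0.3]); x_des = np.array([0.2, 0.0, 0.3])
f = np.array([0.0, 0.0, 4.0]); f_des = np.array([0.0, 0.0, 10.0])
u = hybrid_control(x, x_des, f, f_des, S)
```

The key design choice is that each task direction is governed by exactly one modality, which avoids the position and force loops fighting each other along the contact normal.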
Much research has focused on combining vision and force modalities to design a control structure for performing intelligent tasks safely. Prats et al. [1, 2] proposed a hybrid force-vision control structure inspired by the conventional force-position scheme, in order to perform interaction tasks between the robot and its environment. Nevertheless, in reality the exact model of an unstructured environment is not available; in addition, this force-vision control law depends strongly on the assumed stiffness value of the environment, and the fixed gains of the vision controller make it weak against fast variations of the visual features. They also used a kinematic model of the robot instead of the dynamic model, which induces imprecision in the case of fast movements and payload variations.
Dean-Le et al. [3] proposed an adaptive visual and force controller based on sliding mode, implemented on planar robots with friction uncertainties. However, they do not consider interaction with unknown surfaces, and they used a simplified image Jacobian that is valid only for a planar robot.
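The sliding-mode idea invoked in these works can be sketched generically: define a sliding variable s = ė + λe from the tracking error, then apply a switching term that drives s to zero, smoothed with a saturation function to reduce chattering. This is a textbook illustration under assumed symbols and gains, not the adaptive law of [3] or of the present paper.

```python
import numpy as np

def smc_term(e, e_dot, lam=2.0, eta=5.0, phi=0.05):
    """Generic sliding-mode correction term.

    s = e_dot + lam * e is the sliding variable; the switching gain
    eta drives s toward zero, and the boundary layer of width phi
    replaces sign(s) with a saturation to limit chattering.
    """
    s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)  # smooth approximation of sign(s)
    return -eta * sat

# Example: position error 0.1, zero error rate -> s = 0.2, well outside
# the boundary layer, so the term saturates at -eta.
u = smc_term(np.array([0.1]), np.array([0.0]))
```

Inside the boundary layer (|s| < phi) the term becomes proportional to s, trading some invariance for smoother actuation, which is the standard chattering mitigation.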
Cheah et al. [4, 5] proposed an adaptive force-vision control law in the presence of uncertainties in both the internal and external parameters, in which a sliding vision surface is projected into the task space through the estimated Jacobian of the whole system, i.e., vision and robot. However, they do not consider the local-minima problem in the estimation, nor the uncertainties in the camera