Original Article
Proc IMechE Part I: J Systems and Control Engineering 1–24
© IMechE 2018
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0959651818800908
journals.sagepub.com/home/pii

Divergent trinocular vision observers design for extended Kalman filter robot state estimation

Edgar Alonso Martínez-García1, Joaquín Rivero-Juárez2, Luz Abril Torres-Méndez3 and Jorge Enrique Rodas-Osollo1

Abstract
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot's position controller. The state-estimation visual odometry results were compared with the robot's dead-reckoning kinematics, and both are combined as a recursive position controller. One observer model design is based on the analytical geometric multi-view approach. The other observer model has its fundamentals in multi-view lateral optical flow, which was reformulated as nonspatial–temporal and is modeled by an exponential function. This work presents the analytical deductions of the models and formulations. Experimental validation deals with five main aspects: multi-view correction, a geometric observer for range measurement, an optical flow observer for range measurement, dead-reckoning and visual odometry. Furthermore, the comparison of positioning includes a four-wheel odometer, deterministic visual observers and the observer–extended Kalman filter, compared with a vision-based global reference localization system.
Keywords
Visual odometry, trinocular sensor, extended Kalman filter, feature-based modeling, observer design, robot vision, sensor design, robot navigation, dynamic modeling

Date received: 27 July 2017; accepted: 24 August 2018

Introduction
Robotic systems require precise information about their environment from their sensors to accomplish numerous useful tasks. The real-world tasks of autonomous robots may require the simultaneous execution of one or more of the following robotic functions: path planning, collision-free navigation, localization, task scheduling, trajectory control, mapping, object recognition, and other task-specific perception capabilities. These capabilities depend on state observers, which provide estimates of the robot's state from measurement models when the physical state of the system cannot be determined by direct sensor observations; instead, inferred states are observed through sensing models. In multiple applications of mobile robotics, a large branch of deterministic methods (analytical functional forms) and state-estimation methods (statistical and probabilistic) are inherently based on traditional robotic sensors modeled by observers. For instance, exteroceptive devices have increasingly been utilized to collect and combine heterogeneous types of data, such as monocular vision, light detection and ranging (LiDAR), rings of ultrasonic sonars, red-green-blue with depth (RGB-D) or Kinect devices, and stereo systems (Figure 1(a) and (b)).
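Since the state observers discussed above feed an extended Kalman filter, it may help to recall the standard EKF prediction–correction recursion that the article later instantiates with the robot's kinematics (process model) and the trinocular observers (measurement model). The symbols below are generic textbook notation, not this paper's: $f$ and $h$ are the nonlinear process and measurement models, $F_k$ and $H_k$ their Jacobians, and $Q_k$, $R_k$ the process and measurement noise covariances.

```latex
% Prediction step (process model f, Jacobian F_k, process noise Q_k):
\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1},\, u_k), \qquad
P_{k|k-1} = F_k\, P_{k-1|k-1}\, F_k^{\top} + Q_k

% Correction step (measurement model h, Jacobian H_k, measurement noise R_k):
K_k = P_{k|k-1}\, H_k^{\top} \left( H_k\, P_{k|k-1}\, H_k^{\top} + R_k \right)^{-1}

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - h(\hat{x}_{k|k-1}) \right), \qquad
P_{k|k} = \left( I - K_k H_k \right) P_{k|k-1}
```

In this paper's setting, the prediction step is driven by the four-wheel dead-reckoning kinematics and the correction step by the triangulated three-dimensional key points from the divergent trinocular sensor.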
1 Laboratorio de Robótica, Institute of Engineering and Technology, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Mexico
2 Universidad Tecnológica de Ciudad Juárez, Ciudad Juárez, Mexico
3 Robotics Active Vision Group, Center for Research and Advanced Studies of the National Polytechnic Institute (CINVESTAV-IPN), Saltillo, Mexico

Corresponding author:
Edgar Alonso Martínez-García, Laboratorio de Robótica, Institute of Engineering and Technology, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico.
Email: edmartin@uacj.mx

Autonomous robot control relies on