Path Planning for Image-based Control of Wheeled Mobile Manipulators

Moslem Kazemi, Kamal Gupta, Mehran Mehrandezh

Abstract—We address the problem of incorporating path planning with image-based control of a wheeled mobile manipulator (WMM) performing visually-guided tasks in complex environments. The WMM consists of a wheeled (non-holonomic) mobile platform and an on-board robotic arm with a camera mounted at its end-effector. The visually-guided task is to move the WMM from an initial to a desired location while respecting image and physical constraints. We propose a kinodynamic planning approach that explores the camera state space for permissible trajectories by iteratively extending a search tree in this space while simultaneously tracking these trajectories in the WMM configuration space. We utilize weighted pseudo-inverse Jacobian solutions combined with a null-space optimization technique to effectively coordinate the motion of the mobile platform and the arm. We also present preliminary results obtained by executing the planned trajectories on a real WMM system via a decoupled control scheme in which the on-board arm is servo controlled along the planned feature trajectories while the mobile platform is simultaneously controlled along its trajectory using a state-feedback tracking method.

I. INTRODUCTION

Wheeled mobile manipulators represent a major effort to bring both mobility and manipulation capabilities to human environments. To move autonomously and accomplish tasks robustly in complex environments, high-level global motion planning techniques must be closely integrated with sensor-based control of such systems. Much effort has been devoted to both motion planning and sensor-based control, with promising advances in each individual area over the past decades.
However, the integration of planning and control, in particular for complex systems such as a WMM, remains a challenging topic, and is crucial for fully autonomous and robust solutions. Among various sensory inputs, vision has received much attention and found many applications in robotics because it provides a large amount of information at high frequency. A great deal of research has been devoted to developing vision-based control strategies for robotic applications, leading to an active area of research called visual servoing [1]. The main idea in visual servoing is to use vision feedback to control the motion of the robot while performing a task. In contrast to position-based visual servoing (PBVS), where the control is performed in Cartesian space based on 3-D information retrieved from the image, in image-based visual servoing (IBVS) techniques the feedback is defined based on image features and the control loop is closed directly in the image. This results in more robust control in the presence of calibration and modeling errors, which adds to the popularity of IBVS. In [2], through simple yet effective examples, Chaumette outlined the potential problems of stability and convergence of IBVS techniques: singularities in the image Jacobian leading to unstable behavior, and convergence to local minima due to the existence of multiple camera poses yielding the same terminal image of the target.

Moslem Kazemi is with the Robotics Institute, Carnegie Mellon University, PA, USA moslemk@cmu.edu; Kamal Gupta is with Simon Fraser University, BC, Canada kamal@sfu.ca; and Mehran Mehrandezh is with the University of Regina, SK, Canada mehran.mehrandezh@uregina.ca.

Fig. 1. SFU wheeled mobile manipulator system (a Powerbot mobile platform with an on-board 6-DOF Schunck robotic arm) reaches a desired location by tracking a target object.
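The classical IBVS loop discussed above, together with the weighted pseudo-inverse and null-space coordination mentioned in the abstract, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the servo gain lam, the weight matrix W, and the secondary-cost gradient grad_h are all assumptions introduced here for exposition.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS law: drive the feature error e = s - s* to zero
    with v = -lam * L^+ e, where L is the interaction (image) Jacobian.
    Singularities of L are exactly where pinv(L) becomes ill-conditioned."""
    e = s - s_star
    return -lam * np.linalg.pinv(L) @ e

def weighted_pinv(J, W):
    """Weighted pseudo-inverse J_W^+ = W^-1 J^T (J W^-1 J^T)^-1 for a
    full-row-rank J; minimizes qdot^T W qdot subject to J qdot = v,
    biasing the motion toward 'cheap' (low-weight) joints."""
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T)

def wmm_joint_rates(J, W, v_cam, grad_h):
    """Map a desired camera velocity to platform+arm joint rates.
    The projector (I - J_W^+ J) sends grad_h into the null space of J,
    optimizing a secondary cost without disturbing the camera motion."""
    Jw = weighted_pinv(J, W)
    n = J.shape[1]
    return Jw @ v_cam + (np.eye(n) - Jw @ J) @ grad_h
```

Because the null-space term is annihilated by J, the commanded joint rates still reproduce the desired camera velocity exactly (J @ qdot == v_cam), which is what allows platform/arm coordination to be tuned independently of the servo task.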
Moreover, in IBVS techniques there is no direct control over the image/camera/robot trajectories induced by the servoing loop in the image and physical spaces. Therefore, these trajectories might violate the image and/or physical constraints. The aforementioned challenges have recently motivated researchers to incorporate path planning strategies into the visual servo loop. The main idea of path planning for visual servoing is to plan and generate feasible image feature trajectories while accounting for the constraints, and then to servo the robot along the planned trajectories. A detailed review of existing path planning techniques for visual servoing is provided in [3]. A number of techniques aim at interpolating a path directly in the image space between the initial and desired images without using any knowledge of camera calibration or the target model (e.g., see [4], [5]). Potential fields have been employed in the context of visual servoing in the face of constraints (e.g., field of view and joint limits [6], or obstacle avoidance [7]). Other techniques aim at finding globally optimal paths with respect to various costs, for example distance from the image boundary, length of the path traversed by the robot, and energy expenditure. These approaches employ polynomial parametrization of camera paths (e.g., [8]) or utilize tools from optimal control (e.g., [9]). The convergence problems of potential field-