Adaptive 2.5D Visual Servoing of Kinematically Redundant Robot Manipulators
In this paper, the 3-Dimensional (3D) position and orientation of a camera held by the end-effector of a robot manipulator are regulated to a constant desired position and orientation despite (i) the lack of depth information from the camera to the target for both the actual and desired camera poses, (ii) the lack of a 3D model of the target object, and (iii) parametric uncertainty in the dynamic model of the robot manipulator. Specifically, by fusing 2D image-space and 3D task-space information (i.e., 2.5D visual servoing) while actively adapting for the unknown depth information, a task-space kinematic controller is developed and proven to ensure asymptotic regulation of the camera position and orientation. To enhance the robustness of the control design, the integrator backstepping approach is then utilized to develop a joint torque control input that ensures asymptotic regulation of the position and orientation of the camera, held by the end-effector of a kinematically redundant robot manipulator, despite parametric uncertainty in the robot dynamic model. The stability of each controller is proven through a Lyapunov-based stability analysis, and the performance of the torque control input is demonstrated through simulation results.
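To illustrate the core idea of actively adapting for unknown depth, the following is a minimal sketch, not the paper's actual controller: a translational task-space error e whose closed-loop dynamics are scaled by an unknown constant depth, regulated with a kinematic control input and a Lyapunov-motivated gradient update for the depth estimate. All names, gains, and the simplified error dynamics here are hypothetical assumptions for illustration only.

```python
import numpy as np

def simulate(e0, d_true=2.0, d_hat0=1.0, k=1.0, gamma=0.5,
             dt=0.01, steps=2000):
    """Hypothetical adaptive depth-compensation sketch.

    Assumed simplified error dynamics:  e_dot = (1/d_true) * v,
    where d_true > 0 is an unknown constant depth-like parameter.
    The controller uses the estimate d_hat in place of d_true:
        v      = -k * d_hat * e          (kinematic control input)
        d_hat' =  gamma * k * ||e||^2    (gradient adaptation law)
    With V = (d_true/2)||e||^2 + (1/(2*gamma))(d_hat - d_true)^2,
    this choice yields V' = -k * d_true * ||e||^2 <= 0, so e -> 0
    even though d_true is never known. Forward-Euler integration.
    """
    e = np.asarray(e0, dtype=float)
    d_hat = float(d_hat0)
    for _ in range(steps):
        v = -k * d_hat * e                      # control input
        e = e + dt * (v / d_true)               # true error dynamics
        d_hat = d_hat + dt * gamma * k * float(e @ e)  # adaptation
    return e, d_hat

if __name__ == "__main__":
    e_final, d_final = simulate([0.2, -0.1, 0.15])
    print(np.linalg.norm(e_final), d_final)
```

Note that, as in the paper's kinematic result, the error is regulated asymptotically without the depth estimate necessarily converging to the true value; the estimate only needs to remain bounded.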
Simulation results are provided to demonstrate the performance of the proposed controller. Figures 1 and 2 illustrate the translation and rotation error performance, respectively. The estimate of the unknown depth parameter d is presented in Figure 3, while the torque control inputs are depicted in Figure 4.
For more information concerning this research, please refer to the following publication:
Y. Fang, D. M. Dawson, W. E. Dixon, and M. S. de Queiroz, "Homography-Based Visual Servoing of Wheeled Mobile Robots," Proc. of the IEEE Conference on Decision and Control, December 2002, to appear.