2.5D Visual Servoing with a Fixed Camera

Abstract

In this paper, we investigate the translational and rotational motion of the end-effector of a robot under visual feedback from a fixed camera. We achieve an exponential stability result for the regulation of the end-effector to a desired position and orientation. Specifically, by utilizing visual information from a single fixed camera, we capture the motion of four points located in a fictitious plane attached to the end-effector of the robot, which allows us to formulate the control problem for the 6-DOF motion of the end-effector in Cartesian space. By assuming knowledge of the camera intrinsic parameters, we obtain the rotational motion of the end-effector through a homography decomposition, while utilizing the pixel motion of the four points to obtain the translation information. The stability of the controller is proven through a Lyapunov-based stability analysis. The performance of the algorithm is demonstrated through simulation results.

Introduction

Robotic systems employ sensor-based control strategies for efficient operation as well as to obtain robustness against disturbances and/or modeling uncertainties. Typically, robots utilize encoders to sense joint movements; velocity information is obtained through tachometers or by applying a backward-difference algorithm to the joint positions. This approach works well for robots with a finite number of degrees of freedom, since a Jacobian matrix can be applied to the joint velocities to obtain the end-effector position/velocity in the task-space. However, for hyperredundant robots (i.e., robots with ideally infinite degrees of freedom), it becomes difficult to compute the forward kinematics of the robot (i.e., the task-space coordinates of the end-effector are not easily obtainable). Visual feedback of the end-effector's position and orientation in the task-space using a fixed camera offers a convenient alternative to this otherwise cumbersome task. Moreover, any robot operating in an unstructured environment is more robustly controlled with a vision system that obtains position information for both the robot and the obstacles in its environment. Vision-based systems also have the additional advantage of allowing for non-contact measurement of the environment. Finally, vision systems can be used for both on-line trajectory planning and feedforward/feedback control (i.e., visual servoing). An overview of the state of the art in robot visual servoing can be found in [9,13].

The results from vision-based research can broadly be classified into Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS) techniques. As is well known, both of these approaches suffer from deficiencies. In the last few years, partitioned approaches have been developed that fuse 3D task-space information with 2D image-space information to overcome many of the shortcomings of the PBVS and IBVS approaches. Recently, Malis and Chaumette [1, 2, 11, 12] proposed various kinematic control strategies (coined 2.5D visual servo controllers) by exploiting the fact that the translation and rotation components can be decoupled through a homography. Specifically, information from the 3D task-space (obtained either through a given 3D model or, more interestingly, through a projective Euclidean reconstruction) is utilized to regulate the rotation error system, while information from the 2D image-space is utilized to control the translation error system. In [5], Deguchi proposed two algorithms to decouple the rotation and translation components using a homography and an epipolar condition. Specifically, Deguchi decomposed the translation and rotation components through a homography and stated that the 2.5D controller given in [2] can be utilized; as an alternate method, Deguchi developed a kinematic controller that utilizes task-space information to regulate the translation error and image-space information to regulate the rotation error. More recently, Corke and Hutchinson [4] developed a new hybrid image-based visual servoing scheme that decouples the rotation and translation components about the z-axis from the remaining degrees of freedom, so as to address the problem of desirable image-space trajectories resulting in undesirable Cartesian trajectories. One drawback of the aforementioned controllers is that they require a constant estimate of the depth information, which is then utilized in lieu of the exact value. That is, as stated in [12], an off-line learning stage is required to estimate the distance of the desired camera position from the reference plane. Motivated by the desire to compensate for the aforementioned depth information, [3] developed an adaptive kinematic controller that ensures uniformly ultimately bounded (UUB) set-point regulation of the image point errors while compensating for the unknown depth information, provided conditions on the translational velocity and the bounds on the uncertain depth parameters are satisfied. Motivated by the work of Malis et al., Fang et al. [6] designed a task-space kinematic controller to ensure asymptotic regulation of the position and orientation of the camera while actively adapting for unknown depth information.
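To make the homography-based decoupling concrete, the sketch below estimates the homography induced by four coplanar feature points between the current and desired views and decomposes it into candidate rotation/translation/plane-normal triples. This is a minimal illustration using OpenCV's decomposition rather than the specific procedures of the papers cited above; the intrinsic matrix K and the pixel coordinates are placeholder values, and the camera calibration is assumed known, consistent with the setting considered here.

import numpy as np
import cv2

# Known camera intrinsics (assumed, per the fixed-camera setting).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Pixel coordinates of four coplanar points (no three collinear)
# in the current view and the desired view. Placeholder values.
p_cur = np.float32([[300, 200], [420, 210], [410, 330], [290, 320]])
p_des = np.float32([[320, 240], [440, 240], [440, 360], [320, 360]])

# Projective homography relating the two views of the plane.
H, _ = cv2.findHomography(p_cur, p_des)

# Decompose into up to four candidate (R, t/d, n) triples.
num, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(Rs, ts, ns):
    print("R =\n", R, "\nt/d =", t.ravel(), "\nn =", n.ravel())

In general the decomposition yields up to four mathematically valid solutions; the physically correct one is selected using positive-depth (visibility) constraints or a priori knowledge of the plane normal.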

In this paper, we consider the design of a kinematic controller for the 6-DOF motion of a robot end-effector under purely visual feedback using a fixed camera configuration. Under the requirements/constraints that: i) the camera calibration matrix is completely known, ii) pixel coordinate information is available for at least four coplanar points (no three of which are collinear) located on a fictitious plane attached to the robot end-effector, and iii) depth information for the end-effector is unavailable, we obtain an exponential stability result for regulating the robot end-effector to a desired position and orientation. With the robot end-effector in its desired position and orientation, the pixel coordinates of the four coplanar points described above are assumed to be known a priori. Specifically, we utilize a homography decomposition to obtain a parameterization of the rotation between the current and desired end-effector orientations, while the pixel information of one of the four points, together with a manipulation of parameters obtained from the homography decomposition, is utilized to control the translational motion of the end-effector to its desired location. A Lyapunov-based stability argument is provided to validate the algorithm design.
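The paper's exact control law is not reproduced on this page, so the following is only a hypothetical sketch of a 2.5D-style proportional kinematic law consistent with the description above: the rotation error is parameterized by the axis-angle vector θu extracted from the homography-derived rotation, the translation error stacks the pixel error of one feature point with the logarithm of a depth-ratio analog α obtained from the homography decomposition, and both errors are driven to zero by proportional feedback. All function and gain names (kinematic_control, lam_v, lam_w) are illustrative, not taken from the paper.

import numpy as np
import cv2

def rotation_error(R):
    # theta*u (axis-angle) parameterization of the rotation matrix;
    # it is zero if and only if R is the identity.
    rvec, _ = cv2.Rodrigues(R)
    return rvec.ravel()

def kinematic_control(p, p_d, alpha, R, lam_v=1.0, lam_w=1.0):
    # p, p_d : current/desired pixel coordinates of one feature point
    # alpha  : depth-ratio analog recovered from the homography decomposition
    # R      : rotation between current and desired end-effector orientations
    e_v = np.array([p[0] - p_d[0], p[1] - p_d[1], np.log(alpha)])
    e_w = rotation_error(R)
    v = -lam_v * e_v  # commanded translational velocity
    w = -lam_w * e_w  # commanded rotational velocity
    return v, w

Under the idealized kinematic model with full decoupling, each error component of such a proportional law obeys ė = -λe and hence decays exponentially; the paper's Lyapunov analysis establishes this rigorously for the actual controller.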

System Setup

Simulation Results

Fig. 1 Tracking Error Vector

Fig. 2 Rotational Error Vector

Fig. 3 Translational Control Input Vector

Fig. 4 Rotational Control Input Vector

Conclusion

In this project, a kinematic controller was developed that forces a robot end-effector to its desired position and orientation exponentially fast while utilizing a fixed camera for visual feedback. Exact knowledge of the camera's intrinsic parameters is assumed. The rotational controller utilizes information obtained from a decomposition of the Euclidean homography, while the translational controller is based upon direct pixel information obtained by the camera as well as an analog of the depth-ratio information obtained via a manipulation of the parameters of the Euclidean homography decomposition. Simulation results demonstrate the efficacy of the proposed control strategy of (28) and (30). Since the controller is designed to be differentiable, robot dynamic effects can easily be incorporated via backstepping design methods [6]. Future work will involve algorithm designs that are independent of the camera intrinsic parameters.

Conference Papers

For more details on this research, please refer to the following conference paper:

  • J. Chen, A. Behal, D. Dawson, and Y. Fang, "2.5D Visual Servoing with a Fixed Camera."
