Adaptive Homography-based Visual Servo Tracking
In this research project, a homography-based adaptive visual servo controller is developed to enable a robot end-effector to track a desired Euclidean-space trajectory, as determined by a sequence of images, for both the camera-in-hand and fixed camera configurations. To achieve the objective, a Lyapunov-based adaptive control strategy is employed to actively compensate for the lack of depth measurements and the lack of an object model. The error systems are constructed as a hybrid of pixel information and reconstructed Euclidean variables obtained by comparing the images and decomposing a homographic relationship. Simulation results are provided to demonstrate the performance of the developed controller for the fixed camera configuration.
A key issue that impacts camera-based visual servo control is the relationship between the Euclidean space and the image space. One factor that shapes this relationship is that the image space is a two-dimensional (2D) projection of the 3D Euclidean space. To compensate for the lack of depth information in 2D image data, some researchers have focused on alternate sensors (e.g., laser and sound ranging technologies). While some applications may be suited to alternative vision sensors that provide depth information, many applications are ill-suited for such technologies. Other researchers have explored using a camera-based vision system in conjunction with other sensors and a sensor-fusion method, or using additional cameras in a stereo configuration that triangulates on corresponding images. However, the practical drawbacks of incorporating additional sensors include increased cost, increased complexity, decreased reliability, and an increased processing burden to condition and fuse sensor data. Motivated by these practical insights, recent research has focused on monocular camera-based visual servo strategies that rely on analytic techniques to address the lack of depth information. One strategy that has recently been employed involves partitioning methods that exploit a combination of reconstructed 3D Euclidean information and 2D image-space information. For example, in the series of papers by Malis and Chaumette (e.g., [1,2,18,19]), various kinematic control strategies exploit the fact that the translation and rotation components can be decoupled through a homography. Specifically, information combined from the task space (obtained through a Euclidean reconstruction from the image data) and the 2D image space is utilized to regulate the translation and rotation error systems.
In , Deguchi utilizes a homography relationship and an epipolar condition to decouple the rotation and translation components, and then illustrates how two types of visual controllers can be developed from the decoupled information. Corke and Hutchinson  also developed a hybrid image-based visual servoing scheme that decouples the rotation and translation components from the remaining degrees of freedom. One drawback of some of the aforementioned controllers is the claim (without a supporting proof) that a constant, best-guess estimate of the depth information can be utilized in lieu of the exact value. Motivated by the desire to actively compensate for unmeasurable depth information, Conticelli developed an adaptive kinematic controller in  that ensures uniformly ultimately bounded (UUB) set-point regulation, provided conditions on the translational velocity and on the bounds of the uncertain depth parameters are satisfied. In , Conticelli et al. proposed a 3D depth estimation procedure that exploits a prediction error, provided a positive-definite condition on the interaction matrix is satisfied. In  and , Fang et al. recently developed 2.5D visual servo controllers to asymptotically regulate a manipulator end-effector and a mobile robot, respectively, by developing an adaptive update law that actively compensates for an unknown depth parameter. In , Fang et al. also developed a camera-in-hand regulation controller that incorporates a robust control structure to compensate for uncertainty in the extrinsic calibration parameters.
An examination of the literature reveals that most previous visual servo controllers have been designed to address only the regulation problem. That is, the objective of most control designs is to force a hand-held camera to a Euclidean position defined by a static reference image. Unfortunately, many practical applications require a robotic system to move along a predefined or dynamically changing trajectory. For example, a human operator may predefine an image trajectory through a high-level interface, and this trajectory may need to be modified on-the-fly in response to obstacles moving in and out of the environment. Moreover, it is well known that a regulating controller may produce erratic behavior and require excessive initial control torques if the initial error is large. Motivated by the need for new advancements to meet visual servo tracking applications, previous research has concentrated on developing different types of path planning techniques in the image space (e.g., see [6,21,22,23]). More recently, Mezouar and Chaumette developed a path-following image-based visual servo algorithm in  where the path to a goal point is generated via a potential function that incorporates motion constraints. In , Cowan et al. developed a hybrid position/image-space controller that forces a manipulator to a desired endpoint while avoiding obstacles and keeping the object in the field-of-view by avoiding pitfalls such as self-occlusion.
In contrast to the approaches in  and , in which a path is planned as a means to reach a desired setpoint, hybrid tracking controllers are developed in this paper in which the robot end-effector is required to track a prerecorded time-varying reference trajectory. To develop the hybrid controllers, a homography-based visual servoing approach is utilized. The motivation for this approach is that the visual servo control problem can be combined with a Lyapunov-based control design strategy to overcome many practical and theoretical obstacles associated with more traditional, purely image-based approaches. Specifically, one of the challenges of this problem is that the translation error system is corrupted by an unknown depth-related parameter. By formulating a Lyapunov-based argument, an adaptive update law is developed to actively compensate for the unknown depth parameter. In addition, the proposed approach facilitates: i) translational/rotational control in the full six degree-of-freedom task-space without requiring an object model, ii) partial servoing on pixel data, which yields improved robustness and increases the likelihood that the centroid of the object remains in the camera field-of-view, and iii) the use of an image Jacobian that is singular only at multiples of $2\pi$, eliminating the serious problem of singular image Jacobians inherent in many purely image-based controllers. The homography-based controllers in this paper target both the fixed camera and the camera-in-hand configurations. The control development for the fixed camera problem is presented in detail, and the camera-in-hand problem is included as an extension.
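The adaptive depth compensation argument can be illustrated with a minimal scalar sketch (hypothetical gains and a one-dimensional error system, not the paper's full six degree-of-freedom controller): with error dynamics $\dot{e} = v/z^*$ for an unknown constant depth $z^*$, the certainty-equivalence control $v = -k\hat{z}e$ and the update law $\dot{\hat{z}} = \gamma k e^2$ yield $\dot{V} = -k z^* e^2 \le 0$ for the Lyapunov function $V = \tfrac{1}{2}z^* e^2 + \tfrac{1}{2\gamma}(z^* - \hat{z})^2$, so $e \to 0$ without any measurement of $z^*$.

```python
def simulate(z_star=2.0, z_hat0=0.5, e0=1.0, k=2.0, gamma=1.0,
             dt=1e-3, t_final=10.0):
    """Scalar sketch of Lyapunov-based adaptive depth compensation.

    True error dynamics (depth z_star unknown to the controller):
        e_dot     = v / z_star
    Certainty-equivalence control and adaptive update law:
        v         = -k * z_hat * e
        z_hat_dot = gamma * k * e**2
    With V = 0.5*z_star*e**2 + (z_star - z_hat)**2 / (2*gamma),
    V_dot = -k*z_star*e**2 <= 0, so e converges to zero.
    """
    e, z_hat = e0, z_hat0
    for _ in range(int(t_final / dt)):
        v = -k * z_hat * e               # velocity command (no z_star used)
        e += (v / z_star) * dt           # plant integrates with the true depth
        z_hat += gamma * k * e**2 * dt   # depth estimate update
    return e, z_hat
```

Running `simulate()` shows the tracking error driven to zero while the depth estimate remains bounded; the estimate need not converge to the true depth, which is typical of adaptive designs that only guarantee error convergence.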
Fig. 1 Desired translational trajectory
In this paper, an adaptive visual servo controller is developed for the fixed camera configuration to enable the end-effector of a robot manipulator to track a desired trajectory determined by an a priori available sequence of images. The controller is formulated using a hybrid composition of image-space pixel information and reconstructed Euclidean information that is obtained via projective homography relationships between the actual image, a reference image, and the desired image. To achieve the objective, a Lyapunov-based adaptive control strategy is employed to actively compensate for unmeasurable depth and unknown object model parameters. Based on the development for the fixed camera controller, an extension is provided that enables a camera held by a robot end-effector to track a desired trajectory determined from a sequence of images (i.e., camera-in-hand tracking). Simulation results are provided to demonstrate the performance of the controller for the fixed camera problem.
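The projective homography relating the actual and desired images can be estimated from four or more feature-point correspondences. The sketch below uses the standard direct linear transform (DLT) with NumPy; it is an illustrative version of the reconstruction step, not the paper's specific implementation, and point names and tolerances are hypothetical:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 projective homography H mapping src -> dst
    (N >= 4 pixel-coordinate correspondences) via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence [x,y,1] ~ H[u,v,1] contributes two rows
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # h is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # remove the projective scale ambiguity

def project(H, pts):
    """Apply a homography to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

In the controller, the estimated homography would then be decomposed into its rotational and scaled-translational components (e.g., with a standard decomposition such as OpenCV's `cv2.decomposeHomographyMat`, given the camera intrinsics) to build the hybrid translation and rotation error systems.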
For more details on this research, please refer to the following conference paper: