Homography-Based Visual Servo Tracking Control of a Wheeled Mobile Robot


A visual servo tracking controller is developed in this research work for a monocular camera system mounted on an underactuated wheeled mobile robot (WMR) subject to nonholonomic motion constraints (i.e., the camera-in-hand problem). A prerecorded image sequence (e.g., a video) of three target points is used to define a desired trajectory for the WMR. By comparing the target points from the prerecorded sequence with the corresponding target points in the live image, projective geometric relationships are exploited to construct a Euclidean homography. The information obtained by decomposing the Euclidean homography is used to develop a kinematic controller. A Lyapunov-based analysis is used to develop an adaptive update law to actively compensate for the lack of depth information required for the translation error system. Simulation results are provided to demonstrate the control design.


Wheeled mobile robots (WMRs) are often required to execute tasks in unstructured environments. Due to the uncertainty in the environment, an intelligent sensor that can enable autonomous navigation is well motivated. Given this motivation, researchers initially targeted the use of a variety of sonar and laser-based sensors. Some initial work also targeted the use of a fusion of various sensors to build a map of the environment for WMR navigation (see [17, 19, 28, 29, 31] and the references therein). While this is still an active area of research, various shortcomings associated with these technologies, along with recent advances in image extraction/interpretation technology and in control theory, have motivated researchers to investigate the sole use of camera-based vision systems for autonomous navigation. For example, using consecutive image frames and an object database, the authors of [18] recently proposed a monocular visual servo tracking controller for WMRs based on a linearized system of equations and Extended Kalman Filtering (EKF) techniques. Also using EKF techniques on the linearized kinematic model, the authors of [7] used feedback from a monocular omnidirectional camera system (similar to [1]) to enable wall following, follow-the-leader, and position regulation tasks. In [16], Hager et al. used a monocular vision system mounted on a pan-tilt unit to generate image-Jacobian and geometry-based controllers by using different snapshots of the target and an epipolar constraint. As stated in [2], a drawback of the method developed in [16] is that the system equations become numerically ill-conditioned for large pan angles. Given this shortcoming, Burschka and Hager [2] used a spherical image projection of a monocular vision system that relied on teaching and replay phases to facilitate the estimation of the unknown object height parameter in the image-Jacobian by solving a least-squares problem.
Spatiotemporal apparent velocities obtained from the optical flow of successive images of an object were used in [26] to estimate depth and time-to-contact for a monocular vision-guided robot. A similar optical flow technique was also used in [20]. In [9], Dixon et al. used feedback from an uncalibrated, fixed (ceiling-mounted) camera to develop an adaptive tracking controller for a WMR that compensated for parametric uncertainty in the camera and the WMR dynamics. An image-based visual servo controller that exploits an object model was proposed in [30] to solve the WMR tracking problem (the regulation problem was not solved due to restrictions on the reference trajectory); the controller adapted for the constant, unknown height of an object moving in a plane through Lyapunov-based techniques. In [21] and [33], visual servo controllers were recently developed for systems with underactuated kinematics similar to those of WMRs. Specifically, Mahony and Hamel [21] developed a semi-global asymptotic visual servoing result for unmanned aerial vehicles that tracked parallel coplanar linear visual features, while Zhang and Ostrowski [33] used a vision system to navigate a blimp.

In contrast to the previous image-based visual servo control approaches, novel homography-based visual servo control techniques have been recently developed in a series of papers by Malis and Chaumette (e.g., [3], [4], [22], [23], [24]). The homography-based approach exploits a combination of reconstructed Euclidean information and image-space information in the control design. The Euclidean information is reconstructed by decoupling the interaction between translation and rotation components of a homography matrix. As stated in [24], some advantages of this methodology over the aforementioned approaches are that an accurate Euclidean model of the environment (or target image) is not required and potential singularities in the image-Jacobian are eliminated (i.e., the image-Jacobian for homography-based visual servo controllers is typically triangular). Motivated by the advantages of the homography-based strategy, several researchers have recently developed various regulation controllers for robot manipulators (see [5], [6], [8], [11], and [13]). In [12], a homography-based visual servo control strategy was recently developed to asymptotically regulate the position/orientation of a WMR to a constant Euclidean position defined by a reference image, despite unknown depth information.
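The homography itself can be estimated from as few as four matched feature points between two images. The following sketch is an illustration using the standard Direct Linear Transform (DLT), not the implementation from the papers cited above; it recovers the 3×3 homography, up to scale, from point correspondences:

```python
import numpy as np

def estimate_homography(pts_ref, pts_cur):
    """Return 3x3 H (up to scale) such that pts_cur ~ H @ pts_ref
    in homogeneous coordinates. Requires at least 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(pts_ref, pts_cur):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right singular vector) holds the 9 entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] = 1
```

In practice the pixel coordinates would first be normalized with the camera calibration matrix to obtain the Euclidean (calibrated) homography, and the point pairs would come from the tracked target features.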

In this paper, a homography-based visual servo control strategy is used to force the Euclidean position/orientation of a camera mounted on a WMR (i.e., the camera-in-hand problem) to track a desired time-varying trajectory defined by a prerecorded sequence of images. By comparing the features of an object from a reference image to the features of the object in the current image and in the prerecorded sequence of images, projective geometric relationships are exploited to enable the reconstruction of the Euclidean coordinates of the target points with respect to the WMR coordinate frame. The tracking control objective is naturally defined in terms of the Euclidean space; however, the translation error is unmeasurable. That is, the Euclidean reconstruction is scaled by an unknown distance from the camera/WMR to the target, and while the scaled position is measurable through the homography, the unscaled position error is unmeasurable. To overcome this obstacle, a Lyapunov-based control strategy is employed that provides a framework for the construction of an adaptive update law to actively compensate for the unknown depth-related scaling constant. While techniques similar to those in [12] are employed for the Euclidean reconstruction from the image data, new development is required in this paper to obtain a tracking controller. In contrast to visual servo methods that linearize the system equations to facilitate EKF methods, the Lyapunov-based control design in this paper is based on the full nonlinear kinematic model of the vision system and the mobile robot.
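To illustrate the flavor of the adaptive depth compensation, consider a one-dimensional toy analog (a hypothetical simplification, not the controller developed in the paper): the measurable signal is the depth-scaled translation eta = z/d*, the true scale d* is unknown, and the gains k and gamma are illustrative.

```python
import numpy as np

def simulate(T=40.0, dt=1e-3, k=2.0, gamma=2.0, d_star=2.0):
    """Toy 1-D analog: eta = z/d_star is the measurable depth-scaled
    translation; the true depth scale d_star is unknown to the controller."""
    t, eta, d_hat = 0.0, 0.5, 1.0        # initial state and depth estimate
    while t < T:
        eta_d = np.sin(t)                # desired (prerecorded) trajectory
        phi = -np.cos(t)                 # regressor: -d(eta_d)/dt
        e = eta - eta_d                  # measurable tracking error
        v = d_hat * (k * e + phi)        # control using the depth estimate
        d_hat += dt * gamma * e * (k * e + phi)   # adaptive update law
        eta += dt * (-v / d_star)        # true kinematics (depend on d_star)
        t += dt
    return eta - np.sin(t), d_hat
```

With this update law, the Lyapunov function V = e²/2 + (d* − d̂)²/(2γd*) has derivative V̇ = −k e², so the tracking error is driven to zero even though d* is never measured.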

Methodology and Work Description

Based on the success of image extraction/interpretation technology and advances in control theory, more recent research has focused on the use of a monocular vision system for acquiring depth information, for path planning, and for servo control. To achieve this objective, the corresponding feature points between a prerecorded image trajectory and the current image are compared and a homography is constructed. By decomposing the homography, rotation and translation information can be decoupled from the images and used as feedback in an adaptive, Lyapunov-based control strategy.
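As a concrete (hypothetical) instance of this decoupling, consider planar WMR motion with the target-plane normal n aligned with the reference optical axis. The Euclidean homography then takes the form H = R + (t/d)nᵀ, and the rotation angle and scaled translation can be read off directly even though the homography is only known up to scale; the specific geometry assumed here is illustrative, not the general decomposition used in the paper.

```python
import numpy as np

def wmr_homography(theta, tx, tz):
    """Euclidean homography H = R + (t/d) n^T for planar motion with
    plane normal n = [0,0,1] and rotation about the camera y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return R + np.outer(np.array([tx, 0.0, tz]), np.array([0.0, 0.0, 1.0]))

def decompose(G):
    """Recover (theta, tx, tz) from a homography known only up to scale."""
    H = G / G[1, 1]   # the (1,1) entry is 1 under planar motion: fixes scale
    theta = np.arctan2(-H[2, 0], H[0, 0])
    tx = H[0, 2] - np.sin(theta)   # scaled translation along x
    tz = H[2, 2] - np.cos(theta)   # scaled translation along z
    return theta, tx, tz
```

The (1,1) entry equals 1 because the camera's vertical axis is unaffected by planar motion, which is what resolves the unknown projective scale.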

To demonstrate the performance of this technique, a mobile robot platform typically used in Department of Energy D & D operations was equipped with a single camera and an additional computer.

  • A target with four feature points was constructed.
  • An intensity thresholding image processing algorithm was developed and implemented to capture the image features.
  • A desired image trajectory was recorded.
  • A Singular Value Decomposition-based algorithm was implemented that extracted rotation and translation error information from a comparison of the prerecorded and current images.
  • A controller was implemented to force the robot to follow the desired image-space trajectory.
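A minimal sketch of the intensity-thresholding step is given below. It is a toy illustration, assuming one bright feature point per image quadrant on a dark background; the threshold value and quadrant layout are assumptions, not the actual implementation.

```python
import numpy as np

def extract_features(img, thresh=200):
    """Locate one bright feature point per image quadrant by intensity
    thresholding followed by centroiding of the bright pixels."""
    h, w = img.shape
    centroids = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            mask = img[rows, cols] > thresh
            ys, xs = np.nonzero(mask)
            # shift quadrant-local coordinates back to full-image coords
            centroids.append((xs.mean() + cols.start, ys.mean() + rows.start))
    return centroids
```

A real implementation would also need to reject spurious bright pixels and track feature identity between frames so that correspondences feed the homography consistently.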

Experimental Setup

To implement the adaptive tracking controller, an experimental testbed (see figure below) was constructed in the Mobile Robot Test Facility at Oak Ridge National Laboratory. The WMR testbed consists of the following components: a modified K2A WMR (with an onboard Pentium 133 MHz personal computer (PC)) manufactured by Cybermotion Inc., a Dalsa CA-D6 camera that captures 955 frames per second with 8-bit gray scale at 260×260 resolution, a Road Runner Model 24 video capture board, and two Pentium-based PCs.

Experimental Results

Fig. 1 Desired translation

Fig. 2 Desired rotation

Fig. 3 Translation error

Fig. 4 Rotation error

Fig. 5 Parameter estimate

Fig. 6 Linear and angular velocity control inputs

Fig. 7 Drive and steer motor torque inputs


In this research, the position/orientation of a WMR is forced to track a desired time-varying trajectory defined by a prerecorded sequence of images. To achieve the result, multiple views of three target points were used to develop Euclidean homographies. By decomposing the Euclidean homographies into separate translation and rotation components, reconstructed Euclidean information was obtained for the control development. A Lyapunov-based stability argument was used to design an adaptive update law to compensate for the fact that the reconstructed translation signal was scaled by an unknown depth parameter. The contribution of this paper is a new analytical approach that uses homography-based concepts to enable the position/orientation of a WMR subject to nonholonomic constraints to track a desired trajectory generated from a sequence of images, despite the lack of depth measurements. Simulation results were provided to illustrate the performance of the controller. Our future efforts will target the development of analytical Lyapunov-based methods for WMR visual servo tracking using an off-board camera, similar to the problem in [9], without the restriction that the camera be fixed perpendicular to the WMR plane of motion, as well as an experimental demonstration of the developed controller.

Conference Papers

For more details on this research, please refer to the following conference paper:

  • J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, "Adaptive Homography-based Visual Servo Tracking."
