Vision-Based Nonlinear Control of Planar Robot Manipulators


In this research field, we consider the design of visual servoing controllers for a planar robot manipulator in both fixed-camera and camera-in-hand configurations. Given different levels of modeling uncertainty for this system, we develop several position tracking and setpoint controllers that account for the nonlinear robot dynamics and compensate for parametric uncertainties associated with the mechanical parameters and/or the camera calibration parameters.

Background and Previous Work

It is well known that in order for robotic systems to operate in an efficient manner, sensor-based control is imperative. In most sensor-based, position control systems for robot manipulators, feedforward and feedback terms in the control algorithm are computed using position/velocity information obtained by sensors located at the robot links (e.g., encoders, resolvers, tachometers, etc.). When the robot is operating in an unstructured environment, an interesting approach is to utilize a vision system for obtaining the position information required by the controller. In this approach, the vision system mimics the human sense of sight and allows for non-contact measurement. Taken to the extreme, the vision system can be used for both on-line trajectory planning and feedforward/feedback control (i.e., visual servoing).

There seems to be a consensus that to extract high-level performance from vision-based robotic systems, the control system must incorporate information about the dynamics/kinematics of the robot and the calibration parameters of the camera system. The camera calibration parameters are composed of the so-called intrinsic parameters (i.e., image center, camera scale factors, and camera magnification factor) and extrinsic parameters (i.e., camera position and orientation). As stated in [1], few vision-based controllers have been proposed that take into account the nonlinear robot dynamics. That is, most designs assume that the robot is a perfect positioning device with negligible dynamics, and thereby reduce the problem to kinematic control based on camera observations (e.g., [2]). One of the first vision-based control designs to incorporate the robot dynamics can be found in [3]; however, the vision system was modeled as a simple rotation transformation. More recently, in [4], Bishop et al. emphasized the importance of adequate calibration of the vision system with respect to the robot and the environment. As noted in [4], while a variety of techniques have been proposed for off-line camera calibration, only a few approaches have been aimed at the more interesting problem of on-line calibration under closed-loop control. Recently, some attention has been given to the design of vision-based controllers that simultaneously account for the two problems mentioned above, that is, control schemes accompanied by stability analyses that guarantee convergence of the position error while taking into account uncalibrated camera effects and the mechanical dynamics of the robot.
Specifically, Kelly and Marquez [5] considered a more representative model of the camera-robot system (in comparison to the approach of [6]) to design a setpoint controller that compensated for unknown intrinsic camera parameters but required perfect knowledge of the camera orientation. Later, Kelly [1] redesigned the setpoint controller of [6] to also take into account uncertainties in the camera orientation and produced a local asymptotic stability result. The result given in [1] required exact knowledge of the robot gravitational term and required that the difference between the estimated and actual camera orientation be restricted to the interval (-90°, 90°). In [4], Bishop and Spong developed an inverse-dynamics-type, position tracking control scheme (i.e., assuming exact model knowledge of the mechanical dynamics) with on-line adaptive camera calibration that guaranteed global asymptotic position tracking; however, convergence of the position tracking error required the desired position trajectory to be persistently exciting. In [7], Maruyama and Fujita proposed setpoint controllers for the camera-in-hand configuration; however, the proposed controllers required exact knowledge of the camera orientation and assumed the camera scaling factors to have the same value in both directions.
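To make the role of the calibration parameters concrete, the sketch below gives a minimal planar fixed-camera imaging model of the kind discussed above. The function name and the simplified pinhole-style mapping (a rotation by the camera orientation, a per-axis scaling, and an image-center offset) are illustrative assumptions for this page, not the exact model used in [1] or [5].

```python
import numpy as np

def image_coordinates(x_robot, theta, alpha, center):
    """Map a planar task-space point to pixel coordinates.

    Combines an extrinsic parameter (camera orientation theta, in radians)
    with intrinsic parameters (per-axis scale factors alpha, in pixels per
    meter, and the image center, in pixels). All names are illustrative.
    """
    # Rotation of the robot frame into the camera frame (extrinsic).
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Per-axis scaling and image-center offset (intrinsic).
    return np.diag(alpha) @ R @ np.asarray(x_robot, dtype=float) + center
```

Uncertainty in `theta` (the camera orientation) and in `alpha` (the scale factors) is exactly what the adaptive and robust controllers surveyed above are designed to compensate for.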

Our Work

In this research area, we consider the design of visual servoing controllers for a planar robot manipulator in both fixed-camera and camera-in-hand configurations. Given different levels of modeling uncertainty for this system, we develop position tracking controllers that account for the nonlinear robot dynamics and compensate for parametric uncertainties associated with the mechanical parameters and/or the camera calibration parameters.

Experimental Setups

To illustrate the real-time performance of the controllers developed, we constructed two different experimental testbeds (see figures below), each consisting of the following components:

  • (i) an Integrated Motion Inc. 2-link, revolute, direct-drive robot manipulator,
  • (ii) a Dalsa CAD-6 camera that captures 955 frames per second with 8-bit gray scale at 256 × 256 resolution,
  • (iii) a Road Runner Model 24 video capture board, and
  • (iv) two Pentium II-based personal computers (PCs) operating under the real-time operating system QNX.
One PC performs the image processing and sends the information to the other PC where the control algorithms and I/O operations associated with the robot manipulator are implemented. The connection and data transfer between the PCs are provided via a fast Ethernet connection.

For the fixed-camera controllers, the Dalsa camera, with a lens of 0.08 m focal length, was mounted 1.2 m above the robot workspace, and an LED was placed at the tip of the robot's second link to mark both the end-effector and the moving target.

For the camera-in-hand controller, the Dalsa camera was mounted at the end-effector of the robot, and an LED was placed 1.0 m below the robot's workspace to determine the desired setpoint.

For both experiments, a thresholding algorithm was used to detect the position of the LED in the camera frame. The control algorithms were implemented in C++ and run using the real-time control environment Qmotor. Data acquisition and control implementation were performed at 1 kHz via a Servo-To-Go I/O board and in-house-built interfacing circuitry. The time-derivative calculations were implemented using a standard backwards difference/filtering algorithm.
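As a rough illustration of the measurement pipeline described above, the sketch below combines a thresholding step (locating the LED as the centroid of bright pixels in an 8-bit frame) with a filtered backwards-difference velocity estimate. The threshold value and filter constant `alpha` are hypothetical choices; the 1 ms sample period reflects the 1 kHz rate stated above.

```python
import numpy as np

def led_centroid(frame, threshold=200):
    """Threshold an 8-bit grayscale frame and return the (x, y) pixel
    centroid of the bright region assumed to be the LED."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None  # LED not visible in this frame
    return np.array([xs.mean(), ys.mean()])

def filtered_backward_difference(pos, prev_pos, prev_vel, dt=1e-3, alpha=0.9):
    """Backwards-difference derivative of the measured position, smoothed
    by a first-order low-pass filter to attenuate quantization noise."""
    raw = (pos - prev_pos) / dt
    return alpha * prev_vel + (1.0 - alpha) * raw
```

In a loop running at the 1 kHz control rate, each new centroid would be differenced against the previous one and passed through the filter to obtain the velocity signal used by the controller.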

Experimental Setup for Fixed Camera Controllers

Experimental Setup for Camera-In-Hand Controllers


For more details on this research, please refer to the following publications:

  • E. Zergeroglu, D. M. Dawson, M. S. de Queiroz, and A. Behal, "Vision Based Nonlinear Tracking Controllers with Uncertain Robot-Camera Parameters", Proc. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, pp. 854-859, Atlanta, GA, Sept. 1999; also submitted to the IEEE/ASME Transactions on Mechatronics.
  • E. Zergeroglu, D. M. Dawson, M. S. de Queiroz, and S. Nagarkatti/P. Setlur, "Robust Visual-Servo Control of Robot Manipulators in the Presence of Uncertainty", Proc. of the 1999 IEEE Int. Conf. on Decision and Control, pp. 4137-4142, Phoenix, AZ, Dec. 1999; also submitted to the International Journal of Control.
  • E. Zergeroglu, D. M. Dawson, Y. Fang, and A. Malatpure, "Adaptive Camera Calibration Control of Planar Robots: Elimination of Camera Space Velocity Measurements", Proc. IEEE Int. Conf. on Control Applications, Anchorage, AK, Sept. 2000, accepted to appear.
