Identification of a Moving Object's Velocity with a Fixed Camera


In this research project, a continuous estimator strategy is utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is facilitated by the fusion of homography-based techniques with Lyapunov design methods. Similar to the stereo vision paradigm, the proposed estimator utilizes different views of the object from a single camera to calculate 3D information from 2D images. In contrast to some of the previous work in this area, no explicit model is used to describe the movement of the object; rather, the estimator is constructed based on bounds on the object's velocity, acceleration, and jerk.


Often in an engineering application, one is tempted to use a camera to determine the velocity of a moving object. However, as stated in [8], the use of a camera requires one to interpret the motion of a three-dimensional (3D) object through the 2D images provided by the camera. That is, the primary problem is that 3D information is compressed or nonlinearly transformed into 2D information; hence, techniques must be developed to obtain 3D information despite the fact that only 2D information is available. To address the identification of the object's velocity (i.e., the motion parameters), many researchers have developed various approaches. For example, if a model for the object's motion is known, an observer can be used to estimate the object's velocity [10]. In [20], a window position predictor for object tracking was utilized. In [12], an observer for estimating the object velocity was utilized; however, a description of the object's kinematics must be known. In [9], the problem of identifying the motion and shape parameters of a planar object undergoing Riccati motion was examined in great detail. In [13], an autoregressive discrete-time model is used to predict the location of features of a moving object. In [1], trajectory filtering and prediction techniques are utilized to track a moving object. Some of the work [24] involves the use of camera-centered models that compute values for the motion parameters at each new frame to produce the motion of the object. In [2] and [21], object-centered models are utilized to estimate the translation and the center of rotation of the object. In [25], the motion parameters of an object are determined via a stereo vision approach.

While it is difficult to make broad statements concerning much of the previous work on velocity identification, it does seem that a good amount of effort has been focused on developing system theory-based algorithms to estimate the object's velocity or compensate for the object's velocity as part of a feedforward control scheme. For example, one might assume that object kinematics can be described as follows
          dx/dt = Y(x)f      (1)
where x(t) and dx(t)/dt denote the object's position and velocity vectors, respectively, Y(x) denotes a known regression matrix, and f denotes an unknown, constant vector. As illustrated in [11], the object model of (1) can be used to describe many types of object motion (e.g., constant-velocity and cyclic motions). If x(t) is measurable, it is easy to imagine how adaptive control techniques [22] can be utilized to formulate an adaptive update law that could compensate for unknown effects represented by the parameter f in a typical control problem. In addition, if x(t) is persistently exciting [22], one might also be able to show that the unknown parameter f can be identified asymptotically. In a similar manner, robust control strategies or learning control strategies could be used to compensate for unknown object kinematics under the standard assumptions for these types of controllers (e.g., see [17] and [18]).
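As a minimal illustration of this idea (and not the estimator proposed in this work), the sketch below simulates a scalar object model of the form (1) with an assumed regression Y(x) = [1, sin(x)] and a standard gradient adaptive identifier. The gains k and gamma and the example dynamics are illustrative choices; the state estimation error converges, while convergence of the parameter estimates themselves would additionally require persistency of excitation.

```cpp
#include <cmath>

// Scalar object model dx/dt = Y(x) f with regression Y(x) = [1, sin(x)]
// and unknown constant parameters f = (f1, f2). A gradient adaptive
// identifier drives the state estimation error e = x - xhat to zero:
//   dxhat/dt = Y(x) fhat + k e,   dfhat/dt = gamma * Y(x)^T e
// Returns the final state estimation error |e| after T seconds.
double runAdaptiveIdentifier(double T) {
    const double f1 = 0.5, f2 = 1.0;     // true (unknown) parameters
    const double k = 5.0, gamma = 10.0;  // observer and adaptation gains
    const double dt = 1e-3;
    double x = 0.0, xhat = 0.0, fhat1 = 0.0, fhat2 = 0.0;
    for (double t = 0.0; t < T; t += dt) {
        double Y1 = 1.0, Y2 = std::sin(x);   // regression evaluated at x
        double e = x - xhat;
        double xdot = Y1 * f1 + Y2 * f2;             // true kinematics
        double xhatdot = Y1 * fhat1 + Y2 * fhat2 + k * e;
        // Forward-Euler integration of plant, observer, and update law
        x += dt * xdot;
        xhat += dt * xhatdot;
        fhat1 += dt * gamma * Y1 * e;
        fhat2 += dt * gamma * Y2 * e;
    }
    return std::fabs(x - xhat);
}
```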

While the above control techniques provide different methods for compensating for unknown object kinematics, they do not seem to provide much help with regard to identifying the object's velocity when little is known about the motion of the object. That is, from a systems theory point of view, one must develop a method of asymptotically identifying a time-varying signal with as little information as possible. This problem is made even more difficult because the sensor being used to gather information about the object is a camera, and as mentioned before, the use of a camera requires one to interpret the motion of a 3D object from 2D images. To attack this twofold problem, we fuse homography-based techniques with a Lyapunov-synthesized estimator to asymptotically identify the object's unknown velocity. Similar to the stereo vision paradigm, the proposed approach uses different views of the object from a single camera to calculate 3D information from 2D images. The homography-based techniques are based on the fixed-camera work presented in [3], which relies on the camera-in-hand work presented in [14]. The continuous, Lyapunov-based estimation strategy has its roots in an example developed in [19] and the general framework developed in [26]. The only requirements on the object are that its velocity, acceleration, and jerk be bounded, and that a single geometric length between two feature points on the object be known a priori.
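A simplified one-dimensional analogue of this model-free viewpoint is a continuous high-gain observer, which recovers the velocity of a signal from position measurements alone, relying only on the implicit assumption that the signal's derivatives are bounded. The sketch below is not the homography-based estimator developed in this work; the test signal, the gains a1 and a2, and the small parameter eps are illustrative assumptions.

```cpp
#include <cmath>
#include <utility>

// One-dimensional high-gain observer: estimate the velocity of a
// measured position signal p(t) without any model of its motion.
//   dphat/dt = vhat + (a1/eps)   * (p - phat)
//   dvhat/dt =        (a2/eps^2) * (p - phat)
// Smaller eps gives faster convergence at the cost of greater noise
// sensitivity. Returns (estimated, true) velocity at time T for the
// test signal p(t) = sin(t).
std::pair<double, double> runVelocityObserver(double T) {
    const double a1 = 2.0, a2 = 1.0, eps = 0.01;
    const double dt = 1e-4;
    double phat = 0.0, vhat = 0.0;
    double t = 0.0;
    while (t < T) {
        double p = std::sin(t);          // position measurement
        double e = p - phat;
        phat += dt * (vhat + a1 / eps * e);
        vhat += dt * (a2 / (eps * eps) * e);
        t += dt;
    }
    return { vhat, std::cos(t) };        // estimate vs. true velocity
}
```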

Experimental Setups

  • Puma560 6 DOF robot manipulator.
  • 1 PC dedicated to robot control - 1GHz AMD based PC with one ServoToGo I/O board, running QNX Momentics real-time OS, and the Robotic Platform software for robot control.
  • 1 PC dedicated to Velocity Observer - 2GHz Intel based PC interfaced to a Dalsa DS-4x-300K262 262fps camera through RoadRunner 24M framegrabber board. Velocity Observer is implemented in C++ using the following software tools: QMotor real-time control software, GNU Scientific library, QMath and QWidgets.
  • A plexiglass plate with six high intensity LEDs mounted on the robot end-effector. This will serve as the set of targets for the Velocity Observer.
Figure 5: Experimental setup.

Test Plan

  • Mount the camera in a fixed location, facing the robot end-effector with target LEDs.
  • Develop software (using Robotic Platform tools) to move the robot end-effector along a trajectory such that all target LEDs are visible to the fixed camera at all times (to avoid occlusion related problems).
  • Fix Euclidean co-ordinate frames on the camera and the target plane.
  • Measure the Euclidean co-ordinates of two of the target points relative to the target co-ordinate frame.
  • Log the 6 DOF velocity of the targets (expressed relative to the target coordinate frame) and the 6DOF observed velocity from the Velocity Observer system. Compare results.

Experimental Results

  • Position error must be within two centimeters in translation and within two degrees in rotation.
  • Observer system must be effective for a wide range of rotational and translational motion of targets relative to the field-of-view of the camera.

During the experiments it was found that the homography algorithm from Dr. Kenichi Kanatani (2) performed better in terms of stability to pixel noise than the least-squares method. We therefore use his algorithm to compute the homography. The downside is its computational complexity, which limits the observer to a rate of about 150 Hz instead of 1000 Hz.

Simulation Results

In theory, the pixel co-ordinates of a minimum of four coplanar points on the object are required. In the simulations, we used six targets and the standard least-squares method to estimate the homography matrix. Figure 1 shows the implementation block diagram.
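A least-squares homography fit of the kind used in the simulations can be sketched as follows. This is a generic DLT-style estimate with the entry h33 fixed to 1, not the project's exact implementation (which used the GNU Scientific Library's solvers); the sketch carries its own small Gaussian elimination so that it is self-contained.

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Point correspondence (x, y) -> (u, v) in pixel coordinates.
struct Match { double x, y, u, v; };

// Least-squares homography estimate with h33 fixed to 1. Each
// correspondence contributes two linear equations in the eight
// remaining entries of H; the stacked system is solved via the
// normal equations and Gauss-Jordan elimination.
std::array<double, 9> estimateHomography(const std::vector<Match>& m) {
    const int n = 8;
    double AtA[8][8] = {}, Atb[8] = {};
    for (const Match& p : m) {
        // u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1) rearranges to
        // h1 x + h2 y + h3 - u x h7 - u y h8 = u  (similarly for v).
        double rowU[8] = { p.x, p.y, 1, 0, 0, 0, -p.u * p.x, -p.u * p.y };
        double rowV[8] = { 0, 0, 0, p.x, p.y, 1, -p.v * p.x, -p.v * p.y };
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j)
                AtA[i][j] += rowU[i] * rowU[j] + rowV[i] * rowV[j];
            Atb[i] += rowU[i] * p.u + rowV[i] * p.v;
        }
    }
    // Gauss-Jordan elimination with partial pivoting on AtA h = Atb.
    for (int c = 0; c < n; ++c) {
        int piv = c;
        for (int r = c + 1; r < n; ++r)
            if (std::fabs(AtA[r][c]) > std::fabs(AtA[piv][c])) piv = r;
        for (int j = 0; j < n; ++j) std::swap(AtA[c][j], AtA[piv][j]);
        std::swap(Atb[c], Atb[piv]);
        for (int r = 0; r < n; ++r) {
            if (r == c) continue;
            double f = AtA[r][c] / AtA[c][c];
            for (int j = 0; j < n; ++j) AtA[r][j] -= f * AtA[c][j];
            Atb[r] -= f * Atb[c];
        }
    }
    std::array<double, 9> H{};
    for (int i = 0; i < n; ++i) H[i] = Atb[i] / AtA[i][i];
    H[8] = 1.0;   // fixed scale
    return H;
}
```

Fixing h33 = 1 is the simplest way to remove the homography's scale ambiguity, though it fails for the rare homographies whose true h33 is zero; normalizing to unit norm via an SVD avoids that corner case.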

Figure 1: Simulation block diagram.

The simulation was developed on QMotor 3.0. The Robotic Platform Math Library and GNU Scientific Library were used for mathematical computations. The sampling frequency was 1-2 kHz.

Figure 2: A screenshot of the simulation in QMotor.
Figure 3: Simulated velocity along each of the object's 6 DOF axes.
Figure 4: Error curves for velocity estimation.


In this research, we presented a continuous estimator strategy that can be utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is based on a novel fusion of homography-based vision techniques and Lyapunov control design tools. The only requirements on the object are that its velocity and its first two time derivatives be bounded, and that a single geometric length between two feature points on the object be known a priori. Future work will concentrate on experimental validation of the proposed estimator as well as its ramifications for other vision-based applications. Specifically, it seems that the proposed estimator might be utilized in a typical camera-in-hand application that requires a robot manipulator end-effector to track a moving object. The applicability of the proposed approach to this type of object-tracking application is well motivated since the estimator does not require an explicit model describing the movement of the object.

Conference Papers

For more details on this research, please refer to the following conference paper:

  • V. K. Chitrakaran, D. M. Dawson, W. E. Dixon, and J. Chen, "Identification of a Moving Object's Velocity with a Fixed Camera".
