Identification of a Moving Object's Velocity with a Fixed Camera
In this research project, a continuous estimator strategy is utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is facilitated by the fusion of homography-based techniques with Lyapunov design methods. Similar to the stereo vision paradigm, the proposed estimator utilizes different views of the object from a single camera to calculate 3D information from 2D images. In contrast to some of the previous work in this area, no explicit model is used to describe the movement of the object; rather, the estimator is constructed based on bounds on the object's velocity, acceleration, and jerk.
Often in an engineering application, one is tempted to use a camera to determine the velocity of a moving object. However, as stated in , the use of a camera requires one to interpret the motion of a three-dimensional (3D) object through the 2D images provided by the camera. That is, the primary problem is that 3D information is compressed or nonlinearly transformed into 2D information; hence, techniques or methods must be developed to obtain 3D information despite the fact that only 2D information is available. To address the identification of the object's velocity (i.e., the motion parameters), many researchers have developed various approaches. For example, if a model for the object's motion is known, an observer can be used to estimate the object's velocity . In , a window position predictor for object tracking was utilized. In , an observer for estimating the object velocity was utilized; however, a description of the object's kinematics must be known. In , the problem of identifying the motion and shape parameters of a planar object undergoing Riccati motion was examined in great detail. In , an autoregressive discrete-time model is used to predict the location of features of a moving object. In , trajectory filtering and prediction techniques are utilized to track a moving object. Some of the work involves the use of camera-centered models that compute values for the motion parameters at each new frame to produce the motion of the object. In  and , object-centered models are utilized to estimate the translation and the center of rotation of the object. In , the motion parameters of an object are determined via a stereo vision approach.
While it is difficult to make broad statements concerning much of the previous work on velocity identification, it does seem that a good amount of effort has been focused on developing system theory-based algorithms to estimate the object's velocity, or to compensate for the object's velocity as part of a feedforward control scheme. For example, one might assume that the object kinematics can be described as follows:
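The model itself is omitted at this point in the source. As a purely illustrative (hypothetical) example in the spirit of the Riccati-motion work cited above, a feature point $\bar{x}(t) \in \mathbb{R}^3$ on the object might be assumed to evolve according to an affine motion model:

```latex
\dot{\bar{x}}(t) = A\,\bar{x}(t) + b(t),
```

where $A \in \mathbb{R}^{3\times 3}$ is a constant matrix encoding the rotational motion and $b(t) \in \mathbb{R}^3$ captures the translational motion. Under such an assumption, a feedforward control scheme can be built around estimates of $A$ and $b(t)$; this particular form is our illustration, not necessarily the one used by the authors.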
While the above control techniques provide different methods for compensating for unknown object kinematics, these methods do not seem to provide much help with regard to identifying the object's velocity when little is known about the motion of the object. That is, from a systems theory point of view, one must develop a method of asymptotically identifying a time-varying signal with as little information as possible. This problem is made even more difficult because the sensor being used to gather information about the object is a camera, and, as mentioned before, the use of a camera requires one to interpret the motion of a 3D object from 2D images. To attack this compound problem, we fuse homography-based techniques with a Lyapunov-synthesized estimator to asymptotically identify the object's unknown velocity. Similar to the stereo vision paradigm, the proposed approach uses different views of the object from a single camera to calculate 3D information from 2D images. The homography-based techniques are based on the fixed-camera work presented in , which relies on the camera-in-hand work presented in . The continuous, Lyapunov-based estimation strategy has its roots in an example developed in  and the general framework developed in . The only requirements on the object are that its velocity, acceleration, and jerk be bounded, and that a single geometric length between two feature points on the object be known a priori.
Figure 5: Experimental setup.
During the experiments, it was found that the homography algorithm of Dr. Kenichi Kanatani (2) performed better in terms of robustness to pixel noise than the least-squares method. We therefore use his algorithm to compute the homography. The downside is its computational complexity, which allows us to run the observer only at about 150 Hz instead of 1000 Hz.
In theory, the pixel coordinates of a minimum of four coplanar feature points on the object are required. In the simulations, we used six targets and the standard least-squares method to estimate the homography matrix. Figure 1 shows the implementation block diagram.
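The standard least-squares homography estimate used in the simulations can be sketched as below. This is a generic direct linear transformation (DLT) formulation, not the project's actual code; the function names are illustrative, and the sketch assumes at least four non-collinear coplanar point correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """Least-squares (DLT) estimate of the 3x3 homography H mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding pixel coordinates, N >= 4,
    with the points coplanar in the scene and no three of them collinear.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # 9 entries of H (stacked row-wise into a vector h): A h = 0.
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    A = np.asarray(rows)
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

def apply_h(H, pts):
    """Map (N, 2) pixel coordinates through homography H."""
    pts_h = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With six noiseless correspondences, as in the simulation setup above, the over-determined system still has an exact solution and the SVD recovers the homography up to numerical precision; with pixel noise, the same code returns the least-squares fit, whose sensitivity to that noise motivated the switch to Kanatani's algorithm in the experiments.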
Figure 1: Simulation block diagram.
Figure 2: A screenshot of the simulation in QMotor. Click on image for a full-sized view.
Figure 3: Simulated velocity along each of the 6 DOF axis of the object.
Figure 4: Error curves for velocity estimation.
In this research, we presented a continuous estimator strategy that can be utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is based on a novel fusion of homography-based vision techniques and Lyapunov control design tools. The only requirements on the object are that its velocity and its first two time derivatives be bounded, and that a single geometric length between two feature points on the object be known a priori. Future work will concentrate on experimental validation of the proposed estimator as well as its ramifications for other vision-based applications. Specifically, it seems that the proposed estimator might be utilized in a typical camera-in-hand application that requires a robot manipulator end-effector to track a moving object. The applicability of the proposed approach to this type of object tracking application is well motivated, since the estimator does not require an explicit model describing the movement of the object.
For more details on this research, please refer to the following conference paper: