Identification of a Moving Object's Velocity with a Fixed Camera

Abstract

In this research project, a continuous estimator strategy is utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is facilitated by the fusion of homography-based techniques with Lyapunov design methods. Similar to the stereo vision paradigm, the proposed estimator utilizes different views of the object from a single camera to calculate 3D information from 2D images. In contrast to some of the previous work in this area, no explicit model is used to describe the movement of the object; rather, the estimator is constructed based on bounds on the object's velocity, acceleration, and jerk.

Introduction

Often in an engineering application, one is tempted to use a camera to determine the velocity of a moving object. However, as stated in [8], the use of a camera requires one to interpret the motion of a three-dimensional (3D) object through the 2D images provided by the camera. That is, the primary problem is that 3D information is compressed, or nonlinearly transformed, into 2D information; hence, techniques must be developed to recover 3D information despite the fact that only 2D information is available. To address the identification of the object's velocity (i.e., the motion parameters), many researchers have developed various approaches. For example, if a model for the object's motion is known, an observer can be used to estimate the object's velocity [10]. In [20], a window position predictor for object tracking was utilized. In [12], an observer for estimating the object velocity was utilized; however, a description of the object's kinematics must be known. In [9], the problem of identifying the motion and shape parameters of a planar object undergoing Riccati motion was examined in great detail. In [13], an autoregressive discrete-time model is used to predict the location of features of a moving object.
In [1], trajectory filtering and prediction techniques are utilized to track a moving object. Some of the work [24] involves the use of camera-centered models that compute values for the motion parameters at each new frame to reproduce the motion of the object. In [2] and [21], object-centered models are utilized to estimate the translation and the center of rotation of the object. In [25], the motion parameters of an object are determined via a stereo vision approach. While it is difficult to make broad statements concerning much of the previous work on velocity identification, a good amount of effort does seem to have been focused on developing system-theory-based algorithms that estimate the object's velocity, or that compensate for it as part of a feedforward control scheme by assuming the object kinematics can be described by a known model. While these control techniques provide different methods for compensating for unknown object kinematics, they do not seem to provide much help with identifying the object's velocity when little is known about the object's motion. That is, from a systems theory point of view, one must develop a method of asymptotically identifying a time-varying signal with as little information as possible. This problem is made even more difficult because the sensor used to gather information about the object is a camera, and, as mentioned before, the use of a camera requires one to interpret the motion of a 3D object from 2D images. To attack this two-fold problem, we fuse homography-based techniques with a Lyapunov-synthesized estimator to asymptotically identify the object's unknown velocity. Similar to the stereo vision paradigm, the proposed approach uses different views of the object from a single camera to calculate 3D information from 2D images.
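To make the homography machinery concrete, the planar homography relating two views of coplanar feature points can be estimated by the standard least-squares (direct linear transform) method. The following is a minimal sketch, not the implementation used in this work; the function name and the synthetic test points are our own illustrative choices:

```python
import numpy as np

def estimate_homography(p1, p2):
    """Least-squares (DLT) estimate of the 3x3 homography H mapping
    points p1 -> p2, both given as (N, 2) coordinate arrays.
    Requires N >= 4 non-degenerate coplanar correspondences."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        # Each correspondence gives two linear equations in the nine
        # entries of H (h = H flattened row-major).
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows)
    # h is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # remove the scale (and sign) ambiguity

# Self-check against a known homography and six synthetic points,
# mirroring the six targets used in the simulations.
H_true = np.array([[1.0,  0.1,   5.0],
                   [-0.05, 0.9, -3.0],
                   [1e-4, 2e-4,  1.0]])
pts = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]])
q = np.hstack([pts, np.ones((len(pts), 1))]) @ H_true.T
pts2 = q[:, :2] / q[:, 2:]
H_est = estimate_homography(pts, pts2)
```

With noise-free correspondences the recovered matrix matches `H_true` up to numerical precision; the text above notes that this least-squares method degrades under pixel noise, which motivated the switch to Kanatani's algorithm in the experiments.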
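The full six degree-of-freedom estimator is not reproduced here, but the flavor of a Lyapunov-designed identifier that needs only boundedness of the velocity, acceleration, and jerk can be shown with a one-dimensional sketch of a continuous sliding-mode-style observer in the spirit of the framework of [26]. The gains, the test signal, and the Euler discretization below are our own illustrative assumptions, not values from this work:

```python
import math

def identify_velocity(x_of_t, t_end=10.0, dt=1e-4, k=5.0, alpha=5.0, beta=3.0):
    """Identify the velocity of a measured scalar signal x(t) whose first
    three derivatives are bounded. Gains are illustrative, not tuned:
    beta must dominate the acceleration/jerk bounds for convergence."""
    x_hat = x_of_t(0.0)   # position estimate
    p = 0.0               # integral term that learns the velocity
    times, v_est = [], []
    for i in range(int(t_end / dt)):
        t = i * dt
        e = x_of_t(t) - x_hat            # measurable estimation error
        v = (k + 1.0) * e + p            # velocity estimate
        # Euler integration of the continuous estimator dynamics:
        #   x_hat_dot = (k+1) e + p
        #   p_dot     = (k+1) alpha e + beta sgn(e)
        x_hat += v * dt
        p += ((k + 1.0) * alpha * e + beta * math.copysign(1.0, e)) * dt
        times.append(t)
        v_est.append(v)
    return times, v_est

# Identify the velocity of x(t) = sin(t); the true velocity is cos(t).
times, v_est = identify_velocity(math.sin)
errors = [abs(v - math.cos(t)) for t, v in zip(times, v_est)]
tail_error = max(errors[-10000:])  # worst-case error over the final second
```

Note that no model of the signal is used: only the error between the measured and estimated position drives the update, which is exactly why bounds on the velocity, acceleration, and jerk (rather than a motion model) suffice.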
The homography-based techniques are based on the fixed-camera work presented in [3], which relies on the camera-in-hand work presented in [14]. The continuous, Lyapunov-based estimation strategy has its roots in an example developed in [19] and the general framework developed in [26]. The only requirements on the object are that its velocity, acceleration, and jerk be bounded, and that a single geometric length between two feature points on the object be known a priori.

Experimental Setup
Figure 5: Experimental setup.

Test Plan
Experimental Results
During the experiments, it was found that the homography algorithm from Dr. Kenichi Kanatani^{(2)} performed better in terms of robustness to pixel noise than the least-squares method. We therefore use his algorithm to compute the homography. The downside is its computational complexity, which allows us to run the observer only at about 150 Hz instead of 1000 Hz.

Simulation Results

In theory, the pixel coordinates of a minimum of four coplanar points on the object are required. In the simulations, we used six targets and the standard least-squares method for estimating the homography matrix. Figure 1 shows the implementation block diagram.

Figure 1: Simulation block diagram.

The simulation was developed on QMotor 3.0. The Robotic Platform Math Library and the GNU Scientific Library were used for mathematical computations. The sampling frequency was 12 kHz.

Figure 2: A screenshot of the simulation in QMotor.

Figure 3: Simulated velocity along each of the six degree-of-freedom axes of the object.

Figure 4: Error curves for velocity estimation.

Conclusion

In this research, we presented a continuous estimator strategy that can be utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is based on a novel fusion of homography-based vision techniques and Lyapunov control design tools. The only requirements on the object are that its velocity and its first two time derivatives be bounded, and that a single geometric length between two feature points on the object be known a priori. Future work will concentrate on experimental validation of the proposed estimator as well as its ramifications for other vision-based applications. Specifically, the proposed estimator might be utilized in a typical camera-in-hand application that requires a robot manipulator end-effector to track a moving object.
The applicability of the proposed approach to this type of object-tracking application is well motivated, since the estimator does not require an explicit model describing the movement of the object.

Conference Papers

For more details on this research, please refer to the following conference paper:
