Setpoint Regulation of Continuum Robots Using a Fixed Camera


In this research project, we investigate the problem of measuring the shape of a continuum robot using visual information from a fixed camera. Specifically, we capture the motion of a set of fictitious planes, each formed by four feature points, defined at strategic locations along the body of the robot. Then, utilizing expressions for the robot forward kinematics as well as the decomposition of a homography relating a reference image of the robot to the actual robot image, we obtain three-dimensional shape information continuously. We then use this information to develop a kinematic controller that regulates the end-effector of the robot to a constant desired position and orientation, while at the same time using the robot's kinematic redundancy to satisfy a subtask objective such as obstacle avoidance.
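The redundancy-resolution idea described above can be sketched with a generic resolved-rate law: the pseudoinverse of the task Jacobian drives the end-effector error to zero, while the gradient of a secondary (subtask) objective such as obstacle avoidance is projected into the Jacobian's null space. This is a standard construction shown only for illustration, not the specific controller developed in this project; the Jacobian `J` and the subtask gradient are placeholders.

```python
import numpy as np

def resolved_rate_step(J, x_err, subtask_grad, k=1.0):
    """One step of a resolved-rate kinematic controller with null-space
    redundancy resolution (a generic sketch, not this project's exact law).

    J            : m x n task Jacobian (n > m for a redundant robot)
    x_err        : m-vector task-space error (desired minus actual pose)
    subtask_grad : n-vector gradient of a secondary objective, e.g. an
                   obstacle-avoidance potential
    k            : proportional gain
    """
    J_pinv = np.linalg.pinv(J)             # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J    # null-space projector of J
    # The primary term drives the end-effector error to zero; the projected
    # secondary term reshapes the body without disturbing the end-effector.
    return k * (J_pinv @ x_err) + N @ subtask_grad
```

Because the secondary term lies in the null space of `J`, it produces no end-effector motion, which is what allows the subtask to be satisfied concurrently with the regulation objective.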


Conventional robotic manipulators are designed as a kinematic chain of rigid links that bend at discrete joints to achieve a desired motion at the end-effector. These rigid-link robots have a limited number of joints, all of which are actuated by devices such as motors. The maneuverability and flexibility of such devices is limited by the number of actuated joints. In contrast, continuum robots [15] are robotic manipulators that draw inspiration from biological appendages like elephant trunks and squid tentacles and can bend anywhere along the length of their body. In theory, they have infinite mechanical degrees-of-freedom, so their end-effector can be positioned at a desired location and orientation while concurrently satisfying constraints in their workspace such as obstacles and tight spaces. However, from an engineering perspective, an important implication of such a design is that although these devices have high kinematic redundancy, they are infinitely underactuated: a variety of bending motions must be generated with only a finite number of actuators. While there has been considerable progress in the area of actuation strategies for such robots (see references in [15]), the dual problem of sensing the configuration of such robots remains a challenge. From a controls perspective, a reliable position controller requires an accurate position sensing mechanism. However, internal motion sensing devices such as encoders cannot be used to determine either the shape or the end-effector position of a continuum robot, since there is no intuitive way to define links and joints on such a device. A literature survey reveals that a few indirect methods have been proposed to estimate the shape of continuum robots. In [5], a model has been proposed that relates internal bellows pressures to the position of the end-effector in a fluid-operated device.
A model that infers position from measurements of change in tendon length in tendon-driven devices has been proposed in [10]. However, these methods do not achieve accuracies comparable to position sensing in rigid-link robots because of the compliant nature of continuum devices. For example, in a tendon-driven continuum robot, due to coupling of actuation between sections, various sections of the robot can change shape without the encoders detecting a change in tendon length or tension. Motivated by a desire to develop an accurate strategy for realtime shape sensing in such robots, Hannan et al. [12] implemented simple image processing techniques to determine the shape of the Elephant Trunk robotic arm at Clemson University, where images from a fixed camera were used to reconstruct the curvatures of the various sections of the robot. This technique is only applicable when the motion of the arm is restricted to a plane orthogonal to the optical axis of the camera. The project, however, demonstrated conclusively that there is a large difference between curvature measurements obtained from indirect cable measurements and those obtained from a vision-based strategy, and hence the information obtained from ad hoc indirect shape measurement techniques is indeed questionable.
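Curvature-based shape descriptions like those above typically rest on a constant-curvature model of each robot section. As a simple illustration of how a curvature estimate translates into a pose (this is the standard constant-curvature model, not the specific forward kinematics developed in [9]), a single planar section of arc length s and curvature kappa has a closed-form tip pose:

```python
import numpy as np

def planar_section_tip(kappa, s):
    """Tip pose (x, z, theta) of one planar constant-curvature section of
    arc length s and curvature kappa, in the section's base frame.
    A textbook constant-curvature model, shown only to illustrate how a
    curvature estimate maps to a pose."""
    if abs(kappa) < 1e-9:            # straight section: limit as kappa -> 0
        return 0.0, s, 0.0
    x = (1.0 - np.cos(kappa * s)) / kappa   # lateral displacement
    z = np.sin(kappa * s) / kappa           # displacement along base axis
    theta = kappa * s                       # total bending angle
    return x, z, theta
```

For instance, a section bent through a quarter circle (kappa * s = pi/2) ends up displaced equally along both axes and rotated by 90 degrees, which matches geometric intuition for an arc.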

Vision-based techniques for shape sensing are appealing if they can reconstruct the 3D pose of the robot without imposing conditions that constrain its maneuverability. In [7], it was shown that the correspondence between images of feature points lying on a plane, as obtained from two different cameras, is a collineation, and that given the matrix of collineation, the position and orientation of the second camera and of the plane can be recovered relative to the first camera. In lieu of images from a second camera, given a reference image of the plane and knowledge of its rotation relative to the coordinate frame of the first camera, the position and orientation of the plane can be determined from images obtained using the first camera alone. Exploiting this technique, Chen et al. [2] presented the development of a kinematic controller for robot manipulators using visual feedback from a single fixed camera. In this project, we follow a similar approach in modelling the motion of various sections of a continuum robot relative to a fixed camera. Then, from the decomposition of the homography and from the equations describing the forward kinematics of the robot as developed in [9], we show that the curvatures that define the shape of the various sections of the robot can be fully determined. From among the various kinematic control strategies for hyper-redundant robots that have appeared in the robotics literature (e.g., [13, 16]), we use the work in [19] to develop a kinematic controller demonstrating that the robot end-effector can be forced to any desired position and orientation using a sequence of images from a single external video camera.
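The planar collineation referred to above can be estimated from as few as four point correspondences between the reference image and the current image. The sketch below shows the standard direct linear transform (DLT) construction for the 3x3 homography matrix; decomposing the result into rotation, translation, and plane normal (as required for shape recovery) is a further step not shown here.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H (up to scale) mapping planar points
    src[i] -> dst[i] using the standard DLT construction.
    src, dst: (N, 2) arrays of image coordinates, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]               # fix the scale so that H[2, 2] = 1
```

With exactly four correspondences in general position the nullspace of A is one-dimensional and H is determined up to scale; with more points the SVD gives a least-squares estimate, which is useful when the tracked feature points are noisy.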

Experimental Setup

In order to verify the proposed technique, an experimental testbed is being set up with the following components:

(a) Clemson Elephant Trunk robot,
(b) DALSA CA-D6-0256W high-speed (955 frames-per-second) grayscale CCD video camera,
(c) Bitflow Roadrunner 24M framegrabber board,
(d) two Intel 2.4 GHz based Personal Computers (PC) running QNX 6.2.1 Momentics realtime operating system, and
(e) a teleoperator input system.

Multiple cameras may be used in a checkerboard pattern, along with multiple framegrabbers, to cover the whole workspace of the robot while avoiding problems such as occlusions. The image processing operations will be performed by one PC (or more PCs in the case of multiple cameras), and data corresponding to visually tracked feature points on the robot are communicated to the second PC through a fast ethernet link. The kinematic control, motion planning, and the corresponding I/O operations associated with the robot are performed by the second PC. This PC will have standard I/O boards (with A/D, D/A, digital I/O, and encoder inputs) that provide a hardware interface to the actuators and sensors on the robot, as well as to the teleoperation inputs from a pair of 3 degree-of-freedom joysticks. The control software is implemented in QMOTOR 3.0 [4], a PC-based realtime multitasking graphical control environment.


In this research project, we presented a kinematic controller to exponentially regulate the end-effector of a continuum robot to a desired position and orientation using visual feedback from a fixed camera. By exploiting homography-based techniques and the known kinematics of the robot, it was shown that the shape of the robot arm can be completely determined from 2D images from the camera. The only requirement is that a reference orientation of the end of each section of the robot must be known relative to the camera coordinate frame. Future work will include simulations and experimental verification of this technique on the Elephant Trunk arm at Clemson University.

Conference Papers

For more details on this research, please refer to the following conference paper:

  • Vilas K. Chitrakaran, Aman Behal and Darren M. Dawson "Setpoint Regulation of Continuum Robots Using a Fixed Camera".
