Associate Professor of Electrical and Computer Engineering
Ph.D., 1999 - Stanford University
M.S., 1996 - Stanford University
B.S.(Honors), 1993 - Clemson University
Office: 207-A Riggs Hall
Office Phone: 864.656.5912
Dr. Birchfield began his research at Stanford, where he was part of the team that won first place at the AAAI Mobile Robotics Competition of 1994, and where he was supported by a National Science Foundation Graduate Research Fellowship. From 1999 to 2003 he was a research engineer with Quindi Corporation, a startup company in Palo Alto, California, where he developed algorithms for intelligent audio and video and was the lead engineer and principal architect of the Meeting Companion product. His experience in software engineering has led him to develop and maintain open-source computer vision software, such as the Kanade-Lucas-Tomasi (KLT) feature tracker. Over the years he has worked with or consulted for various companies, including Sun Microsystems, SRI International, Canon Research Center, and Autodesk. His research interests are in computer vision, including stereo correspondence and visual tracking, as well as microphone array calibration, acoustic localization, and mobile robot navigation.
Vehicle segmentation, tracking, and classification: State departments of transportation around the country have installed thousands of cameras along the highway, primarily in urban areas. Because the amount of data is too much for manual processing, there is a need to automatically process live video from these cameras to determine vehicle counts, speeds, and classes (e.g., cars, trucks, or motorcycles). These data are important for applications such as traffic planning, incident detection, and roadway safety. Dr. Birchfield and his students are developing computer vision software to perform this automatic video processing in real time, as well as developing methods for automatically calibrating cameras to enable the software to work with pan-tilt-zoom cameras.
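As a rough illustration of the kind of processing involved, the sketch below counts the vehicles visible in one frame by median background subtraction followed by connected-component counting. Every function name and threshold here is hypothetical; a deployed system would also need shadow suppression, tracking across frames, vehicle classification, and the camera calibration discussed above.

```python
import numpy as np

def estimate_background(frames):
    # Pixel-wise median over time approximates the static road surface,
    # assuming each pixel is vehicle-free in most frames (an assumption
    # that holds on free-flowing highways but fails in congestion).
    return np.median(frames, axis=0)

def foreground_mask(frame, background, thresh=30):
    # Pixels differing from the background by more than `thresh`
    # (illustrative value) are marked as moving-object candidates.
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def count_blobs(mask):
    # Count 4-connected foreground components via iterative flood fill;
    # each blob is treated as one vehicle in this toy model.
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

On a synthetic sequence of empty-road frames, two bright rectangles added to a new frame segment out as two separate blobs.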
Minirhizotron image analysis: To study the effects of environmental changes upon ecosystems, plant researchers bury small transparent tubes (called minirhizotrons) at an angle in the ground next to the plants being studied. By sliding a miniature camera into each tube at regular intervals, researchers collect still images of the roots visible through the tube walls. The result is an overwhelming amount of data that must be tediously analyzed by hand. The goal of this research is to automate the procedure of extracting, identifying, and measuring roots in minirhizotron images to facilitate such data collection and analysis.
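A toy version of the extraction-and-measurement step is sketched below: threshold dark root pixels against a bright background, then approximate the length of a thin root trace by its pixel count. Real minirhizotron images are far harder (soil texture, occlusion, low contrast), so this is only an illustrative baseline; the fixed threshold and the dark-root assumption are this sketch's, not the original research's.

```python
import numpy as np

def segment_roots(image, thresh=100):
    # Toy appearance model (an assumption): roots image darker than
    # the surrounding soil, so simple thresholding isolates them.
    return image < thresh

def root_length_px(mask):
    # For a root trace roughly one pixel wide, the foreground pixel
    # count approximates its length in pixels; a real pipeline would
    # skeletonize the mask first and sum along the skeleton.
    return int(mask.sum())
```

For example, a synthetic 10-pixel vertical root drawn on a bright background measures out at 10 pixels.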
Mobile robot mapping and navigation: The ability of a mobile robot to navigate a new environment, build a map of the environment, and follow a path between two locations is important for many applications. For example, a courier robot may need to deliver items from one office to another in the same building, or even in a different building; a delivery robot may need to transport parts from one machine to another in an industrial setting; a robot may need to travel along a prespecified route to give a tour of a facility; or a team of robots may need to follow the path taken earlier by a scout robot. In this research, we are developing software to enable automatic map-building and navigation of a mobile robot using a single off-the-shelf camera mounted on the front. Because the algorithms generally do not require calibration, they are easy to use and applicable to mass deployment. One aspect of this research is combining the two representations that psychologists have found important for human navigation: route representations (the sequence of images captured along the way) and survey (top-down, map-like) representations.
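A minimal sketch of a qualitative, calibration-free steering rule is shown below (an illustrative scheme under assumed sign conventions, not necessarily the method used in this research): compare the horizontal image coordinates of features matched between the current view and a stored route image, and steer so the features drift back toward where they appeared in the stored image.

```python
def turn_command(current_xs, goal_xs, deadband=5.0):
    """Return a coarse steering command from matched feature columns.

    current_xs / goal_xs: horizontal pixel coordinates of the same
    features in the live view and the stored route image (hypothetical
    inputs from a feature tracker such as KLT).  No camera calibration
    is needed, only image coordinates.
    """
    offsets = [c - g for c, g in zip(current_xs, goal_xs)]
    mean_offset = sum(offsets) / len(offsets)
    # Sign convention assumed here: if features sit to the right of
    # their stored positions, turning right re-centers them.
    if mean_offset > deadband:
        return "right"
    if mean_offset < -deadband:
        return "left"
    return "straight"
```

For instance, features that have drifted 15 pixels to the right of their stored positions yield a "right" command, while balanced offsets within the deadband yield "straight".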