A Defense Technology Blog
  • 3D Imaging Coming Closer for UGVs
    Posted by Paul McLeary 3:39 PM on Nov 19, 2008

    One of the big limitations of camera-equipped unmanned ground and air vehicles is the “soda straw” effect: the two-dimensional video feed a mounted camera sends back to the user is limited to whatever the camera is currently pointed at. This also limits spatial awareness; with ground vehicles, it can be difficult to tell just how far the robot has moved down a street, or how many turns it has taken inside a building.

    [Photo: 'bot with camera sensors. Credit: MITRE]

    The MITRE Corporation is working on a partial fix for that problem: it has mounted four commercial, off-the-shelf cameras in a 180-degree configuration on an iRobot PackBot, capable of virtually mapping the environment the ‘bot is moving through and giving its operator a three-dimensional image of what’s ahead. Dubbed “3D SLAM” (Simultaneous Localization and Mapping), the system is designed to stream real-time 3D imagery of the robot’s surroundings to the operator, who can also pull back for a wider picture, a full and complete map of what the robot is seeing and what it has seen.
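    The “pull back for a wider picture” capability amounts to keeping every frame of sensor data in a single world coordinate frame. A minimal sketch in Python/NumPy, with the function name and data shapes assumed for illustration rather than taken from MITRE’s software: each incoming frame of 3D points is transformed by the robot’s current estimated pose and appended to the growing global map.

```python
import numpy as np

def add_frame_to_map(global_map, points_sensor, R, t):
    """Transform one frame of 3-D sensor points (N x 3) into the world
    frame using the robot's estimated pose (rotation R, translation t)
    and append it to the running map, so the operator can pull back and
    view everything the robot has seen so far."""
    points_world = points_sensor @ R.T + t
    return np.vstack([global_map, points_world])
```

    Because each frame is placed in global coordinates before being stored, the accumulated cloud stays consistent no matter how the robot turns afterward.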

    Scott Robbins, principal investigator for the 3D SLAM project at MITRE, says that in addition to commercial off-the-shelf 3D sensors, including stereo vision, the robot uses a flash lidar (light detection and ranging) sensor; both are kinds of two-and-a-half-dimension sensors.

    “That’s not the typical line-scanning lidar that you see in a lot of robots,” Robbins says. “This is something different. It gives you 3D video.” Like a camera image, it provides color, and for each pixel you also get a range, or distance, value. “So we take that three-dimensional data and we use that to do simultaneous localization and mapping. As the robot moves, the sensor data moves as well. We track the motion of the sensor data as the robot moves through the environment and use that relative motion to compute the vehicle’s motion with respect to the environment. We invert the motion: we calculate what part of the scene is standing still and figure out how much the robot is moving. That gives us the position of the 3D sensor data in global space, which allows us to build a three-dimensional map of the environment as the robot moves.”
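    Robbins’s description, tracking how the static scene appears to move and then inverting that motion to recover the robot’s own motion, can be sketched as a rigid-registration step. This is a simplified 2D illustration using Kabsch/Procrustes alignment, not MITRE’s implementation; `prev_pts` and `curr_pts` are assumed to be matched static-landmark coordinates in the sensor frame on consecutive frames.

```python
import numpy as np

def scene_transform(prev_pts, curr_pts):
    """Rigid 2-D transform (R, t) mapping prev_pts onto curr_pts,
    via the Kabsch / orthogonal Procrustes method on matched landmarks."""
    pc, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - pc).T @ (curr_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ pc
    return R, t

def update_pose(R_g, t_g, prev_pts, curr_pts):
    """Static landmarks appear to move opposite to the sensor, so the
    robot's own motion is the inverse of the observed scene transform;
    compose that motion onto the running global pose (R_g, t_g)."""
    R_s, t_s = scene_transform(prev_pts, curr_pts)
    R_m = R_s.T                       # invert the apparent scene motion
    t_m = -R_s.T @ t_s
    return R_g @ R_m, R_g @ t_m + t_g  # new global orientation, position
```

    Chaining `update_pose` over successive frames yields the sensor’s position in global space, which is exactly what lets the 3D data be stitched into one map.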

    [Photo: Image sent back to the user. Credit: MITRE]

    The live 3D data is streamed from the robot to the operator’s station (a ruggedized laptop) over an off-the-shelf wireless connection, where the user is presented with the integrated 3D map. An embedded computer mounted on the robot handles the sensor data, and the team does its localization and mapping using landmarks that are automatically picked out of the image data by software designed in-house; those landmarks are tracked in three dimensions from one frame to the next. Robbins also says that MITRE has written its software to reject moving objects, so that the ‘bot doesn’t use them as landmarks.
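    One simple way to reject moving objects as landmarks, offered here as an illustrative stand-in for MITRE’s in-house software rather than its actual method, follows from the observation that static landmarks all share the apparent motion induced by the robot, while independently moving objects do not. Assuming the inter-frame motion is approximately a pure translation, landmarks whose displacement disagrees with the consensus can be discarded:

```python
import numpy as np

def reject_moving_landmarks(prev_pts, curr_pts, tol=0.1):
    """Return a boolean mask of landmarks whose frame-to-frame shift
    agrees with the median shift of all landmarks. Static scene points
    share one displacement (under near-pure translation); landmarks on
    moving objects deviate from it and are flagged False."""
    d = curr_pts - prev_pts
    consensus = np.median(d, axis=0)        # robust estimate of scene shift
    keep = np.linalg.norm(d - consensus, axis=1) < tol
    return keep
```

    A production system would use something more robust to rotation (e.g. RANSAC over full rigid transforms), but the principle, keeping only landmarks consistent with one dominant motion, is the same.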

    The company is still “several years away” from being able to field the technology, Robbins says, adding that “we’re going to be spending the next year essentially taking this research system and ruggedizing the hardware and software and testing and evaluating it in progressively more realistic conditions.”

    Tags: MITRE, robots, ar99

