Interpretation of visual and range data for robotics — 3d maps and color
Dietrich Paulus (Koblenz, Germany)
7 July 2011
Abstract:
Robots must prove their skills in autonomous exploration of the environment in several RoboCup leagues. In the @Home league the environment resembles a living room; in the Rescue league the robots search for victims in collapsed buildings. Both challenges will be introduced briefly to motivate the technological problems that are detailed in the following.
Exploring an unknown environment is possible with sensors that capture data from varying positions. The result is a map in which the robot marks its own position. As both the location and the environment are unknown, this is a chicken-and-egg problem, known as SLAM (simultaneous localization and mapping). For the 2D case it is considered solved; for 3D sensors and maps it is the subject of current research.
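To make the chicken-and-egg structure concrete, the following is a minimal sketch of the mapping half of the problem: updating a 2D occupancy grid from a range scan, assuming the robot pose is already known. Full SLAM must estimate pose and map simultaneously; the grid size, pose format, and scan format here are illustrative assumptions, not the speaker's method.

```python
import math

GRID = 20  # 20x20 cells, one unit per cell -- an arbitrary toy size

def update_grid(grid, pose, scan):
    """Mark cells hit by range readings as occupied.

    pose: (x, y, heading); scan: list of (bearing, distance) pairs.
    A real system would use log-odds updates and also clear the free
    cells along each beam; both are omitted for brevity.
    """
    x, y, theta = pose
    for bearing, dist in scan:
        # Project the range reading into world coordinates.
        hx = x + dist * math.cos(theta + bearing)
        hy = y + dist * math.sin(theta + bearing)
        i, j = int(round(hx)), int(round(hy))
        if 0 <= i < GRID and 0 <= j < GRID:
            grid[j][i] = 1  # occupied

grid = [[0] * GRID for _ in range(GRID)]
# Robot at (10, 10) facing along x; two beams: straight ahead and to the left.
update_grid(grid, (10.0, 10.0, 0.0), [(0.0, 5.0), (math.pi / 2, 3.0)])
```

In SLAM the pose passed to `update_grid` is itself unknown and must be inferred from the partially built map, which is exactly the circular dependency the abstract refers to.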
Several measurement systems are commercially available. We show their principles and possible use by examples. We also show our own solution.
In both scenarios, objects need to be found. This task is mostly accomplished by fusing 3D data and color images. Object recognition and its underlying algorithmic problem, image segmentation, are not completely solved yet, although they have been investigated for many years.
We present recent work on color image segmentation.
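As a point of reference for what color image segmentation computes, the following is a minimal sketch of one classic approach, region growing by color similarity; it is an illustration only, not the specific method presented in the talk, and the threshold value and tiny synthetic image are assumptions.

```python
from collections import deque

def segment(img, thresh=30):
    """Label 4-connected pixels whose RGB values stay within `thresh`
    of the region's seed color. Returns (label image, region count)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    n_regions = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue  # pixel already belongs to a region
            n_regions += 1
            seed = img[sy][sx]
            labels[sy][sx] = n_regions
            queue = deque([(sy, sx)])
            while queue:  # breadth-first flood fill from the seed
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny][nx]
                            and max(abs(a - b)
                                    for a, b in zip(img[ny][nx], seed)) < thresh):
                        labels[ny][nx] = n_regions
                        queue.append((ny, nx))
    return labels, n_regions

# 2x4 toy image: left half red, right half blue -> two regions expected.
img = [[(200, 0, 0), (200, 0, 0), (0, 0, 200), (0, 0, 200)],
       [(200, 0, 0), (200, 0, 0), (0, 0, 200), (0, 0, 200)]]
labels, n = segment(img)
```

Comparing against the seed color (rather than the neighboring pixel) keeps regions from drifting across gradual color gradients, which is one of the design choices that makes segmentation hard on real images.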