A single 360-degree sweep, taking many range samples along the way, yields a crude “map”. But this map is usually far from complete.
The second part of the process is to take many of these 360-degree scans and assemble them into a more complete map. As the robot begins to move around its environment, it estimates where it is relative to the current and previously scanned data (a process known as “localization”), then takes new scans and adds them to the map. Performing localization and mapping together is referred to as “Simultaneous Localization And Mapping”, or simply SLAM!
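The core operation when adding a new scan to the map is transforming each polar range sample into world coordinates using the robot’s current pose estimate from localization. The sketch below illustrates that step in Python; the function names and the assumption that localization has already produced each pose are my own simplification (real SLAM refines the pose by matching each scan against the map):

```python
import math

def scan_to_world(pose, scan):
    """Transform one polar LiDAR scan into world-frame (x, y) points.

    pose: (x, y, theta) -- the robot's pose estimate from localization.
    scan: list of (angle_rad, range_m) samples from a 360-degree sweep.
    """
    x, y, theta = pose
    return [(x + rng * math.cos(theta + angle),
             y + rng * math.sin(theta + angle))
            for angle, rng in scan]

def build_map(poses, scans):
    """Naive mapping: given a pose for each scan (localization already
    done), merge every scan into one global point set."""
    world_map = []
    for pose, scan in zip(poses, scans):
        world_map.extend(scan_to_world(pose, scan))
    return world_map
```

For example, a beam of length 2 m straight ahead of a robot at the origin lands at (2, 0); the same beam from a robot at (1, 0) facing 90 degrees lands at (1, 1).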
Shown above is an early-stage map-building task in progress. Initially, the mapping function has very limited data to work with and can derive only a simple, incomplete map. The white dots represent the scan data coming from the LiDAR, and the black dots represent data that has been taken from the LiDAR and placed into the robot’s map. Dark gray/green areas are unknown to the robot, whereas the light gray areas are clear space. As the robot moves around, more data is gathered from the LiDAR and added to the map until a complete picture of the robot’s surroundings is built up.
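The three cell states described above (occupied, clear, unknown) are what an occupancy grid stores. A minimal sketch of how one LiDAR beam updates such a grid is shown below; the cell values and function names are illustrative assumptions, not the actual implementation used by the robot. Cells along the beam up to the hit point are marked free, the hit cell is marked occupied, and untouched cells stay unknown:

```python
# Illustrative cell states: absent from the dict means "unknown"
# (dark gray/green in the figure).
FREE, OCCUPIED = 0, 1

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def integrate_beam(grid, robot_cell, hit_cell):
    """Update the grid with one LiDAR beam: the beam passed through
    every cell before the hit (light gray: clear space) and ended at
    an obstacle (black: placed into the map)."""
    ray = bresenham(*robot_cell, *hit_cell)
    for cell in ray[:-1]:
        grid[cell] = FREE
    grid[ray[-1]] = OCCUPIED
```

A beam from cell (0, 0) hitting an obstacle at (3, 0) marks (0, 0), (1, 0) and (2, 0) as free and (3, 0) as occupied; cells the beam never touched remain unknown, which is why early maps look mostly dark.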
Fig. 2 – A map built using the R2D LiDAR sensor
The picture above shows just such a map built using the SLAM process. The video ‘Rhoeby Hexapod ROS-based map building’ shows the full process of a map being built.