Rapid growth in the market for mobile robotics has led to increased demand for lower-cost solutions to robotics navigation.

Hardware that’s traditionally used for this function is expensive and often over-specified for the given application. This situation has given rise to a new class of LiDAR devices that are purpose-built and aimed at drastically reducing the cost of entry into navigation-capable robotics systems.

In this article we delve into the subject of mobile robotics navigation based on the Robot Operating System (also known as ROS, from Open Robotics), using the ROS Navigation suite. We highlight a solution employing the Rhoeby Dynamics R2D LiDAR, a low-cost LiDAR device.

Solutions to the problem of mobile robotics navigation typically comprise several hardware and software components, including:

  • LiDAR
  • IMU (gyro / accelerometer)
  • localization
  • mapping
  • path planning
  • obstacle detection and avoidance

The above components, when brought together, realize the navigation system. Some video demonstrations of LiDAR-based navigation can be found here:

In order to understand in detail what is going on in the videos, we cover some basic terminology:

LiDAR: stands for Light Detection And Ranging, and is similar to radar, but uses light instead of radio waves.

IMU: The Inertial Measurement Unit is the gyro / accelerometer device used to detect the angular movement of the robot.

Localization: is the process of determining where the robot is located, relative to objects in its environment.

Mapping: is the process of building maps based on data acquired from one or more sensors.
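
As a rough, non-ROS illustration of the mapping idea, the sketch below marks sensor detections into a tiny occupancy grid centered on the robot. The function name, grid size, and resolution are all illustrative choices, not part of any ROS API:

```python
def points_to_grid(points, size, resolution):
    """Build a tiny occupancy grid from (x, y) obstacle points (meters).

    size: the grid is size x size cells, with the robot at the center cell
    resolution: meters per cell
    Cells containing a detected point are marked 1 (occupied); others stay 0.
    """
    grid = [[0] * size for _ in range(size)]
    center = size // 2
    for x, y in points:
        col = center + int(round(x / resolution))
        row = center + int(round(y / resolution))
        if 0 <= row < size and 0 <= col < size:  # ignore points off the map
            grid[row][col] = 1
    return grid

# Two detections: one 1 m ahead (+x), one 0.5 m to the left (+y)
grid = points_to_grid([(1.0, 0.0), (0.0, 0.5)], size=5, resolution=0.5)
```

Real mapping packages (e.g. ROS gmapping) also trace the ray from robot to target, marking the intervening cells as free space, and fuse many scans over time; this sketch shows only the core data structure.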

Path planning: is the process of determining a path for the robot to follow, in order to reach a ‘goal’, whilst avoiding obstacles (a goal is just where you want the robot to go).

Obstacle detection: is the process of detecting objects in the environment that were not present during the mapping process but are present now.

Avoidance: is the process of re-planning a path around such dynamically appearing obstacles.
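
To make the path-planning definition concrete, here is a minimal sketch using breadth-first search over a small occupancy grid. This is not the planner ROS Navigation actually uses (that suite typically employs Dijkstra or A* variants in its global planner), just the simplest complete example of finding a goal while avoiding obstacles:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle
    start, goal: (row, col) tuples
    Returns the shortest list of cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to the start to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A small map: the 1s form an obstacle the path must route around
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 2))
```

The returned path steps cell-by-cell from start to goal, detouring around the blocked cells; a real planner additionally accounts for the robot's footprint and turning constraints.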


The LiDAR device plays a central role in the navigation process: it’s used to gather information about the objects surrounding the robot (walls, doors, etc.).

Fig. 1 – Basic scanning in ROS

In the picture above, ‘Basic scanning in ROS’, we see two things: the robot, represented by the red circle, and the “range data”, represented by the white dots. This range data is the basic information produced by the LiDAR device.
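
Each white dot comes from one range reading taken at a known angle, so plotting them is just a polar-to-Cartesian conversion. The sketch below shows this for a hypothetical scan; the parameter names loosely mirror the fields of a ROS LaserScan message (angle_min, angle_increment, ranges), though this is plain Python rather than ROS code:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a list of range readings (meters) to (x, y) points
    in the robot's frame. Zero or infinite readings are treated as
    invalid returns and skipped."""
    points = []
    for i, r in enumerate(ranges):
        if r <= 0 or math.isinf(r):
            continue
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Four readings a quarter-turn apart, starting at 0 radians;
# the third reading (0.0) represents a missed return
pts = scan_to_points([1.0, 2.0, 0.0, 1.5], 0.0, math.pi / 2)
```

The resulting points are exactly what a visualizer like RViz draws as the white dots around the robot.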

Internally, the LiDAR device is composed of a range measurement sensor that repeatedly transmits a pulse of light. This pulse of light hits a target (wall, person, cardboard box, etc.), then bounces off and returns to the range measurement sensor. By measuring how long the light takes to travel out and return, the sensor can determine the distance to the object. Additionally, the range sensor is mounted on a spinning platform that allows the device to take these range measurements at many points around a 360-degree sweep. As the range measurement sensor rotates, range readings are taken rapidly (thousands of samples per second), yielding a two-dimensional view of the robot’s entire surroundings.
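
The time-of-flight arithmetic behind each reading is simple: the measured interval covers the trip out and back, so the one-way distance is half the round trip times the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance from a time-of-flight measurement: the pulse travels
    out and back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20-nanosecond round trip corresponds to roughly 3 meters
d = tof_distance(20e-9)
```

The tiny intervals involved (nanoseconds per meter) are why the timing electronics dominate the cost of traditional LiDAR units, and why low-cost devices often substitute triangulation or other schemes to reach similar accuracy more cheaply.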