The Simultaneous Localization and Mapping (SLAM) problem in robotics

October 29, 2006

The importance of solving the Simultaneous Localization and Mapping (SLAM) problem in robotics cannot be overstated. SLAM has been described as the most fundamental problem that must be solved if we want to build truly autonomous mobile robots. SLAM is about a robot concurrently tracking its position and constructing a map of its surrounding environment. Solving each problem on its own is much easier: a robot can readily estimate its position given a complete map of its environment, and it can construct a map from sensor data if its position is accurately known. Solving both at the same time is a chicken-and-egg problem. So what makes SLAM so difficult, and how are researchers trying to solve it?

Noisy sensors and actuators make SLAM a difficult problem. Every time the robot moves, it relies on its sensors to estimate the distance traveled. Since no sensor is noise-free, every move carries a small error, and these errors accumulate quickly until the robot becomes lost. Correcting them is easy if a good map of the robot’s operating environment is available: the robot can correct its estimated position by computing its position relative to landmarks that appear in the map and are currently visible.
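
To make the drift concrete, here is a minimal sketch (not from the original post; the noise values are made up) of a robot dead-reckoning along a straight line in one dimension. Each commanded move is executed and measured imperfectly, so the position estimate slowly walks away from the truth:

```python
import random

random.seed(0)
ODOMETRY_NOISE_STD = 0.05   # assumed noise per unit moved (illustrative value)

true_position = 0.0
estimated_position = 0.0

for step in range(100):
    commanded_move = 1.0
    # The wheels slip a little, and the encoders measure the motion imperfectly.
    actual_move = commanded_move + random.gauss(0.0, ODOMETRY_NOISE_STD)
    measured_move = actual_move + random.gauss(0.0, ODOMETRY_NOISE_STD)

    true_position += actual_move
    estimated_position += measured_move   # dead reckoning: just sum the measurements

drift = abs(true_position - estimated_position)
print(f"after 100 moves the estimate is off by {drift:.2f} units")
```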

Statistical methods that have become popular in robotics give us powerful frameworks for handling uncertainty in sensor measurements and robot actions. These frameworks have allowed researchers to take large steps towards solving SLAM. For example, Monte Carlo methods such as Particle Filters (PFs) have been used successfully to solve SLAM in small environments. Another popular framework is the Extended Kalman Filter (EKF), which extends the Kalman Filter (KF) to non-linear systems.
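
As a rough illustration of the Kalman filter machinery, here is a one-dimensional predict/update cycle with made-up numbers, where the robot corrects its position using a range measurement to a landmark at a known location. This is only a sketch of the filtering idea, not the full EKF-SLAM formulation, which also estimates the landmark positions:

```python
def predict(x, P, u, Q):
    """Propagate the state through the motion model x' = x + u."""
    x = x + u          # commanded displacement
    P = P + Q          # motion noise inflates the uncertainty
    return x, P

def update(x, P, z, landmark, R):
    """Correct the state with a range measurement z = landmark - x."""
    z_hat = landmark - x       # expected measurement
    y = z - z_hat              # innovation
    H = -1.0                   # Jacobian of the measurement model
    S = H * P * H + R          # innovation covariance
    K = P * H / S              # Kalman gain
    x = x + K * y
    P = (1.0 - K * H) * P
    return x, P

x, P = 0.0, 1.0                       # initial estimate and variance
x, P = predict(x, P, u=1.0, Q=0.1)    # move forward one unit
x, P = update(x, P, z=8.9, landmark=10.0, R=0.05)
print(x, P)   # estimate is pulled toward 10.0 - 8.9 = 1.1, variance shrinks
```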

The robot’s sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on sonar sensors. Sebastian Thrun’s research group pioneered the use of laser sensors coupled with a probabilistic framework to construct the first long-term service robots. Lasers are very accurate sensors, which simplifies SLAM. Recent work is moving towards cameras as the primary sensing modality. A camera is a passive sensor with very high information content. Using vision, a robot can localize itself using common objects as landmarks, much the way people do. However, the large amount of data a camera produces requires powerful algorithms that can quickly detect and match the same object across several images. Research in computer vision is currently investigating solutions to these problems.
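
For a feel of what that detect-and-match step looks like in practice, here is a short sketch using OpenCV. The image filenames are placeholders, and ORB is simply one convenient modern feature detector, not necessarily what any particular SLAM system uses:

```python
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-checking keeps only mutually best matches,
# a cheap way to reject many wrong correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate landmark correspondences between the two frames")
```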

As part of this blog, I will try to keep track of the most important advances in SLAM.

5 thoughts on “The Simultaneous Localization and Mapping (SLAM) problem in robotics”

  1. Rich O'Hare

    I agree. Solving this problem will revolutionize self-guided mobile machines.

    I am searching for a tracking solution for automotive manufacturing. I want to track the location of an operator tool (i.e., a wrench) within a 10 ft cube to an accuracy of 1″. I am currently attaching the tool to a mechanical arm with encoder feedback to track its position. The mechanical linkage has obvious limitations, and the solution I seek should ideally be ‘non-tethered’ or wireless.

    Line of sight is also a limitation because a direct transmitter-to-receiver path is not always available in this industrial environment.

    I have tested inertial sensors, ultrasonic triangulation, and vision systems. All of these technologies have inherent limitations. UWB appears to solve a number of the issues, but it does not offer the required precision.

    Any advice on technologies that can accomplish this local positioning task is much appreciated.

    Regards,
    Rich

  2. Awesom-o

    Rich,

    if the tool is always attached to the end of a mechanical arm, then you can probably track its position by reading the arm’s joint encoder values and computing the forward kinematics. Another approach you might want to consider is using multiple cameras placed so that the tool is always visible to at least two of them; you can then compute its 3D position by triangulation (assuming you know the relative poses of the cameras). Hope this helps. Cheers!
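
    A quick sketch of that two-camera idea (all matrices and image coordinates below are made-up placeholders): given each camera’s 3x4 projection matrix and the tool’s location in both images, linear triangulation recovers the 3D point.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen by two calibrated cameras."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                 # null space of A, in homogeneous coordinates
        return X[:3] / X[3]

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2 shifted 1 unit along x

    # Tool location in normalized image coordinates for each camera (placeholders).
    print(triangulate(P1, P2, x1=(0.10, 0.05), x2=(0.00, 0.05)))   # -> approx [1.0, 0.5, 10.0]
    ```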

Comments are closed.