The importance of solving the Simultaneous Localization and Mapping (SLAM) problem in robotics cannot be overstated. SLAM has been called the most fundamental problem that must be solved if we want to build truly autonomous mobile robots. SLAM is the task of a robot concurrently tracking its position and constructing a map of its surrounding environment. Each of the two problems is easier in isolation: a robot can readily estimate its position given a complete map of its environment, and it can construct a map from sensor data if its position is accurately known. Solving both at the same time is a chicken-and-egg problem. So what makes SLAM so difficult, and how are researchers trying to solve it?
Noisy sensors and actuators make SLAM a difficult problem. Every time the robot moves, it relies on its sensors to estimate the distance traveled. Since no sensor is noise-free, a small error is associated with every move. These errors accumulate quickly, and the robot becomes lost. Correcting them is easy if a good map of the robot's operating environment is available: the robot can correct its estimated position by computing its position relative to landmarks that appear in the map and are currently visible.
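To make the drift problem concrete, here is a minimal dead-reckoning sketch in Python. All the numbers (step size, noise level) are illustrative assumptions, not values from any particular robot: each unit step the robot measures carries a small Gaussian error, and the errors accumulate into an ever-growing gap between the true and estimated position.

```python
import random

def simulate_drift(num_steps=1000, step=1.0, noise_std=0.05, seed=42):
    """Robot moves in a straight line, estimating its position from
    noisy odometry alone (dead reckoning, no map to correct against)."""
    rng = random.Random(seed)
    true_pos = 0.0
    est_pos = 0.0
    for _ in range(num_steps):
        true_pos += step
        # each measured step carries a small random error
        est_pos += step + rng.gauss(0.0, noise_std)
    return true_pos, est_pos

true_pos, est_pos = simulate_drift()
# the gap |true - est| is a random walk: it grows roughly with sqrt(num_steps)
print(abs(true_pos - est_pos))
```

With a map, the robot could reset this accumulated error whenever a known landmark comes into view; without one, the drift is unbounded.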
Statistical methods that have become popular in robotics provide powerful frameworks for handling uncertainty in sensor measurements and robot actions, and they have allowed researchers to take large steps toward solving SLAM. For example, Monte Carlo methods such as Particle Filters (PFs) have been used successfully to solve SLAM in small environments. Another popular framework is the Extended Kalman Filter (EKF), which extends the Kalman Filter (KF) to non-linear systems.
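As an illustration of the particle-filter idea, here is a minimal 1-D localization sketch against a known landmark map, which is the simpler half of SLAM mentioned above. The map, noise levels, and trajectory are all invented for the example: particles are position hypotheses that are pushed forward by the motion model, weighted by how well they explain each range reading, and resampled.

```python
import math
import random

def pf_localize(landmarks, moves, readings,
                n_particles=500, motion_std=0.1, sense_std=0.2, seed=1):
    """Minimal 1-D particle filter for localization against a known
    landmark map; the sensor reports distance to the nearest landmark."""
    rng = random.Random(seed)
    # start with no idea where we are: particles spread over the map
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    for move, reading in zip(moves, readings):
        # predict: apply the (noisy) motion model to every particle
        particles = [p + move + rng.gauss(0.0, motion_std) for p in particles]
        # update: weight each particle by the likelihood of the reading
        weights = []
        for p in particles:
            expected = min(abs(p - lm) for lm in landmarks)
            weights.append(math.exp(-((reading - expected) ** 2)
                                    / (2 * sense_std ** 2)) + 1e-12)
        # resample: draw particles in proportion to their weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles  # mean of the particle cloud

# invented scenario: three landmarks at known positions, robot steps right
landmarks = [2.0, 6.0, 9.0]
rng = random.Random(7)
true_pos, moves, readings = 1.0, [], []
for _ in range(20):
    true_pos += 0.5
    moves.append(0.5)
    readings.append(min(abs(true_pos - lm) for lm in landmarks)
                    + rng.gauss(0.0, 0.2))
estimate = pf_localize(landmarks, moves, readings)
```

Early readings are ambiguous (several positions are equally far from some landmark), so several particle clusters survive for a while; the sequence of readings eventually rules out all but the true one. Full PF-based SLAM additionally estimates the landmark positions themselves, which this sketch assumes as given.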
The robot's sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on sonar sensors. Sebastian Thrun's research group pioneered the combination of laser sensors with a probabilistic framework to build the first long-term service robots. Lasers are very accurate sensors, which simplifies SLAM. Recent work is moving toward cameras as the primary sensing modality. A camera is a passive sensor with very high information content. Using vision, a robot can localize itself using common objects as landmarks, much as people do. However, the large amount of data a camera acquires demands efficient algorithms that can quickly detect and match the same object across several images. Research in computer vision is currently investigating solutions to these problems.
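The matching step can be sketched with one standard technique: nearest-neighbour matching with a ratio test, which accepts a match only when the best candidate is clearly better than the runner-up. The toy 2-D descriptors below stand in for the high-dimensional descriptors real vision systems extract from images.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping the match only if it passes the ratio test: the best distance
    must be clearly smaller than the second-best (rejects ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc_a):
        # distances from this descriptor to every candidate in the other image
        dists = sorted((math.dist(d, e), j) for j, e in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# toy 2-D descriptors; real ones are e.g. 128-dimensional
a = [(0.0, 1.0), (5.0, 5.0)]
b = [(0.1, 1.1), (5.0, 4.9), (20.0, 20.0)]
print(match_descriptors(a, b))  # [(0, 0), (1, 1)]
```

This brute-force version is quadratic in the number of descriptors; the "quickly" in the paragraph above is exactly why practical systems replace the sorted scan with approximate nearest-neighbour search.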
As part of this blog, I will try to keep track of the most important advances in SLAM.