Each cell starts at 0.5 (unknown). As the robot scans, cells are updated using Bayes' rule.
Lidar beam hits an obstacle → cell at that distance gets more occupied (probability increases toward 1.0)
Lidar beam passes through → cells along the beam path get more free (probability decreases toward 0.0)
Multiple observations of the same cell → probability converges to the true state
Log-odds trick: Instead of multiplying many small probabilities (which underflows floating-point precision), SLAM implementations store each cell as log-odds: l = log(p / (1-p)). A Bayesian update then becomes a simple addition in log-odds space.
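The update loop above can be sketched in a few lines of Python. The sensor-model values, grid size, and beam geometry below are illustrative assumptions, not any particular SLAM package's parameters:

```python
import math

# Minimal occupancy-grid update in log-odds form (a sketch; the inverse
# sensor model values 0.3/0.7 are made up for illustration).

L_FREE = math.log(0.3 / 0.7)   # log-odds decrement for cells the beam passes through
L_OCC  = math.log(0.7 / 0.3)   # log-odds increment for the cell the beam hits

def to_prob(l):
    """Convert log-odds back to probability: p = 1 / (1 + e^-l)."""
    return 1.0 / (1.0 + math.exp(-l))

# 1-D "grid" along a single beam: every cell starts at log-odds 0 (p = 0.5).
grid = [0.0] * 10
hit_cell = 7  # the beam reports an obstacle at cell 7

def update_beam(grid, hit_cell):
    # Cells before the hit were traversed by the beam -> more likely free.
    for i in range(hit_cell):
        grid[i] += L_FREE
    # The hit cell itself -> more likely occupied.
    grid[hit_cell] += L_OCC

# Repeated observations push probabilities toward 0 or 1,
# while unseen cells (8 and 9) stay at 0.5.
for _ in range(3):
    update_beam(grid, hit_cell)

print([round(to_prob(l), 2) for l in grid])
```

Notice that each update is one addition per cell; converting back to a probability only happens when the map is rendered.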
Scan Matching: Aligning Scans
How does SLAM know where the robot moved between scans?
Odometry gives an initial guess (from wheel encoders) — but it drifts
Scan matching aligns consecutive lidar scans to refine the pose
Algorithm: find the rotation & translation that best aligns scan B to scan A
Scan at time t  +  Scan at time t+1  →  Scan Matching  →  Refined Pose
ICP (Iterative Closest Point) is a classic scan matching algorithm: iteratively finds closest points between scans, computes optimal transform, repeats until convergence.
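A toy version of that ICP loop, in pure Python. The corner-shaped "scans" and the closed-form 2-D alignment step are illustrative only; production scan matchers add outlier rejection, point-to-line metrics, and coarse-to-fine search:

```python
import math

# Toy 2-D ICP sketch: nearest-neighbour correspondences plus a closed-form
# rotation/translation fit, repeated until the transform stops changing.

def best_rigid_transform(src, dst):
    """Closed-form 2-D rotation + translation mapping paired src -> dst points."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - sx, py - sy          # centred source point
        bx, by = qx - dx, qy - dy          # centred destination point
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, dx - (c * sx - s * sy), dy - (s * sx + c * sy)

def apply(theta, tx, ty, pts):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]

def icp(scan_b, scan_a, iterations=20):
    """Align scan_b onto scan_a; returns the accumulated (theta, tx, ty)."""
    theta_t, tx_t, ty_t = 0.0, 0.0, 0.0
    cur = list(scan_b)
    for _ in range(iterations):
        # 1. Correspondences: each point pairs with its closest point in scan_a.
        pairs = [min(scan_a, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                 for p in cur]
        # 2. Closed-form transform for those pairs, then apply it.
        theta, tx, ty = best_rigid_transform(cur, pairs)
        cur = apply(theta, tx, ty, cur)
        # 3. Compose the step into the accumulated transform.
        c, s = math.cos(theta), math.sin(theta)
        tx_t, ty_t = c * tx_t - s * ty_t + tx, s * tx_t + c * ty_t + ty
        theta_t += theta
    return theta_t, tx_t, ty_t

# Demo: scan_a is an L-shaped corner (two walls); scan_b is the same corner
# seen after a small robot motion (rotate 0.05 rad, translate (0.1, -0.05)).
scan_a = [(x * 0.1, 2.0) for x in range(20)] + [(2.0, y * 0.1) for y in range(20)]
true_theta, true_tx, true_ty = 0.05, 0.1, -0.05
scan_b = apply(-true_theta, 0.0, 0.0,
               [(x - true_tx, y - true_ty) for x, y in scan_a])
est_theta, est_tx, est_ty = icp(scan_b, scan_a)
print(round(est_theta, 3), round(est_tx, 3), round(est_ty, 3))
```

A single straight wall would be degenerate (the scan could slide along it); the corner constrains both translation directions, which is why real environments with distinctive geometry match better than long featureless corridors.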
Loop Closure: Fixing Accumulated Error
Even with scan matching, small errors accumulate over time
After a long drive, the robot's position estimate drifts from truth
Loop closure: the robot returns to a previously visited location
When the SLAM algorithm detects it has returned to a known place, it distributes the accumulated error back across the entire trajectory, correcting the map.
Before Loop Closure
Walls don't quite line up
Rooms appear slightly shifted
Corridors may look bent
After Loop Closure
Walls snap into alignment
Room shapes correct
Map becomes globally consistent
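A heavily simplified sketch of that correction: spread the closure error linearly along the trajectory. Cartographer actually solves a pose graph optimization, and the trajectory numbers below are made up; this only conveys the intuition.

```python
# When the robot recognises its start location, the gap between where it
# *thinks* it is and where it *must* be is the accumulated drift.
# Simplest possible fix: distribute that error linearly over the trajectory.

# Estimated (x, y) positions around a loop that should end back at (0, 0),
# but odometry drift leaves the final estimate at (0.4, -0.2).
trajectory = [(0.0, 0.0), (1.0, 0.1), (2.1, 0.05), (2.2, 1.0), (2.3, 2.1),
              (1.2, 2.2), (0.3, 2.1), (0.35, 1.0), (0.4, -0.2)]

closure_truth = (0.0, 0.0)   # loop closure says: we are back at the start
drift_x = trajectory[-1][0] - closure_truth[0]
drift_y = trajectory[-1][1] - closure_truth[1]

# Early poses (little accumulated error) move barely at all; late poses
# (most accumulated error) receive most of the correction.
n = len(trajectory) - 1
corrected = [(x - drift_x * i / n, y - drift_y * i / n)
             for i, (x, y) in enumerate(trajectory)]

print(corrected[-1])   # the endpoint now coincides with the start
```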
Google Cartographer
Cartographer is the SLAM algorithm installed in our Docker environment.
Developed by Google, open-sourced in 2016
Uses a submap approach: builds small local maps, then optimizes their alignment
Handles loop closure through pose graph optimization
Works with 2D lidar (our TurtleBot3) or 3D lidar
# Launch Cartographer with TurtleBot3
ros2 launch turtlebot3_cartographer cartographer.launch.py
What you'll see in RViz: The map builds in real-time as you drive. Gray = unknown, white = free, black = obstacles. The robot's position is tracked by a small arrow.
Saving & Using Maps
Once mapping is complete, save the map for later use (navigation).
# Save the current map
ros2 run nav2_map_server map_saver_cli -f ~/map
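map_saver_cli writes two files: a .pgm image (the grid itself) and a .yaml with metadata (resolution, origin, thresholds). A sketch of decoding the PGM shades, assuming the nav2 map_server defaults for trinary maps (0 = occupied/black, 254 = free/white, 205 = unknown/gray); the toy bytes below stand in for a real file such as ~/map.pgm:

```python
import io

# Decode a binary (P5) PGM occupancy map. Pixel values follow the nav2
# map_server trinary convention: 0 = occupied, 254 = free, 205 = unknown.

def read_pgm_p5(stream):
    """Parse a minimal binary PGM: 'P5', width, height, maxval, then raw bytes."""
    assert stream.readline().strip() == b"P5"
    tokens = []
    while len(tokens) < 3:                 # collect width, height, maxval
        line = stream.readline()
        if not line.startswith(b"#"):      # skip comment lines
            tokens += line.split()
    width, height, _maxval = map(int, tokens)
    pixels = stream.read(width * height)
    return width, height, pixels

def classify(pixel):
    if pixel == 0:
        return "occupied"
    if pixel == 254:
        return "free"
    return "unknown"

# A 3x2 toy map: one wall pixel, some free space, some unexplored area.
toy_pgm = b"P5\n3 2\n255\n" + bytes([0, 254, 205, 254, 254, 205])
w, h, pix = read_pgm_p5(io.BytesIO(toy_pgm))
print([[classify(pix[r * w + c]) for c in range(w)] for r in range(h)])
```

The same decoding idea applies to the alic.pgm file you will examine in Activity 6.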
During mapping, multiple nodes communicate through these key topics:
Topic       Message Type    Purpose
/scan       LaserScan       Lidar distance readings
/odom       Odometry        Wheel encoder pose estimate
/map        OccupancyGrid   The map being built
/tf         TFMessage       Coordinate frame transforms (each message bundles TransformStamped entries)
/cmd_vel    TwistStamped    Velocity commands (your teleop input)
# Inspect Cartographer's subscriptions:
ros2 node info /cartographer_node
Textbook Connection: IAR Chapter 13
Section 13.1–13.2: Map representations — occupancy grids are one of several options (topological maps, feature maps)
Section 13.3: Probabilistic grid mapping — the Bayesian update rule we discussed
Section 13.4: SLAM problem formulation — why it's "one of the most important problems in robotics"
Key insight from the textbook: Mapping is fundamentally a probabilistic problem. Sensor noise, odometry drift, and environmental ambiguity mean we can never be 100% certain about any cell. The occupancy grid elegantly encodes this uncertainty.
Summary
Concept          What It Does
Lidar            Measures distances to obstacles in 360°
Occupancy Grid   Grid-based map; each cell is occupied, free, or unknown
SLAM             Builds the map while tracking the robot's position within it
Scan Matching    Aligns consecutive scans to estimate motion
Loop Closure     Corrects drift when revisiting known areas
Cartographer     Google's SLAM algorithm; what we use in class
Activity 6: A real map (alic.pgm) is already in your repo — decode it. What do the pixel shades mean? What real-world area is mapped? Then trace the live SLAM data flow in Docker. → Activity 6