SLAM & Mapping
Week 11 • CMPSC 304 Robotic Agents
Where We Are
Activity 5 (Nodes, Topics) → Today (SLAM & Mapping) → Project 4 (Full Pipeline)
- Activity 5: You drove a robot, inspected topics — you made a robot move
- Today: Where is the robot? What does the world look like? — building a map
- P4 Part 1: You'll build a complete map of a simulated environment
The Mapping Problem
Core question: How does a robot build a representation of its environment using only sensor data?
- The robot starts with no map
- It has sensors (lidar, cameras, etc.) that measure local surroundings
- It needs to combine many local measurements into a global map
- But to combine them correctly, it needs to know where it was for each measurement
Chicken-and-egg problem: To build a map, you need to know where you are. To know where you are, you need a map.
SLAM: Simultaneous Localization and Mapping
SLAM solves both problems at once: build the map and track position simultaneously.
Sensor Data (lidar scans) → SLAM Algorithm → Map + Robot Pose
- Inputs: sensor readings (lidar), odometry (wheel encoders)
- Outputs: a map of the environment + the robot's estimated trajectory
- The algorithm iterates: each new scan updates both the map and the pose estimate
Lidar: How the Robot "Sees"
How It Works
- Emits laser beams in a 360° sweep
- Measures time of flight for each beam
- Converts to distance measurements
- TurtleBot3 LDS-02: 360 samples/revolution
- Published on the /scan topic
What You Get
- Array of distances at known angles
- Range: 0.12m to 3.5m (TurtleBot3)
- Close objects → short distances
- Open space → max range or inf
- Visualized as red dots in RViz
ros2 topic echo /scan --once # See one lidar scan message
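The geometry behind that message can be sketched in a few lines of plain Python: each range reading, at a known angle, becomes an (x, y) point in the robot's frame. The function name and the toy 4-beam scan below are ours; the field names mirror sensor_msgs/LaserScan (`angle_min`, `angle_increment`, `ranges`).

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, range_max=3.5):
    """Convert a lidar scan (polar readings) to (x, y) points in the robot frame.

    ranges[i] is the distance measured at angle_min + i * angle_increment.
    Readings at or beyond range_max (or inf) mean open space and are skipped.
    """
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or r >= range_max:
            continue  # no obstacle hit along this beam
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Toy 4-beam scan: obstacle 1 m ahead, another 2 m behind, open space elsewhere
pts = scan_to_points(0.0, math.pi / 2, [1.0, float("inf"), 2.0, float("inf")])
```

These (x, y) points are exactly what RViz draws as red dots.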
Occupancy Grids: The Map Representation
An occupancy grid divides the world into a grid of cells. Each cell stores the probability that it is occupied.
Example: a 6 × 8 grid of a walled room, with a not-yet-observed region in the middle (each value is a cell's occupancy probability):

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
1.0 0.0  ?   ?   ?  0.0 0.0 1.0
1.0 0.0  ?   ?   ?  0.0 0.0 1.0
1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
| Value | Meaning | Color in Map |
| 0.0 (free) | Definitely empty / traversable | White |
| 1.0 (occupied) | Definitely an obstacle / wall | Black |
| -1 / unknown | Not yet observed | Gray |
How Cells Get Updated
Each cell starts at probability 0.5 (unknown — published as -1 in the /map message). As the robot scans, cells are updated using Bayes' rule.
- Lidar beam hits an obstacle → cell at that distance gets more occupied (probability increases toward 1.0)
- Lidar beam passes through → cells along the beam path get more free (probability decreases toward 0.0)
- Multiple observations of the same cell → probability converges to the true state
Log-odds trick: Instead of repeatedly multiplying small probabilities (which causes numerical issues), SLAM implementations store each cell as log-odds: l = log(p / (1-p)). Each new observation is then a simple addition, which corresponds to a multiplicative Bayes update on the odds.
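The update rule for a single cell fits in a few lines. The inverse-sensor-model probabilities below (0.7 for a hit, 0.3 for a pass-through) are illustrative values, not from any particular SLAM package:

```python
import math

# Inverse sensor model (assumed values for illustration)
P_HIT, P_MISS = 0.7, 0.3

def logodds(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Log-odds -> probability (logistic function)."""
    return 1.0 / (1.0 + math.exp(-l))

l = logodds(0.5)          # cell starts unknown: p = 0.5 -> log-odds 0

# Three lidar beams end in this cell (three "hit" observations):
for _ in range(3):
    l += logodds(P_HIT)   # the Bayes update is just an addition

p_occupied = prob(l)       # converges toward 1.0
```

A beam passing through the cell would instead add `logodds(P_MISS)`, a negative number, pushing the cell toward free.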
Scan Matching: Aligning Scans
How does SLAM know where the robot moved between scans?
- Odometry gives an initial guess (from wheel encoders) — but it drifts
- Scan matching aligns consecutive lidar scans to refine the pose
- Algorithm: find the rotation & translation that best aligns scan B to scan A
Scan at time t + Scan at time t+1 → Scan Matching → Refined Pose
ICP (Iterative Closest Point) is a classic scan matching algorithm: iteratively finds closest points between scans, computes optimal transform, repeats until convergence.
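The ICP loop can be sketched in plain Python. `best_rigid_transform` is the standard closed-form 2-D least-squares alignment of paired points; the function names and the iteration count are our choices, not from any ROS package:

```python
import math

def apply_transform(theta, tx, ty, pts):
    """Rotate each point by theta, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]

def best_rigid_transform(src, dst):
    """Closed-form 2-D least-squares rotation + translation mapping the
    paired points src[i] -> dst[i]. This is the inner step of ICP."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    cross = dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # centered source point
        bx, by = dx - cdx, dy - cdy   # centered destination point
        cross += ax * by - ay * bx
        dot += ax * bx + ay * by
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def icp(scan, reference, iterations=20):
    """Align scan to reference: pair each point with its nearest reference
    point, solve for the best rigid transform, apply it, repeat."""
    current = list(scan)
    for _ in range(iterations):
        pairs = [min(reference,
                     key=lambda q, p=p: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                 for p in current]
        theta, tx, ty = best_rigid_transform(current, pairs)
        current = apply_transform(theta, tx, ty, current)
    return current
```

Real scan matchers add outlier rejection and use point-to-line metrics, but the structure — match, solve, apply, repeat — is the same.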
Loop Closure: Fixing Accumulated Error
- Even with scan matching, small errors accumulate over time
- After a long drive, the robot's position estimate drifts from truth
- Loop closure: the robot returns to a previously visited location
When the SLAM algorithm detects it has returned to a known place, it distributes the accumulated error back across the entire trajectory, correcting the map.
Before Loop Closure
- Walls don't quite line up
- Rooms appear slightly shifted
- Corridors may look bent
After Loop Closure
- Walls snap into alignment
- Room shapes correct
- Map becomes globally consistent
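The error-distribution idea can be shown with a 1-D toy example. Real SLAM systems do this by optimizing a pose graph over full 2-D poses; the numbers below are invented to illustrate the principle of spreading accumulated drift along the trajectory:

```python
# A robot drives a loop of known length 4.0 m, but drifting odometry
# reports it ended 0.4 m past its true starting point.
odom_poses = [0.0, 1.0, 2.1, 3.2, 4.4]   # cumulative 1-D positions (drifted)
true_loop_length = 4.0                    # loop closure constraint

error = odom_poses[-1] - true_loop_length  # 0.4 m of accumulated drift

# Distribute the error linearly along the trajectory:
# later poses have accumulated more drift, so they get a larger correction.
n = len(odom_poses) - 1
corrected = [p - error * i / n for i, p in enumerate(odom_poses)]
```

After correction the final pose satisfies the loop-closure constraint exactly, and every intermediate pose shifts proportionally — which is why walls "snap into alignment" across the whole map, not just at the loop point.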
Google Cartographer
Cartographer is the SLAM algorithm installed in our Docker environment.
- Developed by Google, open-sourced in 2016
- Uses a submap approach: builds small local maps, then optimizes their alignment
- Handles loop closure through pose graph optimization
- Works with 2D lidar (our TurtleBot3) or 3D lidar
# Launch Cartographer with TurtleBot3
ros2 launch turtlebot3_cartographer cartographer.launch.py
What you'll see in RViz: The map builds in real-time as you drive. Gray = unknown, white = free, black = obstacles. The robot's position is tracked by a small arrow.
Saving & Using Maps
Once mapping is complete, save the map for later use (navigation).
# Save the current map
ros2 run nav2_map_server map_saver_cli -f ~/map
This creates two files:
| File | Contents |
| map.pgm | Grayscale image — each pixel is one grid cell |
| map.yaml | Metadata: resolution (m/pixel), origin, thresholds |
# map.yaml contents:
image: map.pgm
resolution: 0.050000 # 5 cm per pixel
origin: [-10.0, -10.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
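Those metadata fields are what let navigation convert between map pixels and world coordinates. A minimal sketch, assuming the map_server convention that the yaml `origin` is the world position of the image's lower-left corner while PGM rows count down from the top (the image height of 384 pixels is an assumed value):

```python
# Values from map.yaml (resolution, origin) plus an assumed image size
resolution = 0.05        # meters per pixel
origin = (-10.0, -10.0)  # world (x, y) of the map's lower-left corner
image_height = 384       # assumed pgm height in pixels

def pixel_to_world(row, col):
    """World coordinates of the center of pixel (row, col).

    PGM row 0 is the top of the image, but the origin sits at the
    bottom-left, so the row index must be flipped.
    """
    x = origin[0] + (col + 0.5) * resolution
    y = origin[1] + (image_height - row - 0.5) * resolution
    return x, y
```

The `occupied_thresh` / `free_thresh` values are applied to each pixel's gray level to classify cells back into occupied, free, or unknown when the map is reloaded.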
Tips for Building Good Maps
✓ Do
- Drive slowly — fast motion distorts scans
- Cover the entire area methodically
- Return to start for loop closure
- Drive close to walls for clear readings
- Make smooth turns
✗ Don't
- Race through corridors
- Make sharp, jerky turns
- Leave large unknown areas
- Spin in place (confuses odometry)
- Stop mapping before revisiting areas
ROS2 Topics During SLAM
During mapping, multiple nodes communicate through these key topics:
| Topic | Message Type | Purpose |
| /scan | LaserScan | Lidar distance readings |
| /odom | Odometry | Wheel encoder pose estimate |
| /map | OccupancyGrid | The map being built |
| /tf | TFMessage | Coordinate frame transforms |
| /cmd_vel | TwistStamped | Velocity commands (your teleop input) |
# Inspect Cartographer's subscriptions:
ros2 node info /cartographer_node
Textbook Connection: IAR Chapter 13
- Section 13.1–13.2: Map representations — occupancy grids are one of several options (topological maps, feature maps)
- Section 13.3: Probabilistic grid mapping — the Bayesian update rule we discussed
- Section 13.4: SLAM problem formulation — why it's "one of the most important problems in robotics"
Key insight from the textbook: Mapping is fundamentally a probabilistic problem. Sensor noise, odometry drift, and environmental ambiguity mean we can never be 100% certain about any cell. The occupancy grid elegantly encodes this uncertainty.
Summary
| Concept | What It Does |
| Lidar | Measures distances to obstacles in 360° |
| Occupancy Grid | Grid-based map — each cell: occupied, free, or unknown |
| SLAM | Builds map while tracking robot position |
| Scan Matching | Aligns consecutive scans to estimate motion |
| Loop Closure | Corrects drift when revisiting known areas |
| Cartographer | Google's SLAM algorithm — what we use in class |