AI Seminar

Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments

Johannes Strom

This thesis investigates some of the sensing and perception challenges faced by
multi-robot teams equipped with LIDAR and camera sensors. Multi-robot teams
are ideal for deployment in large, real-world environments due to their ability to
parallelize exploration, reconnaissance or mapping tasks. However, such domains
also impose additional requirements, including the need for a) online algorithms (to
eliminate stopping and waiting for processing to finish before proceeding) and b)
scalability (to handle data from many robots distributed over a large area). These
general requirements give rise to specific algorithmic challenges, including 1) online
maintenance of large, coherent maps covering the explored area, 2) online estimation of
communication properties in the presence of buildings and other interfering structures,
and 3) online fusion and segmentation of multiple sensors to aid in object detection.
The contribution of this thesis is the introduction of novel approaches that leverage
grid-maps and sparse multivariate Gaussian inference to augment the capability of
multi-robot teams operating in urban, indoor-outdoor environments by improving the
state of the art of map rasterization, signal strength prediction, colored point cloud
segmentation, and reliable camera calibration.
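As background for the grid-map machinery mentioned above: a LIDAR occupancy grid typically maintains a per-cell log-odds estimate of occupancy, updated as beams hit or pass through cells. The sketch below is the standard Bayesian log-odds update, shown only as a minimal illustration of the map representation; the sensor-model constants are assumed values, not parameters from the thesis.

```python
import math

# Illustrative inverse-sensor-model constants (assumed, not from the thesis).
L_OCC = math.log(0.7 / 0.3)   # log-odds increment when a LIDAR return hits a cell
L_FREE = math.log(0.3 / 0.7)  # log-odds decrement when a beam passes through a cell

def update_cell(log_odds, hit):
    """Standard Bayesian log-odds occupancy update for one grid cell."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy_probability(log_odds):
    """Recover P(occupied) from the accumulated log-odds."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: a cell observed as occupied twice and as free once.
l = 0.0
for hit in (True, True, False):
    l = update_cell(l, hit)
```

Because each update is a constant-time addition per cell, this representation composes naturally with the online, multi-robot rasterization setting described in the abstract.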
In particular, we introduce a map rasterization technique for large LIDAR-based
occupancy grids that makes online updates possible when data arrives from many
robots at once. We also introduce new online techniques for robots to predict the
signal strength to their teammates by combining LIDAR measurements with signal
strength measurements from their radios. Processing fused LIDAR+camera point
clouds is also important for many object-detection pipelines. We demonstrate a
near-linear-time online segmentation algorithm in this domain. However, maintaining
the calibration of a fleet of 14 robots made this approach difficult to employ in
practice. Therefore, we introduce a robust and repeatable camera calibration process
that grounds the camera model uncertainty in pixel error, allowing the system to guide
grounds the camera model uncertainty in pixel error, allowing the system to guide
novices and experts alike to reliably produce accurate calibrations.
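The "near-linear-time" segmentation claim is characteristic of graph-based methods whose cost is dominated by union-find with path compression, which runs in near-linear total time in the number of points. The sketch below is a generic union-find segmentation over a point neighborhood graph, not the thesis's specific algorithm; the similarity predicate and the toy 1-D "color" values are illustrative assumptions.

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size;
    total cost is near-linear in the number of operations."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def segment(points, neighbors, similar):
    """Merge neighboring points judged similar; returns one root label per point.
    `similar` is a hypothetical predicate (e.g. color/normal proximity)."""
    uf = UnionFind(len(points))
    for i, j in neighbors:
        if similar(points[i], points[j]):
            uf.union(i, j)
    return [uf.find(i) for i in range(len(points))]

# Toy example: scalar "colors"; neighbors merge when their values are close.
points = [0.0, 0.1, 0.15, 5.0, 5.05]
neighbors = [(0, 1), (1, 2), (2, 3), (3, 4)]
labels = segment(points, neighbors, lambda p, q: abs(p - q) < 0.5)
```

In a colored point cloud, the same structure applies with 3-D points carrying RGB values from the fused LIDAR+camera data and a similarity test over both geometry and color.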

Sponsored by

Professor Edwin Olson