SLAM Architecture

Aerial to Ground Data Fusion

The aerial-to-ground data fusion module is essentially a computer vision model that matches features between aerial and terrestrial perspectives to determine the error in the vehicle's positioning.
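As an illustrative sketch of the final step, once cross-view features have been matched, the positioning error can be recovered as the rigid transform that best aligns the matched point sets. The function name, the 2D simplification, and the use of the Kabsch/Procrustes method are assumptions for illustration, not the product's actual pipeline:

```python
import numpy as np

def estimate_pose_error(ground_pts, aerial_pts):
    """Estimate the 2D rigid transform (R, t) mapping the vehicle's
    ground-view feature coordinates onto their matched aerial-map
    coordinates. The residual transform is the positioning error.
    Uses the Kabsch/Procrustes method on matched point pairs."""
    g_mean = ground_pts.mean(axis=0)
    a_mean = aerial_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (ground_pts - g_mean).T @ (aerial_pts - a_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = a_mean - R @ g_mean
    return R, t
```

In practice the matched points would come from the learned cross-view feature matcher, and a robust estimator (e.g. RANSAC) would reject outlier matches before alignment.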


Monocular and Stereo Depth

This model derives depth from both monocular and stereo cameras. Ground truth is provided either by synthetic RGB data with corresponding depth shaders, or by a co-mounted camera + LiDAR pair for real-world datasets.
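For the stereo case, the geometric relationship the model learns (or is supervised against) is the standard disparity-to-depth conversion, depth = f · B / d, where f is the focal length in pixels and B is the stereo baseline. A minimal sketch, with the function name and parameters assumed for illustration:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth via
    depth = f * B / d. Zero or negative disparities are treated as
    invalid and mapped to infinity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth
```

For example, with a 700 px focal length and a 10 cm baseline, a 70 px disparity corresponds to a point 1 m away. The same relation is how a co-mounted LiDAR's metric returns can be projected into the camera frame to supervise the network.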


Structure from Motion SLAM

This model uses the current position and a set of depth images to estimate the next relative pose offset, chaining these offsets into a relative trajectory (dead reckoning).
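Dead reckoning amounts to composing the per-step relative offsets into an absolute trajectory. A minimal sketch in 2D (SE(2)), with names and the planar simplification assumed for illustration; a real system would compose full SE(3) transforms:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def integrate(relative_offsets, start=None):
    """Chain per-step relative pose offsets (dx, dy, dtheta) into an
    absolute trajectory by right-multiplying each step's transform
    onto the current pose (dead reckoning)."""
    pose = np.eye(3) if start is None else start
    trajectory = [pose]
    for dx, dy, dtheta in relative_offsets:
        pose = pose @ se2(dx, dy, dtheta)
        trajectory.append(pose)
    return trajectory
```

Because each step is composed onto the previous pose, any per-step error accumulates along the trajectory, which is exactly the drift that the aerial-to-ground fusion stage corrects.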
