Hyperspec Perception Stack

Our Perception Stack is a collection of self-teaching machine learning models powered by accelerated edge computing. Hyperspec enables the vehicle to construct and contextualize the scene in real time and self-localize without a reference map.

Aerial to Ground

Patent-pending AI models match features between aerial and terrestrial perspectives to determine the error in the vehicle’s positioning, enabling localization without an HD map.
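The idea can be illustrated with a minimal sketch: once cross-view feature correspondences are available, the positioning error is the offset that aligns the ground-observed features with their aerial counterparts. This is a hypothetical example; the actual models, feature descriptors, and matching pipeline are not described in the source.

```python
import numpy as np

def estimate_position_error(aerial_pts, ground_pts):
    """Estimate the 2D offset that aligns ground-observed feature
    positions with their aerial-map counterparts.

    Hypothetical sketch: a real aerial-to-ground system would use
    learned cross-view descriptors to find the correspondences;
    here they are assumed to be given and already matched.
    """
    # The positioning error is the mean offset between matched features.
    return (aerial_pts - ground_pts).mean(axis=0)

# Toy correspondences: the vehicle's estimate is off by (2.0, -1.0) m.
aerial = np.array([[10.0, 5.0], [20.0, 8.0], [15.0, 12.0]])
ground = aerial - np.array([2.0, -1.0])
correction = estimate_position_error(aerial, ground)
```

In practice the correction would also include rotation and be fed back into the vehicle's pose estimate; the translation-only case keeps the sketch minimal.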

Vision SLAM

Depth, ground truthing, and relative trajectory are determined from sensor fusion and a suite of AI models designed to create city-scale maps in real time.
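At its core, recovering a relative trajectory means chaining per-frame motion estimates into a global path. The sketch below shows that pose-composition step in 2D under assumed, idealized odometry increments; a real Vision SLAM system fuses multiple sensors and optimizes the whole trajectory rather than dead-reckoning like this.

```python
import math

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the vehicle's body frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Chain per-frame odometry increments into a trajectory:
# drive a 1 m square (forward 1 m, then turn 90 degrees, four times).
pose = (0.0, 0.0, 0.0)
trajectory = [pose]
for delta in [(1.0, 0.0, math.pi / 2)] * 4:
    pose = compose(pose, delta)
    trajectory.append(pose)
# The vehicle returns to the origin, having traced the square.
```

Because raw composition accumulates drift, SLAM systems add loop-closure constraints and global optimization on top of this chaining step.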

Real-time Scene Construction

Replace map latency with a fast edge-computing stack and create borderless, ubiquitous autonomy without disengagements.

Sensor Kit

The Hyperspec Sensor Kit is a standalone unit that can quickly collect sensor data for ground truthing and annotation purposes. It is optimized to work with our annotation tools and, when hardware-accelerated, can perform live processing on minimal battery power.

AI Training Tools

The Hyperspec Annotation Tools are highly scalable and versatile, with uses ranging from mapping to ground truthing.