Demo Day

Drive Like Humans Do

Enable 95% of roadways to become navigable with Hyperspec AI

3 MONTH ROADMAP

In the next 90 days, Hyperspec AI will release a prototype that demonstrates aerial-to-ground localization alongside a structure-from-motion SLAM pipeline that leverages 360° fisheye cameras.

6 MONTH ROADMAP

Hyperspec AI is focused on building an edge-computing hardware stack that drastically accelerates image processing to enable high-frame-rate processing of SLAM imagery.

AERIAL 2 GROUND

Aerial imagery, both oblique and orthographic, is available almost everywhere. This makes it an ideal data source for drift correction in autonomous navigation.

FISHEYE DEPTH PREDICTION

The fisheye camera lens introduces heavy distortion. We have trained depth prediction models specifically for fisheye lens cameras.
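To illustrate why fisheye lenses need dedicated models, here is a minimal sketch comparing how a standard pinhole projection and the equidistant fisheye model (one common fisheye approximation; the exact model Hyperspec uses is not stated here) map the same incoming ray onto the image plane:

```python
import numpy as np

def pinhole_radius(theta, f):
    """Image radius (px) of a ray at angle theta under a pinhole model: r = f * tan(theta)."""
    return f * np.tan(theta)

def equidistant_fisheye_radius(theta, f):
    """Image radius under the equidistant fisheye model: r = f * theta."""
    return f * theta

# A ray 60 degrees off the optical axis, with a 300 px focal length.
theta = np.deg2rad(60.0)
f = 300.0
print(pinhole_radius(theta, f))              # pinhole stretches the periphery
print(equidistant_fisheye_radius(theta, f))  # fisheye compresses it toward the center
```

The gap between the two radii grows rapidly toward the edge of the field of view, which is why depth networks trained on rectilinear imagery degrade on raw fisheye frames.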

SFM SLAM PIPELINE

The signal from sequential sensor frames can be used to produce a relative, dead-reckoning-based trajectory.

There is a silent technical revolution happening in Autonomy

We enable car manufacturers to unlock 95% of roadways for Self Driving

LIMITED MAP COVERAGE BOTTLENECKS INDUSTRY

Today, 97% of roadways are inaccessible to autonomous vehicles because of the limited coverage of high-definition (HD) maps. Cities like Phoenix, Mountain View, and San Francisco are mapped with only a very limited footprint; the rest of the road network is untouched, and autonomous vehicles do not function outside the boundaries of the HD map.

PRE-CODED CONTEXT LIMITS VEHICLES

The vehicles lack the ability to see, think, and act for themselves; instead, they rely on high-definition maps to inherit pre-encoded context about the environment. This architecture cannot cope with stale map information or unplanned obstructions.

GEOFENCED AREAS

This means that outside the boundaries of these high definition maps, the autonomous vehicle will simply not function.

NON-SCALABLE DEPLOYMENT

Existing mapping suppliers are struggling to scale their operations due to high costs and slow turnaround times.

TECHNICAL DOCUMENT

Our proprietary and patented Vision SDK compresses technology roadmaps

The race for autonomy is on. Do you have the best technology to compete? Learn how car manufacturers are accelerating their product roadmaps by working with Hyperspec AI.

KPIs

Autonomous vehicles are unblocked to travel on 95% of roadways thanks to their ability to precisely position and navigate themselves without prior high-definition maps.

Test Drive Data

Drift Rate (Dead Reckoning)

Global Positioning Accuracy

Aerial to Ground Data Fusion

Aerial-to-ground data fusion is essentially a computer vision model that matches features between aerial and terrestrial perspectives to determine the error in the vehicle's position.
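As a minimal sketch of the correction step described above (the feature matching itself is omitted, and the landmark coordinates are hypothetical): once features seen from the ground are matched to their known positions in the aerial map, the drift in the vehicle's estimate can be recovered as a least-squares translation between the two point sets.

```python
import numpy as np

# Hypothetical matched landmarks: true positions from aerial imagery (meters)
# and the same landmarks as placed by the vehicle's drifting odometry.
map_pts  = np.array([[10.0, 5.0], [12.0, 9.0], [15.0, 4.0]])
odom_pts = map_pts + np.array([1.5, -0.8])   # odometry has drifted by (1.5, -0.8)

def position_error(map_xy, odom_xy):
    """Least-squares 2D translation between matched aerial/ground features."""
    return (odom_xy - map_xy).mean(axis=0)

drift = position_error(map_pts, odom_pts)
corrected = odom_pts - drift   # snap the odometry estimate back onto the aerial map
```

A production system would also estimate rotation and reject outlier matches (e.g. with RANSAC), but the translation-only case shows the core idea: aerial features act as a global anchor that bounds dead-reckoning drift.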

Monocular and Stereo Depth

This model derives depth from both monocular and stereo cameras. Ground truth for the model is provided either by synthetic RGB data with corresponding depth shaders or by a co-mounted camera + LiDAR pair for real-world datasets.
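For the stereo case, the geometric relationship a depth model learns (or is supervised against) is the standard disparity-to-depth equation. A minimal sketch, with an illustrative focal length and baseline rather than Hyperspec's actual rig parameters:

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d (rectified cameras)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Example rig: 700 px focal length, 12 cm baseline.
# A 20 px disparity corresponds to 700 * 0.12 / 20 = 4.2 m of depth.
print(stereo_depth(20.0, 700.0, 0.12))
```

The inverse relationship between disparity and depth is why stereo accuracy falls off quadratically with range, and why LiDAR ground truth is valuable for supervising distant points.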

Structure from Motion SLAM

This model uses the current position and a set of depth images to determine the next relative position offset, producing a relative trajectory (dead reckoning).
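The chaining of relative offsets into a trajectory can be sketched as planar (SE(2)) pose composition; the per-frame offsets below are hypothetical stand-ins for what the SfM front end would estimate:

```python
import numpy as np

def compose(pose, delta):
    """Chain a relative offset (dx, dy, dtheta), expressed in the vehicle frame,
    onto a global SE(2) pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Hypothetical frame-to-frame offsets: drive 1 m, turn left 90 deg, drive 1 m more.
offsets = [(1.0, 0.0, 0.0), (1.0, 0.0, np.pi / 2), (1.0, 0.0, 0.0)]
trajectory = [(0.0, 0.0, 0.0)]
for d in offsets:
    trajectory.append(compose(trajectory[-1], d))
```

Because each offset carries a small error, the composed trajectory drifts over time, which is exactly the error the aerial-to-ground fusion step is designed to correct.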

Online Map Learning

Create real-time maps using surround cameras and LiDAR to replace static, hand-coded HD maps.

Contact us

Schedule a 15-minute call with the Founder