HD Maps vs Real-Time Maps

High Definition (HD) maps are digital representations of the physical environment, typically used for autonomous vehicle navigation. They are a critical component of autonomous vehicle systems, as they provide precise, up-to-date information about the location and layout of the roads, lanes, intersections, and other features of the environment. HD maps are typically structured into several…

MLOps and Regression Testing

Introduction: 2D and 3D annotation tools are commonly used in a variety of applications, including computer vision, video analysis, and robotics. These tools allow human annotators or machine learning frameworks to label and classify objects or events in images or videos, providing important information for training and improving machine learning models. However, manual annotation can…

RoadMentor Ground Truthing (Localization)

Introduction: RoadMentor’s cloud-based aerial-to-ground fusion technology is a revolutionary approach to improving the accuracy of ego-location in vehicles. By combining data from onboard sensors with airborne imagery and point clouds, RoadMentor’s technology is able to significantly improve the localization of ego-vehicles. This white paper will provide an overview of the RoadMentor technology and…

Tesla vs Waymo School of Thought

Tesla and Waymo are two of the leading companies in the field of autonomous vehicles, and their approaches to achieving autonomy differ in some key ways. While both companies have made significant progress in developing autonomous driving technology, we believe that Tesla’s approach is ultimately the better one. Here’s why: Tesla is focused on deploying…

Airflow and MLOps

Introduction: In the field of autonomous vehicles, data plays a crucial role in the development and deployment of machine learning models. These models must be trained on large datasets and constantly updated with new data in order to maintain their accuracy and performance. However, the process of collecting, cleaning, and preparing data for machine learning…

Neural Radiance Fields

Neural radiance fields (NeRFs) are a new and innovative approach to perception in autonomous vehicles that is changing the game for self-driving technology. NeRFs are a type of machine learning model that can be used to understand and interpret the environment around an autonomous vehicle, enabling it to navigate and operate safely. One of the…

The Silicon Valley Advantage

Autonomous driving is a rapidly advancing field with the potential to revolutionize the transportation industry. As such, it is important for companies working in this space to have access to the best talent in order to stay competitive. But where can these companies find the top talent in the field of autonomous driving? One region…

Hyperspec RoadMentor

Hyperspec AI’s RoadMentor technology is revolutionizing the way that machine learning (ML) models are verified and validated. Rather than relying on traditional methods that focus on average performance, RoadMentor concentrates on edge cases and corner cases to conquer the long tail of ML model performance. One of the main challenges of ML model deployment is…

Robotaxi vs ADAS

Advanced driver assistance systems (ADAS) and robotaxis are two different approaches to self-driving technology. While ADAS systems are designed to assist human drivers, robotaxis are fully autonomous vehicles that do not require a human driver. One company that has embraced the ADAS approach is Tesla. The company has gradually developed its self-driving capabilities through incremental…

Scaling Point Cloud Data

Point cloud data is a type of 3D data that represents the surface of an object or environment as a set of discrete points in 3D space. It is often used in applications such as 3D scanning, virtual reality, and computer vision. While point cloud data can be very useful and versatile, it can also…

Product Guide

Self-driving cars are going to drive the next industrial revolution by freeing up valuable human resources and by democratizing mobility and accessibility. The technology is promising and has tremendous scope. However, key bottlenecks must be addressed for these systems to become economically viable. One of…

Scene Segmentation

The scene segmentation module is used to semantically describe the pixels in image data. It is useful for things like free space detection, object recognition, cross-view localization, image filtering based on label class, compression, etc.
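Free-space detection and label-class filtering on a per-pixel label image can be sketched in a few lines of NumPy. The class IDs below (0 = road, 1 = vehicle, 2 = pedestrian) are hypothetical, not the module's actual label map:

```python
import numpy as np

# Hypothetical label map; real class IDs depend on the model's training set.
ROAD, VEHICLE, PEDESTRIAN = 0, 1, 2

def free_space_mask(seg: np.ndarray) -> np.ndarray:
    """Boolean mask of drivable pixels from a per-pixel label image."""
    return seg == ROAD

def filter_classes(seg: np.ndarray, keep: set) -> np.ndarray:
    """Keep only pixels whose label is in `keep`; mark the rest 255 (ignore)."""
    out = np.full_like(seg, 255)
    mask = np.isin(seg, list(keep))
    out[mask] = seg[mask]
    return out

seg = np.array([[0, 0, 1],
                [0, 2, 1]])
print(free_space_mask(seg).sum())  # 3 drivable pixels
```

The same masking idea extends to compression and image filtering: pixels outside the label classes of interest can be dropped or coded at lower fidelity.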


Hear from our CEO: 10-minute pitch

Data Augmentation

We can take your existing video data and extract relevant information. Currently, we support the following:
1. Object Recognition
2. Visual Odometry
3. Scene Segmentation
4. Depth Estimation

Data Processing

We support media transcoding in the cloud: you can easily convert MPEG-2 video to H.264 MP4, and HLS streams are supported as well. Additionally, we can remove personally identifiable information from video streams; license plates are scrubbed and faces are blurred in the output.
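A transcode job like this is commonly a wrapper around the ffmpeg CLI. The sketch below only builds the argument list; the flags shown are standard ffmpeg options, but the wrapper function itself is hypothetical:

```python
import subprocess

def build_transcode_cmd(src: str, dst: str) -> list:
    """Build an ffmpeg command that converts MPEG-2 input to H.264 MP4."""
    return [
        "ffmpeg",
        "-i", src,                  # e.g. an MPEG-2 .mpg or .ts capture
        "-c:v", "libx264",          # re-encode video as H.264
        "-c:a", "aac",              # AAC audio for MP4 compatibility
        "-movflags", "+faststart",  # relocate moov atom for streaming playback
        dst,
    ]

cmd = build_transcode_cmd("drive.mpg", "drive.mp4")
# subprocess.run(cmd, check=True)  # uncomment on a machine with ffmpeg installed
```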

Data Collection

We provide data collection services to researchers, academics, engineers, and self-driving companies using our pre-configured sensor system. The deliverable includes a 4K or 8K equirectangular image with 360° HFoV x 180° VFoV @ 30 FPS, along with time-synchronized GPS trace data. We charge $20 per km for each direction of data collection. We…
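In an equirectangular frame, each pixel maps linearly to a viewing direction: the column maps to yaw across the 360° HFoV and the row to pitch across the 180° VFoV. A minimal sketch of that mapping, using illustrative 4K dimensions (3840 x 1920 for a 2:1 equirectangular image):

```python
def pixel_to_angles(u, v, width=3840, height=1920):
    """Map an equirectangular pixel (u, v) to (yaw, pitch) in degrees.

    yaw spans [-180, 180) left to right; pitch spans [90, -90] top to bottom.
    """
    yaw = (u / width) * 360.0 - 180.0
    pitch = 90.0 - (v / height) * 180.0
    return yaw, pitch

# The image center looks straight ahead at the horizon.
print(pixel_to_angles(1920, 960))  # (0.0, 0.0)
```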


Retrofit Services

We provide retrofit services by integrating with different vehicle form factors, including custom form-factor integration of our sensors. Here are a couple of example projects we have done for clients.

Calibration Services

We offer calibration services for autonomous vehicles. Using our proprietary, sensor-agnostic calibration service, we provide a vehicle configuration file that describes the vehicle frame of reference and the location of each sensor with respect to that frame. We co-mount our LiDAR sensor with your sensor stack to create a shared perspective with your sensor…
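A vehicle configuration file of this kind typically stores, per sensor, a rigid transform into the vehicle frame. A minimal sketch of applying such a transform with homogeneous matrices (the sensor pose values are made up for illustration):

```python
import numpy as np

def make_transform(yaw_deg: float, x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous transform: rotation about z (yaw) plus translation."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical config entry: LiDAR mounted 1.0 m forward of and 1.5 m above
# the vehicle frame origin, facing forward.
T_vehicle_lidar = make_transform(yaw_deg=0.0, x=1.0, y=0.0, z=1.5)

# A point observed 2 m ahead of the LiDAR, re-expressed in the vehicle frame:
p_lidar = np.array([2.0, 0.0, 0.0, 1.0])
p_vehicle = T_vehicle_lidar @ p_lidar  # x = 3.0, y = 0.0, z = 1.5
```

Chaining these transforms is what lets measurements from every sensor be fused in one shared vehicle frame of reference.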

Sensor Kits

Product Description

Part | Description
4x Hyperspec 12MP HDMI/CSI Cameras | 4K 60 FPS cameras with HDR
1x Hyperspec Edge Computer | NVIDIA Xavier NX Module w/ 4x camera ports, 1x PCIe 4X interface, 1x Gigabit interface, 9Vdc-36Vdc input power on carrier board with 256GB microSD card
4x 1 meter microHDMI Cables | Camera interface cables
1x 19Vdc Power Supply…


The visualizer allows you to explore our datasets through time and space. You can navigate the data as a time series. Geospatial indexing is coming soon, and the ability to annotate and query will be added as well.

Object Recognition

The object detection pipeline is supported both in the cloud and on the edge using a Dockerized container. It assumes you are running on an NVIDIA CUDA-enabled machine.

SLAM

Our SLAM algorithm operates without GPS or an IMU.
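Without GPS or an IMU, odometry-style SLAM composes frame-to-frame relative pose estimates into a trajectory. A minimal 2D sketch of that dead-reckoning step (the motion values are made up):

```python
import math

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion (dx, dy, dtheta)
    expressed in the current body frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Dead-reckon a trajectory from per-frame relative pose estimates alone.
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, delta)
print(pose)  # ends near (2.0, 1.0, pi/2)
```

In a full SLAM system, loop closures and map constraints correct the drift that pure composition accumulates.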

Hyperspec 12MP CSI Camera

Feature | Specification
Resolution | 12.3 MP CMOS
Lens Diagonal | 7.857mm, Type (1/2.3)
Back Illumination | Y
Stacked CMOS Sensor | Y
High Dynamic Range | Y
High SNR | Y
Resolution | 4K2K @ 60 FPS, 1080P @ 240 FPS
MIPI | 2 Lane / 4 Lane
2 Wire Serial Compliant | Y
Fast Mode Transition | Y
Defect Pixel Correction | Y
Sensor Synchronization…

Hyperspec Edge Computer

Hyperspec Edge Computer is an embedded system-on-module that includes an integrated 384-core Volta GPU with Tensor Cores, dual Deep Learning Accelerators (DLAs), a 6-core NVIDIA Carmel ARMv8.2 CPU, 8GB of 128-bit LPDDR4x with 51.2GB/s of memory bandwidth, hardware video codecs, and high-speed I/O including PCIe Gen 3/4, 14 camera lanes of MIPI CSI-2, and USB 3.1.

Useful for deploying computer vision and deep learning at the edge, Hyperspec Edge Computer runs Linux and provides up to 21 TeraOPS (TOPS) of compute performance in user-configurable 10W/15W power profiles.