Calibration Services provides sensor calibration for autonomous vehicles. Using our proprietary, sensor-agnostic calibration service, we deliver a vehicle configuration file that describes the vehicle frame of reference and the location of each sensor with respect to that frame. We co-mount our LiDAR sensor with your sensor stack to create a shared perspective with your sensor systems. By co-registering features across different sensor modalities, we compute the transformation between each sensor's frame of reference and the vehicle frame of reference. The result is a boresighting config, delivered alongside a virtual 3D representation of the sensor configuration.
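The core of that computation is a rigid-body transform from a sensor frame into the vehicle frame. A minimal numeric sketch, using a hypothetical 4x4 extrinsic matrix (the rotation and translation values below are illustrative, not an actual calibration result):

```python
import numpy as np

# Hypothetical extrinsic: sensor frame -> vehicle frame.
# Rotation: 90-degree yaw about the vehicle z-axis; translation: sensor
# mounted 1.2 m forward of and 1.5 m above the vehicle origin.
theta = np.pi / 2
T_vehicle_sensor = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 1.2],
    [np.sin(theta),  np.cos(theta), 0.0, 0.0],
    [0.0,            0.0,           1.0, 1.5],
    [0.0,            0.0,           0.0, 1.0],
])

def sensor_to_vehicle(points_sensor: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) array of sensor-frame points into the vehicle frame."""
    n = points_sensor.shape[0]
    homogeneous = np.hstack([points_sensor, np.ones((n, 1))])  # (N, 4)
    return (T_vehicle_sensor @ homogeneous.T).T[:, :3]

# A point 1 m straight ahead of the sensor ends up at roughly
# (1.2, 1.0, 1.5) in the vehicle frame.
print(sensor_to_vehicle(np.array([[1.0, 0.0, 0.0]])))
```

The same matrix form works for every sensor on the vehicle; calibration is the process of finding each sensor's matrix.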

How it works

We first capture a point cloud with the LiDAR sensor mounted on the vehicle. The point cloud data is then loaded into Unity as a particle object.
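The Unity loading itself happens in C#, but the file-reading step is generic. A sketch of parsing a plain ASCII XYZ point-cloud file (the file format here is an assumption; LiDAR vendors use various formats):

```python
import io

def load_xyz(stream):
    """Parse a plain ASCII XYZ point-cloud stream into a list of (x, y, z) tuples."""
    points = []
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        x, y, z = map(float, line.split()[:3])
        points.append((x, y, z))
    return points

sample = io.StringIO("# x y z\n0.0 1.0 2.0\n3.0 4.0 5.0\n")
print(load_xyz(sample))  # → [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
```

Each parsed point becomes one particle in the Unity scene.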


Once the point objects are loaded into the Unity scene, we use Unity's Projector class.


Most game engines use this primitive to project patterns onto 3D objects in the game scene. We use the same mechanism to project the RGB data from the camera onto the 3D points of the point cloud. You can then adjust the camera perspective: horizontal field of view, vertical field of view, distortion matrices, and so on. The benefit of this approach is that the game engine gives real-time feedback on the quality of the boresighting parameters in the config, which makes it straightforward to spatially align sensors by modifying the boresighting configuration. Even if a sensor is tilted in pitch or roll, the Unity game engine allows you to define camera perspectives with full six degrees of freedom (6DoF).
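Outside the game engine, the same camera projection can be checked numerically with a standard pinhole model. A sketch assuming hypothetical intrinsics (focal lengths fx, fy and principal point cx, cy) and no lens distortion:

```python
import numpy as np

# Hypothetical pinhole intrinsics for a 1280x720 camera.
fx, fy = 1000.0, 1000.0
cx, cy = 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(points_camera: np.ndarray) -> np.ndarray:
    """Project (N, 3) camera-frame points (z pointing forward) to (N, 2) pixels."""
    uvw = (K @ points_camera.T).T          # (N, 3) homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# A point 5 m ahead and 0.5 m to the right lands right of the image center.
print(project(np.array([[0.5, 0.0, 5.0]])))  # → [[740. 360.]]
```

Adjusting field of view in the Unity camera corresponds to changing fx and fy here; a real setup would also apply distortion coefficients before the perspective divide.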

By fusing the RGB image and the point cloud, we can superimpose the picture on the point cloud to validate the boresighting configuration.

Each camera’s perspective can be copied into the boresighting config as intrinsic and extrinsic parameters.
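A boresighting config of this kind might look like the following JSON sketch (the field names, structure, and values are hypothetical, not the actual delivered format):

```json
{
  "vehicle_frame": "rear_axle_center",
  "sensors": [
    {
      "name": "front_camera",
      "type": "camera",
      "intrinsics": {
        "fx": 1000.0, "fy": 1000.0,
        "cx": 640.0, "cy": 360.0,
        "distortion": [0.0, 0.0, 0.0, 0.0, 0.0]
      },
      "extrinsics": {
        "translation_m": [1.2, 0.0, 1.5],
        "rotation_rpy_deg": [0.0, 0.0, 90.0]
      }
    }
  ]
}
```

The intrinsics describe the camera's internal projection; the extrinsics place the camera in the vehicle frame of reference.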