Step 1) Upload ROS bag data
To begin the calibration process, upload the ROS bag data containing the raw sensor information. This data should include the camera images, LiDAR point clouds, and GNSS timestamps. You can use the ROS bag format to store and manipulate the data from different sensors.
Step 2) Decode ROS bag data into individual sensor frames
Once you have the ROS bag data, you need to decode it into individual sensor frames. You can use the rosbag API in Python to read each topic and extract the per-frame messages (with a tool such as cv_bridge to convert image messages into usable arrays).
Step 3) Time synchronize the ROS bag data based on GNSS time
To ensure accurate calibration, synchronize the sensor data based on the GNSS timestamps. This step helps align the data from different sensors
to the same time instances. You can achieve this by iterating through the sensor data and interpolating or resampling the data points to match the GNSS time. Libraries like
rosbag in Python can be useful for handling the time synchronization process.
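One simple way to implement this is nearest-timestamp matching: for each GNSS tick, pick the sensor frame closest in time. The sketch below assumes the decoded frames are already available as (timestamp, payload) tuples sorted by time; the data layout is hypothetical.

```python
import bisect

def sync_to_gnss(gnss_times, sensor_frames):
    """Match each GNSS timestamp to the nearest sensor frame.

    gnss_times: sorted list of GNSS timestamps in seconds.
    sensor_frames: list of (timestamp, payload) tuples, sorted by timestamp.
    Returns a list of (gnss_time, payload) pairs.
    """
    frame_times = [t for t, _ in sensor_frames]
    synced = []
    for t in gnss_times:
        i = bisect.bisect_left(frame_times, t)
        # Consider the frames on either side of the insertion point and
        # keep whichever is closer in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
        best = min(candidates, key=lambda j: abs(frame_times[j] - t))
        synced.append((t, sensor_frames[best][1]))
    return synced

# Example: camera frames at ~10 Hz matched against 1 Hz GNSS ticks.
frames = [(0.02, "f0"), (0.11, "f1"), (0.98, "f2"), (1.05, "f3")]
synced = sync_to_gnss([0.0, 1.0], frames)  # [(0.0, "f0"), (1.0, "f2")]
```

If your sensor rates are much lower than the GNSS rate, interpolating between neighbouring frames (rather than snapping to the nearest one) gives smoother results.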
Step 4) Undistort the camera image using Fisheye GL
Fisheye GL is a WebGL library that enables the correction of lens distortion, particularly for fisheye lenses. To undistort the camera images, apply the Fisheye GL transformations using the camera’s intrinsic parameters. This step helps obtain a rectilinear image, which is crucial for accurate LiDAR projection mapping.
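Fisheye GL does this correction on the GPU, but the underlying per-pixel math can be sketched in Python. The version below assumes the common equidistant fisheye model (r = f·θ) and maps a distorted pixel to its rectilinear position; your lens may follow a different model, so treat this as illustrative only.

```python
import math

def undistort_point(u, v, fx, fy, cx, cy):
    """Map one pixel from an equidistant fisheye image to rectilinear coords.

    (fx, fy) are focal lengths and (cx, cy) the principal point from the
    camera's intrinsic parameters. Assumes the equidistant model r_d = f * theta.
    """
    x = (u - cx) / fx
    y = (v - cy) / fy
    r_d = math.hypot(x, y)      # normalized distorted radius
    if r_d == 0:
        return u, v             # the principal point maps to itself
    theta = r_d                 # under the equidistant model, radius equals theta
    r_u = math.tan(theta)       # rectilinear radius for the same viewing ray
    scale = r_u / r_d
    return cx + x * scale * fx, cy + y * scale * fy
```

In practice you apply this mapping (or its inverse) to every pixel, which is exactly the kind of embarrassingly parallel work that Fisheye GL pushes to a WebGL shader.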
Step 5) Convert point cloud from Cartesian to spherical coordinates
To project the LiDAR point cloud onto the camera image, you need to convert the point cloud data from Cartesian to spherical coordinates. This transformation allows for better alignment of the LiDAR points with the camera perspective. You can use standard trigonometric formulas or dedicated libraries for this conversion.
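The conversion itself is a few lines of trigonometry. The sketch below assumes a common LiDAR axis convention (x forward, y left, z up); adjust the formulas to match your sensor frame.

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a LiDAR point to (range, theta, phi).

    theta: azimuth (yaw) measured in the x-y plane.
    phi:   elevation (pitch) above the x-y plane.
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)                 # azimuth
    phi = math.asin(z / r) if r else 0.0     # elevation
    return r, theta, phi
```

For whole point clouds, the same formulas vectorize directly with NumPy, which is worth doing once you are processing full scans per frame.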
Step 6) Project LiDAR points onto camera image from that specific timestamp
After converting the LiDAR point cloud to spherical coordinates, project the points onto the undistorted camera image. You can achieve this by calculating the corresponding pixel coordinates for each LiDAR point, considering the camera’s extrinsic parameters and field of view.
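A minimal projection sketch, assuming a simple linear angle-to-pixel mapping across the camera's field of view (a reasonable approximation for an undistorted image; the exact mapping and sign conventions depend on your camera model and frame definitions):

```python
import math

def spherical_to_pixel(theta, phi, width, height, h_fov, v_fov):
    """Project spherical LiDAR angles (radians) onto the image plane.

    h_fov / v_fov are the camera's horizontal and vertical fields of view.
    Returns (u, v) pixel coordinates, or None if the point falls outside
    the camera's field of view.
    """
    u = (0.5 - theta / h_fov) * width    # azimuth -> column
    v = (0.5 - phi / v_fov) * height     # elevation -> row
    if 0 <= u < width and 0 <= v < height:
        return u, v
    return None

# A point straight ahead (theta = phi = 0) lands at the image centre.
center = spherical_to_pixel(0.0, 0.0, 640, 480, math.radians(90), math.radians(60))
```

Points that return None are simply behind or beside the camera and should be skipped during rendering.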
Step 7) Adjust spherical coordinates with transformations added to theta and phi representing pitch and yaw
Once you have projected the LiDAR points onto the camera image, you may notice misalignment between the data. To correct this, adjust the pitch and yaw angles in the spherical coordinates, and reproject the points until a satisfactory alignment is achieved.
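The adjustment itself is a small correction applied before reprojection. The sketch below adds the offsets, wraps the azimuth back into (-π, π], and clamps the elevation; note that adding offsets directly to theta and phi is a first-order approximation that works well for small corrections, while large misalignments call for a full rotation matrix.

```python
import math

def adjust_angles(theta, phi, yaw_offset, pitch_offset):
    """Apply yaw/pitch corrections (radians) to spherical LiDAR angles."""
    theta = theta + yaw_offset
    # Wrap azimuth back into (-pi, pi] so the projection stays consistent.
    theta = (theta + math.pi) % (2 * math.pi) - math.pi
    # Clamp elevation to its valid range.
    phi = max(-math.pi / 2, min(math.pi / 2, phi + pitch_offset))
    return theta, phi
```

Reprojecting after each adjustment and eyeballing the overlay is exactly the feedback loop the later steps automate.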
Step 8) Adjust the camera aspect ratio to match the scaling of the LiDAR data projection
To ensure proper scaling between the camera and LiDAR data, adjust the camera’s aspect ratio accordingly. This step may involve changing the camera’s intrinsic parameters or rescaling the image canvas.
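When the fix is done by rescaling the projection rather than editing intrinsics, it amounts to a linear remap of the projected pixel coordinates onto the target canvas size, as in this small sketch:

```python
def rescale_projection(u, v, src_w, src_h, dst_w, dst_h):
    """Linearly rescale a projected LiDAR pixel coordinate from the
    source projection size onto a canvas matching the camera image."""
    return u * dst_w / src_w, v * dst_h / src_h

# Example: remap a point from a 640x480 projection onto a 1280x720 canvas.
scaled = rescale_projection(320, 240, 640, 480, 1280, 720)  # (640.0, 360.0)
```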
Step 9) Iteratively adjust pitch, yaw, and the intrinsic camera parameters until LiDAR + Camera data align
Through an iterative process, fine-tune the pitch, yaw, and camera parameters until the LiDAR and camera data are correctly aligned. This alignment is crucial for accurate sensor fusion and interpretation of the combined data.
Rendering multiple canvases for real-time feedback:
Rendering the camera image and the LiDAR projection on separate, stacked HTML canvases provides immediate visual feedback. Use the globalCompositeOperation property to control the blending of the layers, allowing users to see the alignment visually and make the necessary adjustments in real time.
Automating the calibration process:
While the manual process is effective, automating the calibration process can save time and reduce human error. Machine learning techniques, such as optimization algorithms or deep learning models, can be employed to automatically find the best set of parameters for aligning the LiDAR and camera data. By feeding the algorithm both the raw sensor data and the desired output, the system can learn to adjust the parameters iteratively and optimize the alignment.
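As a toy illustration of the idea, the sketch below brute-force searches a grid of yaw/pitch offsets against a synthetic angular-error cost. The cost function here is a hypothetical stand-in for a real photometric or edge-based alignment score, and a gradient-based or learned optimizer would replace the grid search in practice.

```python
import itertools

def alignment_error(yaw, pitch, lidar_angles, feature_angles):
    """Mean squared angular error between offset-corrected LiDAR directions
    and matched image-feature directions (a stand-in for a real cost)."""
    err = 0.0
    for (t, p), (ft, fp) in zip(lidar_angles, feature_angles):
        err += (t + yaw - ft) ** 2 + (p + pitch - fp) ** 2
    return err / len(lidar_angles)

def grid_search(lidar_angles, feature_angles, span=0.1, steps=21):
    """Exhaustively test yaw/pitch offsets in [-span, span] and return
    the pair that minimizes the alignment error."""
    grid = [-span + 2 * span * i / (steps - 1) for i in range(steps)]
    return min(
        itertools.product(grid, grid),
        key=lambda yp: alignment_error(yp[0], yp[1], lidar_angles, feature_angles),
    )

# Synthetic check: recover a known (0.05, -0.03) rad misalignment.
lidar = [(0.0, 0.0), (0.1, 0.05), (-0.2, 0.1)]
features = [(t + 0.05, p - 0.03) for t, p in lidar]
yaw, pitch = grid_search(lidar, features)
```

The same structure scales up naturally: replace the cost with one computed from real sensor data, and the search with an optimizer that can also adjust the intrinsic parameters from step 9.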