I’m working on a pedestrian recognition project and am unsure about how to ‘undistort’ the LiDAR point clouds (see below, note the ‘parabolic’ points which make up the building on the right).
Will this present issues going forward? My main concern is properly aligning the annotation bounding boxes: will the bboxes share this distortion, so that the points belonging to each object can still be accurately identified?
Even if it won’t cause issues, the misaligned points in visualizations are bothering me. I’d assume any fix would also have to be applied to the bbox coordinates in this case.
import numpy as np
from pyquaternion import Quaternion
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud

# Import nuScenes data
nusc = NuScenes(version='v1.0-mini', dataroot='data/mini/', verbose=False)

# Gather scene, sample, and sample data
scene = nusc.scene[0]  # nusc.scene is a list of scene records
sample = nusc.get('sample', scene['first_sample_token'])  # First sample from the scene
sample_data = nusc.get('sample_data', sample['data']['LIDAR_TOP'])

# Get LiDAR data points
point_cloud_path = 'data/mini/' + sample_data['filename']
point_cloud = LidarPointCloud.from_file(point_cloud_path)

# Rotate and translate point cloud from the sensor frame into the ego-vehicle frame
cs_record = nusc.get('calibrated_sensor', sample_data['calibrated_sensor_token'])
point_cloud.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
point_cloud.translate(np.array(cs_record['translation']))
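To illustrate the alignment concern with a minimal sketch (synthetic data, not the nuScenes API): whatever rigid transform is applied to the point cloud must also be applied to the box coordinates, otherwise points and boxes drift apart. The function and values below are hypothetical, purely for illustration.

```python
import numpy as np

def rigid_transform(points, rotation, translation):
    """Apply a rotation matrix then a translation to (N, 3) points."""
    return points @ rotation.T + translation

# 90-degree yaw about the z-axis, plus an arbitrary translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 0.0])

point = np.array([[1.5, 0.2, 0.0]])       # a LiDAR return inside some box
box_center = np.array([[1.0, 0.0, 0.0]])  # that box's center

# Transform both the point and the box center identically
tp = rigid_transform(point, R, t)
tc = rigid_transform(box_center, R, t)

# The point's offset from the box center is preserved up to rotation,
# so a containment test done in the box's own frame gives the same answer
offset_before = (point - box_center)[0]
offset_after = (tp - tc)[0]
assert np.allclose(offset_after, R @ offset_before)
```

The takeaway is that applying the calibrated-sensor rotation/translation only to the points, but not to the annotation boxes, would break containment; applying it to both preserves it exactly.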