Details on lidar motion distortion

Hi,

I would like to know what corrections are performed on the lidar point cloud.

In the article “nuScenes: A multimodal dataset for autonomous driving” it is stated that the ego vehicle’s localization is used to perform motion compensation. However, points at the start and at the end of a sweep seem to fit “perfectly” even for dynamic objects. I was wondering whether the dynamics of the perceived objects are taken into account as well.

thanks

Hi. The motion compensation is based only on the relative ego motion during the capture of one sweep. It does not take the motion of other objects into account, as that would likely be too error-prone.
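
For intuition, here is a minimal sketch of what such ego-motion compensation (“de-skewing”) typically looks like: each point carries a capture timestamp, the ego pose is interpolated to that timestamp, and the point is re-expressed in the ego frame at a single reference time. All names, the planar pose model, and the linear interpolation below are illustrative assumptions, not the actual nuScenes devkit code.

```python
# Hypothetical sketch of ego-motion compensation ("de-skewing") of one lidar
# sweep. Assumes per-point timestamps and a planar ego pose (x, y, yaw)
# known at the start (t0) and end (t1) of the sweep.
import numpy as np

def interpolate_pose(t, t0, t1, trans0, trans1, yaw0, yaw1):
    """Linearly interpolate a planar ego pose at time t in [t0, t1]."""
    alpha = (t - t0) / (t1 - t0)
    trans = (1 - alpha) * trans0 + alpha * trans1  # (2,) translation
    yaw = (1 - alpha) * yaw0 + alpha * yaw1        # fine for small yaw changes
    return trans, yaw

def yaw_to_rot(yaw):
    """3x3 rotation about the z-axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deskew_sweep(points, timestamps, t0, t1, trans0, trans1, yaw0, yaw1):
    """Re-express every point in the ego frame at the end of the sweep (t1).

    points:     (N, 3) array, each in the ego frame at its capture time
    timestamps: (N,) per-point capture times in [t0, t1]
    """
    R1 = yaw_to_rot(yaw1)
    p1 = np.append(trans1, 0.0)
    deskewed = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        # Ego pose when this point was captured...
        trans_t, yaw_t = interpolate_pose(t, t0, t1, trans0, trans1, yaw0, yaw1)
        # ...map the point into the global frame...
        p_global = yaw_to_rot(yaw_t) @ p + np.append(trans_t, 0.0)
        # ...then back into the ego frame at the reference time t1.
        deskewed[i] = R1.T @ (p_global - p1)
    return deskewed
```

Because every point is transformed with the ego pose only, a moving object seen at both the start and the end of the sweep can still be smeared; that residual distortion is exactly what compensating for the dynamics of perceived objects would have to address.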