Why are four steps needed to render the point cloud to the image?

Hi,

As the title says, why are four steps needed to render the point cloud into image coordinates?
Steps from here: link

Take rendering the lidar point cloud to the front camera as an example:

  1. lidar sensor --> ego car
  2. ego car --> global
  3. global --> ego car
  4. ego car --> camera front sensor

Why are steps 2 and 3 needed? I thought it could be done as:

  1. lidar sensor --> ego car
  2. ego car --> camera front sensor

Thanks a lot!

@chiu_kevin The ego pose at which the lidar point cloud was recorded is (very slightly) different from the one at which the image was recorded, since the ego vehicle is (usually) moving.

Thus, for better accuracy when rendering the lidar points on the corresponding image, the lidar point cloud is transformed into the ego pose at which the image was recorded.

This transformation has to go via the global frame because both ego poses (the one at which the lidar point cloud was recorded, and the one at which the image was recorded) are expressed in the global frame.
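The four-step chain above can be sketched as a composition of 4x4 homogeneous transforms. This is a minimal illustration with made-up poses (real extrinsics and ego poses come from the dataset's calibration and ego-pose records; `make_pose` and all numeric values here are hypothetical):

```python
import numpy as np

def make_pose(yaw_deg, translation):
    """Build a 4x4 homogeneous transform from a yaw angle (degrees)
    and a translation vector. Toy poses for illustration only."""
    theta = np.deg2rad(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    T[:3, 3] = translation
    return T

# Hypothetical poses (in practice, read from the dataset's records)
T_ego_from_lidar   = make_pose(0.0, [1.0, 0.0, 1.8])    # lidar extrinsics
T_global_from_ego1 = make_pose(5.0, [100.0, 50.0, 0.0])  # ego pose at lidar timestamp
T_global_from_ego2 = make_pose(5.1, [100.4, 50.1, 0.0])  # ego pose at camera timestamp
T_ego_from_cam     = make_pose(0.0, [1.5, 0.0, 1.5])    # camera extrinsics

# Compose the four steps: lidar -> ego(t1) -> global -> ego(t2) -> camera
T_cam_from_lidar = (np.linalg.inv(T_ego_from_cam)
                    @ np.linalg.inv(T_global_from_ego2)
                    @ T_global_from_ego1
                    @ T_ego_from_lidar)

# Apply to a point expressed in the lidar frame (homogeneous coordinates)
p_lidar = np.array([10.0, 2.0, 0.5, 1.0])
p_cam = T_cam_from_lidar @ p_lidar
print(p_cam[:3])
```

Note that if the two ego poses were identical (ego not moving between the two timestamps), steps 2 and 3 would cancel exactly and the chain would collapse to the two-step version from the question.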

Got it, thanks a lot!