Ego pose of frames

Hi all

I have a question about the ego pose.

Is there pose information for frames that are not keyframes?

I understand how to access the ego pose for keyframes, but I can’t find it for the non-keyframes.

The ego pose is very useful when accumulating lidar sweeps, so I want to make sure!

Any comments are welcome! Thank you in advance.

Hi,
please take a look at the diagram at https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/schema_nuscenes.md.

  • Sample represents the keyframes.
  • Sample_data represents an image or a lidar/radar pointcloud. However, there is more than one lidar pointcloud per sample, because the non-keyframes also have sample_datas. From the document: “For non key-frames the sample_data points to the sample that follows closest in time.”
  • Finally, for each sample_data there is an ego pose. So you should have 6x 12Hz (camera) + 5x 13Hz (radar) + 1x 20Hz (lidar) = 157 ego poses per second.
  • To get the ego poses, you can either retrieve all sample_datas for a particular sample, or navigate in time using the “prev” and “next” pointers of the sample_datas of a particular sensor (see the sketch after this list).
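As a minimal sketch of the second approach (the version and dataroot are placeholder assumptions; adjust to your setup), you can walk one sensor’s sample_datas via the “next” pointers and collect the ego pose of every sweep:

```python
from nuscenes.nuscenes import NuScenes

# version and dataroot are assumptions; adjust to your local setup.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')

# Start from the lidar sample_data of the first keyframe and walk forward
# through every sweep (keyframes and non-keyframes alike) via 'next'.
sd_token = nusc.sample[0]['data']['LIDAR_TOP']
ego_poses = []
while sd_token:  # 'next' is an empty string at the end of the scene
    sd = nusc.get('sample_data', sd_token)
    ego_poses.append(nusc.get('ego_pose', sd['ego_pose_token']))
    sd_token = sd['next']

print(f'Collected {len(ego_poses)} ego poses for LIDAR_TOP in this scene.')
```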

From schema_nuscenes.md, I can get the ego pose at every sensor’s timestamp.
I have two questions:

  • But how can I get the ego pose at the annotations’ timestamps?
  • In fact, I want to transform the coordinates of the annotations from the world frame to the radar frame:
    Pos_radar = T_egoToradar * T_worldToego * Pos_annsInworld.
    We only have the position, rotation and size of the 3D box, so how can I calculate the corner points’ coordinates (Pos_annsInworld)? (See the sketch after this post.)

Thanks in advance.
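Since annotations are attached to samples (keyframes), the ego pose recorded for the keyframe’s sample_data is the one at the annotation’s timestamp. A minimal sketch of the transform chain using the devkit’s Box class follows; the dataroot and the choice of RADAR_FRONT and the first annotation are placeholder assumptions:

```python
import numpy as np
from pyquaternion import Quaternion
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import Box

# version and dataroot are assumptions; adjust to your local setup.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')

sample = nusc.sample[0]
radar_sd = nusc.get('sample_data', sample['data']['RADAR_FRONT'])
ann = nusc.get('sample_annotation', sample['anns'][0])

# The annotation pose is stored in the global (world) frame.
box = Box(ann['translation'], ann['size'], Quaternion(ann['rotation']))
corners_world = box.corners()  # 3x8 corner coordinates, world frame

# World -> ego: invert the ego pose recorded for this sample_data.
pose = nusc.get('ego_pose', radar_sd['ego_pose_token'])
box.translate(-np.array(pose['translation']))
box.rotate(Quaternion(pose['rotation']).inverse)

# Ego -> radar: invert the sensor's extrinsic calibration.
cal = nusc.get('calibrated_sensor', radar_sd['calibrated_sensor_token'])
box.translate(-np.array(cal['translation']))
box.rotate(Quaternion(cal['rotation']).inverse)

corners_radar = box.corners()  # 3x8 corner coordinates, radar frame
```

This is essentially what get_sample_data does internally: it returns the annotation boxes already transformed into the sensor frame.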

Thank you for your kind answer. That’s exactly what I wanted to know!

  • I transformed the timestamps of the first sample and got the output below. But it seems that the radar is closer to the sample:
  • Thanks for your function get_sample_data. But the detections inside the annotation boxes seem a bit sparse; see the photo below.

  • Yes, the radars are not synchronized; only the cameras and the lidar are. We simply use the radar measurement closest in time (see the sketch after this list). The radar frequency is 13Hz.
  • What do you mean by “the detections in the anns-box seem a bit sparse”? The top and bottom views in your figure seem to match. In general I recommend you use our rendering methods rather than writing your own.
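As a minimal sketch of such a timestamp comparison (version and dataroot again placeholders), you can print each sensor’s offset from the keyframe timestamp:

```python
from nuscenes.nuscenes import NuScenes

# version and dataroot are assumptions; adjust to your local setup.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')

sample = nusc.sample[0]
# Compare each sensor's capture time against the keyframe timestamp.
# Timestamps are in microseconds; radar offsets can be larger because
# the radars are not synchronized with the camera/lidar trigger.
for channel, sd_token in sample['data'].items():
    sd = nusc.get('sample_data', sd_token)
    offset_ms = (sd['timestamp'] - sample['timestamp']) / 1e3
    print(f'{channel:20s} offset: {offset_ms:+7.1f} ms')
```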

In the bottom photo, the blue points are the radar detections, which come from the pcd file, and the red rectangle marks the true object, mapped from the 3D box (annotation).

  • “The detections in the anns-box seem a bit sparse” means that there are only one or a few blue points inside each red rectangle (for example, the car in the middle has only one blue radar reflection point). But I think a moving car should produce more radar reflection points.
  • I want to develop my own radar clustering and tracking algorithms on the nuScenes dataset (thanks a lot!), so I need the original radar detections and the ground-truth object lists. Because your rendering methods can’t show the precise positions of the detections and annotations, I have to develop a visualization program myself (see the sketch below).
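As a minimal sketch of such a bird’s-eye-view visualization (version, dataroot and the RADAR_FRONT channel are placeholder assumptions), one can plot the raw radar detections together with the annotation boxes in the radar frame:

```python
import matplotlib.pyplot as plt
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import RadarPointCloud

# version and dataroot are assumptions; adjust to your local setup.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')

radar_token = nusc.sample[0]['data']['RADAR_FRONT']
# get_sample_data returns the pcd path plus the annotation boxes
# already transformed into the radar sensor frame.
data_path, boxes, _ = nusc.get_sample_data(radar_token)
pc = RadarPointCloud.from_file(data_path)

fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(pc.points[0], pc.points[1], s=5, c='blue', label='radar detections')
for box in boxes:
    # Trace the closed box footprint in the x/y plane
    # (corner indices follow the order returned by Box.corners()).
    footprint = box.corners()[:2, [0, 1, 5, 4, 0]]
    ax.plot(footprint[0], footprint[1], c='red')
ax.set_xlabel('x [m]')
ax.set_ylabel('y [m]')
ax.set_aspect('equal')
ax.legend()
plt.show()
```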