Radar measurements do not show true depths


I have experienced a problem when using radar points in the v1.0-mini dataset. I obtained radar points using RadarPointCloud.from_file_multisweep with the parameters given below:

chan = "RADAR_FRONT"
sample_rec = radar_sample_record
ref_chan = "CAM_FRONT"
nsweeps = 1
min_distance = 1.0

Then, I projected those radar points onto the current frame using the function view_points from nuscenes.utils.geometry_utils. I also checked the pixel coordinates after the projection, as you did.
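For context, here is a minimal sketch of the projection step described above. The function below re-implements only the math that view_points performs with normalize=True; the intrinsic matrix K is illustrative, not the actual CAM_FRONT calibration:

```python
import numpy as np

def project_to_image(points, intrinsic):
    """Perspective projection of 3xN camera-frame points to pixels.

    Mirrors the math of nuscenes.utils.geometry_utils.view_points with
    normalize=True: multiply by the 3x3 camera intrinsic, then divide
    by depth (the camera-frame z coordinate).
    """
    depths = points[2, :].copy()
    pix = intrinsic @ points       # homogeneous pixel coordinates, 3xN
    pix = pix[:2, :] / depths      # normalize by depth -> (u, v) rows
    return pix, depths

# Illustrative intrinsic, NOT the real CAM_FRONT calibration.
K = np.array([[1266.0,    0.0, 816.0],
              [   0.0, 1266.0, 491.0],
              [   0.0,    0.0,   1.0]])

# The reported box center, as a single 3x1 camera-frame point.
box_center = np.array([[-8.09666314], [0.88561165], [38.43211491]])
pix, depth = project_to_image(box_center, K)
```

Note that the projection preserves the depth unchanged; only the (u, v) pixel coordinates depend on the intrinsics.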

The following figure shows the first frame (sample) of scene-0655 (token '5991fad3280c4f84b331536c32001a04'), together with the bounding box of the vehicle with token 'b6ded29415ae4ad2b76f0bf73fb674ce', its projected center (green circle), and some radar points projected onto that vehicle (blue stars).


Using the box object, the center of the vehicle is given by

box center = [-8.09666314, 0.88561165, 38.43211491]

However, the radar depths differ substantially from the depth of the object (z = 38.43 in the reference coordinate system). The depths and pixel coordinates of the four radar points on the image are

radar_depths = [36.0257, 53.669502, 55.486626, 62.085667]
radar_pixel_coordinates = [[563.4879 , 581.2913 , 562.82794, 592.8059 ],
[368.91064, 357.32034, 356.7815 , 354.03317]].
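For reference, the depth error of each radar return relative to the box-center depth works out as follows (plain arithmetic on the values quoted above):

```python
# Depth error of each projected radar return relative to the box
# center depth reported above (all values in metres, camera frame).
box_depth = 38.43211491
radar_depths = [36.0257, 53.669502, 55.486626, 62.085667]
errors = [d - box_depth for d in radar_depths]
# Only the first return is close; the other three are 15-24 m too deep.
```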

I could not figure out why all of the radar points except one deviate so much from the position of the vehicle. For close objects, for example the truck in the same frame, the radar depths are close to the truck's center. I thought the error could arise from multi-path. If so, the radar is not a reliable sensor for detecting distant objects in your dataset; judging by the detections of that vehicle (car) above, relying on them could lead to disastrous results. In addition, I used the default filters when obtaining the radar detections. The points are valid and their false-alarm (FA) probability is less than 25%:

p1 = array([-7.56566865e+00, 1.40690779e+00, 3.60257001e+01, 1.00000000e+00,
3.40000000e+01, 1.55000000e+01, -7.50000000e+00, -5.00000000e-01,
-1.63710788e-02, -3.90016916e-03, 1.00000000e+00, 3.00000000e+00,
1.90000000e+01, 1.90000000e+01, 0.00000000e+00, 1.00000000e+00,
1.60000000e+01, 3.00000000e+00])

p2 = array([-1.05083155e+01, 1.59943066e+00, 5.36695010e+01, 3.00000000e+00,
4.70000000e+01, 3.50000000e+00, -7.50000000e+00, -5.00000000e-01,
-8.13871715e-03, -1.78231602e-03, 1.00000000e+00, 3.00000000e+00,
2.00000000e+01, 2.00000000e+01, 0.00000000e+00, 1.00000000e+00,
1.60000000e+01, 3.00000000e+00])

p3 = array([-1.16818392e+01, 1.62971787e+00, 5.54866238e+01, 1.00000000e+00,
4.90000000e+01, 1.15000000e+01, -7.50000000e+00, -5.00000000e-01,
-1.46069797e-02, -3.41923651e-03, 1.00000000e+00, 3.00000000e+00,
1.90000000e+01, 1.90000000e+01, 0.00000000e+00, 1.00000000e+00,
1.60000000e+01, 3.00000000e+00])

p4 = array([-1.15855462e+01, 1.68734258e+00, 6.20856682e+01, 1.00000000e+00,
5.80000000e+01, 1.25000000e+01, -7.50000000e+00, -7.50000000e-01,
-5.34341782e-02, -1.11321202e-02, 1.00000000e+00, 3.00000000e+00,
1.90000000e+01, 1.90000000e+01, 0.00000000e+00, 1.00000000e+00,
1.60000000e+01, 3.00000000e+00])
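For reference, each of those 18-element arrays appears to follow the radar point field layout documented in the devkit's RadarPointCloud class (worth double-checking against your devkit version). A small sketch decoding p1 under that assumption:

```python
# Field names as documented for RadarPointCloud in the nuScenes devkit;
# this ordering is an assumption worth verifying against your version.
FIELDS = ["x", "y", "z", "dyn_prop", "id", "rcs", "vx", "vy",
          "vx_comp", "vy_comp", "is_quality_valid", "ambig_state",
          "x_rms", "y_rms", "invalid_state", "pdh0", "vx_rms", "vy_rms"]

# p1 from above. Note x, y, z are already transformed into the camera
# frame by from_file_multisweep, so z here is the 36.03 m depth.
p1 = [-7.56566865, 1.40690779, 36.0257001, 1.0, 34.0, 15.5, -7.5, -0.5,
      -0.0163710788, -0.00390016916, 1.0, 3.0, 19.0, 19.0, 0.0, 1.0,
      16.0, 3.0]

decoded = dict(zip(FIELDS, p1))
# decoded["pdh0"] == 1.0 maps to "false-alarm probability < 25%" in the
# coding used by the devkit (0 = invalid, 1 = <25%, ..., 7 = <=100%).
```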

Could you please explain why the radar detections deviate so much from the ground-truth value (the center of the bounding box)?

Thank you,

To make sure nothing went wrong, please try RadarPointCloud.from_file() instead of RadarPointCloud.from_file_multisweep(). Perhaps you computed the distance to the wrong location.
To be honest, radar data is very nasty to work with:

  • It is not clear what filter settings to work with. Either you get lots of false positives or very few returns.
  • Whereas lidar and camera are synchronized in nuScenes, radar is not.
  • Possible calibration issues (although we haven’t seen too much of that).
  • Multi-path effects, especially on reflective surfaces like the car windows or reflective paint.

That said, there are some papers that successfully used nuScenes radar, e.g. https://arxiv.org/pdf/2007.14366.pdf.

Can you elaborate on what this means precisely? I am guessing you just take the radar information with a timestamp “closest” to the timestamp of the keyframe?

First, thanks for your explanation. However, I don't understand why we cannot use a member function of the RadarPointCloud class to get radar points. In addition, I did not use radar points between samples: the number of sweeps is 1, and that frame is a sample in scene-0655. I also observe large differences between lidar and radar points in different regions. There are a few other papers, like VRNet, that use radar points in neural networks. However, we don't know how the networks manipulate those points relative to the ground truth in supervised learning. Maybe radar detections far from the ego vehicle are neglected, or some correction is applied according to the ground truth. Based on this examination, those radar detections are not reliable for classical approaches to 3D position estimation.
Thank you

@Maike When the lidar rotates through the center of the camera FOV, it triggers the camera (actually slightly before to have the right timing). For radar there is no such synchronization. We simply associate the closest radar return with the sample (but we do discard all scenes without a sufficiently close radar return).
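The "closest radar return" association described above can be sketched as follows (a hypothetical helper, not devkit code; nuScenes timestamps are in microseconds):

```python
def closest_sweep(sample_timestamp, radar_timestamps):
    """Pick the radar sweep whose timestamp is nearest the sample's.

    Hypothetical helper illustrating the association described above,
    not an actual devkit function. Timestamps are in microseconds.
    """
    return min(radar_timestamps, key=lambda t: abs(t - sample_timestamp))

# Camera keyframe at t0; radar sweeps arrive ~77 ms apart (13 Hz),
# so the nearest sweep can be several tens of milliseconds away.
t0 = 1_532_402_927_612_460
sweeps = [t0 - 120_000, t0 - 43_000, t0 + 34_000, t0 + 111_000]
nearest = closest_sweep(t0, sweeps)
```

At highway speeds, a few tens of milliseconds of offset already translates into a noticeable position difference, which is one more source of radar/ground-truth disagreement.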

@I3aer I’m just saying it may be easier to debug if you use the RadarPointCloud.from_file() function. If you look at the radar from a BEV, it seems quite reasonable, although noisy. If you look at outliers, you will see some large discrepancies.


Hi Holger,

Thanks for your reply. I read the paper you cited. In it, the authors say "We keep only the (x; y) position of Radar targets and ignore the height position as it is often inaccurate (if it ever exists)" under the subsection "Radar Voxel Representation". This could be the reason. Anyway, if possible, could you please send me the datasheet of the radar? I want to understand exactly what those radar fields mean.

Thank you.


Please send an email to holger.caesar@motional.com for the datasheet.