MapExpansion - Semantic Segmentation Labels

Is there a quick way to get semantic segmentation labels for Velodyne point clouds from the MapExpansion with the nuScenes devkit?

Basically, I would need classes for drivable_area, ego_lane, opposite_lane, and parallel_to_ego_lane (i.e. lanes parallel to the ego lane with the same driving direction as the ego vehicle).

This function renders the map annotations in the camera view:


I suggest you create a modified copy and return the results, rather than printing them.
Note, however, that this method is not optimized for speed (you should probably render these to disk).
Furthermore, the results can be inaccurate on sloped roads, as we do not model z/pitch/roll in the localization.
Here is an example: https://www.youtube.com/watch?v=wUBdMrnL6BU
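
For a concrete starting point, here is a minimal sketch of how such a call could look, assuming the function meant above is NuScenesMap.render_map_in_image() from the map expansion API (the dataroot, map name, sample token, and layer names are example values):

```python
from nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap

# Example dataroot and map location; adjust to your setup.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')
nusc_map = NuScenesMap(dataroot='/data/sets/nuscenes', map_name='singapore-onenorth')

# Render selected map layers on top of the front camera image of one sample.
sample_token = nusc.sample[0]['token']
nusc_map.render_map_in_image(nusc, sample_token,
                             layer_names=['drivable_area', 'lane'],
                             camera_channel='CAM_FRONT')
```

A modified copy of this function could return the rendered polygons (or a label mask) instead of plotting them.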

Thank you so much Holger,

that should make some things much easier. :slight_smile:
Is there also a quick way to project labels from images into point clouds?

The result I am trying to get is a per-point label for the Velodyne point clouds.

Yes, you can use render_pointcloud_in_image(). Again, the same caveat applies: this function is designed for displaying the result on screen. If you just want the data, you need to change it a bit.
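
As a rough sketch of that idea, assuming render_pointcloud_in_image() builds on the explorer helper map_pointcloud_to_image() (tokens and paths are examples, and return values may differ slightly between devkit versions):

```python
import numpy as np
from nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')
sample = nusc.sample[0]
lidar_token = sample['data']['LIDAR_TOP']
cam_token = sample['data']['CAM_FRONT']

# Project the lidar points into the camera image instead of rendering them.
points, depths, im = nusc.explorer.map_pointcloud_to_image(lidar_token, cam_token)

# points[:2, :] are the pixel coordinates of the points that land inside the image.
# A per-pixel segmentation of the image could now be sampled at these coordinates
# to assign an image-based label to each projected lidar point.
px = np.round(points[0, :]).astype(int)
py = np.round(points[1, :]).astype(int)

# Note: the helper typically filters out points behind the camera or outside the
# image, so a custom copy that also returns the filter mask is needed to map the
# labels back to the full point cloud.
```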

I managed to render the HD map data into the Velodyne point cloud.

I used use_flat_vehicle_coordinates=True in order to render the underlay_map mask. There is always a slight offset between the underlay_map and the actual polygon patches from the map layers.

Could somebody please give me a hint whether this is caused by the viewpoint transformation matrix used in the use_flat_vehicle_coordinates branch, and how to correctly apply affine transformations to the 2D polygon coordinates to rectify the map layers?

Yes, there is currently an issue with a few scenes where we used a different version of the map, which is not compatible with the ego poses in that scene. Apologies for that. We are working on fixing it.

I’m primarily working with the nuScenes mini dataset; are there specific scenes where this problem does not occur?

Will this be fixed in the next major release?

The scenes where this is particularly apparent are listed in https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/map_expansion/map_api.py#L944. Yes, we hope to fix this in the next major version.

Related to the segmentation labels of lidar point clouds, is there a fast way available in the nuscenes-devkit to identify which points of a point cloud are within some specific polygon, e.g. of the drivable_area?

We work with Shapely, which should be sufficiently fast for us: https://streamhacker.com/2010/03/23/python-point-in-polygon-shapely/. However, we haven’t done that for an entire point cloud, which may need custom speedups.
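
A minimal sketch of that per-point test with Shapely, assuming the polygon comes from a map layer record (the coordinates below are placeholders; in practice they could be extracted via the map API, e.g. NuScenesMap.extract_polygon()):

```python
import numpy as np
from shapely.geometry import Point, Polygon
from shapely.prepared import prep

# Placeholder polygon; in practice this would be a drivable_area polygon in map coordinates.
polygon = Polygon([(0.0, 0.0), (50.0, 0.0), (50.0, 30.0), (0.0, 30.0)])
prepared = prep(polygon)  # prepared geometries speed up repeated containment tests

# points_xy: (N, 2) array of lidar points transformed into the map frame.
points_xy = np.random.uniform(0.0, 60.0, size=(1000, 2))

# Per-point boolean mask: True where the point lies inside the polygon.
mask = np.array([prepared.contains(Point(x, y)) for x, y in points_xy])
```

For a full point cloud it may be faster to rasterize the relevant layers into a mask once and then look each point up in that mask, but that would be a custom speedup outside the devkit.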