This function renders the map annotations in the camera view:
I suggest you create a modified copy of the function that returns the results, rather than plotting them.
Note, however, that this method is not optimized for speed (you should probably render the results to disk).
Furthermore, the results can be inaccurate on sloped roads, as we do not model z/pitch/roll in the localization.
Here is an example: https://www.youtube.com/watch?v=wUBdMrnL6BU
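For reference, the projection that such a rendering function performs can be sketched as: global frame → ego vehicle frame → camera frame → pinhole projection with the camera intrinsic matrix. The helper below is illustrative only (the function name and argument layout are my own, not devkit API); it assumes rotation matrices map the local frame into the global frame, so the inverse is applied here.

```python
import numpy as np

def project_to_image(points_global: np.ndarray,
                     ego_translation: np.ndarray,
                     ego_rotation: np.ndarray,
                     cam_translation: np.ndarray,
                     cam_rotation: np.ndarray,
                     intrinsic: np.ndarray) -> np.ndarray:
    """Project Nx3 global-frame points into pixel coordinates.

    Hypothetical sketch: ego_rotation/cam_rotation are 3x3 matrices
    mapping ego->global and camera->ego respectively, so their
    inverses (transposes) are applied below; intrinsic is the 3x3
    pinhole camera matrix.
    """
    # Global frame -> ego vehicle frame (row-vector @ R == R.T applied).
    pts = (points_global - ego_translation) @ ego_rotation
    # Ego frame -> camera frame.
    pts = (pts - cam_translation) @ cam_rotation
    # Keep only points in front of the camera (positive depth).
    pts = pts[pts[:, 2] > 0.1]
    # Pinhole projection: normalize by depth, then apply the intrinsics.
    uv = (intrinsic @ (pts / pts[:, 2:3]).T).T
    return uv[:, :2]
```

With identity poses, a point 10 m straight ahead projects onto the principal point (cx, cy), which is an easy sanity check.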
Yes, you can use render_pointcloud_in_image(). The same caveat applies: this function is designed for on-screen display. If you just want the data, you will need to modify it a bit.
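One generic way to "change it a bit" is to draw onto an off-screen Agg canvas and return the rendered image as an array instead of calling plt.show(). A minimal sketch, not devkit code (the function and its parameters are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend: no display needed
import matplotlib.pyplot as plt
import numpy as np

def render_to_array(points_uv: np.ndarray, width: int, height: int) -> np.ndarray:
    """Scatter 2D pixel coordinates onto an off-screen canvas and
    return the result as an (height, width, 3) RGB array."""
    fig, ax = plt.subplots(figsize=(width / 100, height / 100), dpi=100)
    ax.scatter(points_uv[:, 0], points_uv[:, 1], s=1)
    ax.set_xlim(0, width)
    ax.set_ylim(height, 0)  # image coordinates: y grows downwards
    ax.axis("off")
    fig.canvas.draw()
    # Copy the RGBA buffer into a numpy array, then drop the alpha channel.
    img = np.asarray(fig.canvas.buffer_rgba())[:, :, :3].copy()
    plt.close(fig)
    return img
```

The same pattern (replace the final show/imshow step with a buffer read or a direct return of the projected points and depths) applies to the devkit's rendering functions.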
I used use_flat_vehicle_coordinates=True in order to render the underlay_map mask. There is always a slight offset between the underlay_map and the actual polygon patches from the map layers.
Could somebody please give me a hint as to whether this is caused by the viewpoint transformation matrix used in the use_flat_vehicle_coordinates branch, and how to correctly apply affine transformations to the 2D polygon coordinates to rectify the map layers?
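For the second part of the question: applying a 2D affine transform to polygon coordinates is straightforward with homogeneous coordinates. A sketch (these helpers and any offset values are placeholders, not a fix for the underlay_map offset itself):

```python
import numpy as np

def affine_transform_2d(polygon: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Apply a 3x3 affine matrix to an Nx2 polygon via homogeneous coordinates."""
    ones = np.ones((polygon.shape[0], 1))
    homogeneous = np.hstack([polygon, ones])  # Nx3: append w = 1
    transformed = homogeneous @ matrix.T      # rotate/scale, then translate
    return transformed[:, :2]

def make_affine(angle_rad: float, tx: float, ty: float) -> np.ndarray:
    """Build a 3x3 matrix: rotation about the origin followed by a translation."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])
```

If the offset really is a constant shift between the underlay and the patches, a pure translation (angle 0, small tx/ty) would be enough to rectify the layers.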
Yes, there is currently an issue with a few scenes where we used a different version of the map, which is not compatible with the ego poses in those scenes. Apologies for that; we are working on a fix.
Related to the segmentation labels of lidar point clouds: is there a fast way available in the nuscenes-devkit to identify which points of a point cloud lie within a specific polygon, e.g. the drivable_area?
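I am not sure whether the devkit ships a dedicated helper for this, but a fast vectorized check of many points against a single polygon (e.g. the exterior nodes of one drivable_area record) can be done with matplotlib.path.Path.contains_points. A sketch with made-up coordinates:

```python
import numpy as np
from matplotlib.path import Path

def points_in_polygon(points_xy: np.ndarray, polygon_xy: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking which Nx2 points fall inside
    the polygon given by its Mx2 vertex coordinates (vectorized)."""
    return Path(polygon_xy).contains_points(points_xy)
```

For polygons with holes, or for very large batches of polygons, a library like shapely (with an STRtree spatial index) may be the better fit; behavior for points exactly on the boundary is implementation-defined either way.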