I was wondering how I could get the coordinates of the LiDAR points belonging to a particular annotation. I know each annotation records how many LiDAR points it contains, but I am unsure how to get the actual coordinates of those points. Currently I am trying the following script, which gets the corners of the annotation's bounding box and then filters out points that do not fall inside it.
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud, Box
from pyquaternion import Quaternion
import matplotlib.pyplot as plt
import numpy as np
import os
import pdb
current_dir = os.getcwd()
dataroot = os.path.join(current_dir,"data","sets","nuscenes")
nusc = NuScenes(version='v1.0-mini', dataroot=dataroot, verbose=True)
def filter(pointcloud, corners):
    # Axis-aligned bounding box of the 8 box corners (corners is a 3 x 8 array).
    points = pointcloud.points
    xmin = np.amin(corners[0])
    xmax = np.amax(corners[0])
    ymin = np.amin(corners[1])
    ymax = np.amax(corners[1])
    zmin = np.amin(corners[2])
    zmax = np.amax(corners[2])
    print("{}, {}, {}, {}, {}, {}".format(xmin, xmax, ymin, ymax, zmin, zmax))
    # Count the points that fall inside that axis-aligned box.
    count = 0
    for x, y, z in zip(points[0], points[1], points[2]):
        if xmin < x < xmax and ymin < y < ymax and zmin < z < zmax:
            count += 1
    print(count)
my_scene = nusc.scene[0]
#print(my_scene)
first_sample_token = my_scene['first_sample_token']
sample = nusc.get('sample', first_sample_token)
# Annotation index 18 is the truck annotation used in the tutorial.
ann_token = sample['anns'][18]
ann = nusc.get('sample_annotation', ann_token)
print(ann)
# Load the LiDAR sample data and the box for just this annotation.
lidar_token = sample['data']['LIDAR_TOP']
data_path, boxes, camera_intrinsic = nusc.get_sample_data(lidar_token, selected_anntokens=[ann_token])
lidar_rec = nusc.get('sample_data', lidar_token)
print(lidar_rec['filename'])
print(data_path)
print(boxes[0].corners())
pointcloud = LidarPointCloud.from_file(data_path)
'''
# Points live in the sensor's own reference frame, so they need to be transformed (via the ego vehicle frame) into the global frame.
# First step: transform the point cloud to the ego vehicle frame for the timestamp of the sweep.
cs_record = nusc.get('calibrated_sensor', lidar_rec['calibrated_sensor_token'])
pointcloud.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pointcloud.translate(np.array(cs_record['translation']))
# Second step: transform to the global frame.
poserecord = nusc.get('ego_pose', lidar_rec['ego_pose_token'])
pointcloud.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pointcloud.translate(np.array(poserecord['translation']))
'''
print(pointcloud.points)
filter(pointcloud,boxes[0].corners())
When I run this script on the Truck annotation used in the tutorials, it tells me that there are 557 LiDAR points, whereas the annotation metadata itself says there are only 495. I wasn't sure whether this was because the coordinates from the sample binary file are in a different reference frame, so I tried putting them in the same frame using the commented-out section (I'm not sure I'm doing that correctly), but with that enabled the script reports 0 points.
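For what it's worth, I also tried counting points with the devkit's points_in_box helper instead of my axis-aligned check. I'm assuming this utility (in nuscenes.utils.geometry_utils) expects the box and the points to be in the same frame, and that the points should be passed as a 3 x N array, but I may be misusing it:

# A sketch using points_in_box instead of my axis-aligned min/max check;
# I am assuming the box from get_sample_data and the raw point cloud can be
# compared directly here, which may not be right.
from nuscenes.utils.geometry_utils import points_in_box

mask = points_in_box(boxes[0], pointcloud.points[:3, :])
print(mask.sum())                  # number of LiDAR points inside the oriented box
print(pointcloud.points[:, mask])  # coordinates (and intensity) of those points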
Also, upon further examination, I am not sure whether filtering with the bounding box is even a good idea, since I don't know exactly how the bounding box information was derived or whether it corresponds to the LiDAR points at all. All in all, I need some advice on how to extract the LiDAR coordinates for particular annotations. Any explanation of which reference frames the bounding boxes and the coordinates from the sample LiDAR data files are in would also be appreciated.
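In case it helps show where my confusion is, here is my guess at how the raw annotation (whose translation/rotation I believe are given in the global frame) would be moved into the LiDAR sensor frame, i.e. the reverse of the point-cloud transform chain I commented out above. I'm not sure this is correct either:

# Sketch (my guess): take the annotation box in the global frame and move it into
# the LiDAR sensor frame, reversing the commented-out transform chain above.
box = Box(ann['translation'], ann['size'], Quaternion(ann['rotation']))

# Global frame -> ego vehicle frame at the LiDAR timestamp.
poserecord = nusc.get('ego_pose', lidar_rec['ego_pose_token'])
box.translate(-np.array(poserecord['translation']))
box.rotate(Quaternion(poserecord['rotation']).inverse)

# Ego vehicle frame -> LiDAR sensor frame.
cs_record = nusc.get('calibrated_sensor', lidar_rec['calibrated_sensor_token'])
box.translate(-np.array(cs_record['translation']))
box.rotate(Quaternion(cs_record['rotation']).inverse)

print(box.corners())  # I would expect this to match boxes[0].corners() above, but I'm not certain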