Get points of bounding box

Hello,

Is there a method that returns the points lying inside a specific bounding box? Or are the points already labeled with their annotation class, e.g. car/truck?

Thanks in advance.

Hi,
such a method does not exist, but it should be quite easy to implement using the fast points_in_box method: https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/utils/geometry_utils.py#L111
Note, however, that running this on the entire dataset would be quite slow, as every single point cloud must be loaded.
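For reference, points_in_box takes a Box plus a (3, N) array of points and returns a boolean mask of length N. Here is a minimal self-contained NumPy sketch of the same containment test, to show what it computes (function and variable names are mine, not the devkit's):

```python
import numpy as np

def points_in_box_sketch(center, wlh, rot, points):
    """Boolean mask of which points lie inside an oriented 3D box.

    center: (3,) box center; wlh: (width, length, height);
    rot: (3, 3) box rotation matrix; points: (3, N) array.
    One mask entry per point, mirroring the devkit's points_in_box.
    """
    w, l, h = wlh
    # Express the points in the box's local frame.
    local = rot.T @ (points - np.reshape(center, (3, 1)))
    # nuScenes convention: length along x, width along y, height along z.
    return (np.abs(local[0]) <= l / 2) \
         & (np.abs(local[1]) <= w / 2) \
         & (np.abs(local[2]) <= h / 2)
```

The resulting mask can then select the points inside the box, e.g. points[:, mask].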


I am making a custom dataset from the nuScenes dataset for my master's thesis. I want to gather the position and velocity information from the front radar data and convert it to speed and distance information.

For this I want the radar points that lie inside a particular bounding box, so that I can then calculate the speed and distance information.

I am a beginner at coding and am struggling with the points_in_box method. I am getting an error that I can't understand, but I believe it could be solved if I understood how to use points_in_box correctly.

I would also like to know whether I am using the Box class correctly.

Could you please help me with it?

Here is the code (not the entire code, just a snippet):

    # For each annotation
    for ann_token in sample['anns']:
      # Get the annotation record
      ann_record = nusc.get('sample_annotation', ann_token)
      # Create a Box instance for the annotation
      ann_box = Box(ann_record['translation'], ann_record['size'], pq.Quaternion(ann_record['rotation']))
      # For each radar point
      for pt in pc:
        # Check if the box contains the point
        if points_in_box(ann_box, pt[:3, :].T).any():
          #print(f"Point {point} is within the bounding box of annotation {ann_token}")
          # Append the point to the list
          radar_points_in_box.append(pt[:3, :].T)
      # Create a dictionary with the box information and the radar points
      box_data = {
          'size': ann_box.wlh,
          'translation': ann_box.center,
          'rotation': ann_box.orientation,
          'radar_points': radar_points_in_box
      }
      # Append the dictionary to the dataset
      dataset.append(box_data)
    return dataset

And I am getting this error:

IndexError                                Traceback (most recent call last)
in <cell line: 76>()
     74
     75 # Create custom dataset
---> 76 temp_dataset = create_custom_dataset(nusc)

in create_custom_dataset(nusc)
     55   for pt in pc:
     56     # Check if the box contains the point
---> 57     if points_in_box(ann_box, pt[:3, :].T).any():
     58       #print(f"Point {point} is within the bounding box of annotation {ann_token}")
     59       # Append the point to the list

IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
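(For anyone hitting the same error: pc = radar_points.points.T has shape (N, 18), so each pt in the loop is a 1-D row of 18 fields, and indexing a 1-D array as pt[:3, :] is exactly what raises this IndexError. Since points_in_box is vectorized over a (3, N) array, the Python loop can be skipped entirely. A self-contained demonstration of the shapes, with the radar data stubbed with zeros:)

```python
import numpy as np

# Stand-in for radar_points.points: the devkit loads 18 fields x N points.
raw = np.zeros((18, 5))

pc = raw.T   # shape (N, 18): each row is ONE point, as a 1-D array
pt = pc[0]
# pt[:3, :] would raise "too many indices for array" here -- pt is 1-D.

# points_in_box is vectorized, so slice x, y, z for ALL points at once:
points_xyz = raw[:3, :]                       # (3, N), ready for points_in_box
# mask = points_in_box(ann_box, points_xyz)   # boolean mask, one per point
# inside = points_xyz[:, mask]                # only the points inside the box
```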

I made some changes, but I am still facing some issues:

for scene in nusc.scene:
  # Get all sample tokens in the scene
  sample_tokens = nusc.field2token('sample', 'scene_token', scene['token'])
  # Get front camera and radar data for each sample in the scene
  for sample_token in sample_tokens:
    sample = nusc.get('sample', sample_token)

    # Get front-radar data
    radar_data = nusc.get('sample_data', sample['data']['RADAR_FRONT'])
    pcl_path = osp.join(nusc.dataroot, radar_data['filename'])

    # Load radar data
    radar_points = RadarPointCloud.from_file(pcl_path, dynprop_states=[0,1,2,3,6])
    pc = radar_points.points.T

    # Get radar-to-ego transformation
    radar_pose_rec = nusc.get('ego_pose', radar_data['ego_pose_token'])
    radar_cs_rec = nusc.get('calibrated_sensor', radar_data['calibrated_sensor_token'])
    radar_to_ego = transform_matrix(radar_cs_rec['translation'], pq.Quaternion(radar_cs_rec['rotation']), inverse=False)

    # Get front-camera data
    camera_data = nusc.get('sample_data', sample['data']['CAM_FRONT'])

    # Get ego-to-camera transformation
    camera_pose_rec = nusc.get('ego_pose', camera_data['ego_pose_token'])
    camera_cs_rec = nusc.get('calibrated_sensor', camera_data['calibrated_sensor_token'])
    ego_to_camera = transform_matrix(camera_cs_rec['translation'], pq.Quaternion(camera_cs_rec['rotation']), inverse=True)

    # Extract position and velocity information from radar data
    # and convert it to distance and speed information

    # Transform radar points to camera frame
    points_radar = pc[:3, :]  # We only take the x, y, z coordinates
    points_ego = np.dot(radar_to_ego, np.vstack((points_radar, np.ones(points_radar.shape[1]))))
    points_camera = np.dot(ego_to_camera, points_ego)

    radar_points_in_box = []
    # For each annotation
    for ann_token in sample['anns']:
      # Get the annotation record
      ann_record = nusc.get('sample_annotation', ann_token)
      # Create a Box instance for the annotation
      ann_box = Box(ann_record['translation'], ann_record['size'], pq.Quaternion(ann_record['rotation']))
      # For each radar point
      for pt in points_camera:
        # Check if the box contains the point
        if points_in_box(ann_box, points=pt).any():
          print('y')
          #print(f"Point {point} is within the bounding box of annotation {ann_token}")
          # Append the point to the list
          radar_points_in_box.append(pt)
      # Create a dictionary with the box information and the radar points
      box_data = {
          'size': ann_box.wlh,
          'translation': ann_box.center,
          'rotation': ann_box.orientation,
          'radar_points': radar_points_in_box
      }
      # Append the dictionary to the dataset
      dataset.append(box_data)

return dataset
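(A likely remaining issue, hedged since I don't know the full setup: ann_record['translation'] and ['rotation'] are in the global frame, while points_camera is in the camera frame, so points_in_box ends up comparing coordinates from two different frames and the mask is not meaningful. Also note that after pc = radar_points.points.T, the slice pc[:3, :] selects the first three points rather than the x/y/z rows. The frame chain itself is just two rigid transforms; a self-contained sketch, with names of my own choosing:)

```python
import numpy as np

def global_to_sensor(points_global, ego_t, ego_R, sensor_t, sensor_R):
    """Map (3, N) global-frame points into a sensor frame.

    ego_t, ego_R: ego pose (translation, 3x3 rotation) in the global frame.
    sensor_t, sensor_R: calibrated sensor pose in the ego frame.
    Same chain the devkit applies (in reverse, to boxes) in
    nusc.get_sample_data, which returns boxes already in the sensor frame.
    """
    ego = ego_R.T @ (points_global - np.reshape(ego_t, (3, 1)))   # global -> ego
    return sensor_R.T @ (ego - np.reshape(sensor_t, (3, 1)))      # ego -> sensor
```

Alternatively, move the box instead of the points: the devkit's Box.translate and Box.rotate let you push the annotation from the global frame into the radar frame, and then test directly against radar_points.points[:3, :].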