Difficult Scenes and/or Samples During Challenge

Hi,

I am a researcher on the “Car Can Explain!” project at MIT, where we
are working to explain errors and anomalies in autonomous vehicles.
We are working with the nuScenes dataset, and we are trying to find
“near-miss” mislabelings caused by incorrect object classifications or
inaccurate 3D bounding boxes (e.g. a predicted pedestrian whose true
label is a police officer, a predicted movable object whose true label
is a static object, or a bounding box for a car that encloses multiple
vehicles).

Does anyone have a list of these kinds of “difficulties” and the
scenes and/or samples in which they occur? Or would any team be
willing to share scenes and/or samples that were difficult for their
algorithms and methods to process?

Thanks,
Vishnu

Car Can Explain Team

Hi, sounds like a great project! If you look at the tracking challenge page, in the Baselines section you will find links to download the detections of three state-of-the-art methods. I suggest you work with these: look at the detection scores and compare the predictions against the ground-truth labels.
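To get you started, here is a rough sketch of how one could mine those downloaded detections for class-confusion candidates using the nuscenes-devkit, assuming the files follow the standard detection submission format. The dataroot, the results filename, the partial category map, and the 2 m matching threshold are all placeholder assumptions for illustration, not part of the challenge tooling:

```python
import json

import numpy as np
from nuscenes.nuscenes import NuScenes

# Hypothetical paths -- point these at your local copy of the dataset
# and one of the baseline detection files from the tracking challenge page.
NUSC_ROOT = '/data/sets/nuscenes'
RESULTS_JSON = 'baseline_detections.json'

nusc = NuScenes(version='v1.0-trainval', dataroot=NUSC_ROOT, verbose=False)

with open(RESULTS_JSON) as f:
    # Submission format: {"meta": {...}, "results": {sample_token: [boxes]}}
    results = json.load(f)['results']

# Partial, illustrative map from fine-grained nuScenes category names to the
# coarse detection classes. A 'police_officer' ground-truth box matched to a
# 'pedestrian' prediction is exactly the kind of near-miss described above.
GT_TO_DET = {
    'human.pedestrian.adult': 'pedestrian',
    'human.pedestrian.police_officer': 'pedestrian',
    'vehicle.car': 'car',
    'movable_object.barrier': 'barrier',
}

DIST_THRESH = 2.0  # metres; the official metric matches at 0.5/1/2/4 m

confusions = []
for sample_token, preds in results.items():
    sample = nusc.get('sample', sample_token)
    gts = [nusc.get('sample_annotation', t) for t in sample['anns']]
    for pred in preds:
        px, py = pred['translation'][:2]
        # Match to the nearest ground-truth box by 2D centre distance,
        # mirroring the matching rule of the nuScenes detection metric.
        best, best_d = None, DIST_THRESH
        for gt in gts:
            d = np.hypot(gt['translation'][0] - px, gt['translation'][1] - py)
            if d < best_d:
                best, best_d = gt, d
        if best is None:
            continue  # unmatched prediction: a possible false positive instead
        gt_cls = GT_TO_DET.get(best['category_name'])
        if gt_cls is not None and gt_cls != pred['detection_name']:
            confusions.append((sample_token, pred['detection_name'],
                               best['category_name'], round(best_d, 2)))

print(f'{len(confusions)} class-confusion candidates found')
```

The same matching step can also flag geometry problems: a single prediction that lies within the threshold of several ground-truth vehicles is a candidate for the “one box enclosing multiple cars” case you mention.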

Hello,

Thank you so much for the response! The links seem to be exactly what we need.

Thanks,
Vishnu

Car Can Explain Team