Hi,
I am a researcher on the “Car Can Explain!” project at MIT, where we
are working to explain errors and anomalies in autonomous vehicles.
We are working with the NuScenes dataset, and we are trying to find
“near-miss” mislabelings: incorrect object classifications or
inaccurate 3D bounding boxes (e.g. a predicted pedestrian whose true
label is a police officer, a predicted movable object whose true label
is a static object, or a bounding box for a car that encapsulates
multiple vehicles).
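
To make the bounding-box case concrete, here is a rough sketch of how one might mine the ground truth for suspiciously overlapping vehicle boxes using the nuscenes-devkit. The dataroot path and the center-distance threshold below are placeholders for illustration, not a vetted heuristic:

```python
import numpy as np
from nuscenes.nuscenes import NuScenes

# Assumes v1.0-mini is extracted under /data/sets/nuscenes; adjust as needed.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)

suspects = []
for sample in nusc.sample:
    # Gather all vehicle annotations present in this sample.
    anns = [nusc.get('sample_annotation', tok) for tok in sample['anns']]
    vehicles = [a for a in anns if a['category_name'].startswith('vehicle.')]
    for i, a in enumerate(vehicles):
        for b in vehicles[i + 1:]:
            # 2D (bird's-eye-view) distance between box centers.
            dist = np.linalg.norm(np.array(a['translation'][:2]) -
                                  np.array(b['translation'][:2]))
            # Heuristic: centers closer than half the longer box's length
            # suggest one box may encapsulate both vehicles. The factor
            # 0.5 is an arbitrary placeholder. (size is [width, length, height].)
            if dist < 0.5 * max(a['size'][1], b['size'][1]):
                suspects.append((sample['token'], a['token'], b['token'], dist))

print(f'{len(suspects)} candidate overlapping vehicle box pairs')
```

The same loop could be pointed at a team’s detection output instead of the ground truth to surface the classification confusions (e.g. pedestrian vs. police officer) described above.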
Does anyone have a list of these kinds of “difficulties” and their
scenes and/or samples? Or would any team be willing to share scenes
and/or samples that their algorithms and methods found difficult to
process?
Thanks,
Vishnu
Car Can Explain Team