Hey,

Thanks for releasing this data set.

Nice work from what I can tell so far.

But I’m a little confused about how you create the recall vs. precision curves.

In your documentation it says:

Specifically, we match predictions with the ground truth objects that have the smallest center-distance up to a certain threshold. For a given match threshold we calculate average precision (AP) by integrating the recall vs precision curve for recalls and precisions > 0.1. We finally average over match thresholds of {0.5, 1, 2, 4} meters and compute the mean across classes.
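To make my confusion concrete, here is roughly how I imagine a single evaluation works for one fixed match threshold. This is a minimal sketch with made-up toy data; it assumes greedy matching of each prediction to its nearest unmatched ground truth by center distance (ignoring confidence scores, since I don't see where they come in), and all names here are my own:

```python
import numpy as np

def precision_recall_single(pred_centers, gt_centers, dist_thresh):
    """One precision/recall pair for a fixed match threshold.

    pred_centers, gt_centers: (N, 2) arrays of x/y positions in meters.
    dist_thresh: center-distance match threshold in meters.
    """
    matched_gt = set()
    tp = 0
    for p in pred_centers:
        dists = np.linalg.norm(gt_centers - p, axis=1)
        # Greedily take the nearest ground truth not yet matched.
        for i in np.argsort(dists):
            if i in matched_gt:
                continue
            if dists[i] <= dist_thresh:
                matched_gt.add(i)
                tp += 1
            break
    fp = len(pred_centers) - tp   # unmatched predictions
    fn = len(gt_centers) - tp     # unmatched ground truths
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: 3 predictions, 2 ground truths, 2 m threshold.
preds = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 10.0]])
gts = np.array([[0.5, 0.0], [5.0, 5.5]])
p, r = precision_recall_single(preds, gts, dist_thresh=2.0)
print(p, r)  # one precision value, one recall value
```

As far as I can see, this procedure yields exactly one (precision, recall) point per threshold, not a curve, which is what prompts my question below.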

The paper says something similar.

As I understand it, you get a single precision value and a single recall value for one algorithm (with a fixed set of parameters) and one distance threshold during matching.

To get several points on a curve, you would need a parameter you can sweep.

Am I on the wrong track here, or if not, which parameter do you tune?