diff --git a/DATASET_ZOO.md b/DATASET_ZOO.md
index d19be18..fc03a0c 100644
--- a/DATASET_ZOO.md
+++ b/DATASET_ZOO.md
@@ -1,9 +1,9 @@
 # Dataset Zoo
 We provide several relevant datasets for training and evaluating the Joint Detection and Embedding (JDE) model.
 Annotations are provided in a unified format. If you want to use these datasets, please **follow their licenses**,
-and if you use these datasets in your research, please cite the original work (you can find the bibtex in the bottom).
+and if you use these datasets in your research, please cite the original work (you can find the BibTeX at the bottom).
 ## Data Format
-All the dataset has the following structrue:
+All the datasets have the following structure:
 ```
 Caltech
 |——————images
@@ -15,14 +15,14 @@ Caltech
 |—————— ...
 └——————0000N.txt
 ```
 
-Every image corresponds to an annation text. Given an image path,
+Every image corresponds to an annotation text. Given an image path,
 the annotation text path can be easily generated by replacing the string
 `images` with `labels_with_ids` and replacing `.jpg` with `.txt`. In the
 annotation text, each line is a bounding box and has the following format,
 ```
 [class] [identity] [x_center] [y_center] [width] [height]
 ```
-The field `[class]` is not used in this project since we only care about a single class, i.e., pedestrian here.
+The field `[class]` should be `0`. Only single-class multi-object tracking is supported in this version.
 The field `[identity]` is an integer from `0` to `num_identities - 1`,
 or `-1` if this box has no identity annotation.
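For reference, the path convention and label format described in the patched text can be sketched in Python. The helper names below are illustrative only and are not part of the JDE codebase:

```python
# Sketch of the annotation conventions from DATASET_ZOO.md:
# an image path maps to a label path by swapping `images` ->
# `labels_with_ids` and `.jpg` -> `.txt`, and each label line is
# `[class] [identity] [x_center] [y_center] [width] [height]`.

def image_to_label_path(image_path: str) -> str:
    """Derive the annotation text path from an image path."""
    return image_path.replace("images", "labels_with_ids").replace(".jpg", ".txt")

def parse_label_line(line: str):
    """Parse one annotation line into typed fields.

    `class` should be 0 (single-class tracking); `identity` is an
    integer id, or -1 when the box has no identity annotation.
    """
    cls, identity, xc, yc, w, h = line.split()
    return int(cls), int(identity), float(xc), float(yc), float(w), float(h)

print(image_to_label_path("Caltech/images/00001.jpg"))
# Caltech/labels_with_ids/00001.txt
```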