diff --git a/DATASET_ZOO.md b/DATASET_ZOO.md
index d19be18..fc03a0c 100644
--- a/DATASET_ZOO.md
+++ b/DATASET_ZOO.md
@@ -1,9 +1,9 @@
# Dataset Zoo
We provide several relevant datasets for training and evaluating the Joint Detection and Embedding (JDE) model.
Annotations are provided in a unified format. If you want to use these datasets, please **follow their licenses**,
-and if you use these datasets in your research, please cite the original work (you can find the bibtex in the bottom).
+and if you use these datasets in your research, please cite the original work (you can find the BibTeX at the bottom).
## Data Format
-All the dataset has the following structrue:
+All datasets have the following structure:
```
Caltech
|——————images
@@ -15,14 +15,14 @@ Caltech
|—————— ...
└——————0000N.txt
```
-Every image corresponds to an annation text. Given an image path,
+Every image corresponds to an annotation text file. Given an image path,
the annotation text path can be easily generated by replacing the string `images` with `labels_with_ids` and replacing `.jpg` with `.txt`.
In the annotation text, each line is a bounding box and has the following format,
```
[class] [identity] [x_center] [y_center] [width] [height]
```
-The field `[class]` is not used in this project since we only care about a single class, i.e., pedestrian here.
+The field `[class]` should be `0`. Only single-class multi-object tracking is supported in this version.
The field `[identity]` is an integer from `0` to `num_identities - 1`, or `-1` if this box has no identity annotation.
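As a sketch of how this layout can be consumed (the helper names below are illustrative, not part of the repo), the label path and box fields for an image might be derived like this:

```python
def image_to_label_path(img_path):
    # Derive the annotation path via the substitutions described above.
    return img_path.replace('images', 'labels_with_ids').replace('.jpg', '.txt')

def parse_label_line(line):
    # One line: [class] [identity] [x_center] [y_center] [width] [height].
    # Box coordinates in this format are floats (typically normalized to [0, 1]).
    fields = line.split()
    cls, identity = int(fields[0]), int(fields[1])
    xc, yc, w, h = map(float, fields[2:6])
    return cls, identity, (xc, yc, w, h)

print(image_to_label_path('Caltech/images/00001.jpg'))   # Caltech/labels_with_ids/00001.txt
print(parse_label_line('0 7 0.5 0.5 0.2 0.4'))           # (0, 7, (0.5, 0.5, 0.2, 0.4))
```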
diff --git a/README.md b/README.md
index 4ac295c..cff3936 100644
--- a/README.md
+++ b/README.md
@@ -10,12 +10,14 @@ We hope this repo will help researches/engineers to develop more practical MOT s
## Requirements
* Python 3.6
-* [Pytorch](https://pytorch.org) >= 1.0.1
-* [syncbn](https://github.com/ytoon/Synchronized-BatchNorm-PyTorch) (Optional, compile and place it under utils/syncbn, or simply replace with nn.BatchNorm [here](https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/models.py#L12))
-* [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) (Their GPU NMS is used in this project)
+* [PyTorch](https://pytorch.org) >= 1.2.0
* python-opencv
-* ffmpeg (Optional, used in the video demo)
-* [py-motmetrics](https://github.com/cheind/py-motmetrics) (Simply `pip install motmetrics`)
+* [py-motmetrics](https://github.com/cheind/py-motmetrics) (`pip install motmetrics`)
+* cython-bbox (`pip install cython_bbox`)
+* (Optional) ffmpeg (used in the video demo)
+* (Optional) [syncbn](https://github.com/ytoon/Synchronized-BatchNorm-PyTorch) (compile and place it under utils/syncbn, or simply replace with nn.BatchNorm2d [here](https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/models.py#L12))
+* ~~[maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark)~~ (no longer required; its GPU NMS was previously used in this project)
+
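Assuming PyTorch is already installed, the pip-installable requirements above can be fetched in one go (note the pip package for OpenCV is `opencv-python`):

```shell
pip install opencv-python motmetrics cython_bbox
```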
## Video Demo
diff --git a/utils/utils.py b/utils/utils.py
index 5694e24..17ee5f8 100644
--- a/utils/utils.py
+++ b/utils/utils.py
@@ -414,13 +414,6 @@ def pooling_nms(heatmap, kernel=1):
keep = (hmax == heatmap).float()
return keep * heatmap
-def soft_nms(dets, sigma=0.5, Nt=0.3, threshold=0.05, method=1):
- keep = cpu_soft_nms(np.ascontiguousarray(dets, dtype=np.float32),
- np.float32(sigma), np.float32(Nt),
- np.float32(threshold),
- np.uint8(method))
- return keep
-
def non_max_suppression(prediction, conf_thres=0.5, nms_thres=0.4, method='standard'):
"""
Removes detections with lower object confidence score than 'conf_thres'
@@ -431,7 +424,7 @@ def non_max_suppression(prediction, conf_thres=0.5, nms_thres=0.4, method='stand
prediction,
conf_thres,
nms_thres,
- method = 'standard', 'fast', 'soft_linear' or 'soft_gaussian'
+ method = 'standard' or 'fast'
"""
output = [None for _ in range(len(prediction))]
@@ -457,12 +450,6 @@ def non_max_suppression(prediction, conf_thres=0.5, nms_thres=0.4, method='stand
# Non-maximum suppression
if method == 'standard':
nms_indices = nms(pred[:, :4], pred[:, 4], nms_thres)
- elif method == 'soft_linear':
- dets = pred[:, :5].clone().contiguous().data.cpu().numpy()
- nms_indices = soft_nms(dets, Nt=nms_thres, method=0)
- elif method == 'soft_gaussian':
- dets = pred[:, :5].clone().contiguous().data.cpu().numpy()
- nms_indices = soft_nms(dets, Nt=nms_thres, method=1)
elif method == 'fast':
nms_indices = fast_nms(pred[:, :4], pred[:, 4], iou_thres=nms_thres, conf_thres=conf_thres)
else:
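The `'standard'` branch above calls an external `nms`; conceptually it is greedy IoU suppression: keep the highest-scoring box, drop every remaining box whose overlap with it exceeds the threshold, and repeat. A minimal NumPy sketch of that algorithm (illustrative only, not the repo's GPU implementation; boxes are `(x1, y1, x2, y2)`):

```python
import numpy as np

def nms_standard(boxes, scores, iou_thres=0.4):
    """Greedy NMS; returns the indices of the kept boxes, best score first."""
    order = scores.argsort()[::-1]  # candidate indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the kept box with each remaining candidate
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thres]  # discard candidates that overlap too much
    return keep
```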