Fork of https://github.com/Zhongdao/Towards-Realtime-MOT for surfacing suspicion (PhD research project).

Towards-Realtime-MOT

NEWS:

  • [2021.08.19] A pure C++ re-implementation by samylee. Helpful if you want to deploy JDE in your own project!
  • [2021.06.01] A nice re-implementation (and document) by Baidu PaddlePaddle team.
  • [2020.07.14] Our paper is accepted to ECCV 2020!
  • [2020.01.29] More models uploaded! The fastest one runs at around 38 FPS!
  • [2019.10.11] Training and evaluation data uploaded! Please see DATASET_ZOO.md for details.
  • [2019.10.01] Demo code and pre-trained model released!

Introduction

This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our ECCV 2020 paper. Using this repo, you can simply achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge, at a near real-time speed of 22~38 FPS (note that this speed is for the entire system, including the detection step!).

We hope this repo will help researchers and engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models, and evaluation methods to ensure a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.
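To make the joint-learning idea concrete, here is a minimal, hypothetical PyTorch sketch of a joint head: one shared feature map feeding both a detection branch and an appearance-embedding branch. The module and parameter names below are illustrative only, not the actual code in this repo.

import torch
import torch.nn as nn

class JointHeadSketch(nn.Module):
    """Toy illustration of joint detection + embedding (NOT this repo's JDE code)."""
    def __init__(self, in_ch=256, num_anchors=4, emb_dim=512):
        super().__init__()
        # Detection branch: 4 box coordinates + 1 objectness score per anchor.
        self.det_head = nn.Conv2d(in_ch, num_anchors * 5, kernel_size=1)
        # Embedding branch: one appearance vector per feature-map location.
        self.emb_head = nn.Conv2d(in_ch, emb_dim, kernel_size=1)

    def forward(self, shared_feat):
        det_out = self.det_head(shared_feat)  # supervised by detection losses
        emb_out = self.emb_head(shared_feat)  # supervised by an identity loss
        return det_out, emb_out

feat = torch.randn(1, 256, 19, 34)  # e.g. a 608x1088 input downsampled 32x
det, emb = JointHeadSketch()(feat)
print(det.shape, emb.shape)  # (1, 20, 19, 34) and (1, 512, 19, 34)

Both branches share the same backbone computation, which is what makes the tracker fast: detection and appearance features come from a single forward pass.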

Requirements

  • Python 3.6
  • PyTorch >= 1.2.0
  • python-opencv (pip install opencv-python)
  • py-motmetrics (pip install motmetrics)
  • cython-bbox (pip install cython_bbox)
  • (Optional) ffmpeg (used in the video demo)
  • (Optional) syncbn (compile and place it under utils/syncbn, or simply replace it with nn.BatchNorm)
  • maskrcnn-benchmark (its GPU NMS is used in this project)
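A hedged example of installing the pip-installable dependencies above (the exact PyTorch command depends on your CUDA version; syncbn and maskrcnn-benchmark must still be compiled separately as noted in the list):

pip install "torch>=1.2.0"    # or follow the PyTorch install guide for your CUDA version
pip install cython numpy      # cython_bbox builds from source and needs these first
pip install opencv-python motmetrics cython_bbox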

Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights \
               --output-format video --output-root path/to/output/root
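For example, with an input clip and the JDE-1088x608 weights (both paths below are illustrative placeholders):

python demo.py --input-video videos/example.mp4 --weights weights/jde_1088x608.pt \
               --output-format video --output-root results/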

Docker demo example

docker build -t towards-realtime-mot docker/

docker run --rm --gpus all -v $(pwd)/:/Towards-Realtime-MOT -ti towards-realtime-mot /bin/bash
cd /Towards-Realtime-MOT
python demo.py --input-video path/to/your/input/video --weights path/to/model/weights \
               --output-format video --output-root path/to/output/root

Dataset zoo

Please see DATASET_ZOO.md for detailed description of the training/evaluation datasets.

Pretrained model and baseline models

Darknet-53 ImageNet pretrained model: [DarkNet Official]

Trained models with different input resolutions:

Model        | MOTA (%) | IDF1 (%) | IDS  | FP   | FN    | FPS  | Link
JDE-1088x608 | 73.1     | 68.9     | 1312 | 6593 | 21788 | 22.2 | [Google] [Baidu]
JDE-864x480  | 70.8     | 65.8     | 1279 | 5653 | 25806 | 30.3 | [Google] [Baidu]
JDE-576x320  | 63.7     | 63.3     | 1307 | 6657 | 32794 | 37.9 | [Google] [Baidu]

The performance is measured on the MOT-16 training set, for reference only. Running speed is measured on an Nvidia Titan Xp GPU. For a more comprehensive comparison with other methods, you can test on the MOT-16 test set and submit your result to the MOT-16 benchmark. Note that results should be submitted to the private detector track.

Test on MOT-16 Challenge

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights

By default the script runs evaluation on the MOT-16 training set. If you want to evaluate on the test set, add --test-mot16 to the command line. Results are saved as text files in $DATASET_ROOT/results/*.txt. You can also add the --save-images or --save-videos flag to obtain visualized results, which are saved in $DATASET_ROOT/outputs/.
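For example, a hypothetical full invocation that evaluates on the test set and saves visualized videos (the weights path is a placeholder):

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights \
                --test-mot16 --save-videos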

Training instruction

  • Download the training datasets.
  • Edit cfg/ccmcpe.json to configure the training/validation combinations. A dataset is represented by an image list; see data/*.train for an example.
  • Run the training script:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py 

We use 8x Nvidia Titan Xp GPUs to train the model, with a batch size of 32. You can adjust the batch size (and the learning rate with it) according to how many GPUs you have. You can also train with a smaller image size, which yields faster inference. Note, however, that the image dimensions should be multiples of 32 (the down-sampling rate).
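As a back-of-the-envelope sketch of the two rules above, linear learning-rate scaling with batch size and snapping image dimensions to multiples of 32 (the base learning rate here is an assumption; check train.py and cfg/ for the real defaults):

def scale_lr(base_lr, base_batch, new_batch):
    # Linear scaling rule: keep lr / batch_size roughly constant.
    return base_lr * new_batch / base_batch

def round_to_stride(size, stride=32):
    # Snap a dimension to the nearest multiple of the down-sampling rate.
    return max(stride, round(size / stride) * stride)

print(scale_lr(0.01, 32, 16))                      # 0.005 for half the batch size
print(round_to_stride(870), round_to_stride(490))  # 864 480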

Train with custom datasets

Adding custom datasets is quite simple: all you need to do is organize your annotation files in the same format as our training sets. Please refer to DATASET_ZOO.md for the dataset format.
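Since a dataset is ultimately represented by a plain-text image list (one path per line, as in data/*.train), a small sketch like the following could generate one for a custom dataset; the directory layout and file names are hypothetical:

from pathlib import Path

# Write an image list in the data/*.train style: one image path per line.
image_dir = Path("/data/my_dataset/images")  # hypothetical location
with open("data/my_dataset.train", "w") as f:
    for img in sorted(image_dir.glob("*.jpg")):
        f.write(str(img) + "\n")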

Related resources

  • FairMOT: An improved method based on the JDE framework, with SOTA performance.
  • CSTrack: Better disentangled detection/embedding heads for JDE.
  • JDE-Paddle: A nice re-implementation (and document) by Baidu PaddlePaddle team.
  • JDE-CPP: A pure C++ re-implementation by samylee. Helpful if you want to deploy JDE in your own project!

Acknowledgement

A large portion of the code is borrowed from ultralytics/yolov3 and longcw/MOTDT. Many thanks for their wonderful work!

Citation

If you find this repo useful in your project or research, please consider citing it:

@inproceedings{wang2019towards,
  title={Towards Real-Time Multi-Object Tracking},
  author={Wang, Zhongdao and Zheng, Liang and Liu, Yixuan and Wang, Shengjin},
  booktitle={The European Conference on Computer Vision (ECCV)},
  year={2020}
}