Fork of https://github.com/Zhongdao/Towards-Realtime-MOT for surfacing suspicion (PhD research project)

Towards-Realtime-MOT

NEWS:

  • [2020.01.29] More models uploaded! The fastest one runs at around 38 FPS!
  • [2019.10.11] Training and evaluation data uploaded! Please see DATASET_ZOO.md for details.
  • [2019.10.01] Demo code and pre-trained model released!

Introduction

This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our arXiv preprint. Using this repo, you can simply achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge, at a near real-time speed of 22~38 FPS (note that this speed is for the entire system, including the detection step!).

We hope this repo will help researchers and engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models, and evaluation methods to make a level playing field. For application usage, we also provide a small video demo that takes raw videos as input, without any bells and whistles.

Requirements

  • Python 3.6
  • Pytorch >= 1.2.0
  • python-opencv
  • py-motmetrics (pip install motmetrics)
  • cython-bbox (pip install cython_bbox)
  • (Optional) ffmpeg (used in the video demo)
  • (Optional) syncbn (compile and place it under utils/syncbn, or simply replace with nn.BatchNorm here)
  • maskrcnn-benchmark (Their GPU NMS is used in this project)

Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights \
               --output-format video --output-root path/to/output/root

Docker demo example

docker build -t towards-realtime-mot docker/

docker run --rm --gpus all -v $(pwd)/:/Towards-Realtime-MOT -ti towards-realtime-mot /bin/bash
cd /Towards-Realtime-MOT;
python demo.py --input-video path/to/your/input/video --weights path/to/model/weights \
               --output-format video --output-root path/to/output/root

Dataset zoo

Please see DATASET_ZOO.md for a detailed description of the training/evaluation datasets.

Pretrained model and baseline models

Darknet-53 ImageNet pretrained model: [DarkNet Official]

Trained models with different input resolutions:

Model MOTA IDF1 IDS FP FN FPS Link
JDE-1088x608 74.8 67.3 1189 5558 21505 22.2 [Google] [Baidu]
JDE-864x480 70.8 65.8 1279 5653 25806 30.3 [Google] [Baidu]
JDE-576x320 63.7 63.3 1307 6657 32794 37.9 [Google] [Baidu]

The performance is measured on the MOT-16 training set, for reference only. Running speed is measured on an Nvidia Titan Xp GPU. For a more comprehensive comparison with other methods, you can test on the MOT-16 test set and submit your results to the MOT-16 benchmark. Note that results should be submitted to the private detector track.

Test on MOT-16 Challenge

python track.py --cfg ./cfg/yolov3_1088x608.cfg --weights /path/to/model/weights

By default the script runs evaluation on the MOT-16 training set. If you want to evaluate on the test set, add --test-mot16 to the command line. Results are saved as text files in $DATASET_ROOT/results/*.txt. You can also add the --save-images or --save-videos flag to obtain visualized results, which are saved in $DATASET_ROOT/outputs/.

Training instruction

  • Download the training datasets.
  • Edit cfg/ccmcpe.json to configure the training/validation combinations. A dataset is represented by an image list; see data/*.train for an example.
  • Run the training script:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py 
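For orientation, cfg/ccmcpe.json maps dataset names to the image lists mentioned above. The sketch below is illustrative only: the root path and dataset names are placeholders, and the exact keys should be checked against the file shipped with the repo.

```json
{
    "root": "/path/to/dataset/root",
    "train": {
        "mot17": "./data/mot17.train",
        "caltech": "./data/caltech.train"
    },
    "test": {
        "mot16": "./data/mot16.train"
    }
}
```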

We use 8x Nvidia Titan Xp GPUs to train the model, with a batch size of 32. You can adjust the batch size (and the learning rate together) according to how many GPUs you have. You can also train with a smaller image size, which brings faster inference. Note, however, that the image size should be a multiple of 32 (the down-sampling rate).
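The "adjust the batch size and learning rate together" advice is the usual linear scaling rule. A minimal sketch, where the base learning rate is an assumption for illustration and not the repo's actual default:

```python
def scaled_hyperparams(n_gpus, base_lr=0.01, ref_gpus=8, ref_batch=32):
    """Scale batch size and learning rate linearly with GPU count,
    relative to the reference setup (8 GPUs, batch size 32)."""
    batch = ref_batch * n_gpus // ref_gpus
    lr = base_lr * batch / ref_batch
    return batch, lr

def valid_input_size(width, height, stride=32):
    """Input dimensions should be multiples of the down-sampling rate."""
    return width % stride == 0 and height % stride == 0

print(scaled_hyperparams(2))        # training on 2 GPUs instead of 8
print(valid_input_size(864, 480))   # one of the released resolutions
```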

Train with custom datasets

Adding custom datasets is quite simple: all you need to do is organize your annotation files in the same format as our training sets. Please refer to DATASET_ZOO.md for the dataset format.
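The authoritative format specification is DATASET_ZOO.md; the helper below is a hypothetical sketch that assumes the common JDE-style convention of one label line per box, with a class id, a track identity, and center/size coordinates normalized by the image dimensions.

```python
def to_jde_label(identity, box, img_w, img_h, cls=0):
    """Convert a pixel-space box (x_left, y_top, w, h) into an assumed
    JDE-style label line: class, track identity, then normalized
    center-x, center-y, width, height."""
    x, y, w, h = box
    cx = (x + w / 2) / img_w   # normalized box center
    cy = (y + h / 2) / img_h
    return f"{cls} {identity} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: pedestrian with track id 7 in a 1088x608 frame.
print(to_jde_label(7, (100, 200, 50, 120), 1088, 608))
```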

Acknowledgement

A large portion of the code is borrowed from ultralytics/yolov3 and longcw/MOTDT; many thanks for their wonderful work!