Fork of https://github.com/Zhongdao/Towards-Realtime-MOT for surfacing suspicion (PhD research project).

Towards-Realtime-MOT

NEWS:

  • [2019.10.11] Training and evaluation data uploaded! Please see DATASET_ZOO.md for details.
  • [2019.10.01] Demo code and pre-trained model released!

Introduction

This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our arXiv preprint. Using this repo, you can achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge, at a near real-time speed of 18~24 FPS (note that this speed is for the entire system, including the detection step!).
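For intuition, the sketch below illustrates the core idea of training both tasks in one network: a joint loss whose per-task weights are learned as task uncertainties (the released model name carries an "uncertainty" suffix for this reason). This is a minimal sketch and all names in it are illustrative assumptions; the actual formulation is in the paper and in train.py.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    # Combines a detection loss and an embedding loss via learnable
    # log-variances, so the task balance is learned rather than hand-tuned.
    def __init__(self):
        super().__init__()
        self.s_det = nn.Parameter(torch.zeros(1))  # log-variance, detection task
        self.s_emb = nn.Parameter(torch.zeros(1))  # log-variance, embedding task

    def forward(self, loss_det, loss_emb):
        return 0.5 * (torch.exp(-self.s_det) * loss_det + self.s_det
                      + torch.exp(-self.s_emb) * loss_emb + self.s_emb)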

We hope this repo will help researchers and engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models, and evaluation methods to make a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.

Requirements

  • Python 3.6
  • PyTorch >= 1.0.1
  • syncbn (optional; compile and place it under utils/syncbn, or simply replace it with nn.BatchNorm2d in the model definition)
  • maskrcnn-benchmark (its GPU NMS is used in this project)
  • opencv-python
  • ffmpeg (optional; used in the video demo)
  • py-motmetrics (simply pip install motmetrics)
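Before running anything, a quick sanity check of the environment can save time. A minimal sketch: it only verifies that the packages above import and reports their versions.

import torch
import cv2          # pip install opencv-python
import motmetrics   # pip install motmetrics

print('PyTorch:', torch.__version__)                 # should be >= 1.0.1
print('CUDA available:', torch.cuda.is_available())
print('OpenCV:', cv2.__version__)
print('motmetrics imported OK')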

Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
               --output-format video --output-root path/to/output/root
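When --output-format video is used, the rendered result lands under the output root and can be inspected with OpenCV. A minimal sketch; the output file name here is an assumption, so check your --output-root directory for the actual name.

import cv2

cap = cv2.VideoCapture('path/to/output/root/result.mp4')  # assumed file name
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
print(f'{n_frames} frames at {fps:.1f} FPS')
cap.release()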

Dataset zoo

Please see DATASET_ZOO.md for a detailed description of the training/evaluation datasets.

Pretrained model and baseline models

Darknet-53 ImageNet pretrained: [DarkNet Official]

JDE-1088x608-uncertainty: [Google Drive] [Baidu NetDisk]
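A minimal loading sketch follows; the cfg path, weight file name, and checkpoint layout are assumptions, so see track.py and demo.py for how the repo actually restores weights.

import torch
from models import Darknet  # model definition from this repo

model = Darknet('cfg/yolov3.cfg')                     # assumed cfg path
ckpt = torch.load('jde_1088x608_uncertainty.pt', map_location='cpu')
model.load_state_dict(ckpt['model'])                  # assumed checkpoint key
model.eval()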

Test on MOT-16 Challenge

Training instruction

Train with custom datasets

Acknowledgement

A large portion of the code is borrowed from ultralytics/yolov3 and longcw/MOTDT; many thanks for their wonderful work!