# Towards-Realtime-MOT
**NEWS:**
- **[2020.07.14]** Our paper is accepted to ECCV 2020!
- **[2020.01.29]** More models uploaded! The fastest one runs at around **38 FPS**!
- **[2019.10.11]** Training and evaluation data uploaded! Please see [DATASET_ZOO.md](https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/DATASET_ZOO.md) for details.
- **[2019.10.01]** Demo code and pre-trained model released!
## Introduction
This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our [ECCV 2020 paper](https://arxiv.org/pdf/1909.12605v1.pdf). Using this repo, you can achieve **MOTA 64%+** on the "private" protocol of the [MOT-16 challenge](https://motchallenge.net/tracker/JDE), at a near real-time speed of **22~38 FPS** (note that this speed is for the entire system, including the detection step!).
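The shared-network idea above can be illustrated with a minimal NumPy sketch: one set of backbone features feeds two parallel heads, one for detection and one for appearance embeddings. All names, shapes, and the linear heads here are illustrative stand-ins, not the repo's actual architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the shared backbone's output, flattened to
# (num_anchors, feature_dim). Shapes are illustrative only.
num_anchors, feat_dim, embed_dim = 4, 8, 6
shared_features = rng.standard_normal((num_anchors, feat_dim))

# Two parallel heads read the SAME features in one forward pass:
# a detection head regressing boxes + confidence, and an embedding
# head producing an appearance vector per anchor.
w_det = rng.standard_normal((feat_dim, 5))        # (x, y, w, h, conf)
w_emb = rng.standard_normal((feat_dim, embed_dim))

detections = shared_features @ w_det
embeddings = shared_features @ w_emb

# L2-normalize embeddings so cosine similarity can be used to
# associate detections with existing tracks.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

print(detections.shape, embeddings.shape)  # (4, 5) (4, 6)
```

Because both heads share one backbone pass, detection and re-identification features come at roughly the cost of a detector alone, which is what enables the near real-time speeds quoted above.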
We hope this repo will help researchers and engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models, and evaluation methods to ensure a level playing field. For application usage, we also provide a small video demo that takes raw videos as input, without any bells and whistles.