# AlphaPose Docker
Create a Docker image for [AlphaPose](http://mvig.org/research/alphapose.html) ([code on GitHub](https://github.com/MVIG-SJTU/AlphaPose)).
## Building the image
```bash
docker build --tag alphapose .
```
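Once the build completes, a quick smoke test (a hedged sketch, assuming the image ships a CUDA-enabled PyTorch, which AlphaPose requires) is to check that the container can see a GPU:

```bash
# Optional check: should print "True" if the NVIDIA runtime and drivers are set up correctly
docker run --rm --gpus all alphapose python3 -c "import torch; print(torch.cuda.is_available())"
```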
## Usage
Before the image can be used, you need to download the required auxiliary models, as described in [the AlphaPose installation guide](https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/INSTALL.md#models); a directory-layout sketch follows the list below.
1. The YOLOv3 object detector weights can be placed in the `detector/yolo/data` directory.
2. (Optional) YOLOX models go in the `detector/yolox/data` directory.
3. A pretrained AlphaPose model can be placed in the `pretrained_models` directory. See the [Model Zoo](https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/MODEL_ZOO.md) for the various options.
4. For pose tracking, see the [pose tracking module](https://github.com/MVIG-SJTU/AlphaPose/tree/master/trackers). Make sure to add the necessary folder as a volume to the `docker run` command.
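
The host directories mounted into the container by the `docker run` command below mirror the AlphaPose repository layout. A minimal setup sketch, assuming the file names used by the installation guide and Model Zoo (`yolov3-spp.weights`, `fast_res50_256x192.pth`):

```bash
# Create the host directories that will be bind-mounted into the container
mkdir -p detector/yolo/data detector/yolox/data pretrained_models out

# After downloading, the layout should look roughly like:
#   detector/yolo/data/yolov3-spp.weights      # YOLOv3 detector weights
#   pretrained_models/fast_res50_256x192.pth   # pretrained AlphaPose model
```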
### Running
```bash
docker run --rm --gpus all \
  -v `pwd`/out:/out \
  -v `pwd`/detector/yolox/data:/build/AlphaPose/detector/yolox/data \
  -v `pwd`/detector/yolo/data:/build/AlphaPose/detector/yolo/data \
  -v `pwd`/pretrained_models:/build/AlphaPose/pretrained_models \
  alphapose python3 scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint pretrained_models/fast_res50_256x192.pth \
    --gpus 0 --indir examples/demo/ --save_img --vis_fast --outdir /out
```
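To run on your own footage rather than the bundled `examples/demo/` images, mount an input directory as well and point `--indir` at it. A hedged variant, assuming your images live in `./input` on the host (for videos, the upstream AlphaPose docs describe a `--video` flag that works the same way):

```bash
# Hypothetical example: mount ./input from the host and run inference on its contents
docker run --rm --gpus all \
  -v `pwd`/input:/input \
  -v `pwd`/out:/out \
  -v `pwd`/detector/yolo/data:/build/AlphaPose/detector/yolo/data \
  -v `pwd`/pretrained_models:/build/AlphaPose/pretrained_models \
  alphapose python3 scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint pretrained_models/fast_res50_256x192.pth \
    --gpus 0 --indir /input --save_img --outdir /out
```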