# Looping AlphaPose Training
Pulling the traditional media art trick: can we loop the training of AlphaPose? What if we train it on COCO data, run it on the COCO images, train it on those results, run it on the COCO images again, and so on, ad infinitum?
## Installation
```bash
poetry install
git submodule init
git submodule update
```
Then make sure alphapose-docker is set up:
```bash
cd alphapose-docker
docker build --tag alphapose .
```
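To verify that the image was built:
```bash
# Lists the locally available image tagged "alphapose" in the previous step.
docker image ls alphapose
```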
As mentioned in the README from [alphapose-docker](https://git.rubenvandeven.com/security_vision/alphapose-docker) it is necessary to download some external models before usage, as per [the AlphaPose installation guide](https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/INSTALL.md#models).
1. The YOLOv3 object detector weights go in the `alphapose-docker/detector/yolo/data` directory.
2. (Optional) YOLOX models go in the `alphapose-docker/detector/yolox/data` directory.
3. A pretrained AlphaPose model can be placed in the `alphapose-docker/pretrained_models` directory. See their [Model Zoo](https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/MODEL_ZOO.md) for the various options.
4. For pose tracking, see the [pose tracking module](https://github.com/MVIG-SJTU/AlphaPose/tree/master/trackers). Make sure to add the necessary folder as a volume to the `docker run` command (see the sketch after this list).
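A minimal sketch of mounting the downloaded weights when running the container. The host paths match the directories above; the container-side paths and flags are assumptions and should be checked against the [alphapose-docker](https://git.rubenvandeven.com/security_vision/alphapose-docker) README:
```bash
# Sketch only: container-side paths are assumed, not taken from the image.
# --gpus all requires the NVIDIA container toolkit.
docker run --rm --gpus all \
  -v "$(pwd)/alphapose-docker/pretrained_models:/build/AlphaPose/pretrained_models" \
  -v "$(pwd)/alphapose-docker/detector/yolo/data:/build/AlphaPose/detector/yolo/data" \
  -v "$(pwd)/alphapose-docker/trackers/weights:/build/AlphaPose/trackers/weights" \
  alphapose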
Then download the COCO 2017 keypoint dataset to the data directory:
```bash
./downloadCOCO.sh
```
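For reference, this is roughly what the script is assumed to fetch: the official COCO 2017 images and person-keypoint annotations. The exact target paths are determined by `downloadCOCO.sh` itself:
```bash
# Assumed equivalent of downloadCOCO.sh, using the official COCO 2017 URLs.
mkdir -p data && cd data
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
for z in train2017.zip val2017.zip annotations_trainval2017.zip; do
  unzip -q "$z"
done
```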
## Usage
The scripts can then be invoked with `poetry run python <script>.py`.
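For example (the script name below is hypothetical; use one of the scripts in the repository root):
```bash
# Hypothetical script name; substitute an actual script from this repository.
poetry run python feedback_loop.py
```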