Compare commits: main...animation_
15 commits:

389da6701f
abc80727da
cc952424e0
627b320ec7
204f8a836b
0612aa2048
a2ced9646f
f2d71a9da3
dd10ce13af
b6360c09a3
e6187964d3
0af5030845
9284ce8849
a0c63c4929
2e2bd76b05
16 changed files with 6131 additions and 309 deletions
README.md (new file, 24 lines)
@@ -0,0 +1,24 @@
# Trajectory Prediction Video installation

## Install

* Run `bash build_opencv_with_gstreamer.sh` to build OpenCV with GStreamer support
* Use pyenv + poetry to install

## How to

> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for the camera calibration and homography tools that are needed for this repo.

These are roughly the steps to go from data gathering to training:

1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
    * Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) the camera matrix, lens distortion, and image dimensions, and (2+3) the homography
3. Run the tracker, e.g. `poetry run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
    * Note: you can run this right off the camera stream: `poetry run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`; each recording adds a new file to the `raw` folder.
4. Parse tracker data to Trajectron format: `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`. Optionally, smooth tracks with `--smooth-tracks`.
5. Train the Trajectron model: `poetry run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
6. Then run!
    * On a video file (you can use a wildcard): `DISPLAY=:1 poetry run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on a local monitor)
    * or on the RTSP stream, which uses GStreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
    * To have just a single trajectory pulled from the distribution, use `--full-dist`. Also try `--z_mode`.
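Steps 2 and 3 above consume plain JSON files. A minimal sketch of how a calibration + homography pair maps an image point into map space, mirroring what `trap/animation_renderer.py` (below) does with `cv2.undistortPoints` and `cv2.perspectiveTransform`; the JSON key names (`camera_matrix`, `dist`) and the sample point are assumptions for illustration, check the actual traptools output:

```python
import json

import cv2
import numpy as np

# Hypothetical key names; adjust to whatever traptools writes out.
with open("../DATASETS/NAME/calibration.json") as fp:
    calib = json.load(fp)
mtx = np.array(calib["camera_matrix"])   # 3x3 camera intrinsics
dist = np.array(calib["dist"])           # lens distortion coefficients

with open("../DATASETS/NAME/homography.json") as fp:
    H = np.array(json.load(fp))          # 3x3 image-to-map homography

# Undistort first so that straight lines are actually straight, then project.
img_pt = np.array([[[100.0, 200.0]]], dtype="float32")  # shape (N, 1, 2)
undistorted = cv2.undistortPoints(img_pt, mtx, dist, P=mtx)
map_pt = cv2.perspectiveTransform(undistorted, H)
print(map_pt.reshape(-1, 2))
```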
build_opencv_with_gstreamer.sh (new file, 45 lines)
@ -0,0 +1,45 @@
|
||||||
|
#!/bin/bash
|
||||||
|
# When using RTSP gstreamer can provides a way lower latency then ffmpeg
|
||||||
|
# and exposes more options to tweak the connection. However, the pypi
|
||||||
|
# version of python-opencv is build without gstreamer. Thus, we need to
|
||||||
|
# build our own python wheel.
|
||||||
|
|
||||||
|
# adapted from https://github.com/opencv/opencv-python/issues/530#issuecomment-1006343643
|
||||||
|
|
||||||
|
# install gstreamer dependencies
|
||||||
|
sudo apt-get install --quiet -y --no-install-recommends \
|
||||||
|
gstreamer1.0-gl \
|
||||||
|
gstreamer1.0-opencv \
|
||||||
|
gstreamer1.0-plugins-bad \
|
||||||
|
gstreamer1.0-plugins-good \
|
||||||
|
gstreamer1.0-plugins-ugly \
|
||||||
|
gstreamer1.0-tools \
|
||||||
|
libgstreamer-plugins-base1.0-dev \
|
||||||
|
libgstreamer1.0-0 \
|
||||||
|
libgstreamer1.0-dev
|
||||||
|
|
||||||
|
# ffmpeg deps
|
||||||
|
sudo apt install ffmpeg libgtk2.0-dev libavformat-dev libavcodec-dev libavutil-dev libswscale-dev libtbb-dev libjpeg-dev libpng-dev libtiff-dev
|
||||||
|
|
||||||
|
OPENCV_VER="84" #fix at 4.10.0.84, or use rolling release: "4.x"
|
||||||
|
STARTDIR=$(pwd)
|
||||||
|
TMPDIR=$(mktemp -d)
|
||||||
|
|
||||||
|
# Build and install OpenCV from source.
|
||||||
|
echo $TMPDIR
|
||||||
|
|
||||||
|
# pyenv compatibility
|
||||||
|
cp .python-version $TMPDIR
|
||||||
|
|
||||||
|
cd "${TMPDIR}"
|
||||||
|
git clone --branch ${OPENCV_VER} --depth 1 --recurse-submodules --shallow-submodules https://github.com/opencv/opencv-python.git opencv-python-${OPENCV_VER}
|
||||||
|
cd opencv-python-${OPENCV_VER}
|
||||||
|
export ENABLE_CONTRIB=0
|
||||||
|
# export ENABLE_HEADLESS=1
|
||||||
|
# We want GStreamer support enabled.
|
||||||
|
export CMAKE_ARGS="-DWITH_GSTREAMER=ON -DWITH_FFMPEG=ON"
|
||||||
|
python -m pip wheel . --verbose -w $STARTDIR
|
||||||
|
|
||||||
|
# # Install OpenCV
|
||||||
|
# python3 -m pip install opencv_python*.whl
|
||||||
|
# cp opencv_python*.whl $STARTDIR
|
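After building and installing the wheel, it is worth checking that the resulting `cv2` really was compiled with GStreamer enabled; a quick sanity check (a sketch, not part of the repo):

```python
import cv2

# getBuildInformation() lists every video I/O backend compiled into cv2;
# after a successful build, the GStreamer entry should read "YES".
gst = [l for l in cv2.getBuildInformation().splitlines() if "GStreamer" in l]
print(gst[0].strip() if gst else "no GStreamer entry found")
assert gst and "YES" in gst[0], "cv2 was built without GStreamer support"
```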
custom_bytetrack.yaml (new file, 11 lines)
@@ -0,0 +1,11 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default YOLO tracker settings for ByteTrack tracker https://github.com/ifzhang/ByteTrack

tracker_type: bytetrack # tracker type, ['botsort', 'bytetrack']
track_high_thresh: 0.05 # threshold for the first association
track_low_thresh: 0.01 # threshold for the second association
new_track_thresh: 0.1 # threshold for init new track if the detection does not match any tracks
track_buffer: 35 # buffer to calculate the time when to remove tracks
match_thresh: 0.9 # threshold for matching tracks
fuse_score: True # whether to fuse confidence scores with the IoU distances before matching
# min_box_area: 10 # threshold for min box areas (for tracker evaluation, not used for now)
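These association thresholds are considerably lower than the Ultralytics defaults, so weaker detections still start and extend tracks. A sketch of how such a config is typically passed to the Ultralytics tracker (the weights file and video path here are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any YOLO detection weights

# tracker= points at the custom ByteTrack config above;
# persist=True keeps track IDs alive across frames of the same stream.
for result in model.track(source="../DATASETS/NAME/video.mp4",
                          tracker="custom_bytetrack.yaml",
                          persist=True, stream=True):
    if result.boxes.id is not None:  # IDs appear once a track is confirmed
        print(result.boxes.id.int().tolist())
```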
poetry.lock (generated, 227 lines changed)
@@ -219,6 +219,25 @@ webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.3)"]

+[[package]]
+name = "bytetracker"
+version = "0.3.2"
+description = "Packaged version of the ByteTrack repository"
+optional = false
+python-versions = ">=3.5"
+files = []
+develop = false
+
+[package.dependencies]
+lapx = ">=0.5.8"
+scipy = ">=1.9.3"
+
+[package.source]
+type = "git"
+url = "https://github.com/rubenvandeven/bytetrack-pip"
+reference = "HEAD"
+resolved_reference = "7053b946af8641581999b70230ac6260d37365ae"
+
[[package]]
name = "cachetools"
version = "5.3.2"
@@ -1366,6 +1385,72 @@ files = [
{file = "kiwisolver-1.4.5.tar.gz", hash = "sha256:e57e563a57fb22a142da34f38acc2fc1a5c864bc29ca1517a88abc963e60d6ec"},
]

[[package]]
name = "lapx"
version = "0.5.11"
description = "Linear Assignment Problem solver (LAPJV/LAPMOD)."
optional = false
python-versions = ">=3.7"
files = [
{file = "lapx-0.5.11-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ad2e100a81387e958cbb2b25b09f5f4b4c7af1ba39313b4bbda9965ee85b43a2"},
{file = "lapx-0.5.11-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:523baec5c6a554946c843877802fbefe2230179854134e6b0f281e0f47f5b342"},
{file = "lapx-0.5.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eba714681ea77c834a740eaa43640a3072341a7418e95fb6d5aa4380b6b2069"},
{file = "lapx-0.5.11-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:797cda3b06e835fa6ee3e245289c03ec29d110e6dd8e333db71483f5bbd32129"},
{file = "lapx-0.5.11-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:29b1447d32e00a89afa90627c7e806bd7eb8e21e4d559749f4a0fc4e47989f64"},
{file = "lapx-0.5.11-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:71caffa000782ab265f76e47aa018368b3738a5be5d62581a870dd6e68417703"},
{file = "lapx-0.5.11-cp310-cp310-win_amd64.whl", hash = "sha256:a3c1c20c7d80fa7b6eca0ea9e10966c93ccdaf4d5286e677b199bf021a889b18"},
{file = "lapx-0.5.11-cp310-cp310-win_arm64.whl", hash = "sha256:995aea6268f0a519e536f009f44144c4f4066e0724d19239fbcc1c1ab082d7c0"},
{file = "lapx-0.5.11-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1a665dbc34f04fe21cdb798be1c003fa0d07b0e27e9020487e7626714bad4b8a"},
{file = "lapx-0.5.11-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5262a868f8802e368ecb470444c830e95b960a2a3763764dd3370680a466684e"},
{file = "lapx-0.5.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824a8c50e191343096d48164f2a0e9a5d466e8d20dd8e3eff1a3c1082b4d2b2"},
{file = "lapx-0.5.11-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d648b706b9a22255a028c72f4849c97ba0d754e03d74009ff447b26dbbd9bb59"},
{file = "lapx-0.5.11-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f6a0b789023d80b0f5ba3f20d83c9601d02e03abe8ae209ada3a77f304d91fff"},
{file = "lapx-0.5.11-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:44b3c36d52db1eea6eb0c46795440adddbdfe39c22ddb0f2a86db40ab963a798"},
{file = "lapx-0.5.11-cp311-cp311-win_amd64.whl", hash = "sha256:2e21c0162b426034ff545cedb86713b642f1e7335fda43605b330ca28a107d13"},
{file = "lapx-0.5.11-cp311-cp311-win_arm64.whl", hash = "sha256:2ea0c5dbf62de0612337c4c0a3f1b5ac8cc4fabfb9f68fd1c76612e2d873a28c"},
{file = "lapx-0.5.11-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:18c0c2e7f7ca76527468d98b99c54cf339ea040512392de6d20d8582235b43bc"},
{file = "lapx-0.5.11-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:20ab21b4a45bf975b890ba0364bc354652e3ebb548fb69f23cca4c337ce0e72b"},
{file = "lapx-0.5.11-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afc0233eaef80c04f88449f1bfbe059bfb5556458bc46de54d080e3236db6588"},
{file = "lapx-0.5.11-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d48adb34670c1c548cc39c40831e44dd57d7fe640320d86d61e3d2bf179f1f79"},
{file = "lapx-0.5.11-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:28fb1d9f076aff6b98abbc1287aa453d6fd3be0b0a5039adb2a822e2246a2bdd"},
{file = "lapx-0.5.11-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:70abdb835fcfad72856f3fa27d8233b9b267a9efe534a7fa8ede456b876a696d"},
{file = "lapx-0.5.11-cp312-cp312-win_amd64.whl", hash = "sha256:905fb018952b7b6ea9ef5ac8f5600e525c2545a679d11951bfc0d7e861efbe31"},
{file = "lapx-0.5.11-cp312-cp312-win_arm64.whl", hash = "sha256:02343669611038ec2826c4110d953235397d25e5eff01a5d2cbd9986c3492691"},
{file = "lapx-0.5.11-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:d517ce6b0d17af31c71d9230b03f2c09cb7a701d20b3ffbe02c4366ed91c3b85"},
{file = "lapx-0.5.11-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:6078aa84f768585c6121fc1d8767d6136d9af34ccd48525174ee488c86a59964"},
{file = "lapx-0.5.11-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:206382e2e942e1d968cb194149d0693a293052c16d0016505788b795818bab21"},
{file = "lapx-0.5.11-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:986f6eaa5a21d5b90869d54a0b7e11e9e532cd8938b3980cbd43d43154d96ac1"},
{file = "lapx-0.5.11-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:394555e2245cd6aa2ad79cea58c8a5cc73f6c6f79b85f497020e6a978c346329"},
{file = "lapx-0.5.11-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:dcd1a608c19e14d6d7fd47c885c5f3c68ce4e08f3e8ec2eecbe32fc026b0d1de"},
{file = "lapx-0.5.11-cp313-cp313-win_amd64.whl", hash = "sha256:37b6e5f4f04c477a49f7d0780fbe76513c2d5e183bcf5005396c96bfd3de15d6"},
{file = "lapx-0.5.11-cp313-cp313-win_arm64.whl", hash = "sha256:c6c84a46f94829c6d992cce7fe747bacb971bca8cb9e77ea6ff80dfbc4fea6e2"},
{file = "lapx-0.5.11-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cb7175a5527a46cd6b9212da766539f256790e93747ba5503bd0507e3fd19e3c"},
{file = "lapx-0.5.11-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33769788e57b5a7fa0951a884c7e5f8a381792427357976041c1d4c3ae75ddcd"},
{file = "lapx-0.5.11-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc015da5a657dbb8822fb47f4702f5bf0b67bafff6f5aed98dad3a6204572e40"},
{file = "lapx-0.5.11-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:9b322e6e0685340f5a10ada401065165fa73e84c2db93ba17945e8e119bd17f5"},
{file = "lapx-0.5.11-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:a64bc3da09c5925efaff59d20dcfbc3febac64fd1fcc263604f7d4ccdd0e2a75"},
{file = "lapx-0.5.11-cp37-cp37m-win_amd64.whl", hash = "sha256:e6d98d31bdf7131a0ec9967068885c86357cc77cf7883d1a1335a48b24e537bb"},
{file = "lapx-0.5.11-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c381563bc713a6fc8a281698893a8115abb34363929105142d66e592252af196"},
{file = "lapx-0.5.11-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7c4e10ec35b539984684ef9bfdbd0f763a441225e8a9cff5d7081b24795dd419"},
{file = "lapx-0.5.11-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6583c3a5a47dbfb360d312e4bb3bde509d519886da74161a46ad77653fd18dcb"},
{file = "lapx-0.5.11-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a3963a87d44bc174eee423239dff952a922b3a8e30fbc514c00fab361f464c74"},
{file = "lapx-0.5.11-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:5e8bf4b68f45e378ce7fc68dd407ea210926881e8f25a08dc18beb4a4a7cced0"},
{file = "lapx-0.5.11-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a8ee4578a898148110dd8ee877370ee99b6309d7b600bde60e204efd93858c0d"},
{file = "lapx-0.5.11-cp38-cp38-win_amd64.whl", hash = "sha256:46b8373f25c1ea85b236fc20183b077efd33112de57f57feccc61f3541f0d8f0"},
{file = "lapx-0.5.11-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f3c88ff301e7cf9a22a7e276e13a9154f0813a2c3f9c3619f3785def851b5f4c"},
{file = "lapx-0.5.11-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d07df31b37c92643b9fb6aceeafb53a13375d95dd9cbd2454d83f03941d7e137"},
{file = "lapx-0.5.11-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6cbf42869528ac80d87e1d9019d594a619fd77c26fba0eb8e5242f6657197a57"},
{file = "lapx-0.5.11-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5f38aa911b4773b44d2da0046aad6ded8f08853cbc1ec60a2363b46371e16f8"},
{file = "lapx-0.5.11-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:d04e5bd530fb04a22004c3ba1a31033c4e3deffe18fb43778fc234a627a55397"},
{file = "lapx-0.5.11-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:845ca48d0c313113f7ae53420d1e5af87c01355fe14e5624c2a934e1004ef43b"},
{file = "lapx-0.5.11-cp39-cp39-win_amd64.whl", hash = "sha256:0eb81393d0edd089b61de1c3c25895b8ed39bc8d91c766a6e06b761194e79894"},
{file = "lapx-0.5.11-cp39-cp39-win_arm64.whl", hash = "sha256:e12b3e0d9943e92a0cd3af7a6fa5fb8fab3aa018897beda6d02647d9ce708d5a"},
{file = "lapx-0.5.11.tar.gz", hash = "sha256:d925d4a11f436ef0f9e9684378a44e4375aa9c868b22e2f51e6ff15a3362685f"},
]

[package.dependencies]
numpy = ">=1.21.6"

[[package]]
name = "markdown"
version = "3.5.1"

@@ -1801,27 +1886,25 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

[[package]]
name = "opencv-python"
-version = "4.8.1.78"
+version = "4.10.0.84"
description = "Wrapper package for OpenCV python bindings."
optional = false
python-versions = ">=3.6"
files = [
-{file = "opencv-python-4.8.1.78.tar.gz", hash = "sha256:cc7adbbcd1112877a39274106cb2752e04984bc01a031162952e97450d6117f6"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-macosx_10_16_x86_64.whl", hash = "sha256:91d5f6f5209dc2635d496f6b8ca6573ecdad051a09e6b5de4c399b8e673c60da"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:bc31f47e05447da8b3089faa0a07ffe80e114c91ce0b171e6424f9badbd1c5cd"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9814beca408d3a0eca1bae7e3e5be68b07c17ecceb392b94170881216e09b319"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c4c406bdb41eb21ea51b4e90dfbc989c002786c3f601c236a99c59a54670a394"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-win32.whl", hash = "sha256:a7aac3900fbacf55b551e7b53626c3dad4c71ce85643645c43e91fcb19045e47"},
-{file = "opencv_python-4.8.1.78-cp37-abi3-win_amd64.whl", hash = "sha256:b983197f97cfa6fcb74e1da1802c7497a6f94ed561aba6980f1f33123f904956"},
+{file = "opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl", hash = "sha256:c1f8e6ba7fd82517ba97d352f51d161c5be51495dc7b6c6f929a8546d650f4ea"},
]

[package.dependencies]
numpy = [
+{version = ">=1.23.5", markers = "python_version >= \"3.11\""},
{version = ">=1.21.4", markers = "python_version >= \"3.10\" and platform_system == \"Darwin\" and python_version < \"3.11\""},
{version = ">=1.21.2", markers = "platform_system != \"Darwin\" and python_version >= \"3.10\" and python_version < \"3.11\""},
-{version = ">=1.23.5", markers = "python_version >= \"3.11\""},
]

+[package.source]
+type = "file"
+url = "opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl"
+
[[package]]
name = "orjson"
version = "3.9.10"
@@ -1939,8 +2022,8 @@ files = [

[package.dependencies]
numpy = [
-{version = ">=1.22.4,<2", markers = "python_version < \"3.11\""},
{version = ">=1.23.2,<2", markers = "python_version == \"3.11\""},
+{version = ">=1.22.4,<2", markers = "python_version < \"3.11\""},
]
python-dateutil = ">=2.8.2"
pytz = ">=2020.1"
@@ -2290,15 +2373,29 @@ files = [

[[package]]
name = "pyglet"
-version = "2.0.15"
+version = "2.0.18"
description = "pyglet is a cross-platform games and multimedia package."
optional = false
python-versions = ">=3.8"
files = [
-{file = "pyglet-2.0.15-py3-none-any.whl", hash = "sha256:9e4cc16efc308106fd3a9ff8f04e7a6f4f6a807c6ac8a331375efbbac8be85af"},
-{file = "pyglet-2.0.15.tar.gz", hash = "sha256:42085567cece0c7f1c14e36eef799938cbf528cfbb0150c484b984f3ff1aa771"},
+{file = "pyglet-2.0.18-py3-none-any.whl", hash = "sha256:e592952ae0297e456c587b6486ed8c3e5f9d0c3519d517bb92dde5fdf4c26b41"},
+{file = "pyglet-2.0.18.tar.gz", hash = "sha256:7cf9238d70082a2da282759679f8a011cc979753a32224a8ead8ed80e48f99dc"},
]

+[[package]]
+name = "pyglet-cornerpin"
+version = "0.3.0"
+description = "Add a corner pin transform to a pyglet window"
+optional = false
+python-versions = "<4.0,>=3.10"
+files = [
+{file = "pyglet_cornerpin-0.3.0-py3-none-any.whl", hash = "sha256:64058a8c0bc1a8fc0369cdf41ec09f0d40e18d4c2b02fb74a1748fe82b2479c7"},
+{file = "pyglet_cornerpin-0.3.0.tar.gz", hash = "sha256:3df5578f2255209d6df84074ae2c3d5deb25d345466bcd52c8ab97d4f95ec903"},
+]
+
+[package.dependencies]
+pyglet = ">=2.0.18,<3.0.0"
+
[[package]]
name = "pygments"
version = "2.17.2"
@@ -2915,6 +3012,106 @@ nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
objc = ["pyobjc-framework-Cocoa"]
win32 = ["pywin32"]

[[package]]
name = "setproctitle"
version = "1.3.3"
description = "A Python module to customize the process title"
optional = false
python-versions = ">=3.7"
files = [
{file = "setproctitle-1.3.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:897a73208da48db41e687225f355ce993167079eda1260ba5e13c4e53be7f754"},
{file = "setproctitle-1.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8c331e91a14ba4076f88c29c777ad6b58639530ed5b24b5564b5ed2fd7a95452"},
{file = "setproctitle-1.3.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbbd6c7de0771c84b4aa30e70b409565eb1fc13627a723ca6be774ed6b9d9fa3"},
{file = "setproctitle-1.3.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c05ac48ef16ee013b8a326c63e4610e2430dbec037ec5c5b58fcced550382b74"},
{file = "setproctitle-1.3.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1342f4fdb37f89d3e3c1c0a59d6ddbedbde838fff5c51178a7982993d238fe4f"},
{file = "setproctitle-1.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc74e84fdfa96821580fb5e9c0b0777c1c4779434ce16d3d62a9c4d8c710df39"},
{file = "setproctitle-1.3.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9617b676b95adb412bb69645d5b077d664b6882bb0d37bfdafbbb1b999568d85"},
{file = "setproctitle-1.3.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6a249415f5bb88b5e9e8c4db47f609e0bf0e20a75e8d744ea787f3092ba1f2d0"},
{file = "setproctitle-1.3.3-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:38da436a0aaace9add67b999eb6abe4b84397edf4a78ec28f264e5b4c9d53cd5"},
{file = "setproctitle-1.3.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:da0d57edd4c95bf221b2ebbaa061e65b1788f1544977288bdf95831b6e44e44d"},
{file = "setproctitle-1.3.3-cp310-cp310-win32.whl", hash = "sha256:a1fcac43918b836ace25f69b1dca8c9395253ad8152b625064415b1d2f9be4fb"},
{file = "setproctitle-1.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:200620c3b15388d7f3f97e0ae26599c0c378fdf07ae9ac5a13616e933cbd2086"},
{file = "setproctitle-1.3.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:334f7ed39895d692f753a443102dd5fed180c571eb6a48b2a5b7f5b3564908c8"},
{file = "setproctitle-1.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:950f6476d56ff7817a8fed4ab207727fc5260af83481b2a4b125f32844df513a"},
{file = "setproctitle-1.3.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:195c961f54a09eb2acabbfc90c413955cf16c6e2f8caa2adbf2237d1019c7dd8"},
{file = "setproctitle-1.3.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f05e66746bf9fe6a3397ec246fe481096664a9c97eb3fea6004735a4daf867fd"},
{file = "setproctitle-1.3.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b5901a31012a40ec913265b64e48c2a4059278d9f4e6be628441482dd13fb8b5"},
{file = "setproctitle-1.3.3-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64286f8a995f2cd934082b398fc63fca7d5ffe31f0e27e75b3ca6b4efda4e353"},
{file = "setproctitle-1.3.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:184239903bbc6b813b1a8fc86394dc6ca7d20e2ebe6f69f716bec301e4b0199d"},
{file = "setproctitle-1.3.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:664698ae0013f986118064b6676d7dcd28fefd0d7d5a5ae9497cbc10cba48fa5"},
{file = "setproctitle-1.3.3-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:e5119a211c2e98ff18b9908ba62a3bd0e3fabb02a29277a7232a6fb4b2560aa0"},
{file = "setproctitle-1.3.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:417de6b2e214e837827067048f61841f5d7fc27926f2e43954567094051aff18"},
{file = "setproctitle-1.3.3-cp311-cp311-win32.whl", hash = "sha256:6a143b31d758296dc2f440175f6c8e0b5301ced3b0f477b84ca43cdcf7f2f476"},
{file = "setproctitle-1.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:a680d62c399fa4b44899094027ec9a1bdaf6f31c650e44183b50d4c4d0ccc085"},
{file = "setproctitle-1.3.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:d4460795a8a7a391e3567b902ec5bdf6c60a47d791c3b1d27080fc203d11c9dc"},
{file = "setproctitle-1.3.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:bdfd7254745bb737ca1384dee57e6523651892f0ea2a7344490e9caefcc35e64"},
{file = "setproctitle-1.3.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:477d3da48e216d7fc04bddab67b0dcde633e19f484a146fd2a34bb0e9dbb4a1e"},
{file = "setproctitle-1.3.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ab2900d111e93aff5df9fddc64cf51ca4ef2c9f98702ce26524f1acc5a786ae7"},
{file = "setproctitle-1.3.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:088b9efc62d5aa5d6edf6cba1cf0c81f4488b5ce1c0342a8b67ae39d64001120"},
{file = "setproctitle-1.3.3-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a6d50252377db62d6a0bb82cc898089916457f2db2041e1d03ce7fadd4a07381"},
{file = "setproctitle-1.3.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:87e668f9561fd3a457ba189edfc9e37709261287b52293c115ae3487a24b92f6"},
{file = "setproctitle-1.3.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:287490eb90e7a0ddd22e74c89a92cc922389daa95babc833c08cf80c84c4df0a"},
{file = "setproctitle-1.3.3-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:4fe1c49486109f72d502f8be569972e27f385fe632bd8895f4730df3c87d5ac8"},
{file = "setproctitle-1.3.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4a6ba2494a6449b1f477bd3e67935c2b7b0274f2f6dcd0f7c6aceae10c6c6ba3"},
{file = "setproctitle-1.3.3-cp312-cp312-win32.whl", hash = "sha256:2df2b67e4b1d7498632e18c56722851ba4db5d6a0c91aaf0fd395111e51cdcf4"},
{file = "setproctitle-1.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:f38d48abc121263f3b62943f84cbaede05749047e428409c2c199664feb6abc7"},
{file = "setproctitle-1.3.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:816330675e3504ae4d9a2185c46b573105d2310c20b19ea2b4596a9460a4f674"},
{file = "setproctitle-1.3.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68f960bc22d8d8e4ac886d1e2e21ccbd283adcf3c43136161c1ba0fa509088e0"},
{file = "setproctitle-1.3.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:00e6e7adff74796ef12753ff399491b8827f84f6c77659d71bd0b35870a17d8f"},
{file = "setproctitle-1.3.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:53bc0d2358507596c22b02db079618451f3bd720755d88e3cccd840bafb4c41c"},
{file = "setproctitle-1.3.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad6d20f9541f5f6ac63df553b6d7a04f313947f550eab6a61aa758b45f0d5657"},
{file = "setproctitle-1.3.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c1c84beab776b0becaa368254801e57692ed749d935469ac10e2b9b825dbdd8e"},
{file = "setproctitle-1.3.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:507e8dc2891021350eaea40a44ddd887c9f006e6b599af8d64a505c0f718f170"},
{file = "setproctitle-1.3.3-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:b1067647ac7aba0b44b591936118a22847bda3c507b0a42d74272256a7a798e9"},
{file = "setproctitle-1.3.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2e71f6365744bf53714e8bd2522b3c9c1d83f52ffa6324bd7cbb4da707312cd8"},
{file = "setproctitle-1.3.3-cp37-cp37m-win32.whl", hash = "sha256:7f1d36a1e15a46e8ede4e953abb104fdbc0845a266ec0e99cc0492a4364f8c44"},
{file = "setproctitle-1.3.3-cp37-cp37m-win_amd64.whl", hash = "sha256:c9a402881ec269d0cc9c354b149fc29f9ec1a1939a777f1c858cdb09c7a261df"},
{file = "setproctitle-1.3.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:ff814dea1e5c492a4980e3e7d094286077054e7ea116cbeda138819db194b2cd"},
{file = "setproctitle-1.3.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:accb66d7b3ccb00d5cd11d8c6e07055a4568a24c95cf86109894dcc0c134cc89"},
{file = "setproctitle-1.3.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:554eae5a5b28f02705b83a230e9d163d645c9a08914c0ad921df363a07cf39b1"},
{file = "setproctitle-1.3.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a911b26264dbe9e8066c7531c0591cfab27b464459c74385b276fe487ca91c12"},
{file = "setproctitle-1.3.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2982efe7640c4835f7355fdb4da313ad37fb3b40f5c69069912f8048f77b28c8"},
{file = "setproctitle-1.3.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:df3f4274b80709d8bcab2f9a862973d453b308b97a0b423a501bcd93582852e3"},
{file = "setproctitle-1.3.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:af2c67ae4c795d1674a8d3ac1988676fa306bcfa1e23fddb5e0bd5f5635309ca"},
{file = "setproctitle-1.3.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:af4061f67fd7ec01624c5e3c21f6b7af2ef0e6bab7fbb43f209e6506c9ce0092"},
{file = "setproctitle-1.3.3-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:37a62cbe16d4c6294e84670b59cf7adcc73faafe6af07f8cb9adaf1f0e775b19"},
{file = "setproctitle-1.3.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a83ca086fbb017f0d87f240a8f9bbcf0809f3b754ee01cec928fff926542c450"},
{file = "setproctitle-1.3.3-cp38-cp38-win32.whl", hash = "sha256:059f4ce86f8cc92e5860abfc43a1dceb21137b26a02373618d88f6b4b86ba9b2"},
{file = "setproctitle-1.3.3-cp38-cp38-win_amd64.whl", hash = "sha256:ab92e51cd4a218208efee4c6d37db7368fdf182f6e7ff148fb295ecddf264287"},
{file = "setproctitle-1.3.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c7951820b77abe03d88b114b998867c0f99da03859e5ab2623d94690848d3e45"},
{file = "setproctitle-1.3.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5bc94cf128676e8fac6503b37763adb378e2b6be1249d207630f83fc325d9b11"},
{file = "setproctitle-1.3.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f5d9027eeda64d353cf21a3ceb74bb1760bd534526c9214e19f052424b37e42"},
{file = "setproctitle-1.3.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2e4a8104db15d3462e29d9946f26bed817a5b1d7a47eabca2d9dc2b995991503"},
{file = "setproctitle-1.3.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c32c41ace41f344d317399efff4cffb133e709cec2ef09c99e7a13e9f3b9483c"},
{file = "setproctitle-1.3.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cbf16381c7bf7f963b58fb4daaa65684e10966ee14d26f5cc90f07049bfd8c1e"},
{file = "setproctitle-1.3.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e18b7bd0898398cc97ce2dfc83bb192a13a087ef6b2d5a8a36460311cb09e775"},
{file = "setproctitle-1.3.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:69d565d20efe527bd8a9b92e7f299ae5e73b6c0470f3719bd66f3cd821e0d5bd"},
{file = "setproctitle-1.3.3-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:ddedd300cd690a3b06e7eac90ed4452348b1348635777ce23d460d913b5b63c3"},
{file = "setproctitle-1.3.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:415bfcfd01d1fbf5cbd75004599ef167a533395955305f42220a585f64036081"},
{file = "setproctitle-1.3.3-cp39-cp39-win32.whl", hash = "sha256:21112fcd2195d48f25760f0eafa7a76510871bbb3b750219310cf88b04456ae3"},
{file = "setproctitle-1.3.3-cp39-cp39-win_amd64.whl", hash = "sha256:5a740f05d0968a5a17da3d676ce6afefebeeeb5ce137510901bf6306ba8ee002"},
{file = "setproctitle-1.3.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:6b9e62ddb3db4b5205c0321dd69a406d8af9ee1693529d144e86bd43bcb4b6c0"},
{file = "setproctitle-1.3.3-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e3b99b338598de0bd6b2643bf8c343cf5ff70db3627af3ca427a5e1a1a90dd9"},
{file = "setproctitle-1.3.3-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38ae9a02766dad331deb06855fb7a6ca15daea333b3967e214de12cfae8f0ef5"},
{file = "setproctitle-1.3.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:200ede6fd11233085ba9b764eb055a2a191fb4ffb950c68675ac53c874c22e20"},
{file = "setproctitle-1.3.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0d3a953c50776751e80fe755a380a64cb14d61e8762bd43041ab3f8cc436092f"},
{file = "setproctitle-1.3.3-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e5e08e232b78ba3ac6bc0d23ce9e2bee8fad2be391b7e2da834fc9a45129eb87"},
{file = "setproctitle-1.3.3-pp37-pypy37_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f1da82c3e11284da4fcbf54957dafbf0655d2389cd3d54e4eaba636faf6d117a"},
{file = "setproctitle-1.3.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:aeaa71fb9568ebe9b911ddb490c644fbd2006e8c940f21cb9a1e9425bd709574"},
{file = "setproctitle-1.3.3-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:59335d000c6250c35989394661eb6287187854e94ac79ea22315469ee4f4c244"},
{file = "setproctitle-1.3.3-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c3ba57029c9c50ecaf0c92bb127224cc2ea9fda057b5d99d3f348c9ec2855ad3"},
{file = "setproctitle-1.3.3-pp38-pypy38_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d876d355c53d975c2ef9c4f2487c8f83dad6aeaaee1b6571453cb0ee992f55f6"},
{file = "setproctitle-1.3.3-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:224602f0939e6fb9d5dd881be1229d485f3257b540f8a900d4271a2c2aa4e5f4"},
{file = "setproctitle-1.3.3-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d7f27e0268af2d7503386e0e6be87fb9b6657afd96f5726b733837121146750d"},
{file = "setproctitle-1.3.3-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f5e7266498cd31a4572378c61920af9f6b4676a73c299fce8ba93afd694f8ae7"},
{file = "setproctitle-1.3.3-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33c5609ad51cd99d388e55651b19148ea99727516132fb44680e1f28dd0d1de9"},
{file = "setproctitle-1.3.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:eae8988e78192fd1a3245a6f4f382390b61bce6cfcc93f3809726e4c885fa68d"},
{file = "setproctitle-1.3.3.tar.gz", hash = "sha256:c913e151e7ea01567837ff037a23ca8740192880198b7fbb90b16d181607caae"},
]

[package.extras]
test = ["pytest"]

[[package]]
name = "setuptools"
version = "68.2.2"

@@ -3310,7 +3507,7 @@ test = ["argcomplete (>=3.0.3)", "mypy (>=1.7.0)", "pre-commit", "pytest (>=7.0,
[[package]]
name = "trajectron-plus-plus"
version = "0.1.1"
-description = "Predict trajectories for anomaly detection"
+description = "This repository contains the code for Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data by Tim Salzmann*, Boris Ivanovic*, Punarjay Chakravarty, and Marco Pavone (* denotes equal contribution)."
optional = false
python-versions = "^3.9,<3.12"
files = []

@@ -3528,4 +3725,4 @@ watchdog = ["watchdog (>=2.3)"]
[metadata]
lock-version = "2.0"
python-versions = "^3.10,<3.12,"
-content-hash = "5154a99d490755a68e51595424649b5269fcd17ef14094c6285f5de7f972f110"
+content-hash = "bf4feafd4afa6ceb39a1c599e3e7cdc84afbe11ab1672b49e5de99ad44568b08"
pyproject.toml

@@ -7,6 +7,9 @@ readme = "README.md"

[tool.poetry.scripts]
trapserv = "trap.plumber:start"
+tracker = "trap.tools:tracker_preprocess"
+compare = "trap.tools:tracker_compare"
+process_data = "trap.process_data:main"

[tool.poetry.dependencies]

@@ -32,6 +35,10 @@ gdown = "^4.7.1"
pandas-helper-calc = {git = "https://github.com/scls19fr/pandas-helper-calc"}
tsmoothie = "^1.0.5"
pyglet = "^2.0.15"
+pyglet-cornerpin = "^0.3.0"
+opencv-python = {file="./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl"}
+setproctitle = "^1.3.3"
+bytetracker = { git = "https://github.com/rubenvandeven/bytetrack-pip" }

[build-system]
requires = ["poetry-core"]
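Two of these additions are worth flagging: `opencv-python` now resolves to the local wheel produced by `build_opencv_with_gstreamer.sh`, so that file must sit in the project root before `poetry install`; and the `[tool.poetry.scripts]` entries are ordinary console-script mappings. A quick sketch to confirm the entry points resolve inside the poetry environment (assumes no import-time side effects beyond loading the modules):

```python
# If these imports succeed, `poetry run tracker`, `poetry run compare`
# and `poetry run process_data` should resolve to the same callables.
from trap.plumber import start
from trap.process_data import main
from trap.tools import tracker_compare, tracker_preprocess

print([f.__name__ for f in (start, tracker_preprocess, tracker_compare, main)])
```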
test_custom_rnn.ipynb (new file, 3624 lines): file diff suppressed because one or more lines are too long
test_tracking_data.ipynb (new file, 294 lines): file diff suppressed because one or more lines are too long
trap/animation_renderer.py (new file, 485 lines)
@@ -0,0 +1,485 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations

import time
import ffmpeg
from argparse import Namespace
import datetime
import logging
from multiprocessing import Event
from multiprocessing.synchronize import Event as BaseEvent
import cv2
import numpy as np

import pyglet
import pyglet.event
import zmq
import tempfile
from pathlib import Path
import shutil
import math

from pyglet import shapes

from PIL import Image
import json

from trap.frame_emitter import DetectionState, Frame, Track
from trap.preview_renderer import DrawnTrack, PROJECTION_IMG, PROJECTION_MAP


logger = logging.getLogger("trap.renderer")

COLOR_PRIMARY = (0,0,0,255)

class AnimationRenderer:
    def __init__(self, config: Namespace, is_running: BaseEvent):
        self.config = config
        self.is_running = is_running

        # Three SUB sockets: predictions, tracker output, and video frames.
        context = zmq.Context()
        self.prediction_sock = context.socket(zmq.SUB)
        self.prediction_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.prediction_sock.connect(config.zmq_prediction_addr)

        self.tracker_sock = context.socket(zmq.SUB)
        self.tracker_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.tracker_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.tracker_sock.connect(config.zmq_trajectory_addr)

        self.frame_sock = context.socket(zmq.SUB)
        self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.frame_sock.connect(config.zmq_frame_addr)

        self.H = self.config.H

        self.inv_H = np.linalg.pinv(self.H)

        # TODO: get FPS from frame_emitter
        # self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
        self.fps = 60
        self.frame_size = (self.config.camera.w, self.config.camera.h)
        self.hide_stats = False
        self.out_writer = None # self.start_writer() if self.config.render_file else None
        self.streaming_process = None # self.start_streaming() if self.config.render_url else None

        if self.config.render_window:
            pass
            # cv2.namedWindow("frame", cv2.WND_PROP_FULLSCREEN)
            # cv2.setWindowProperty("frame",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
        else:
            pyglet.options["headless"] = True

        config = pyglet.gl.Config(sample_buffers=1, samples=4)
        # , fullscreen=self.config.render_window

        display = pyglet.canvas.get_display()
        screen = display.get_screens()[0]

        # self.window = pyglet.window.Window(width=self.frame_size[0], height=self.frame_size[1], config=config, fullscreen=False, screen=screens[1])
        self.window = pyglet.window.Window(width=screen.width, height=screen.height, config=config, fullscreen=True, screen=screen)
        self.window.set_handler('on_draw', self.on_draw)
        self.window.set_handler('on_refresh', self.on_refresh)
        self.window.set_handler('on_close', self.on_close)

        # don't know why, but importing this before window leads to "x connection to :1 broken (explicit kill or server shutdown)"
        from pyglet_cornerpin import PygletCornerPin

        # self.pins = PygletCornerPin(self.window, corners=[[-144,-2], [2880,0], [-168,958], [3011,1553]])
        # x1 540 y1 760-360
        # x2 1380 y2 670-360

        self.pins = PygletCornerPin(
            self.window,
            source_points=[[540, 670-360], [1380,670-360], [540,760-360], [1380,760-360]],
            corners=[[471, 304], [1797, 376], [467, 387], [1792, 484]])
        self.window.push_handlers(self.pins)

        pyglet.gl.glClearColor(255,255,255,255)
        self.fps_display = pyglet.window.FPSDisplay(window=self.window, color=COLOR_PRIMARY)
        self.fps_display.label.x = self.window.width - 50
        self.fps_display.label.y = self.window.height - 17
        self.fps_display.label.bold = False
        self.fps_display.label.font_size = 10

        self.drawn_tracks: dict[str, DrawnTrack] = {}

        self.first_time: float|None = None
        self.frame: Frame|None = None
        self.tracker_frame: Frame|None = None
        self.prediction_frame: Frame|None = None

        self.batch_bg = pyglet.graphics.Batch()
        self.batch_overlay = pyglet.graphics.Batch()
        self.batch_anim = pyglet.graphics.Batch()

        if self.config.render_debug_shapes:
            self.debug_lines = [
                pyglet.shapes.Line(1370, self.config.camera.h-360, 1380, 670-360, 2, COLOR_PRIMARY, batch=self.batch_overlay), #v
                pyglet.shapes.Line(0, 660-360, 1380, 670-360, 2, COLOR_PRIMARY, batch=self.batch_overlay), #h
                pyglet.shapes.Line(1140, 760-360, 1140, 675-360, 2, COLOR_PRIMARY, batch=self.batch_overlay), #h
                pyglet.shapes.Line(540, 760-360, 540, 675-360, 2, COLOR_PRIMARY, batch=self.batch_overlay), #v
                pyglet.shapes.Line(0, 770-360, 1380, 770-360, 2, COLOR_PRIMARY, batch=self.batch_overlay), #h
            ]

        self.debug_points = []
        # print(self.config.debug_points_file)
        if self.config.debug_points_file:
            with self.config.debug_points_file.open('r') as fp:
                img_points = np.array(json.load(fp))
                # to place points accurately I used a 2160p image, but during calibration and
                # prediction I use(d) a 1440p image, so convert points to a different space:
                img_points = np.array(img_points)
                # first undistort the points so that lines are actually straight
                undistorted_img_points = cv2.undistortPoints(np.array([img_points]).astype('float32'), self.config.camera.mtx, self.config.camera.dist, None, self.config.camera.newcameramtx)
                dst_img_points = cv2.perspectiveTransform(np.array(undistorted_img_points), self.config.camera.H)
                if dst_img_points.shape[1:] == (1,2):
                    dst_img_points = np.reshape(dst_img_points, (dst_img_points.shape[0], 2))

                self.debug_points = [
                    pyglet.shapes.Circle(p[0], self.window.height - p[1], 3, color=(255,0,0,255), batch=self.batch_overlay) for p in dst_img_points
                ]

        self.init_shapes()

        self.init_labels()


    def init_shapes(self):
        '''
        Due to an error when running headless, we need to configure options before extending the shapes class
        '''
        class GradientLine(shapes.Line):
            def __init__(self, x, y, x2, y2, width=1, color1=[255,255,255], color2=[255,255,255], batch=None, group=None):
                # print('colors!', colors)
                # assert len(colors) == 6

                r, g, b, *a = color1
                self._rgba1 = (r, g, b, a[0] if a else 255)
                r, g, b, *a = color2
                self._rgba2 = (r, g, b, a[0] if a else 255)

                # print('rgba', self._rgba)

                super().__init__(x, y, x2, y2, width, color1, batch=None, group=None)
                # <pyglet.graphics.vertexdomain.VertexList
                # pyglet.graphics.vertexdomain
                # print(self._vertex_list)

            def _create_vertex_list(self):
                '''
                copy of super()._create_vertex_list but with additional colors'''
                self._vertex_list = self._group.program.vertex_list(
                    6, self._draw_mode, self._batch, self._group,
                    position=('f', self._get_vertices()),
                    colors=('Bn', self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1),
                    translation=('f', (self._x, self._y) * self._num_verts))

            def _update_colors(self):
                self._vertex_list.colors[:] = self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1

            def color1(self, color):
                r, g, b, *a = color
                self._rgba1 = (r, g, b, a[0] if a else 255)
                self._update_colors()

            def color2(self, color):
                r, g, b, *a = color
                self._rgba2 = (r, g, b, a[0] if a else 255)
                self._update_colors()

        self.gradientLine = GradientLine

    def init_labels(self):
        base_color = COLOR_PRIMARY
        color_predictor = (255,255,0, 255)
        color_info = (255,0, 255, 255)
        color_tracker = (0,255, 255, 255)

        options = []
        for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
            options.append(f"{option}: {self.config.__dict__[option]}")

        self.labels = {
            'waiting': pyglet.text.Label("Waiting for prediction"),
            'frame_idx': pyglet.text.Label("", x=20, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
            'tracker_idx': pyglet.text.Label("", x=90, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'pred_idx': pyglet.text.Label("", x=110, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
            'frame_time': pyglet.text.Label("t", x=140, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
            'frame_latency': pyglet.text.Label("", x=235, y=self.window.height - 17, color=color_info, batch=self.batch_overlay),
            'tracker_time': pyglet.text.Label("", x=300, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'pred_time': pyglet.text.Label("", x=360, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
            'track_len': pyglet.text.Label("", x=800, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'options1': pyglet.text.Label(options.pop(-1), x=20, y=30, color=base_color, batch=self.batch_overlay),
            'options2': pyglet.text.Label(" | ".join(options), x=20, y=10, color=base_color, batch=self.batch_overlay),
        }

    def refresh_labels(self, dt: float):
        """Every frame"""

        if self.frame:
            self.labels['frame_idx'].text = f"{self.frame.index:06d}"
            self.labels['frame_time'].text = f"{self.frame.time - self.first_time: >10.2f}s"
            self.labels['frame_latency'].text = f"{self.frame.time - time.time():.2f}s"

        if self.tracker_frame:
            self.labels['tracker_idx'].text = f"{self.tracker_frame.index - self.frame.index}"
            self.labels['tracker_time'].text = f"{self.tracker_frame.time - time.time():.3f}s"
            self.labels['track_len'].text = f"{len(self.tracker_frame.tracks)} tracks"

        if self.prediction_frame:
            self.labels['pred_idx'].text = f"{self.prediction_frame.index - self.frame.index}"
            self.labels['pred_time'].text = f"{self.prediction_frame.time - time.time():.3f}s"
            # self.labels['track_len'].text = f"{len(self.prediction_frame.tracks)} tracks"


        # cv2.putText(img, f"{frame.index:06d}", (20,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        # cv2.putText(img, f"{frame.time - first_time:.3f}s", (120,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)

        # if prediction_frame:
        #     # render Δt and Δ frames
        #     cv2.putText(img, f"{prediction_frame.index - frame.index}", (90,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"{prediction_frame.time - time.time():.2f}s", (200,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"{len(prediction_frame.tracks)} tracks", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        #     cv2.putText(img, f"h: {np.average([len(t.history or []) for t in prediction_frame.tracks.values()]):.2f}", (580,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"ph: {np.average([len(t.predictor_history or []) for t in prediction_frame.tracks.values()]):.2f}", (660,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"p: {np.average([len(t.predictions or []) for t in prediction_frame.tracks.values()]):.2f}", (740,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)

        # options = []
        # for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
        #     options.append(f"{option}: {config.__dict__[option]}")

        # cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        # cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)


    def check_frames(self, dt):
        new_tracks = False
        try:
            self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
            if not self.first_time:
                self.first_time = self.frame.time
            img = self.frame.img
            # newcameramtx, roi = cv2.getOptimalNewCameraMatrix(self.config.camera.mtx, self.config.camera.dist, (self.frame.img.shape[1], self.frame.img.shape[0]), 1, (self.frame.img.shape[1], self.frame.img.shape[0]))
            img = cv2.undistort(img, self.config.camera.mtx, self.config.camera.dist, None, self.config.camera.newcameramtx)
            img = cv2.warpPerspective(img, self.config.camera.H, (self.config.camera.w, self.config.camera.h))
            # img = cv2.GaussianBlur(img, (15, 15), 0)
            img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
            img = pyglet.image.ImageData(self.frame_size[0], self.frame_size[1], 'RGB', img.tobytes())
            # don't draw in batch, so that it is the background
            self.video_sprite = pyglet.sprite.Sprite(img=img, batch=self.batch_bg)
            # transform to flipped coordinate system for pyglet
            self.video_sprite.y = self.window.height - self.video_sprite.height
            self.video_sprite.opacity = 70
        except zmq.ZMQError as e:
            # idx = frame.index if frame else "NONE"
            # logger.debug(f"reuse video frame {idx}")
            pass
        try:
            self.prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass
        try:
            self.tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass

        if new_tracks:
            self.update_tracks()

    def update_tracks(self):
        """Updates the track objects and shapes. Called after setting `prediction_frame`
        """

        # clean up
        # for track_id in list(self.drawn_tracks.keys()):
        #     if track_id not in self.prediction_frame.tracks.keys():
        #         # TODO fade out
        #         del self.drawn_tracks[track_id]

        if self.tracker_frame:
            for track_id, track in self.tracker_frame.tracks.items():
                if track_id not in self.drawn_tracks:
                    self.drawn_tracks[track_id] = DrawnTrack(track_id, track, self, self.tracker_frame.H, PROJECTION_MAP, self.config.camera)
                else:
                    self.drawn_tracks[track_id].set_track(track)

        if self.prediction_frame:
            for track_id, track in self.prediction_frame.tracks.items():
                if track_id not in self.drawn_tracks:
                    self.drawn_tracks[track_id] = DrawnTrack(track_id, track, self, self.prediction_frame.H, PROJECTION_MAP, self.config.camera)
                else:
                    self.drawn_tracks[track_id].set_predictions(track)

        # clean up
        for track_id in list(self.drawn_tracks.keys()):
            # TODO make delay configurable
            if self.drawn_tracks[track_id].update_at < time.time() - 5:
                # TODO fade out
                del self.drawn_tracks[track_id]


    def on_key_press(self, symbol, modifiers):
        print('A key was pressed, use f to hide')
        if symbol == ord('f'):
            self.window.set_fullscreen(not self.window.fullscreen)
        if symbol == ord('h'):
            self.hide_stats = not self.hide_stats

    def check_running(self, dt):
        if not self.is_running.is_set():
            self.window.close()
            self.event_loop.exit()

    def on_close(self):
        self.is_running.clear()


    def on_refresh(self, dt: float):
        # update shapes
        # self.bg =
        for track_id, track in self.drawn_tracks.items():
            track.update_drawn_positions(dt)

        self.refresh_labels(dt)

        # self.shape1 = shapes.Circle(700, 150, 100, color=(50, 0, 30), batch=self.batch_anim)
        # self.shape3 = shapes.Circle(800, 150, 100, color=(100, 225, 30), batch=self.batch_anim)
        pass

    def on_draw(self):
        self.window.clear()

        self.batch_bg.draw()

        for track in self.drawn_tracks.values():
            for shape in track.shapes:
                shape.draw() # for some reason the batches don't work
        for track in self.drawn_tracks.values():
            for shapes in track.pred_shapes:
                for shape in shapes:
                    shape.draw()
        # self.batch_anim.draw()
        self.batch_overlay.draw()

        if self.config.render_debug_shapes:
            self.pins.draw()

        # pyglet.graphics.draw(3, pyglet.gl.GL_LINE, ("v2i", (100,200, 600,800)), ('c3B', (255,255,255, 255,255,255)))

        if not self.hide_stats:
            self.fps_display.draw()

        # if streaming, capture buffer and send
        try:
            if self.streaming_process or self.out_writer:
                buf = pyglet.image.get_buffer_manager().get_color_buffer()
                img_data = buf.get_image_data()
                data = img_data.get_data() # alternative: .get_data("RGBA", image_data.pitch)
                img = np.asanyarray(data).reshape((img_data.height, img_data.width, 4))
                img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGB)
|
||||||
|
img = np.flip(img, 0)
|
||||||
|
# img = cv2.flip(img, cv2.0)
|
||||||
|
|
||||||
|
# cv2.imshow('frame', img)
|
||||||
|
# cv2.waitKey(1)
|
||||||
|
if self.streaming_process:
|
||||||
|
self.streaming_process.stdin.write(img.tobytes())
|
||||||
|
if self.out_writer:
|
||||||
|
self.out_writer.write(img)
|
||||||
|
except Exception as e:
|
||||||
|
logger.exception(e)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
frame = None
|
||||||
|
prediction_frame = None
|
||||||
|
tracker_frame = None
|
||||||
|
|
||||||
|
i=0
|
||||||
|
first_time = None
|
||||||
|
|
||||||
|
self.event_loop = pyglet.app.EventLoop()
|
||||||
|
pyglet.clock.schedule_interval(self.check_running, 0.1)
|
||||||
|
pyglet.clock.schedule(self.check_frames)
|
||||||
|
self.event_loop.run()
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# while self.is_running.is_set():
|
||||||
|
# i+=1
|
||||||
|
|
||||||
|
|
||||||
|
# # zmq_ev = self.frame_sock.poll(timeout=2000)
|
||||||
|
# # if not zmq_ev:
|
||||||
|
# # # when no data comes in, loop so that is_running is checked
|
||||||
|
# # continue
|
||||||
|
|
||||||
|
# try:
|
||||||
|
# frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
|
||||||
|
# except zmq.ZMQError as e:
|
||||||
|
# # idx = frame.index if frame else "NONE"
|
||||||
|
# # logger.debug(f"reuse video frame {idx}")
|
||||||
|
# pass
|
||||||
|
# # else:
|
||||||
|
# # logger.debug(f'new video frame {frame.index}')
|
||||||
|
|
||||||
|
|
||||||
|
# if frame is None:
|
||||||
|
# # might need to wait a few iterations before first frame comes available
|
||||||
|
# time.sleep(.1)
|
||||||
|
# continue
|
||||||
|
|
||||||
|
# try:
|
||||||
|
# prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
|
||||||
|
# except zmq.ZMQError as e:
|
||||||
|
# logger.debug(f'reuse prediction')
|
||||||
|
|
||||||
|
# if first_time is None:
|
||||||
|
# first_time = frame.time
|
||||||
|
|
||||||
|
# img = decorate_frame(frame, prediction_frame, first_time, self.config)
|
||||||
|
|
||||||
|
# img_path = (self.config.output_dir / f"{i:05d}.png").resolve()
|
||||||
|
|
||||||
|
# logger.debug(f"write frame {frame.time - first_time:.3f}s")
|
||||||
|
# if self.out_writer:
|
||||||
|
# self.out_writer.write(img)
|
||||||
|
# if self.streaming_process:
|
||||||
|
# self.streaming_process.stdin.write(img.tobytes())
|
||||||
|
# if self.config.render_window:
|
||||||
|
# cv2.imshow('frame',img)
|
||||||
|
# cv2.waitKey(1)
|
||||||
|
logger.info('Stopping')
|
||||||
|
logger.info(f'used corner pins {self.pins.pin_positions}')
|
||||||
|
|
||||||
|
|
||||||
|
# if i>2:
|
||||||
|
if self.streaming_process:
|
||||||
|
self.streaming_process.stdin.close()
|
||||||
|
if self.out_writer:
|
||||||
|
self.out_writer.release()
|
||||||
|
if self.streaming_process:
|
||||||
|
# oddly wrapped, because both close and release() take time.
|
||||||
|
self.streaming_process.wait()
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def run_animation_renderer(config: Namespace, is_running: BaseEvent):
|
||||||
|
renderer = AnimationRenderer(config, is_running)
|
||||||
|
renderer.run()
|
|
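The `check_frames` handler above relies on a common ZeroMQ idiom: a SUB socket with `CONFLATE` set, so only the newest message is kept, read with `zmq.NOBLOCK` so the render loop never stalls. A minimal, self-contained sketch of that pattern; the socket address and the 60 fps pacing are placeholders, not values from this repository:

```python
import time
import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.setsockopt(zmq.CONFLATE, 1)      # keep only the newest message
sock.setsockopt(zmq.SUBSCRIBE, b'')   # subscribe to everything; must precede connect
sock.connect("ipc:///tmp/frames")     # placeholder address

latest = None
while True:
    try:
        latest = sock.recv_pyobj(zmq.NOBLOCK)  # raises ZMQError when nothing is queued
    except zmq.ZMQError:
        pass  # no new message: reuse the previous one
    if latest is not None:
        ...  # render `latest` here
    time.sleep(1 / 60)
```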
@@ -1,10 +1,14 @@
 import argparse
 from pathlib import Path
 import types
+import numpy as np
+import json

-from trap.tracker import DETECTORS
+from trap.tracker import DETECTORS, TRACKER_BYTETRACK, TRACKERS
+from trap.frame_emitter import Camera

 from pyparsing import Optional
+from trap.frame_emitter import UrlOrPath

 class LambdaParser(argparse.ArgumentParser):
     """Execute lambda functions
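`LambdaParser` is only partially visible here; going by its docstring it resolves callable defaults (such as the `lambda:` default used for `--video-src` below) at parse time. A hedged sketch of how such a parser can work; this is an illustration, not the repository's actual implementation:

```python
import argparse

class LambdaParser(argparse.ArgumentParser):
    """Resolve callable defaults at parse time (illustrative sketch only)."""
    def parse_args(self, args=None, namespace=None):
        ns = super().parse_args(args, namespace)
        for name, value in vars(ns).items():
            if callable(value):             # a default given as `lambda: ...`
                setattr(ns, name, value())  # call it to obtain the real default
        return ns

parser = LambdaParser()
parser.add_argument('--paths', default=lambda: ['a.mp4', 'b.mp4'])
print(parser.parse_args([]).paths)  # ['a.mp4', 'b.mp4']
```

Deferring the default like this avoids running a filesystem glob at import time when the flag is overridden anyway.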
@@ -49,6 +53,43 @@ frame_emitter_parser = parser.add_argument_group('Frame emitter')
 tracker_parser = parser.add_argument_group('Tracker')
 render_parser = parser.add_argument_group('Renderer')

+class HomographyAction(argparse.Action):
+    def __init__(self, option_strings, dest, nargs=None, **kwargs):
+        if nargs is not None:
+            raise ValueError("nargs not allowed")
+        super().__init__(option_strings, dest, **kwargs)
+
+    def __call__(self, parser, namespace, values: Path, option_string=None):
+        if values.suffix == '.json':
+            with values.open('r') as fp:
+                H = np.array(json.load(fp))
+        else:
+            H = np.loadtxt(values, delimiter=',')
+
+        setattr(namespace, self.dest, values)
+        setattr(namespace, 'H', H)
+
+class CameraAction(argparse.Action):
+    def __init__(self, option_strings, dest, nargs=None, **kwargs):
+        if nargs is not None:
+            raise ValueError("nargs not allowed")
+        super().__init__(option_strings, dest, **kwargs)
+
+    def __call__(self, parser, namespace, values, option_string=None):
+        if values is None:
+            setattr(namespace, self.dest, None)
+        else:
+            values = Path(values)
+            with values.open('r') as fp:
+                data = json.load(fp)
+                # print(data)
+                # print(data['camera_matrix'])
+                # camera = {
+                #     'camera_matrix': np.array(data['camera_matrix']),
+                #     'dist_coeff': np.array(data['dist_coeff']),
+                # }
+                camera = Camera(np.array(data['camera_matrix']), np.array(data['dist_coeff']), data['dim']['width'], data['dim']['height'], namespace.H)
+
+            setattr(namespace, 'camera', camera)
+
 inference_parser.add_argument("--model_dir",
                     help="directory with the model to use for inference",
                     type=str, # TODO: make into Path
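These actions let a single `--homography`/`--calibration` flag materialise derived objects (`namespace.H`, `namespace.camera`) during parsing. A small usage sketch; the file paths are placeholders, and note that `--homography` has to appear before `--calibration` on the command line, since `CameraAction` reads `namespace.H`:

```python
# Sketch: exercising the custom actions defined above (paths are placeholders).
args = parser.parse_args([
    '--homography', '../DATASETS/NAME/homography.json',
    '--calibration', '../DATASETS/NAME/calibration.json',
])
print(args.H.shape)                  # (3, 3) matrix loaded by HomographyAction
print(args.camera.w, args.camera.h)  # image dimensions loaded by CameraAction
```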
@@ -166,16 +207,20 @@ inference_parser.add_argument('--num-samples',
                     default=5)
 inference_parser.add_argument("--full-dist",
                     help="Trajectron.incremental_forward parameter",
-                    type=bool,
-                    default=False)
+                    action='store_true')
 inference_parser.add_argument("--gmm-mode",
                     help="Trajectron.incremental_forward parameter",
                     type=bool,
                     default=True)
 inference_parser.add_argument("--z-mode",
                     help="Trajectron.incremental_forward parameter",
-                    type=bool,
-                    default=False)
+                    action='store_true')
+inference_parser.add_argument('--cm-to-m',
+                    help="Correct for a homography that is given in cm (i.e. {x,y}/100). Should also be used when processing data",
+                    action='store_true')
+inference_parser.add_argument('--center-data',
+                    help="Center data around cx and cy. Should also be used when processing data",
+                    action='store_true')


@@ -213,10 +258,10 @@ connection_parser.add_argument('--bypass-prediction',
 # Frame emitter

 frame_emitter_parser.add_argument("--video-src",
-                    help="source video to track from",
+                    help="source video to track from; can be a relative or absolute path, or a URL such as an RTSP resource",
-                    type=Path,
+                    type=UrlOrPath,
                     nargs='+',
-                    default=lambda: list(Path('../DATASETS/VIRAT_subset_0102x/').glob('*.mp4')))
+                    default=lambda: [UrlOrPath(p) for p in Path('../DATASETS/VIRAT_subset_0102x/').glob('*.mp4')])
 frame_emitter_parser.add_argument("--video-offset",
                     help="Start playback from given frame. Note that when src is an array, this applies to all videos individually.",
                     default=None,
@@ -234,7 +279,13 @@ frame_emitter_parser.add_argument("--video-loop",
 tracker_parser.add_argument("--homography",
                     help="File with homography params",
                     type=Path,
-                    default='../DATASETS/VIRAT_subset_0102x/VIRAT_0102_homography_img2world.txt')
+                    default='../DATASETS/VIRAT_subset_0102x/VIRAT_0102_homography_img2world.txt',
+                    action=HomographyAction)
+tracker_parser.add_argument("--calibration",
+                    help="File with camera intrinsics and lens distortion params (calibration.json)",
+                    # type=Path,
+                    default=None,
+                    action=CameraAction)
 tracker_parser.add_argument("--save-for-training",
                     help="Specify the path in which to save",
                     type=Path,
@@ -243,9 +294,24 @@ tracker_parser.add_argument("--detector",
                     help="Specify the detector to use",
                     type=str,
                     choices=DETECTORS)
+tracker_parser.add_argument("--tracker",
+                    help="Specify the tracker to use",
+                    type=str,
+                    default=TRACKER_BYTETRACK,
+                    choices=TRACKERS)
 tracker_parser.add_argument("--smooth-tracks",
                     help="Smooth the tracker tracks before sending them to the predictor",
                     action='store_true')
+# now in calibration.json
+# tracker_parser.add_argument("--frame-width",
+#                     help="width of the frames",
+#                     type=int,
+#                     default=1280)
+# tracker_parser.add_argument("--frame-height",
+#                     help="height of the frames",
+#                     type=int,
+#                     default=720)


 # Renderer

@@ -255,6 +321,12 @@ render_parser.add_argument("--render-file",
 render_parser.add_argument("--render-window",
                     help="Render a preview to a window",
                     action='store_true')
+render_parser.add_argument("--render-no-preview",
+                    help="No preview, only the animation",
+                    action='store_true')
+render_parser.add_argument("--render-debug-shapes",
+                    help="Lines and points for debugging/mapping",
+                    action='store_true')
 render_parser.add_argument("--full-screen",
                     help="Set window full screen",
                     action='store_true')
@@ -269,3 +341,9 @@ render_parser.add_argument("--render-url",
                     type=str,
                     default=None)


+render_parser.add_argument("--debug-points-file",
+                    help="A json file with points to test projection/homography etc.",
+                    type=Path,
+                    required=False,
+                    )
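The switch from `type=bool` to `action='store_true'` for `--full-dist` and `--z-mode` fixes a classic argparse trap: `bool('False')` is `True`, because any non-empty string is truthy, so a `type=bool` flag can never actually be turned off from the command line. A minimal demonstration (note that `--gmm-mode` above still uses the old pattern with `default=True`):

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument('--broken', type=bool, default=False)  # the old pattern
p.add_argument('--works', action='store_true')        # the fixed pattern

print(p.parse_args(['--broken', 'False']).broken)  # True! the string 'False' is truthy
print(p.parse_args(['--works']).works)             # True
print(p.parse_args([]).works)                      # False
```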
@@ -8,15 +8,36 @@ from pathlib import Path
 import pickle
 import sys
 import time
-from typing import Iterable, Optional
+from typing import Iterable, List, Optional
 import numpy as np
 import cv2
 import zmq
+import os
 from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
 from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
+from bytetracker.byte_tracker import STrack as ByteTrackTrack
+from bytetracker.basetrack import TrackState as ByteTrackTrackState
+
+from urllib.parse import urlparse

 logger = logging.getLogger('trap.frame_emitter')


+class UrlOrPath():
+    def __init__(self, string):
+        self.url = urlparse(str(string))
+
+    def __str__(self) -> str:
+        return self.url.geturl()
+
+    def is_url(self) -> bool:
+        return len(self.url.netloc) > 0
+
+    def path(self) -> Path:
+        if self.is_url():
+            return Path(self.url.path)
+        return Path(self.url.geturl())  # can include scheme, such as C:/
+
 class DetectionState(IntFlag):
     Tentative = 1  # state before n_init (see DeepsortTrack)
     Confirmed = 2  # after tentative
@@ -32,6 +53,26 @@ class DetectionState(IntFlag):
             return cls.Confirmed
         raise RuntimeError("Should not run into Deleted entries here")

+    @classmethod
+    def from_bytetrack_track(cls, track: ByteTrackTrack):
+        if track.state == ByteTrackTrackState.New:
+            return cls.Tentative
+        if track.state == ByteTrackTrackState.Lost:
+            return cls.Lost
+        # if track.time_since_update > 0:
+        if track.state == ByteTrackTrackState.Tracked:
+            return cls.Confirmed
+        raise RuntimeError("Should not run into Deleted entries here")
+
+class Camera:
+    def __init__(self, mtx, dist, w, h, H):
+        self.mtx = mtx
+        self.dist = dist
+        self.w = w
+        self.h = h
+        self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
+        self.H = H  # homography
+
 @dataclass
 class Detection:
@@ -43,13 +84,19 @@ class Detection:
     conf: float  # object detector probability
     state: DetectionState
     frame_nr: int
+    det_class: str

     def get_foot_coords(self) -> list[tuple[float, float]]:
         return [self.l + 0.5 * self.w, self.t + self.h]

     @classmethod
-    def from_deepsort(cls, dstrack: DeepsortTrack):
-        return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack))
+    def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
+        return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)
+
+    @classmethod
+    def from_bytetrack(cls, bstrack: ByteTrackTrack, frame_nr: int):
+        return cls(bstrack.track_id, *bstrack.tlwh, bstrack.score, DetectionState.from_bytetrack_track(bstrack), frame_nr, bstrack.cls)

     def get_scaled(self, scale: float = 1):
         if scale == 1:
@@ -62,7 +109,9 @@ class Detection:
             self.w * scale,
             self.h * scale,
             self.conf,
-            self.state)
+            self.state,
+            self.frame_nr,
+            self.det_class)

     def to_ltwh(self):
         return (int(self.l), int(self.t), int(self.w), int(self.h))
@@ -79,23 +128,29 @@ class Track:
     and acceleration.
     """
     track_id: str = None
-    history: [Detection] = field(default_factory=lambda: [])
+    history: List[Detection] = field(default_factory=lambda: [])
     predictor_history: Optional[list] = None  # in image space
     predictions: Optional[list] = None

-    def get_projected_history(self, H) -> np.array:
+    def get_projected_history(self, H, camera: Optional[Camera] = None) -> np.array:
         foot_coordinates = [d.get_foot_coords() for d in self.history]
+        # TODO)) Undistort points before perspective transform
         if len(foot_coordinates):
-            coords = cv2.perspectiveTransform(np.array([foot_coordinates]), H)
-            return coords[0]
+            if camera:
+                coords = cv2.undistortPoints(np.array([foot_coordinates]).astype('float32'), camera.mtx, camera.dist, None, camera.newcameramtx)
+                coords = cv2.perspectiveTransform(np.array(coords), H)
+                return coords.reshape((coords.shape[0], 2))
+            else:
+                coords = cv2.perspectiveTransform(np.array([foot_coordinates]), H)
+                return coords[0]
         return np.array([])

-    def get_projected_history_as_dict(self, H) -> dict:
-        coords = self.get_projected_history(H)
+    def get_projected_history_as_dict(self, H, camera: Optional[Camera] = None) -> dict:
+        coords = self.get_projected_history(H, camera)
         return [{"x": c[0], "y": c[1]} for c in coords]
@@ -106,6 +161,7 @@ class Frame:
     time: float = field(default_factory=lambda: time.time())
     tracks: Optional[dict[str, Track]] = None
     H: Optional[np.array] = None
+    camera: Optional[Camera] = None

     def aslist(self) -> [dict]:
         return { t.track_id:
@@ -120,6 +176,13 @@ class Frame:
             } for t in self.tracks.values()
         }

+def video_src_from_config(config) -> Iterable[UrlOrPath]:
+    if config.video_loop:
+        video_srcs: Iterable[UrlOrPath] = cycle(config.video_src)
+    else:
+        video_srcs: Iterable[UrlOrPath] = config.video_src
+    return video_srcs
+
 class FrameEmitter:
     '''
     Emit frames in a separate thread so they can be throttled,
@@ -137,27 +200,33 @@ class FrameEmitter:

         logger.info(f"Connection socket {config.zmq_frame_addr}")

-        if self.config.video_loop:
-            self.video_srcs: Iterable[Path] = cycle(self.config.video_src)
-        else:
-            self.video_srcs: [Path] = self.config.video_src
+        self.video_srcs = video_src_from_config(self.config)

     def emit_video(self):
         i = 0
+        delay_generation = False
         for video_path in self.video_srcs:
             logger.info(f"Play from '{str(video_path)}'")
             if str(video_path).isdigit():
                 # numeric input is a CV camera
                 video = cv2.VideoCapture(int(str(video_path)))
                 # TODO: make config variables
-                video.set(cv2.CAP_PROP_FRAME_WIDTH, int(1280))
-                video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(720))
+                video.set(cv2.CAP_PROP_FRAME_WIDTH, int(self.config.camera.w))
+                video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(self.config.camera.h))
                 print("exposure!", video.get(cv2.CAP_PROP_AUTO_EXPOSURE))
                 video.set(cv2.CAP_PROP_FPS, 5)
+                fps = 5
+            elif video_path.url.scheme == 'rtsp':
+                gst = f"rtspsrc location={video_path} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true"
+                logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
+                video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
+                fps = 12
             else:
+                # os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"
                 video = cv2.VideoCapture(str(video_path))
+                delay_generation = True
                 fps = video.get(cv2.CAP_PROP_FPS)
             target_frame_duration = 1. / fps
             logger.info(f"Emit frames at {fps} fps")
@@ -167,18 +236,19 @@ class FrameEmitter:
                 i = self.config.video_offset

-            if '-' in video_path.stem:
-                path_stem = video_path.stem[:video_path.stem.rfind('-')]
-            else:
-                path_stem = video_path.stem
-            path_stem += "-homography"
-            homography_path = video_path.with_stem(path_stem).with_suffix('.txt')
-            logger.info(f'check homography file {homography_path}')
-            if homography_path.exists():
-                logger.info(f'Found custom homography file! Using {homography_path}')
-                video_H = np.loadtxt(homography_path, delimiter=',')
-            else:
-                video_H = None
+            # if '-' in video_path.path().stem:
+            #     path_stem = video_path.stem[:video_path.stem.rfind('-')]
+            # else:
+            #     path_stem = video_path.stem
+            # path_stem += "-homography"
+            # homography_path = video_path.with_stem(path_stem).with_suffix('.txt')
+            # logger.info(f'check homography file {homography_path}')
+            # if homography_path.exists():
+            #     logger.info(f'Found custom homography file! Using {homography_path}')
+            #     video_H = np.loadtxt(homography_path, delimiter=',')
+            # else:
+            #     video_H = None
+            video_H = self.config.camera.H

             prev_time = time.time()
@@ -198,19 +268,22 @@ class FrameEmitter:
                 # hack to mask out area
                 cv2.rectangle(img, (0, 0), (800, 200), (0, 0, 0), -1)

-                frame = Frame(index=i, img=img, H=video_H)
+                frame = Frame(index=i, img=img, H=self.config.H, camera=self.config.camera)
                 # TODO: this is very dirty, need to find another way.
                 # perhaps multiprocessing Array?
                 self.frame_sock.send(pickle.dumps(frame))

-                # defer next loop
-                now = time.time()
-                time_diff = (now - prev_time)
-                if time_diff < target_frame_duration:
-                    time.sleep(target_frame_duration - time_diff)
-                    now += target_frame_duration - time_diff
-                prev_time = now
+                # only delay consuming the next frame when using a file;
+                # otherwise, go ASAP
+                if delay_generation:
+                    # defer next loop
+                    now = time.time()
+                    time_diff = (now - prev_time)
+                    if time_diff < target_frame_duration:
+                        time.sleep(target_frame_duration - time_diff)
+                        now += target_frame_duration - time_diff
+                    prev_time = now

                 i += 1
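`Track.get_projected_history` now undistorts the foot coordinates with the camera intrinsics before applying the homography. A standalone sketch of that two-step projection, using made-up intrinsics and an identity homography in place of real calibration data (the keyword `P=` is used here to make the new-camera-matrix argument explicit):

```python
import cv2
import numpy as np

# Assumed placeholder calibration: a 1280x720 pinhole camera, fabricated distortion.
w, h = 1280, 720
mtx = np.array([[1000., 0., w / 2], [0., 1000., h / 2], [0., 0., 1.]])
dist = np.array([-0.3, 0.1, 0., 0., 0.])  # made-up distortion coefficients
newcameramtx, _roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
H = np.eye(3)  # stand-in for the image-to-world homography

points = np.array([[[100., 700.], [640., 360.]]], dtype='float32')  # foot points
undistorted = cv2.undistortPoints(points, mtx, dist, P=newcameramtx)
world = cv2.perspectiveTransform(undistorted, H)
print(world.reshape(-1, 2))  # one (x, y) world coordinate per detection
```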
@@ -1,6 +1,6 @@
 import atexit
 import logging
-from logging.handlers import SocketHandler
+from logging.handlers import SocketHandler, QueueHandler, QueueListener
 from multiprocessing import Event, Process, Queue
 import multiprocessing
 import signal
@@ -9,14 +9,18 @@ import time
 from trap.config import parser
 from trap.frame_emitter import run_frame_emitter
 from trap.prediction_server import run_prediction_server
-from trap.renderer import run_renderer
+from trap.preview_renderer import run_preview_renderer
+from trap.animation_renderer import run_animation_renderer
 from trap.socket_forwarder import run_ws_forwarder
 from trap.tracker import run_tracker

+from setproctitle import setproctitle, setthreadtitle
+

 logger = logging.getLogger("trap.plumbing")


 class ExceptionHandlingProcess(Process):

     def run(self):
@@ -31,10 +35,12 @@ class ExceptionHandlingProcess(Process):
         atexit.register(exit_handler)
         signal.signal(signal.SIGTERM, exit_handler)
         signal.signal(signal.SIGINT, exit_handler)
+        setproctitle(f"trap-{self.name}")

         try:
             super(Process, self).run()
-        except Exception as e:
+            print("finished ", self.name)
+        except BaseException as e:
             logger.critical(f"Exception in {self.name}")
             logger.exception(e)
             self._kwargs['is_running'].clear()
@@ -44,25 +50,37 @@ def start():
     loglevel = logging.NOTSET if args.verbose > 1 else logging.DEBUG if args.verbose > 0 else logging.INFO
     # print(args)
     # exit()
-    logging.basicConfig(
-        level=loglevel,
-    )
-
-    # set per handler, so we can set it lower for the root logger if remote logging is enabled
-    root_logger = logging.getLogger()
-    [h.setLevel(loglevel) for h in root_logger.handlers]

     isRunning = Event()
     isRunning.set()

+    q = multiprocessing.Queue(-1)
+    queue_handler = QueueHandler(q)
+    stream_handler = logging.StreamHandler()
+    log_handlers = [stream_handler]
+
     if args.remote_log_addr:
         logging.captureWarnings(True)
-        root_logger.setLevel(logging.NOTSET) # to send all records to cutelog
+        # root_logger.setLevel(logging.NOTSET) # to send all records to cutelog
         socket_handler = SocketHandler(args.remote_log_addr, args.remote_log_port)
-        root_logger.addHandler(socket_handler)
+        socket_handler.setLevel(logging.NOTSET)
+        log_handlers.append(socket_handler)
+
+    queue_listener = QueueListener(q, *log_handlers, respect_handler_level=True)
+
+    # root = logging.getLogger()
+    logging.basicConfig(
+        level=loglevel,
+        handlers=[queue_handler]
+    )
+
+    # root_logger = logging.getLogger()
+    # # set per handler, so we can set it lower for the root logger if remote logging is enabled
+    # [h.setLevel(loglevel) for h in root_logger.handlers]
+
+    # queue_listener.handlers.append(socket_handler)
@@ -74,8 +92,12 @@ def start():
     ]

     if args.render_file or args.render_url or args.render_window:
+        if not args.render_no_preview or args.render_file or args.render_url:
+            procs.append(
+                ExceptionHandlingProcess(target=run_preview_renderer, kwargs={'config': args, 'is_running': isRunning}, name='preview')
+            )
         procs.append(
-            ExceptionHandlingProcess(target=run_renderer, kwargs={'config': args, 'is_running': isRunning}, name='renderer')
+            ExceptionHandlingProcess(target=run_animation_renderer, kwargs={'config': args, 'is_running': isRunning}, name='renderer')
         )

     if not args.bypass_prediction:
@@ -83,16 +105,41 @@ def start():
             ExceptionHandlingProcess(target=run_prediction_server, kwargs={'config': args, 'is_running': isRunning}, name='inference'),
         )

-    logger.info("start")
-    for proc in procs:
-        proc.start()
+    try:
+        logger.info("start")
+        for proc in procs:
+            proc.start()
+
+        # if the listener is started before the subprocesses it becomes a mess, because its
+        # running thread is forked too, but cannot easily be stopped in the forks.
+        # Thus, only start the queue-listener thread _after_ starting the processes
+        queue_listener.start()

-    # wait for processes to clean up
-    for proc in procs:
-        proc.join()
+        # wait for processes to clean up
+        for proc in procs:
+            proc.join()
+
+        isRunning.clear()

-    logger.info('Stop')
+        logger.info('Stop')
+    except BaseException as e:
+        # mainly for KeyboardInterrupt,
+        # but in any case, on error all processes need to be signalled to shut down
+        logger.critical("Exception in plumber")
+        logger.exception(e)
+        isRunning.clear()
+
+    # while True:
+    #     time.sleep(2)
+    #     any_alive = False
+    #     alive = [proc for proc in procs if proc.is_alive()]
+    #     print("alive: ", [p.name for p in alive])
+    #     if len(alive) < 1:
+    #         break
+    print('stop listener')
+    queue_listener.stop()
+    print('stopped listener')
+    print("finished plumber")

 if __name__ == "__main__":
     start()
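The `QueueHandler`/`QueueListener` rework above is the standard recipe for logging from multiple processes: every process writes records into a shared queue, and a single listener thread in the parent fans them out to the real handlers (stream, socket). A condensed, self-contained sketch of the same pattern, mirroring the start-listener-after-fork ordering noted in the comments:

```python
import logging
import multiprocessing
from logging.handlers import QueueHandler, QueueListener

def worker(q: multiprocessing.Queue):
    # each child process logs only through the queue
    logging.basicConfig(level=logging.INFO, handlers=[QueueHandler(q)], force=True)
    logging.getLogger("child").info("hello from %s", multiprocessing.current_process().name)

if __name__ == "__main__":
    q = multiprocessing.Queue(-1)
    listener = QueueListener(q, logging.StreamHandler(), respect_handler_level=True)

    procs = [multiprocessing.Process(target=worker, args=(q,)) for _ in range(2)]
    for p in procs:
        p.start()
    listener.start()  # after forking, so the listener thread is not duplicated
    for p in procs:
        p.join()
    listener.stop()
```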
@@ -113,6 +113,34 @@ def get_maps_for_input(input_dict, scene, hyperparams):
     return maps_dict


+# If the homography is in cm, predictions can be terrible. Correct that here.
+# TODO)) This should actually not be here; we should use an alternative homography
+# and then scale up in rendering
+def history_cm_to_m(history):
+    return [(h[0] / 100, h[1] / 100) for h in history]
+
+# TODO)) make variable. For now these are placeholders for the hof2 dataset
+cx = 11.874955125
+cy = 7.186118765
+
+def prediction_m_to_cm(source):
+    # histories_dict[t][node]
+    for t in source:
+        for node in source[t]:
+            # source[t][node][:, 0] += cx
+            # source[t][node][:, 1] += cy
+            source[t][node] *= 100
+            # print(t, node, source[t][node])
+    return source
+
+def offset_trajectron_dict(source, x, y):
+    # histories_dict[t][node]
+    for t in source:
+        for node in source[t]:
+            source[t][node][:, 0] += x
+            source[t][node][:, 1] += y
+    return source
+
 class PredictionServer:
     def __init__(self, config: Namespace, is_running: Event):
         self.config = config
@@ -122,7 +150,7 @@ class PredictionServer:
             logger.warning("Running on CPU. Specifying --eval_device cuda:0 should dramatically speed up prediction")

         if self.config.smooth_predictions:
-            self.smoother = Smoother(window_len=4)
+            self.smoother = Smoother(window_len=12, convolution=True)  # convolution seems fine for predictions

         context = zmq.Context()
         self.trajectory_socket: zmq.Socket = context.socket(zmq.SUB)
@@ -153,6 +181,7 @@ class PredictionServer:
         if not os.path.exists(config_file):
             raise ValueError('Config json not found!')
         with open(config_file, 'r') as conf_json:
+            logger.info(f"Load config from {config_file}")
             hyperparams = json.load(conf_json)

         # Add hyperparams from arguments
@@ -269,15 +298,21 @@ class PredictionServer:

             # TODO: modify this into a mapping function between JS data and the expected Node format
             # node = FakeNode(online_env.NodeType.PEDESTRIAN)
-            history = [[h['x'], h['y']] for h in track.get_projected_history_as_dict(frame.H)]
+            history = [[h['x'], h['y']] for h in track.get_projected_history_as_dict(frame.H, self.config.camera)]
+            if self.config.cm_to_m:
+                history = history_cm_to_m(history)
+
             history = np.array(history)
-            x = history[:, 0]
-            y = history[:, 1]
+            x = history[:, 0]  # - cx  # we can create bigger steps by doing history[::5, 0]
+            y = history[:, 1]  # - cy  # history[::5, 1]
+            if self.config.center_data:
+                x -= cx
+                y -= cy
             # TODO: calculate dt based on input
-            vx = derivative_of(x, 0.1)  # eval_scene.dt
-            vy = derivative_of(y, 0.1)
-            ax = derivative_of(vx, 0.1)
-            ay = derivative_of(vy, 0.1)
+            vx = derivative_of(x, .1)  # eval_scene.dt
+            vy = derivative_of(y, .1)
+            ax = derivative_of(vx, .1)
+            ay = derivative_of(vy, .1)

             data_dict = {('position', 'x'): x[:],  # [-10:-1]
                          ('position', 'y'): y[:],  # [-10:-1]
@@ -334,7 +369,7 @@ class PredictionServer:
                 maps,
                 prediction_horizon=self.config.prediction_horizon,  # TODO: make variable
                 num_samples=self.config.num_samples,  # TODO: make variable
                 full_dist=self.config.full_dist,  # "The model’s full sampled output, where z and y are sampled sequentially"
                 gmm_mode=self.config.gmm_mode,  # "If True: The mode of the Gaussian Mixture Model (GMM) is sampled (see trajectron.model.mgcvae.py)"
                 z_mode=self.config.z_mode  # "Predictions from the model’s most-likely high-level latent behavior mode" (see trajecton.models.components.discrete_latent:sample_p(most_likely_z=z_mode))
             )
@@ -359,7 +394,16 @@ class PredictionServer:
                 hyperparams['maximum_history_length'],
                 hyperparams['prediction_horizon']
             )

+            # if self.config.center_data:
+            #     prediction_dict, histories_dict, futures_dict = offset_trajectron_dict(prediction_dict, cx, cy), offset_trajectron_dict(histories_dict, cx, cy), offset_trajectron_dict(futures_dict, cx, cy)
+
+            if self.config.cm_to_m:
+                # convert back to fit the homography
+                prediction_dict, histories_dict, futures_dict = prediction_m_to_cm(prediction_dict), prediction_m_to_cm(histories_dict), prediction_m_to_cm(futures_dict)
+
             assert(len(prediction_dict.keys()) <= 1)
             if len(prediction_dict.keys()) == 0:
                 return
@@ -395,12 +439,13 @@ class PredictionServer:
             if self.config.predict_training_data:
                 logger.info(f"Frame prediction: {len(trajectron.nodes)} nodes & {trajectron.scene_graph.get_num_edges()} edges. Trajectron: {end - start}s")
             else:
-                logger.info(f"Total frame delay = {time.time()-frame.time}s ({len(trajectron.nodes)} nodes & {trajectron.scene_graph.get_num_edges()} edges. Trajectron: {end - start}s)")
+                logger.debug(f"Total frame delay = {time.time()-frame.time}s ({len(trajectron.nodes)} nodes & {trajectron.scene_graph.get_num_edges()} edges. Trajectron: {end - start}s)")

             if self.config.smooth_predictions:
                 frame = self.smoother.smooth_frame_predictions(frame)

             self.prediction_socket.send_pyobj(frame)
+        time.sleep(.5)
         logger.info('Stopping')
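`history_cm_to_m` and `prediction_m_to_cm` above form a round trip: track histories measured in a centimetre-based homography are scaled to metres before they reach Trajectron, and the predictions are scaled back so they fit the same homography for rendering. A tiny worked example of both directions (coordinate values are illustrative):

```python
import numpy as np

def history_cm_to_m(history):
    return [(h[0] / 100, h[1] / 100) for h in history]

history = [(1187.5, 718.6), (1200.0, 730.0)]  # centimetres, homography space
print(history_cm_to_m(history))               # [(11.875, 7.186), (12.0, 7.3)]

# predictions come back keyed as source[timestep][node] -> ndarray of (x, y)
prediction = {0: {'ped-1': np.array([[11.9, 7.2], [12.1, 7.4]])}}
for t in prediction:
    for node in prediction[t]:
        prediction[t][node] *= 100            # metres back to centimetres
print(prediction[0]['ped-1'])                 # [[1190.  720.] [1210.  740.]]
```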
@ -10,7 +10,7 @@ from multiprocessing import Event
|
||||||
from multiprocessing.synchronize import Event as BaseEvent
|
from multiprocessing.synchronize import Event as BaseEvent
|
||||||
import cv2
|
import cv2
|
||||||
import numpy as np
|
import numpy as np
|
||||||
|
import json
|
||||||
import pyglet
|
import pyglet
|
||||||
import pyglet.event
|
import pyglet.event
|
||||||
import zmq
|
import zmq
|
||||||
|
@ -18,15 +18,17 @@ import tempfile
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
import shutil
|
import shutil
|
||||||
import math
|
import math
|
||||||
|
from typing import Optional
|
||||||
|
|
||||||
|
|
||||||
from pyglet import shapes
|
from pyglet import shapes
|
||||||
from PIL import Image
|
from PIL import Image
|
||||||
|
|
||||||
from trap.frame_emitter import DetectionState, Frame, Track
|
from trap.frame_emitter import DetectionState, Frame, Track, Camera
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
logger = logging.getLogger("trap.renderer")
|
logger = logging.getLogger("trap.preview")
|
||||||
|
|
||||||
class FrameAnimation:
|
class FrameAnimation:
|
||||||
def __init__(self, frame: Frame):
|
def __init__(self, frame: Frame):
|
||||||
|
@ -55,32 +57,46 @@ def relativePointToPolar(origin, point) -> tuple[float, float]:
|
||||||
def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
|
def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
|
||||||
return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
|
return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
|
||||||
|
|
||||||
|
PROJECTION_IMG = 0
|
||||||
|
PROJECTION_UNDISTORT = 1
|
||||||
|
PROJECTION_MAP = 2
|
||||||
|
PROJECTION_PROJECTOR = 4
|
||||||
|
|
||||||
class DrawnTrack:
|
class DrawnTrack:
|
||||||
def __init__(self, track_id, track: Track, renderer: Renderer, H):
|
def __init__(self, track_id, track: Track, renderer: PreviewRenderer, H, draw_projection = PROJECTION_IMG, camera: Optional[Camera] = None):
|
||||||
# self.created_at = time.time()
|
# self.created_at = time.time()
|
||||||
|
self.draw_projection = draw_projection
|
||||||
self.update_at = self.created_at = time.time()
|
self.update_at = self.created_at = time.time()
|
||||||
self.track_id = track_id
|
self.track_id = track_id
|
||||||
self.renderer = renderer
|
self.renderer = renderer
|
||||||
|
self.camera = camera
|
||||||
|
self.H = H # TODO)) Move H to Camera object
|
||||||
self.set_track(track, H)
|
self.set_track(track, H)
|
||||||
|
self.set_predictions(track, H)
|
||||||
self.drawn_positions = []
|
self.drawn_positions = []
|
||||||
self.drawn_predictions = []
|
self.drawn_predictions = []
|
||||||
self.shapes: list[pyglet.shapes.Line] = []
|
self.shapes: list[pyglet.shapes.Line] = []
|
||||||
self.pred_shapes: list[list[pyglet.shapes.Line]] = []
|
self.pred_shapes: list[list[pyglet.shapes.Line]] = []
|
||||||
|
|
||||||
def set_track(self, track: Track, H):
|
def set_track(self, track: Track, H = None):
|
||||||
self.update_at = time.time()
|
self.update_at = time.time()
|
||||||
|
|
||||||
self.track = track
|
self.track = track
|
||||||
self.H = H
|
# self.H = H
|
||||||
self.coords = [d.get_foot_coords() for d in track.history]
|
self.coords = [d.get_foot_coords() for d in track.history] if self.draw_projection == PROJECTION_IMG else track.get_projected_history(self.H, self.camera)
|
||||||
|
|
||||||
# perhaps only do in constructor:
|
# perhaps only do in constructor:
|
||||||
self.inv_H = np.linalg.pinv(self.H)
|
self.inv_H = np.linalg.pinv(self.H)
|
||||||
|
|
||||||
|
def set_predictions(self, track: Track, H = None):
|
||||||
|
|
||||||
pred_coords = []
|
pred_coords = []
|
||||||
for pred_i, pred in enumerate(track.predictions):
|
if track.predictions:
|
||||||
pred_coords.append(cv2.perspectiveTransform(np.array([pred]), self.inv_H)[0].tolist())
|
if self.draw_projection == PROJECTION_IMG:
|
||||||
|
for pred_i, pred in enumerate(track.predictions):
|
||||||
|
pred_coords.append(cv2.perspectiveTransform(np.array([pred]), self.inv_H)[0].tolist())
|
||||||
|
elif self.draw_projection == PROJECTION_MAP:
|
||||||
|
pred_coords = [pred for pred in track.predictions]
|
||||||
|
|
||||||
self.pred_coords = pred_coords
|
self.pred_coords = pred_coords
|
||||||
# color = (128,0,128) if pred_i else (128,
|
# color = (128,0,128) if pred_i else (128,
|
||||||
|
@ -98,20 +114,21 @@ class DrawnTrack:
|
||||||
if len(self.coords) > len(self.drawn_positions):
|
if len(self.coords) > len(self.drawn_positions):
|
||||||
self.drawn_positions.extend(self.coords[len(self.drawn_positions):])
|
self.drawn_positions.extend(self.coords[len(self.drawn_positions):])
|
||||||
|
|
||||||
for a, drawn_prediction in enumerate(self.drawn_predictions):
|
if len(self.pred_coords):
|
||||||
for i, pos in enumerate(drawn_prediction):
|
for a, drawn_prediction in enumerate(self.drawn_predictions):
|
||||||
# TODO: this should be done in polar space starting from origin (i.e. self.drawn_posision[-1])
|
for i, pos in enumerate(drawn_prediction):
|
||||||
decay = max(3, (18/i) if i else 10) # points further away move with more delay
|
# TODO: this should be done in polar space starting from origin (i.e. self.drawn_posision[-1])
|
||||||
decay = 6
|
decay = max(3, (18/i) if i else 10) # points further away move with more delay
|
||||||
origin = self.drawn_positions[-1]
|
decay = 16
|
||||||
drawn_r, drawn_angle = relativePointToPolar( origin, drawn_prediction[i])
|
origin = self.drawn_positions[-1]
|
||||||
pred_r, pred_angle = relativePointToPolar(origin, self.pred_coords[a][i])
|
drawn_r, drawn_angle = relativePointToPolar( origin, drawn_prediction[i])
|
||||||
r = exponentialDecay(drawn_r, pred_r, decay, dt)
|
pred_r, pred_angle = relativePointToPolar(origin, self.pred_coords[a][i])
|
||||||
angle = exponentialDecay(drawn_angle, pred_angle, decay, dt)
|
r = exponentialDecay(drawn_r, pred_r, decay, dt)
|
||||||
x, y = relativePolarToPoint(origin, r, angle)
|
angle = exponentialDecay(drawn_angle, pred_angle, decay, dt)
|
||||||
self.drawn_predictions[a][i] = int(x), int(y)
|
x, y = relativePolarToPoint(origin, r, angle)
|
||||||
# self.drawn_predictions[i][0] = int(exponentialDecay(self.drawn_predictions[i][0], self.pred_coords[i][0], decay, dt))
|
self.drawn_predictions[a][i] = int(x), int(y)
|
||||||
# self.drawn_predictions[i][1] = int(exponentialDecay(self.drawn_predictions[i][1], self.pred_coords[i][1], decay, dt))
|
# self.drawn_predictions[i][0] = int(exponentialDecay(self.drawn_predictions[i][0], self.pred_coords[i][0], decay, dt))
|
||||||
|
# self.drawn_predictions[i][1] = int(exponentialDecay(self.drawn_predictions[i][1], self.pred_coords[i][1], decay, dt))
|
||||||
|
|
||||||
if len(self.pred_coords) > len(self.drawn_predictions):
|
if len(self.pred_coords) > len(self.drawn_predictions):
|
||||||
self.drawn_predictions.extend(self.pred_coords[len(self.drawn_predictions):])
|
self.drawn_predictions.extend(self.pred_coords[len(self.drawn_predictions):])
|
||||||
|
@ -139,15 +156,17 @@ class DrawnTrack:
|
||||||
if ci >= len(self.shapes):
|
if ci >= len(self.shapes):
|
||||||
# TODO: add color2
|
# TODO: add color2
|
||||||
line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
||||||
line.opacity = 5
|
line = pyglet.shapes.Arc(x2, y2, 10, thickness=2, color=color, batch=self.renderer.batch_anim)
|
||||||
|
line.opacity = 20
|
||||||
self.shapes.append(line)
|
self.shapes.append(line)
|
||||||
|
|
||||||
else:
|
else:
|
||||||
line = self.shapes[ci-1]
|
line = self.shapes[ci-1]
|
||||||
line.x, line.y = x, y
|
line.x, line.y = x, y
|
||||||
line.x2, line.y2 = x2, y2
|
line.x2, line.y2 = x2, y2
|
||||||
|
line.radius = int(exponentialDecay(line.radius, 1.5, 3, dt))
|
||||||
line.color = color
|
line.color = color
|
||||||
line.opacity = int(exponentialDecay(line.opacity, 180, 3, dt))
|
line.opacity = int(exponentialDecay(line.opacity, 180, 8, dt))
|
||||||
|
|
||||||
# TODO: basically a duplication of the above, do this smarter?
|
# TODO: basically a duplication of the above, do this smarter?
|
||||||
# TODO: add intermediate segment
|
# TODO: add intermediate segment
|
||||||
|
@ -163,7 +182,8 @@ class DrawnTrack:
|
||||||
# for i, pos in drawn_predictions.enumerate():
|
# for i, pos in drawn_predictions.enumerate():
|
||||||
for ci in range(0, len(drawn_predictions)):
|
for ci in range(0, len(drawn_predictions)):
|
||||||
if ci == 0:
|
if ci == 0:
|
||||||
x, y = [int(p) for p in self.drawn_positions[-1]]
|
continue
|
||||||
|
# x, y = [int(p) for p in self.drawn_positions[-1]]
|
||||||
else:
|
else:
|
||||||
x, y = [int(p) for p in drawn_predictions[ci-1]]
|
x, y = [int(p) for p in drawn_predictions[ci-1]]
|
||||||
|
|
||||||
|
@ -175,7 +195,9 @@ class DrawnTrack:
|
||||||
|
|
||||||
if ci >= len(self.pred_shapes[a]):
|
if ci >= len(self.pred_shapes[a]):
|
||||||
# TODO: add color2
|
# TODO: add color2
|
||||||
line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
# line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
||||||
|
line = pyglet.shapes.Line(x,y ,x2, y2, 1.5, color, batch=self.renderer.batch_anim)
|
||||||
|
# line = pyglet.shapes.Arc(x,y ,1.5, thickness=1.5, color=color, batch=self.renderer.batch_anim)
|
||||||
line.opacity = 5
|
line.opacity = 5
|
||||||
self.pred_shapes[a].append(line)
|
self.pred_shapes[a].append(line)
|
||||||
|
|
||||||
|
@ -187,9 +209,9 @@ class DrawnTrack:
|
||||||
decay = (16/ci) if ci else 16
|
decay = (16/ci) if ci else 16
|
||||||
half = len(drawn_predictions) / 2
|
half = len(drawn_predictions) / 2
|
||||||
if ci < half:
|
if ci < half:
|
||||||
target_opacity = 180
|
target_opacity = 60
|
||||||
else:
|
else:
|
||||||
target_opacity = (1 - ((ci - half) / half)) * 180
|
target_opacity = (1 - ((ci - half) / half)) * 60
|
||||||
line.opacity = int(exponentialDecay(line.opacity, target_opacity, decay, dt))
|
```diff
 line.opacity = int(exponentialDecay(line.opacity, target_opacity, decay, dt))

@@ -232,7 +254,7 @@ class FrameWriter:

-class Renderer:
+class PreviewRenderer:
     def __init__(self, config: Namespace, is_running: BaseEvent):
         self.config = config
         self.is_running = is_running

@@ -241,7 +263,8 @@ class Renderer:
         self.prediction_sock = context.socket(zmq.SUB)
         self.prediction_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
         self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
-        self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
+        # self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
+        self.prediction_sock.connect(config.zmq_prediction_addr)

         self.tracker_sock = context.socket(zmq.SUB)
         self.tracker_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!

@@ -253,14 +276,23 @@ class Renderer:
         self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
         self.frame_sock.connect(config.zmq_frame_addr)

-        self.H = np.loadtxt(self.config.homography, delimiter=',')
+        # TODO)) Move loading H to config.py
+        # if self.config.homography.suffix == '.json':
+        #     with self.config.homography.open('r') as fp:
+        #         self.H = np.array(json.load(fp))
+        # else:
+        #     self.H = np.loadtxt(self.config.homography, delimiter=',')
+        # print('h', self.config.H)
+        self.H = self.config.H

         self.inv_H = np.linalg.pinv(self.H)

         # TODO: get FPS from frame_emitter
         # self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
         self.fps = 60
-        self.frame_size = (1280,720)
+        self.frame_size = (self.config.camera.w, self.config.camera.h)
         self.hide_stats = False
         self.out_writer = self.start_writer() if self.config.render_file else None
         self.streaming_process = self.start_streaming() if self.config.render_url else None

@@ -649,16 +681,26 @@ class Renderer:
             self.out_writer.release()
         if self.streaming_process:
             # oddly wrapped, because both close and release() take time.
+            logger.info('wait for closing stream')
             self.streaming_process.wait()

+        logger.info('stopped')
+
 # colorset = itertools.product([0,255], repeat=3) # but remove white
-colorset = [(0, 0, 0),
-            (0, 0, 255),
-            (0, 255, 0),
-            (0, 255, 255),
-            (255, 0, 0),
-            (255, 0, 255),
-            (255, 255, 0)
+# colorset = [(0, 0, 0),
+#             (0, 0, 255),
+#             (0, 255, 0),
+#             (0, 255, 255),
+#             (255, 0, 0),
+#             (255, 0, 255),
+#             (255, 255, 0)
+# ]
+# colorset = [
+#     (255,255,100),
+#     (255,100,255),
+#     (100,255,255),
+# ]
+colorset = [
+    (0,0,0),
 ]

 # Deprecated

@@ -772,6 +814,6 @@ def decorate_frame(frame: Frame, prediction_frame: Frame, first_time: float, con
     return img

-def run_renderer(config: Namespace, is_running: BaseEvent):
-    renderer = Renderer(config, is_running)
+def run_preview_renderer(config: Namespace, is_running: BaseEvent):
+    renderer = PreviewRenderer(config, is_running)
     renderer.run()
```
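> Note: the `zmq.CONFLATE` comments in this diff are load-bearing: the option only takes effect when set before `connect()`. A minimal sketch of the subscriber pattern used throughout this branch (the address is a placeholder, not from the repo):

```python
import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.setsockopt(zmq.CONFLATE, 1)     # keep only the most recent message
sock.setsockopt(zmq.SUBSCRIBE, b'')  # subscribe to all topics
sock.connect("ipc:///tmp/frames")    # hypothetical address; CONFLATE set after this line would be silently ignored
frame = sock.recv_pyobj()            # always the latest frame; older ones are dropped
```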
trap/process_data.py (new file, 294 lines)

```python
from pathlib import Path
import sys
import os
import numpy as np
import pandas as pd
import dill
import tqdm
import argparse
from typing import List

from trap.tracker import Smoother

#sys.path.append("../../")
from trajectron.environment import Environment, Scene, Node
from trajectron.utils import maybe_makedirs
from trajectron.environment import derivative_of

FPS = 12
desired_max_time = 100
pred_indices = [2, 3]
state_dim = 6
frame_diff = 10
desired_frame_diff = 1
dt = 1/FPS # dt per frame (e.g. 1/FPS)
smooth_window = FPS * 1.5 # see also tracker.py
min_track_length = 10

standardization = {
    'PEDESTRIAN': {
        'position': {
            'x': {'mean': 0, 'std': 1},
            'y': {'mean': 0, 'std': 1}
        },
        'velocity': {
            'x': {'mean': 0, 'std': 2},
            'y': {'mean': 0, 'std': 2}
        },
        'acceleration': {
            'x': {'mean': 0, 'std': 1},
            'y': {'mean': 0, 'std': 1}
        }
    }
}


def augment_scene(scene, angle):
    def rotate_pc(pc, alpha):
        M = np.array([[np.cos(alpha), -np.sin(alpha)],
                      [np.sin(alpha), np.cos(alpha)]])
        return M @ pc

    data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])

    scene_aug = Scene(timesteps=scene.timesteps, dt=scene.dt, name=scene.name)

    alpha = angle * np.pi / 180

    for node in scene.nodes:
        x = node.data.position.x.copy()
        y = node.data.position.y.copy()

        x, y = rotate_pc(np.array([x, y]), alpha)

        vx = derivative_of(x, scene.dt)
        vy = derivative_of(y, scene.dt)
        ax = derivative_of(vx, scene.dt)
        ay = derivative_of(vy, scene.dt)

        data_dict = {('position', 'x'): x,
                     ('position', 'y'): y,
                     ('velocity', 'x'): vx,
                     ('velocity', 'y'): vy,
                     ('acceleration', 'x'): ax,
                     ('acceleration', 'y'): ay}

        node_data = pd.DataFrame(data_dict, columns=data_columns)

        node = Node(node_type=node.type, node_id=node.id, data=node_data, first_timestep=node.first_timestep)

        scene_aug.nodes.append(node)
    return scene_aug


def augment(scene):
    scene_aug = np.random.choice(scene.augmented)
    scene_aug.temporal_scene_graph = scene.temporal_scene_graph
    return scene_aug


# maybe_makedirs('trajectron-data')
# for desired_source in [ 'hof2', ]:# ,'hof-maskrcnn', 'hof-yolov8', 'VIRAT-0102-parsed', 'virat-resnet-keypoints-full']:

def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, cm_to_m: bool, center_data: bool, bin_positions: bool):
    print(f"Process data in {src_dir}, to {dst_dir}, identified by {name}")

    nl = 0
    l = 0
    data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
    skipped_for_error = 0
    created = 0

    smoother = Smoother(window_len=smooth_window, convolution=False) if smooth_tracks else None

    files = list(src_dir.glob("*/*.txt"))
    print(files)
    all_data = pd.concat((pd.read_csv(f, sep='\t', index_col=False, header=None) for f in files), axis=0, ignore_index=True)
    print(all_data.shape)
    if all_data.shape[1] == 8:
        all_data.columns = ['frame_id', 'track_id', 'l', 't', 'w', 'h', 'pos_x', 'pos_y']
    elif all_data.shape[1] == 9:
        all_data.columns = ['frame_id', 'track_id', 'l', 't', 'w', 'h', 'pos_x', 'pos_y', 'state']
    else:
        raise Exception("Unknown data format. Check column count")

    if cm_to_m:
        all_data['pos_x'] /= 100
        all_data['pos_y'] /= 100

    mean_x, mean_y = all_data['pos_x'].mean(), all_data['pos_y'].mean()
    cx = .5 * all_data['pos_x'].min() + .5 * all_data['pos_x'].max()
    cy = .5 * all_data['pos_y'].min() + .5 * all_data['pos_y'].max()
    # bins of .5 meter
    # print(np.ceil(all_data['pos_x'].max())*2))
    if bin_positions:
        space_x = np.linspace(0, np.ceil(all_data['pos_x'].max()), int(np.ceil(all_data['pos_x'].max())*2)+1)
        space_y = np.linspace(0, np.ceil(all_data['pos_y'].max()), int(np.ceil(all_data['pos_y'].max())*2)+1)

    print(f"Dataset means: {mean_x=} {mean_y=}, (min: ({all_data['pos_x'].min()}, {all_data['pos_y'].min()}), max: ({all_data['pos_x'].max()}, {all_data['pos_y'].max()}))")
    print(f"Dataset centers: {cx=} {cy=}")

    for data_class in ['train', 'val', 'test']:
        env = Environment(node_type_list=['PEDESTRIAN'], standardization=standardization)
        attention_radius = dict()
        attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.PEDESTRIAN)] = 2.0
        env.attention_radius = attention_radius

        scenes = []
        split_id = f"{name}_{data_class}"
        data_dict_path = dst_dir / (split_id + '.pkl')
        subpath = src_dir / data_class

        print(data_dict_path)

        for file in subpath.glob("*.txt"):
            print(file)
            input_data_dict = dict()

            data = pd.read_csv(file, sep='\t', index_col=False, header=None)

            if data.shape[1] == 8:
                data.columns = ['frame_id', 'track_id', 'l', 't', 'w', 'h', 'pos_x', 'pos_y']
            elif data.shape[1] == 9:
                data.columns = ['frame_id', 'track_id', 'l', 't', 'w', 'h', 'pos_x', 'pos_y', 'state']
            else:
                raise Exception("Unknown data format. Check column count")
            # data['frame_id'] = pd.to_numeric(data['frame_id'], downcast='integer')
            data['track_id'] = pd.to_numeric(data['track_id'], downcast='integer')

            data['frame_id'] = (data['frame_id'] // frame_diff).astype(int)

            data['frame_id'] -= data['frame_id'].min()

            data['node_type'] = 'PEDESTRIAN'
            data['node_id'] = data['track_id'].astype(str)
            data.sort_values('frame_id', inplace=True)

            # cm to m
            if cm_to_m:
                data['pos_x'] /= 100
                data['pos_y'] /= 100

            if center_data:
                data['pos_x'] -= cx
                data['pos_y'] -= cy

            if bin_positions:
                data['pos_x'] = np.digitize(data['pos_x'], bins=space_x)
                data['pos_y'] = np.digitize(data['pos_y'], bins=space_y)
                print(data['pos_x'])

            # Mean Position
            print("Means: x:", data['pos_x'].mean(), "y:", data['pos_y'].mean())

            # TODO)) If this normalization is here, it should also be in prediction_server.py
            # data['pos_x'] = data['pos_x'] - data['pos_x'].mean()
            # data['pos_y'] = data['pos_y'] - data['pos_y'].mean()
            # data['pos_x'] = data['pos_x'] - cx
            # data['pos_y'] = data['pos_y'] - cy

            max_timesteps = data['frame_id'].max()

            scene = Scene(timesteps=max_timesteps+1, dt=dt, name=split_id, aug_func=augment if data_class == 'train' else None)

            for node_id in tqdm.tqdm(pd.unique(data['node_id'])):
                node_df = data[data['node_id'] == node_id]
                if not np.all(np.diff(node_df['frame_id']) == 1):
                    # print(f"Interval in {node_id} not always 1")
                    # print(node_df['frame_id'])
                    # print(np.diff(node_df['frame_id']) != 1)
                    # mask=np.append(False, np.diff(node_df['frame_id']) != 1)
                    # print(node_df[mask]['frame_id'])
                    skipped_for_error += 1
                    continue

                # without repeats, there will most likely only be straight movements
                # better to filter by time
                # only_diff = node_df[['pos_x', 'pos_y']].diff().fillna(1).any(axis=1)
                # # print(node_df[['pos_x', 'pos_y']], )
                # # exit()

                # # mask positions
                # node_values = node_df[only_diff][['pos_x', 'pos_y']].values
                # print(node_values)

                if bin_positions:
                    node_values = node_df.iloc[::5, :][['pos_x', 'pos_y']].values
                else:
                    node_values = node_df[['pos_x', 'pos_y']].values
                # print(node_values)

                if node_values.shape[0] < min_track_length:
                    continue

                new_first_idx = node_df['frame_id'].iloc[0]

                x = node_values[:, 0]
                y = node_values[:, 1]
                if smoother:
                    x = smoother.smooth(x)
                    y = smoother.smooth(y)

                vx = derivative_of(x, scene.dt)
                vy = derivative_of(y, scene.dt)
                ax = derivative_of(vx, scene.dt)
                ay = derivative_of(vy, scene.dt)

                data_dict = {('position', 'x'): x,
                             ('position', 'y'): y,
                             ('velocity', 'x'): vx,
                             ('velocity', 'y'): vy,
                             ('acceleration', 'x'): ax,
                             ('acceleration', 'y'): ay}

                node_data = pd.DataFrame(data_dict, columns=data_columns)
                node = Node(node_type=env.NodeType.PEDESTRIAN, node_id=node_id, data=node_data)
                node.first_timestep = new_first_idx

                scene.nodes.append(node)
                created += 1
            # if data_class == 'train':
            #     scene.augmented = list()
            #     angles = np.arange(0, 360, 15) if data_class == 'train' else [0]
            #     for angle in angles:
            #         scene.augmented.append(augment_scene(scene, angle))

            # print(scene)
            scenes.append(scene)
        print(f'Processed {len(scenes)} scenes for data class {data_class}')

        env.scenes = scenes

        print(env.scenes)

        if len(scenes) > 0:
            with open(data_dict_path, 'wb') as f:
                dill.dump(env, f, protocol=dill.HIGHEST_PROTOCOL)

    print(f"Linear: {l}")
    print(f"Non-Linear: {nl}")
    print(f"error: {skipped_for_error}, used: {created}")

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--src-dir", "-s", type=Path, required=True, help="Directory with tracker output in .txt files")
    parser.add_argument("--dst-dir", "-d", type=Path, required=True, help="Destination directory to store parsed .pkl files (typically 'trajectron-data')")
    parser.add_argument("--name", "-n", type=str, required=True, help="Identifier to prefix the output .pkl files with (result is NAME-train.pkl, NAME-test.pkl)")
    parser.add_argument("--smooth-tracks", action='store_true', help=f"Enable smoother. Set to {smooth_window} frames")
    parser.add_argument("--cm-to-m", action='store_true', help="If homography is in cm, convert tracked points to meter for better results")
    parser.add_argument("--center-data", action='store_true', help="Normalise around center")
    parser.add_argument("--bin-positions", action='store_true', help="Experiment to put round positions to a grid")

    args = parser.parse_args()
    process_data(**args.__dict__)
```
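> Note: `process_data` relies on Trajectron++'s `derivative_of` to turn positions into velocities and accelerations at the scene's `dt`. A toy check of that chain, assuming `derivative_of(x, dt)` behaves like a finite-difference derivative (comparable to `np.gradient(x, dt)`); the positions are made up:

```python
import numpy as np

dt = 1 / 12                               # FPS = 12, as defined above
x = np.array([0.0, 0.1, 0.3, 0.6, 1.0])  # hypothetical positions in meters, one per frame
vx = np.gradient(x, dt)                   # [1.2, 1.8, 3.0, 4.2, 4.8] m/s
ax = np.gradient(vx, dt)                  # acceleration derived from the velocity
```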
trap/tools.py (new file, 332 lines)

```python
from argparse import Namespace
from pathlib import Path
import pickle
from tempfile import mktemp

import numpy as np
import pandas as pd
import trap.tracker
from trap.config import parser
from trap.frame_emitter import Detection, DetectionState, video_src_from_config, Frame
from trap.tracker import DETECTOR_YOLOv8, Smoother, _yolov8_track, Track, TrainingDataWriter, Tracker
from collections import defaultdict

import logging
import cv2
from typing import List, Iterable, Optional

from ultralytics import YOLO
from ultralytics.engine.results import Results as YOLOResult
import tqdm


logger = logging.getLogger('tools')


class FrameGenerator():
    def __init__(self, config):
        self.video_srcs = video_src_from_config(config)
        self.config = config
        if not hasattr(config, "H"):
            raise RuntimeError("Set homography file with --homography param")

        # store current position
        self.video_path = None
        self.video_nr = None
        self.frame_count = None
        self.frame_idx = None

    def __iter__(self):
        n = 0
        for video_nr, video_path in enumerate(self.video_srcs):
            self.video_path = video_path
            self.video_nr = video_nr
            logger.info(f"Play from '{str(video_path)}'")
            video = cv2.VideoCapture(str(video_path))
            fps = video.get(cv2.CAP_PROP_FPS)
            self.frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
            self.frame_idx = 0
            if self.config.video_offset:
                logger.info(f"Start at frame {self.config.video_offset}")
                video.set(cv2.CAP_PROP_POS_FRAMES, self.config.video_offset)
                self.frame_idx = self.config.video_offset

            while True:
                ret, img = video.read()
                self.frame_idx += 1
                n += 1

                # seek to 0 if video has finished. Infinite loop
                if not ret:
                    # now loading multiple files
                    break

                frame = Frame(index=n, img=img, H=self.config.H, camera=self.config.camera)
                yield frame


def tracker_preprocess():
    config = parser.parse_args()

    tracker = Tracker(config)
    # model = YOLO('EXPERIMENTS/yolov8x.pt')

    with TrainingDataWriter(config.save_for_training) as writer:
        bar = tqdm.tqdm()
        tracks = defaultdict(lambda: Track())

        total = 0
        frames = FrameGenerator(config)
        for frame in frames:
            bar.update()

            detections = tracker.track_frame(frame)
            total += len(detections)
            # detections = _yolov8_track(frame, model, imgsz=1440, classes=[0])

            bar.set_description(f"[{frames.video_nr}/{len(frames.video_srcs)}] [{frames.frame_idx}/{frames.frame_count}] {str(frames.video_path)} -- Detections {len(detections)}: {[d.track_id for d in detections]} (so far {total})")

            for detection in detections:
                track = tracks[detection.track_id]
                track.track_id = detection.track_id # for new tracks
                track.history.append(detection) # add to history

            active_track_ids = [d.track_id for d in detections]
            active_tracks = {t.track_id: t for t in tracks.values() if t.track_id in active_track_ids}

            writer.add(frame, active_tracks.values())

    logger.info("Done!")


bgr_colors = [
    (255, 0, 0),
    (0, 255, 0),
    (0, 0, 255),
    (0, 255, 255),
]

def detection_color(detection: Detection, i):
    return bgr_colors[i % len(bgr_colors)] if detection.state != DetectionState.Lost else (100, 100, 100)

def to_point(coord):
    return (int(coord[0]), int(coord[1]))

def tracker_compare():
    config = parser.parse_args()

    trackers: List[Tracker] = []
    # TODO, support all tracker.DETECTORS
    for tracker_id in [
        trap.tracker.DETECTOR_YOLOv8,
        # trap.tracker.DETECTOR_MASKRCNN,
        # trap.tracker.DETECTOR_RETINANET,
        trap.tracker.DETECTOR_FASTERRCNN,
    ]:
        tracker_config = Namespace(**vars(config))
        tracker_config.detector = tracker_id
        trackers.append(Tracker(tracker_config))

    frames = FrameGenerator(config)
    bar = tqdm.tqdm(frames)
    cv2.namedWindow("frame", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("frame", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    for frame in bar:
        # frame.img = cv2.undistort(frame.img, config.camera.mtx, config.camera.dist, None, config.camera.newcameramtx) # try to undistort for better detections, seems not to matter at all
        trackers_detections = [(t, t.track_frame(frame)) for t in trackers]

        for i, tracker in enumerate(trackers):
            cv2.putText(frame.img, tracker.config.detector, (10, 30*(i+1)), cv2.FONT_HERSHEY_DUPLEX, 1, color=bgr_colors[i % len(bgr_colors)])

        for i, (tracker, detections) in enumerate(trackers_detections):
            for track_id in tracker.tracks:
                history = tracker.tracks[track_id].history
                cv2.putText(frame.img, f"{track_id}", to_point(history[0].get_foot_coords()), cv2.FONT_HERSHEY_DUPLEX, 1, color=bgr_colors[i % len(bgr_colors)])
                for j in range(len(history)-1):
                    a = history[j]
                    b = history[j+1]
                    color = detection_color(b, i)
                    cv2.line(frame.img, to_point(a.get_foot_coords()), to_point(b.get_foot_coords()), color, 1)
            for detection in detections:
                color = detection_color(detection, i)
                l, t, r, b = detection.to_ltrb()
                cv2.rectangle(frame.img, (l, t), (r, b), color)
                cv2.putText(frame.img, f"{detection.track_id}", (l, b+10), cv2.FONT_HERSHEY_DUPLEX, 1, color=color)
                conf = f"{detection.conf:.3f}" if detection.conf is not None else "None"
                cv2.putText(frame.img, f"{detection.det_class} - {conf}", (l, t), cv2.FONT_HERSHEY_DUPLEX, .7, color=color)
        cv2.imshow('frame', cv2.resize(frame.img, (1920, 1080)))
        cv2.waitKey(1)

        bar.set_description(f"[{frames.video_nr}/{len(frames.video_srcs)}] [{frames.frame_idx}/{frames.frame_count}] {str(frames.video_path)}")


def interpolate_missing_frames(data: pd.DataFrame):
    missing = 0
    old_size = len(data)
    # slow way to append missing steps to the dataset
    for ind, row in tqdm.tqdm(data.iterrows()):
        if row['diff'] > 1:
            for s in range(1, int(row['diff'])):
                # add as many entries as missing
                missing += 1
                data.loc[len(data)] = [row['frame_id']-s, row['track_id'], np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 1, 1]
                # new_frame = [data.loc[ind-1]['frame_id']+s, row['track_id'], np.nan, np.nan, np.nan, np.nan, np.nan]
                # data.loc[len(data)] = new_frame

    logger.info(f'was:{old_size} added:{missing}, new length: {len(data)}')

    # now sort, so that the added data is in the right place
    data.sort_values(by=['track_id', 'frame_id'], inplace=True)

    df = data.copy()
    df = df.groupby('track_id').apply(lambda group: group.interpolate(method='linear'))
    df.reset_index(drop=True, inplace=True)

    # update diff, should now be 1 | NaN
    data['diff'] = data.groupby(['track_id'])['frame_id'].diff()

    # data = df
    return df

def smooth(data: pd.DataFrame):
    df = data.copy()
    if 'x_raw' not in df:
        df['x_raw'] = df['x']
    if 'y_raw' not in df:
        df['y_raw'] = df['y']

    print("Running smoother")
    # print(df)
    # from tsmoothie.smoother import KalmanSmoother, ConvolutionSmoother
    smoother = Smoother(convolution=False)
    def smoothing(data):
        # smoother = ConvolutionSmoother(window_len=SMOOTHING_WINDOW, window_type='ones', copy=None)
        return smoother.smooth(data).tolist()
        # df=df.assign(smooth_data=smoother.smooth_data[0])
        # return smoother.smooth_data[0].tolist()

    # operate smoothing per axis
    print("smooth x")
    df['x'] = df.groupby('track_id')['x_raw'].transform(smoothing)
    print("smooth y")
    df['y'] = df.groupby('track_id')['y_raw'].transform(smoothing)

    return df

def load_tracks_from_csv(file: Path, fps: float, grid_size: Optional[int] = None, sample: Optional[int] = None):
    cache_file = Path('/tmp/load_tracks-smooth-' + file.name)
    if cache_file.exists():
        data = pd.read_pickle(cache_file)
    else:
        # grid_size is in points per meter
        # sample: sample to every n-th point. Thus sample=5 converts 12fps to 2.4fps, and 4 to 3fps
        data = pd.read_csv(file, delimiter="\t", index_col=False, header=None)
        # l,t,w,h: image space (pixels)
        # x,y: world space (meters or cm depending on homography)
        data.columns = ['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state']
        data['frame_id'] = pd.to_numeric(data['frame_id'], downcast='integer')
        data['frame_id'] = data['frame_id'] // 10 # compatibility with Trajectron++

        data.sort_values(by=['track_id', 'frame_id'], inplace=True)

        data.set_index(['track_id', 'frame_id'])

        # cm to meter
        data['x'] = data['x']/100
        data['y'] = data['y']/100

        if grid_size is not None:
            data['x'] = (data['x']*grid_size).round() / grid_size
            data['y'] = (data['y']*grid_size).round() / grid_size

        data['diff'] = data.groupby(['track_id'])['frame_id'].diff() #.fillna(0)
        data['diff'] = pd.to_numeric(data['diff'], downcast='integer')

        data = interpolate_missing_frames(data)

        data = smooth(data)
        data.to_pickle(cache_file)

    if sample is not None:
        print(f"Sampling 1/{sample}, of {data.shape[0]} items")
        data["idx_in_track"] = data.groupby(['track_id']).cumcount() # create index in group
        groups = data.groupby(['track_id'])
        # print(groups, data)
        # selection = groups['idx_in_track'].apply(lambda x: x % sample == 0)
        # print(selection)
        selection = data["idx_in_track"].apply(lambda x: x % sample == 0)
        # data = data[selection]
        data = data.loc[selection].copy() # avoid errors

        # # convert from e.g. 12Hz, to 2.4Hz (1/5)
        # sampled_groups = []
        # for name, group in data.groupby('track_id'):
        #     sampled_groups.append(group.iloc[::sample])
        # print(f"Sampled {len(sampled_groups)} groups")
        # data = pd.concat(sampled_groups, axis=1).T
        print(f"Done sampling, kept {data.shape[0]} items")

    # String to int
    data['track_id'] = pd.to_numeric(data['track_id'], downcast='integer')

    # redo diff after possible sampling:
    data['diff'] = data.groupby(['track_id'])['frame_id'].diff()
    # timestep to seconds
    data['dt'] = data['diff'] * (1/fps)

    # deriving displacement, velocity and acceleration from x and y
    data['dx'] = data.groupby(['track_id'])['x'].diff()
    data['dy'] = data.groupby(['track_id'])['y'].diff()
    data['vx'] = data['dx'].div(data['dt'], axis=0)
    data['vy'] = data['dy'].div(data['dt'], axis=0)

    data['ax'] = data.groupby(['track_id'])['vx'].diff().div(data['dt'], axis=0)
    data['ay'] = data.groupby(['track_id'])['vy'].diff().div(data['dt'], axis=0)

    # then we need the velocity itself
    data['v'] = np.sqrt(data['vx'].pow(2) + data['vy'].pow(2))
    # and derive acceleration
    data['a'] = data.groupby(['track_id'])['v'].diff().div(data['dt'], axis=0)

    # we can calculate heading based on the velocity components
    data['heading'] = (np.arctan2(data['vy'], data['vx']) * 180 / np.pi) % 360

    # and derive it to get the rate of change of the heading
    data['d_heading'] = data.groupby(['track_id'])['heading'].diff().div(data['dt'], axis=0)

    # we can backfill the derived parameters (v and a), assuming they were constant when entering the frame
    # so that our model can make estimations, based on these assumed values
    group = data.groupby(['track_id'])
    for field in ['dx', 'dy', 'vx', 'vy', 'ax', 'ay', 'v', 'a', 'heading', 'd_heading']:
        data[field] = group[field].bfill()

    data.set_index(['track_id', 'frame_id'], inplace=True) # use for quick access
    return data


def filter_short_tracks(data: pd.DataFrame, n):
    return data.groupby(['track_id']).filter(lambda group: len(group) >= n) # a length of 3 is necessary to have all relevant derivatives of position

# print(filtered_data.shape[0], "items in filtered set, out of", data.shape[0], "in total set")

def normalise_position(data: pd.DataFrame):
    mu = data[['x', 'y']].mean(axis=0)
    std = data[['x', 'y']].std(axis=0)

    data[['x_norm', 'y_norm']] = (data[['x', 'y']] - mu) / std
    return data, mu, std
```
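> Note: `load_tracks_from_csv` derives all kinematics with per-track grouped diffs. The same pattern on a toy three-frame track, to make the units explicit (values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'track_id': [1, 1, 1],
                   'frame_id': [0, 1, 2],
                   'x': [0.0, 1.0, 2.0],
                   'y': [0.0, 0.0, 1.0]})
fps = 12
df['dt'] = df.groupby('track_id')['frame_id'].diff() * (1 / fps)
df['vx'] = df.groupby('track_id')['x'].diff().div(df['dt'], axis=0)
df['vy'] = df.groupby('track_id')['y'].diff().div(df['dt'], axis=0)
df['heading'] = (np.arctan2(df['vy'], df['vx']) * 180 / np.pi) % 360
# frame 1: vx=12, vy=0  -> heading 0.0 (moving along +x)
# frame 2: vx=12, vy=12 -> heading 45.0
```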
trap/tracker.py (566 changes)

```diff
@@ -8,13 +8,14 @@ from multiprocessing import Event
 from pathlib import Path
 import pickle
 import time
-from typing import Optional
+from typing import Optional, List
 import numpy as np
 import torch
+import torchvision
 import zmq
 import cv2

-from torchvision.models.detection import retinanet_resnet50_fpn_v2, RetinaNet_ResNet50_FPN_V2_Weights, keypointrcnn_resnet50_fpn, KeypointRCNN_ResNet50_FPN_Weights, maskrcnn_resnet50_fpn_v2, MaskRCNN_ResNet50_FPN_V2_Weights
+from torchvision.models.detection import retinanet_resnet50_fpn_v2, RetinaNet_ResNet50_FPN_V2_Weights, keypointrcnn_resnet50_fpn, KeypointRCNN_ResNet50_FPN_Weights, maskrcnn_resnet50_fpn_v2, MaskRCNN_ResNet50_FPN_V2_Weights, FasterRCNN_ResNet50_FPN_V2_Weights, fasterrcnn_resnet50_fpn_v2
 from deep_sort_realtime.deepsort_tracker import DeepSort
 from torchvision.models import ResNet50_Weights
 from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack

@@ -23,10 +24,11 @@ from ultralytics import YOLO
 from ultralytics.engine.results import Results as YOLOResult

 from trap.frame_emitter import DetectionState, Frame, Detection, Track
+from bytetracker import BYTETracker

 from tsmoothie.smoother import KalmanSmoother, ConvolutionSmoother
 import tsmoothie.smoother
+from datetime import datetime

 # Detection = [int, int, int, int, float, int]
 # Detections = [Detection]
```
```diff
@@ -44,26 +46,206 @@ DETECTOR_MASKRCNN = 'maskrcnn'
 DETECTOR_FASTERRCNN = 'fasterrcnn'
 DETECTOR_YOLOv8 = 'ultralytics'

+TRACKER_DEEPSORT = 'deepsort'
+TRACKER_BYTETRACK = 'bytetrack'
+
 DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8]
+TRACKERS = [TRACKER_DEEPSORT, TRACKER_BYTETRACK]
+
+TRACKER_CONFIDENCE_MINIMUM = .2
+TRACKER_BYTETRACK_MINIMUM = .1 # bytetrack can track items with lower threshold
+NON_MAXIMUM_SUPRESSION = 1
+RCNN_SCALE = .4 # seems to have no impact on detections in the corners
+
+def _yolov8_track(frame: Frame, model: YOLO, **kwargs) -> List[Detection]:
+    results: List[YOLOResult] = list(model.track(frame.img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, **kwargs))
+    if results[0].boxes is None or results[0].boxes.id is None:
+        # work around https://github.com/ultralytics/ultralytics/issues/5968
+        return []
+
+    boxes = results[0].boxes.xywh.cpu()
+    track_ids = results[0].boxes.id.int().cpu().tolist()
+    classes = results[0].boxes.cls.int().cpu().tolist()
+    return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame.index, class_id) for bbox, track_id, class_id in zip(boxes, track_ids, classes)]
+
+class Multifile():
+    def __init__(self, srcs: List[Path]):
+        self.srcs = srcs
+        self.g = self.__iter__()
+        self.current_file = None
+
+    @property
+    def name(self):
+        return ", ".join([s.name for s in self.srcs])
+
+    def __iter__(self):
+        for path in self.srcs:
+            self.current_file = path.name
+            with path.open('r') as fp:
+                for l in fp:
+                    yield l
+
+    def readline(self):
+        return self.g.__next__()
+
+class TrainingDataWriter:
+    def __init__(self, training_path: Optional[Path]):
+        if training_path is None:
+            self.path = None
+            return
+
+        if not isinstance(training_path, Path):
+            raise ValueError("save-for-training should be a path")
+        if not training_path.exists():
+            logger.info(f"Making path for training data: {training_path}")
+            training_path.mkdir(parents=True, exist_ok=False)
+        else:
+            logger.warning(f"Path for training-data exists: {training_path}. Continuing assuming that's ok.")
+
+        # following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
+        self.path = training_path
+
+    def __enter__(self):
+        if self.path:
+            d = datetime.now().isoformat(timespec="minutes")
+            self.training_fp = open(self.path / f'all-{d}.txt', 'w')
+            logger.debug(f"Writing tracker data to {self.training_fp.name}")
+            # following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
+            self.csv = csv.DictWriter(self.training_fp, fieldnames=['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state'], delimiter='\t', quoting=csv.QUOTE_NONE)
+            self.count = 0
+        return self
+
+    def add(self, frame: Frame, tracks: List[Track]):
+        if not self.path:
+            # skip if disabled
+            return
+
+        self.csv.writerows([{
+            'frame_id': round(frame.index * 10., 1), # not really time
+            'track_id': t.track_id,
+            'l': float(t.history[-1].l), # to float, so we're sure it's not a torch.tensor()
+            't': float(t.history[-1].t),
+            'w': float(t.history[-1].w),
+            'h': float(t.history[-1].h),
+            'x': t.get_projected_history(frame.H, frame.camera)[-1][0],
+            'y': t.get_projected_history(frame.H, frame.camera)[-1][1],
+            'state': t.history[-1].state.value
+            # only keep _actual_ detections, no lost entries
+        } for t in tracks
+            # if t.history[-1].state != DetectionState.Lost
+        ])
+        self.count += len(tracks)
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        # ... ignore exception (type, value, traceback)
+        if not self.path:
+            return
+
+        self.training_fp.close()
+
+        source_files = list(self.path.glob("*.txt")) # we loop twice, so need a list instead of generator
+        total = 0
+        sources = Multifile(source_files)
+        for line in sources:
+            if len(line) > 3: # make sure not to count empty lines
+                total += 1
+
+        lines = {
+            'train': int(total * .8),
+            'val': int(total * .12),
+            'test': int(total * .08),
+        }
+
+        logger.info(f"Splitting gathered data from {sources.name}")
+        # for source_file in source_files:
+
+        tracks_file = self.path / 'tracks.json'
+        tracks = defaultdict(lambda: [])
+
+        for name, line_nrs in lines.items():
+            dir_path = self.path / name
+            dir_path.mkdir(exist_ok=True)
+            file = dir_path / 'tracked.txt'
+            logger.debug(f"- Write {line_nrs} lines to {file}")
+            with file.open('w') as target_fp:
+                max_track_id = 0
+                offset = 0
+                prev_file = None
+                for i in range(line_nrs):
+                    line = sources.readline()
+                    current_file = sources.current_file
+                    if prev_file != current_file:
+                        offset = max_track_id

+                        logger.debug(f'{name} - update offset {offset} ({sources.current_file})')
+                        prev_file = current_file
+
+                    parts = line.split('\t')
+                    track_id = int(parts[1]) + offset
+
+                    if track_id > max_track_id:
+                        max_track_id = track_id
+
+                    parts[1] = str(track_id)
+                    target_fp.write("\t".join(parts))
+                    tracks[track_id].append(parts)
+
+        with tracks_file.open('w') as fp:
+            json.dump(tracks, fp)
+
+
+class TrackerWrapper():
+    def __init__(self, tracker):
+        self.tracker = tracker
+
+    def track_detections(self, *args, **kwargs):
+        raise RuntimeError("Not implemented")
+
+    @classmethod
+    def init_type(cls, tracker_type: str):
+        if tracker_type == TRACKER_BYTETRACK:
+            return ByteTrackWrapper(BYTETracker(track_thresh=TRACKER_BYTETRACK_MINIMUM, match_thresh=TRACKER_CONFIDENCE_MINIMUM, frame_rate=12)) # TODO)) Framerate from emitter
+        else:
+            return DeepSortWrapper(DeepSort(n_init=5, max_age=30, nms_max_overlap=NON_MAXIMUM_SUPRESSION,
+                embedder='torchreid', embedder_wts="../MODELS/osnet_x1_0_imagenet.pth"
+            ))
+
+class DeepSortWrapper(TrackerWrapper):
+    def track_detections(self, detections, img: cv2.Mat, frame_idx: int):
+        detections = Tracker.detect_persons_deepsort_wrapper(detections)
+        tracks: List[DeepsortTrack] = self.tracker.update_tracks(detections, frame=img)
+        active_tracks = [t for t in tracks if t.is_confirmed()]
+        return [Detection.from_deepsort(t, frame_idx) for t in active_tracks]
+        # raw_detections, embeds=None, frame=None, today=None, others=None, instance_masks=None
+
+
+class ByteTrackWrapper(TrackerWrapper):
+    def __init__(self, tracker: BYTETracker):
+        self.tracker = tracker
+
+    def track_detections(self, detections: np.ndarray, img: cv2.Mat, frame_idx: int):
+        # detections
+        if detections.shape[0] == 0:
+            detections = np.ndarray((0,0)) # needs to be 2-D
+
+        _ = self.tracker.update(detections)
+        active_tracks = [track for track in self.tracker.tracked_stracks if track.is_activated]
+        active_tracks = [track for track in active_tracks if track.start_frame < (self.tracker.frame_id - 5)]
+        return [Detection.from_bytetrack(track, frame_idx) for track in active_tracks]
+
+
 class Tracker:
-    def __init__(self, config: Namespace, is_running: Event):
+    def __init__(self, config: Namespace):
         self.config = config
-        self.is_running = is_running
-
-        context = zmq.Context()
-        self.frame_sock = context.socket(zmq.SUB)
-        self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
-        self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
-        self.frame_sock.connect(config.zmq_frame_addr)
-
-        self.trajectory_socket = context.socket(zmq.PUB)
-        self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
-        self.trajectory_socket.bind(config.zmq_trajectory_addr)

         # # TODO: config device
         self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
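> Note: `TrainingDataWriter.__exit__` splits the gathered lines 80/12/8 over train/val/test and re-numbers track ids at every file boundary, since ids restart per recording. A toy illustration of that offset logic (the id lists are made up):

```python
recordings = [[1, 2, 3], [1, 2]]  # track ids per source file; both restart at 1

max_track_id = 0
merged = []
for recording in recordings:      # mirrors the prev_file != current_file check above
    offset = max_track_id
    for track_id in recording:
        track_id += offset
        max_track_id = max(max_track_id, track_id)
        merged.append(track_id)
print(merged)  # [1, 2, 3, 4, 5]: no collisions after merging
```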
```diff
@@ -73,30 +255,39 @@ class Tracker:
         logger.debug(f"Load tracker: {self.config.detector}")

+        conf = TRACKER_BYTETRACK_MINIMUM if self.config.tracker == TRACKER_BYTETRACK else TRACKER_CONFIDENCE_MINIMUM
         if self.config.detector == DETECTOR_RETINANET:
             # weights = RetinaNet_ResNet50_FPN_V2_Weights.DEFAULT
             # self.model = retinanet_resnet50_fpn_v2(weights=weights, score_thresh=0.2)
-            weights = KeypointRCNN_ResNet50_FPN_Weights.DEFAULT
-            self.model = keypointrcnn_resnet50_fpn(weights=weights, box_score_thresh=0.35)
+            weights = KeypointRCNN_ResNet50_FPN_Weights.COCO_V1
+            self.model = keypointrcnn_resnet50_fpn(weights=weights, box_score_thresh=conf)
             self.model.to(self.device)
             # Put the model in inference mode
             self.model.eval()
             # Get the transforms for the model's weights
             self.preprocess = weights.transforms().to(self.device)
-            self.mot_tracker = DeepSort(max_iou_distance=1, max_cosine_distance=0.5, max_age=15, nms_max_overlap=0.9,
-                # embedder='torchreid', embedder_wts="../MODELS/osnet_x1_0_imagenet.pth"
-            )
+            self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
+        elif self.config.detector == DETECTOR_FASTERRCNN:
+            # weights = RetinaNet_ResNet50_FPN_V2_Weights.DEFAULT
+            # self.model = retinanet_resnet50_fpn_v2(weights=weights, score_thresh=0.2)
+            weights = FasterRCNN_ResNet50_FPN_V2_Weights.COCO_V1
+            self.model = fasterrcnn_resnet50_fpn_v2(weights=weights, box_score_thresh=conf)
+            self.model.to(self.device)
+            # Put the model in inference mode
+            self.model.eval()
+            # Get the transforms for the model's weights
+            self.preprocess = weights.transforms().to(self.device)
+            self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
         elif self.config.detector == DETECTOR_MASKRCNN:
             weights = MaskRCNN_ResNet50_FPN_V2_Weights.COCO_V1
-            self.model = maskrcnn_resnet50_fpn_v2(weights=weights, box_score_thresh=0.7)
+            self.model = maskrcnn_resnet50_fpn_v2(weights=weights, box_score_thresh=conf) # if we use ByteTrack we can work with low probability!
             self.model.to(self.device)
             # Put the model in inference mode
             self.model.eval()
             # Get the transforms for the model's weights
             self.preprocess = weights.transforms().to(self.device)
-            self.mot_tracker = DeepSort(n_init=5, max_iou_distance=1, max_cosine_distance=0.5, max_age=15, nms_max_overlap=0.9,
-                # embedder='torchreid', embedder_wts="../MODELS/osnet_x1_0_imagenet.pth"
-            )
+            # self.mot_tracker = DeepSort(n_init=5, max_iou_distance=1, max_cosine_distance=0.2, max_age=15, nms_max_overlap=0.9,
+            self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
         elif self.config.detector == DETECTOR_YOLOv8:
             self.model = YOLO('EXPERIMENTS/yolov8x.pt')
         else:
```
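> Note: this hunk replaces the hard-coded DeepSort instances with the `TrackerWrapper.init_type()` factory, so any detector can be paired with either tracking backend. A hypothetical usage sketch with the names defined above:

```python
mot_tracker = TrackerWrapper.init_type(TRACKER_BYTETRACK)  # or TRACKER_DEEPSORT
# detections: output of _resnet_detect_persons() for the current frame
tracked = mot_tracker.track_detections(detections, img=frame.img, frame_idx=frame.index)
```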
@ -105,180 +296,190 @@ class Tracker:
|
||||||
|
|
||||||
# homography = list(source.glob('*img2world.txt'))[0]
|
# homography = list(source.glob('*img2world.txt'))[0]
|
||||||
|
|
||||||
self.H = np.loadtxt(self.config.homography, delimiter=',')
|
self.H = self.config.H
|
||||||
|
|
||||||
if self.config.smooth_tracks:
|
if self.config.smooth_tracks:
|
||||||
logger.info("Smoother enabled")
|
logger.info("Smoother enabled")
|
||||||
self.smoother = Smoother()
|
fps = 12 # TODO)) make configurable, or get from cam
|
||||||
|
self.smoother = Smoother(window_len=fps*5, convolution=False)
|
||||||
else:
|
else:
|
||||||
logger.info("Smoother Disabled (enable with --smooth-tracks)")
|
logger.info("Smoother Disabled (enable with --smooth-tracks)")
|
||||||
|
|
||||||
|
|
||||||
logger.debug("Set up tracker")
|
logger.debug("Set up tracker")
|
||||||
|
|
||||||
|
def track_frame(self, frame: Frame):
|
||||||
|
if self.config.detector == DETECTOR_YOLOv8:
|
||||||
|
detections: List[Detection] = _yolov8_track(frame, self.model, classes=[0], imgsz=[1152, 640])
|
||||||
|
else :
|
||||||
|
detections: List[Detection] = self._resnet_track(frame, scale = RCNN_SCALE)
|
||||||
|
|
||||||
|
for detection in detections:
|
||||||
|
track = self.tracks[detection.track_id]
|
||||||
|
track.track_id = detection.track_id # for new tracks
|
||||||
|
|
||||||
|
track.history.append(detection) # add to history
|
||||||
|
|
||||||
|
return detections
|
||||||
|
|
||||||
|
|
||||||
def track(self):
|
def track(self, is_running: Event):
|
||||||
|
"""
|
||||||
|
Live tracking of frames coming in over zmq
|
||||||
|
"""
|
||||||
|
|
||||||
|
self.is_running = is_running
|
||||||
|
|
||||||
|
|
||||||
|
context = zmq.Context()
|
||||||
|
self.frame_sock = context.socket(zmq.SUB)
|
||||||
|
self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
|
||||||
|
self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
|
||||||
|
self.frame_sock.connect(self.config.zmq_frame_addr)
|
||||||
|
|
||||||
|
self.trajectory_socket = context.socket(zmq.PUB)
|
||||||
|
self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
|
||||||
|
self.trajectory_socket.bind(self.config.zmq_trajectory_addr)
|
||||||
|
|
||||||
prev_run_time = 0
|
prev_run_time = 0
|
||||||
|
|
||||||
training_fp = None
|
# training_fp = None
|
||||||
training_csv = None
|
# training_csv = None
|
||||||
training_frames = 0
|
# training_frames = 0
|
||||||
|
|
||||||
if self.config.save_for_training is not None:
|
# if self.config.save_for_training is not None:
|
||||||
if not isinstance(self.config.save_for_training, Path):
|
# if not isinstance(self.config.save_for_training, Path):
|
||||||
raise ValueError("save-for-training should be a path")
|
# raise ValueError("save-for-training should be a path")
|
||||||
if not self.config.save_for_training.exists():
|
# if not self.config.save_for_training.exists():
|
||||||
logger.info(f"Making path for training data: {self.config.save_for_training}")
|
# logger.info(f"Making path for training data: {self.config.save_for_training}")
|
||||||
self.config.save_for_training.mkdir(parents=True, exist_ok=False)
|
# self.config.save_for_training.mkdir(parents=True, exist_ok=False)
|
||||||
else:
|
# else:
|
||||||
logger.warning(f"Path for training-data exists: {self.config.save_for_training}. Continuing assuming that's ok.")
|
# logger.warning(f"Path for training-data exists: {self.config.save_for_training}. Continuing assuming that's ok.")
|
||||||
training_fp = open(self.config.save_for_training / 'all.txt', 'w')
|
# training_fp = open(self.config.save_for_training / 'all.txt', 'w')
|
||||||
# following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
|
# # following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
|
||||||
training_csv = csv.DictWriter(training_fp, fieldnames=['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state'], delimiter='\t', quoting=csv.QUOTE_NONE)
|
# training_csv = csv.DictWriter(training_fp, fieldnames=['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state'], delimiter='\t', quoting=csv.QUOTE_NONE)
|
||||||
|
|
||||||
prev_frame_i = -1
|
prev_frame_i = -1
|
||||||
|
|
||||||
while self.is_running.is_set():
|
with TrainingDataWriter(self.config.save_for_training) as writer:
|
||||||
# this waiting for target_dt causes frame loss. E.g. with target_dt at .1, it
|
end_time = None
|
||||||
# skips exactly 1 frame on a 10 fps video (which, it obviously should not do)
|
tracker_dt = None
|
||||||
# so for now, timing should move to emitter
|
w_time = None
|
||||||
# this_run_time = time.time()
|
while self.is_running.is_set():
|
||||||
# # logger.debug(f'test {prev_run_time - this_run_time}')
|
# this waiting for target_dt causes frame loss. E.g. with target_dt at .1, it
|
||||||
# time.sleep(max(0, prev_run_time - this_run_time + TARGET_DT))
|
# skips exactly 1 frame on a 10 fps video (which, it obviously should not do)
|
||||||
# prev_run_time = time.time()
|
# so for now, timing should move to emitter
|
||||||
|
# this_run_time = time.time()
|
||||||
|
# # logger.debug(f'test {prev_run_time - this_run_time}')
|
||||||
|
# time.sleep(max(0, prev_run_time - this_run_time + TARGET_DT))
|
||||||
|
# prev_run_time = time.time()
|
||||||
|
|
||||||
zmq_ev = self.frame_sock.poll(timeout=2000)
|
poll_time = time.time()
|
||||||
if not zmq_ev:
|
zmq_ev = self.frame_sock.poll(timeout=2000)
|
||||||
logger.warn('skip poll after 2000ms')
|
if not zmq_ev:
|
||||||
# when there's no data after timeout, loop so that is_running is checked
|
logger.warning('skip poll after 2000ms')
|
||||||
continue
|
# when there's no data after timeout, loop so that is_running is checked
|
||||||
|
continue
|
||||||
|
|
||||||
start_time = time.time()
|
start_time = time.time()
|
||||||
frame: Frame = self.frame_sock.recv_pyobj() # frame delivery in current setup: 0.012-0.03s
|
frame: Frame = self.frame_sock.recv_pyobj() # frame delivery in current setup: 0.012-0.03s
|
||||||
|
|
||||||
if frame.index > (prev_frame_i+1):
|
if frame.index > (prev_frame_i+1):
|
||||||
logger.warn(f"Dropped {frame.index - prev_frame_i - 1} frames ({frame.index=}, {prev_frame_i=})")
|
logger.warning(f"Dropped {frame.index - prev_frame_i - 1} frames ({frame.index=}, {prev_frame_i=}) -- poll time {start_time-poll_time:.5f}")
|
||||||
|
if tracker_dt:
|
||||||
|
logger.warning(f"last loop took {tracker_dt} (finished {start_time - end_time:0.5f} ago, writing took {w_time-end_time} and finshed {start_time - w_time} ago).. {writer.path}")
|
||||||
prev_frame_i = frame.index
|
|
||||||
# load homography into frame (TODO: should this be done in emitter?)
|
|
||||||
if frame.H is None:
|
|
||||||
# logger.warning('Falling back to default H')
|
|
||||||
# fallback: load configured H
|
|
||||||
frame.H = self.H
|
|
||||||
|
|
||||||
# logger.info(f"Frame delivery delay = {time.time()-frame.time}s")
|
|
||||||
|
|
||||||
|
|
||||||
if self.config.detector == DETECTOR_YOLOv8:
|
|
||||||
detections: [Detection] = self._yolov8_track(frame)
|
|
||||||
else :
|
|
||||||
detections: [Detection] = self._resnet_track(frame.img, scale = 1)
|
|
||||||
|
|
||||||
|
|
||||||
# Store detections into tracklets
|
prev_frame_i = frame.index
|
||||||
projected_coordinates = []
|
# load homography into frame (TODO: should this be done in emitter?)
|
||||||
for detection in detections:
|
if frame.H is None:
|
||||||
track = self.tracks[detection.track_id]
|
# logger.warning('Falling back to default H')
|
||||||
track.track_id = detection.track_id # for new tracks
|
# fallback: load configured H
|
||||||
|
frame.H = self.H
|
||||||
|
|
||||||
track.history.append(detection) # add to history
|
# logger.info(f"Frame delivery delay = {time.time()-frame.time}s")
|
||||||
# projected_coordinates.append(track.get_projected_history(self.H)) # then get full history
|
|
||||||
|
|
||||||
# TODO: hadle occlusions, and dissappearance
|
|
||||||
# if len(track.history) > 30: # retain 90 tracks for 90 frames
|
|
||||||
# track.history.pop(0)
|
|
||||||
|
|
||||||
|
|
||||||
# trajectories = {}
|
|
||||||
# for detection in detections:
|
|
||||||
# tid = str(detection.track_id)
|
|
||||||
# track = self.tracks[detection.track_id]
|
|
||||||
# coords = track.get_projected_history(self.H) # get full history
|
|
||||||
# trajectories[tid] = {
|
|
||||||
# "id": tid,
|
|
||||||
# "det_conf": detection.conf,
|
|
||||||
# "bbox": detection.to_ltwh(),
|
|
||||||
# "history": [{"x":c[0], "y":c[1]} for c in coords[0]] if not self.config.bypass_prediction else coords[0].tolist() # already doubles nested, fine for test
|
|
||||||
# }
|
|
||||||
active_track_ids = [d.track_id for d in detections]
|
|
||||||
active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
|
|
||||||
# logger.info(f"{trajectories}")
|
|
||||||
frame.tracks = active_tracks
|
|
||||||
|
|
||||||
# if self.config.bypass_prediction:
|
detections: List[Detection] = self.track_frame(frame)
|
||||||
# self.trajectory_socket.send_string(json.dumps(trajectories))
|
|
||||||
# else:
|
|
||||||
# self.trajectory_socket.send(pickle.dumps(frame))
|
|
||||||
if self.config.smooth_tracks:
|
|
||||||
frame = self.smoother.smooth_frame_tracks(frame)
|
|
||||||
|
|
||||||
self.trajectory_socket.send_pyobj(frame)
|
|
||||||
|
|
||||||
current_time = time.time()
|
|
||||||
logger.debug(f"Trajectories: {len(active_tracks)}. Current frame delay = {current_time-frame.time}s (trajectories: {current_time - start_time}s)")
|
|
||||||
|
|
||||||
# self.trajectory_socket.send_string(json.dumps(trajectories))
|
|
||||||
# provide a {ID: {id: ID, history: [[x,y],[x,y],...]}}
|
|
||||||
# TODO: provide a track object that actually keeps history (unlike tracker)
|
|
||||||
|
|
||||||
#TODO calculate fps (also for other loops to see asynchonity)
|
|
||||||
# fpsfilter=fpsfilter*.9+(1/dt)*.1 #trust value in order to stabilize fps display
|
|
||||||
if training_csv:
|
|
||||||
training_csv.writerows([{
|
|
||||||
'frame_id': round(frame.index * 10., 1), # not really time
|
|
||||||
'track_id': t.track_id,
|
|
||||||
'l': t.history[-1].l,
|
|
||||||
't': t.history[-1].t,
|
|
||||||
'w': t.history[-1].w,
|
|
||||||
'h': t.history[-1].h,
|
|
||||||
'x': t.get_projected_history(frame.H)[-1][0],
|
|
||||||
'y': t.get_projected_history(frame.H)[-1][1],
|
|
||||||
'state': t.history[-1].state.value
|
|
||||||
# only keep _actual_detections, no lost entries
|
|
||||||
} for t in active_tracks.values()
|
|
||||||
# if t.history[-1].state != DetectionState.Lost
|
|
||||||
])
|
|
||||||
training_frames += len(active_tracks)
|
|
||||||
# print(time.time() - start_time)
|
|
||||||
|
|
||||||
|
|
||||||
if training_fp:
|
|
||||||
training_fp.close()
|
|
||||||
lines = {
|
|
||||||
'train': int(training_frames * .8),
|
|
||||||
'val': int(training_frames * .12),
|
|
||||||
'test': int(training_frames * .08),
|
|
||||||
}
|
|
||||||
logger.info(f"Splitting gathered data from {training_fp.name}")
|
|
||||||
with open(training_fp.name, 'r') as source_fp:
|
|
||||||
for name, line_nrs in lines.items():
|
|
||||||
dir_path = self.config.save_for_training / name
|
|
||||||
dir_path.mkdir(exist_ok=True)
|
|
||||||
file = dir_path / 'tracked.txt'
|
|
||||||
logger.debug(f"- Write {line_nrs} lines to {file}")
|
|
||||||
with file.open('w') as target_fp:
|
|
||||||
for i in range(line_nrs):
|
|
||||||
target_fp.write(source_fp.readline())
|
|
||||||
|
|
||||||
+            # Store detections into tracklets
+            projected_coordinates = []
+
+            # now in track_frame()
+            # for detection in detections:
+            #     track = self.tracks[detection.track_id]
+            #     track.track_id = detection.track_id  # for new tracks
+
+            #     track.history.append(detection)  # add to history
+            #     projected_coordinates.append(track.get_projected_history(self.H))  # then get full history
+
+            # TODO: handle occlusions and disappearance
+            # if len(track.history) > 30:  # retain tracks for 30 frames
+            #     track.history.pop(0)
+
+            # trajectories = {}
+            # for detection in detections:
+            #     tid = str(detection.track_id)
+            #     track = self.tracks[detection.track_id]
+            #     coords = track.get_projected_history(self.H)  # get full history
+            #     trajectories[tid] = {
+            #         "id": tid,
+            #         "det_conf": detection.conf,
+            #         "bbox": detection.to_ltwh(),
+            #         "history": [{"x": c[0], "y": c[1]} for c in coords[0]] if not self.config.bypass_prediction else coords[0].tolist()  # already doubly nested, fine for test
+            #     }
+            active_track_ids = [d.track_id for d in detections]
+            active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
+            # logger.info(f"{trajectories}")
+            frame.tracks = active_tracks
+
+            # if self.config.bypass_prediction:
+            #     self.trajectory_socket.send_string(json.dumps(trajectories))
+            # else:
+            #     self.trajectory_socket.send(pickle.dumps(frame))
+            if self.config.smooth_tracks:
+                frame = self.smoother.smooth_frame_tracks(frame)
+
+            self.trajectory_socket.send_pyobj(frame)
+
+            end_time = time.time()
+            tracker_dt = end_time - start_time
+
+            # having {end_time-frame.time} in the log string creates incidental delay... unclear why, maybe because of the send? So log n/a for now
+            # or is it {len(active_tracks)} or {tracker_dt}?
+            # logger.debug(f"Trajectories: n/a. Current frame delay = n/a s (trajectories: s)")
+
+            # self.trajectory_socket.send_string(json.dumps(trajectories))
+            # provide a {ID: {id: ID, history: [[x,y],[x,y],...]}}
+            # TODO: provide a track object that actually keeps history (unlike tracker)
+
+            # TODO: calculate fps (also for other loops, to see asynchronicity)
+            # fpsfilter = fpsfilter*.9 + (1/dt)*.1  # trust value, to stabilize the fps display
+
+            writer.add(frame, active_tracks.values())
+
+            w_time = time.time()
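`send_pyobj` pickles the whole `Frame` and pushes it over ZeroMQ. For reference, a minimal sketch of a consumer on the other end (the socket type and address are assumptions here; the real values come from the tracker's config):

```python
# Sketch: receive frames published with socket.send_pyobj(frame).
# Assumes a SUB socket on a hypothetical address; match the tracker's
# actual socket type and bind address in practice.
import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:5555")
sock.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filtering

while True:
    frame = sock.recv_pyobj()  # unpickles the Frame object
    print(f"frame {frame.index}: {len(frame.tracks)} tracks")
```

Pickle makes this convenient but ties producer and consumer to the same codebase and Python version; the commented-out `send_string(json.dumps(...))` path traded that coupling for manual serialization.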
         logger.info('Stopping')
-    def _yolov8_track(self, frame: Frame,) -> [Detection]:
-        results: [YOLOResult] = self.model.track(frame.img, persist=True, tracker="bytetrack.yaml", verbose=False)
-        if results[0].boxes is None or results[0].boxes.id is None:
-            # work around https://github.com/ultralytics/ultralytics/issues/5968
-            return []
-        return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame.index) for bbox, track_id in zip(results[0].boxes.xywh.cpu(), results[0].boxes.id.int().cpu().tolist())]
-
-    def _resnet_track(self, img, scale: float = 1) -> [Detection]:
+    def _resnet_track(self, frame: Frame, scale: float = 1) -> List[Detection]:
+        img = frame.img
         if scale != 1:
             dsize = (int(img.shape[1] * scale), int(img.shape[0] * scale))
             img = cv2.resize(img, dsize)
         detections = self._resnet_detect_persons(img)
-        tracks: [DeepsortTrack] = self.mot_tracker.update_tracks(detections, frame=img)
-        return [Detection.from_deepsort(t).get_scaled(1/scale) for t in tracks]
+        tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame.index)
+        # active_tracks = [t for t in tracks if t.is_confirmed()]
+        return [d.get_scaled(1/scale) for d in tracks]
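Detection runs on a downscaled copy of the frame (cheaper inference), so the resulting boxes must be scaled back by `1/scale` before they are used against the full-resolution image. The repo's `Detection.get_scaled` is not shown in this diff; a sketch of what such a method plausibly does, using the `l, t, w, h` fields seen in the training rows above (the real class also carries conf, state, frame number, and class):

```python
# Sketch only: the actual Detection class lives elsewhere in this repo.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Detection:
    track_id: int
    l: float  # left
    t: float  # top
    w: float  # width
    h: float  # height

    def get_scaled(self, factor: float) -> "Detection":
        # Uniformly rescale the box, e.g. factor = 1/scale to undo a resize.
        return replace(self, l=self.l * factor, t=self.t * factor,
                       w=self.w * factor, h=self.h * factor)

# Detection(1, 160.0, 90.0, 20.0, 40.0).get_scaled(2.0)  # back to full-res coords
```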
-    def _resnet_detect_persons(self, frame) -> [Detection]:
+    def _resnet_detect_persons(self, frame) -> List[Detection]:
         t = torch.from_numpy(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
         # change the axes of the loaded image to be compatible with torchvision.io.read_image (which uses C,H,W format instead of H,W,C)
         t = t.permute(2, 0, 1)
@@ -293,8 +494,19 @@ class Tracker:
         mask = prediction['labels'] == 1  # if we want more than one label: np.isin(prediction['labels'], [1,86])

         scores = prediction['scores'][mask]
+        # print(scores, prediction['labels'])
         labels = prediction['labels'][mask]
         boxes = prediction['boxes'][mask]
+        # print(prediction['scores'])
+
+        if NON_MAXIMUM_SUPRESSION < 1:
+            nms_mask = torch.zeros(scores.shape[0]).bool()
+            nms_keep_ids = torchvision.ops.nms(boxes, scores, NON_MAXIMUM_SUPRESSION)
+            nms_mask[nms_keep_ids] = True
+            print(scores.shape[0], nms_keep_ids, nms_mask)
+            scores = scores[nms_mask]
+            labels = labels[nms_mask]
+            boxes = boxes[nms_mask]

         # TODO: introduce confidence and NMS suppression: https://github.com/cfotache/pytorch_objectdetecttrack/blob/master/PyTorch_Object_Tracking.ipynb
         # (which I _think_ we better do after filtering)
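For context, `torchvision.ops.nms` expects boxes as `(x1, y1, x2, y2)` corners with one score per box, and returns the indices of the boxes to keep, sorted by decreasing score; any box overlapping a kept box above the IoU threshold is dropped. A self-contained check with dummy boxes:

```python
import torch
import torchvision

# Two heavily overlapping boxes plus one disjoint box, in (x1, y1, x2, y2).
boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],
                      [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])

keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.5)
print(keep)  # tensor([0, 2]): the lower-scored overlapping box is suppressed
```

The boolean `nms_mask` detour in the diff keeps `scores`, `labels`, and `boxes` filterable with a single mask in their original order; indexing all three directly with `nms_keep_ids` would also work, at the cost of reordering them by score.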
@@ -302,7 +514,7 @@ class Tracker:
         # dets - a numpy array of detections in the format [[x1,y1,x2,y2,score, label],[x1,y1,x2,y2,score, label],...]
         detections = np.array([np.append(bbox, [score, label]) for bbox, score, label in zip(boxes.cpu(), scores.cpu(), labels.cpu())])
-        detections = self.detect_persons_deepsort_wrapper(detections)

         return detections
@@ -316,15 +528,27 @@ class Tracker:
 def run_tracker(config: Namespace, is_running: Event):
-    router = Tracker(config, is_running)
-    router.track()
+    router = Tracker(config)
+    router.track(is_running)
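The signature change fits a multi-process setup: the `Event` is now passed to `track()` so the loop can be stopped from outside rather than being baked into the `Tracker` at construction. A minimal sketch of how such an entry point is typically driven (the config fields and module path are placeholders, not this repo's actual ones):

```python
# Sketch: driving run_tracker(config, is_running) from a parent process.
# from trap.tracker import run_tracker  # module path assumed
from argparse import Namespace
from multiprocessing import Event, Process

if __name__ == "__main__":
    is_running = Event()
    is_running.set()
    config = Namespace(smooth_tracks=True)  # placeholder fields
    p = Process(target=run_tracker, args=(config, is_running))
    p.start()
    try:
        p.join()
    except KeyboardInterrupt:
        is_running.clear()  # signal the tracker loop to wind down
        p.join()
```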
 class Smoother:

-    def __init__(self, window_len=2):
-        self.smoother = ConvolutionSmoother(window_len=window_len, window_type='ones', copy=None)
+    def __init__(self, window_len=6, convolution=False):
+        # for some reason the convolution smoother messes with the predictions. Probably skews the points too much?
+        if convolution:
+            self.smoother = ConvolutionSmoother(window_len=window_len, window_type='ones', copy=None)
+        else:
+            # "Unlike Kalman filtering, which focuses on predicting and updating the current state using historical measurements, Kalman smoothing enhances the accuracy of past state values"
+            # see https://medium.com/@shahalkp1/kalman-smoothing-using-tsmoothie-0175260464e5
+            self.smoother = KalmanSmoother(component='level_trend', component_noise={'level': 0.03, 'season': .02, 'trend': 0.04}, n_seasons=2, copy=None)
+
+    def smooth(self, points: List[float]):
+        self.smoother.smooth(points)
+        return self.smoother.smooth_data[0]
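Both smoothers come from tsmoothie and share the same call pattern: `smooth()` stores its result on the object, and `smooth_data[0]` holds the smoothed series, which is what the `smooth()` helper above returns. A small usage sketch on synthetic data, reusing the parameters from the class above:

```python
# Sketch: tsmoothie's KalmanSmoother on a noisy 1D series.
import numpy as np
from tsmoothie.smoother import KalmanSmoother

noisy = np.linspace(0, 10, 50) + np.random.normal(0, 0.4, 50)

smoother = KalmanSmoother(component='level_trend',
                          component_noise={'level': 0.03, 'season': .02, 'trend': 0.04},
                          n_seasons=2)
smoother.smooth(noisy)
smoothed = smoother.smooth_data[0]  # same length as the input
```

A Kalman smoother re-estimates past states using the whole series, so unlike the causal convolution window it does not lag behind the track, which may be why the convolution variant "messes with the predictions" as the comment above suspects.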
     def smooth_frame_tracks(self, frame: Frame) -> Frame:

@@ -342,7 +566,7 @@ class Smoother:

             ws = self.smoother.smooth_data[0]
             self.smoother.smooth(hs)
             hs = self.smoother.smooth_data[0]
-            new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr) for l, t, w, h, d in zip(ls, ts, ws, hs, track.history)]
+            new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls, ts, ws, hs, track.history)]
             new_track = Track(track.track_id, new_history, track.predictor_history, track.predictions)
             new_tracks.append(new_track)
         frame.tracks = {t.track_id: t for t in new_tracks}