Art installation with TRAjectory Prediction

Trajectory Prediction Video installation

Install

  • Run `bash build_opencv_with_gstreamer.sh` to build OpenCV with GStreamer support
  • Use pyenv + Poetry to install the dependencies (see the sketch below)
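
A minimal install sketch, assuming pyenv picks up the version pinned in .python-version and Poetry resolves pyproject.toml/poetry.lock from the repository root:

```bash
# build OpenCV with GStreamer support (once per machine)
bash build_opencv_with_gstreamer.sh

# install the pinned Python version, then the project dependencies
pyenv install "$(cat .python-version)"
poetry install
```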

How to

See also the sibling repository traptools, which provides the camera calibration and homography tools needed for this repo.

These are roughly the steps to go from data gathering to training:

  1. Make sure you have some recordings made with a fixed camera. [UPDATE: no longer needed, except for calibration & homography footage]
    • Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
  2. Follow the steps in the auxiliary traptools repository to obtain (1) the camera matrix, lens distortion, and image dimensions, and (2+3) the homography (see the expected dataset layout after this list)
  3. Run the tracker, e.g. `poetry run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
    • Note: you can run this right off the camera stream: `poetry run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`; each recording adds a new file to the raw folder.
  4. Parse tracker data to the Trajectron format: `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`. Optionally, smooth tracks with `--smooth-tracks`.
  5. Train the Trajectron model: `poetry run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
  6. Then run!
    • On a video file (you can use a wildcard): `DISPLAY=:1 poetry run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here because it runs over an SSH connection and displays on the local monitor)
    • Or on the RTSP stream, which uses GStreamer to substantially reduce latency compared to the default FFmpeg bindings in OpenCV (see the sketch after this list).
    • To have just a single trajectory pulled from the distribution, use `--full-dist`. Also try `--z_mode`.
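
A sketch of such a live run, combining the RTSP source from step 3 with the flags of the video-file example above (USER, PW, ADDRESS/STREAM, NAME, and DATE are placeholders; adjust the flags to your setup):

```bash
# live run on the camera stream; GStreamer keeps capture latency low
DISPLAY=:1 poetry run trapserv \
  --eval_device cuda:0 \
  --detector ultralytics \
  --video-src rtsp://USER:PW@ADDRESS/STREAM \
  --homography ../DATASETS/NAME/homography.json \
  --calibration ../DATASETS/NAME/calibration.json \
  --eval_data_dict EXPERIMENTS/trajectron-data/NAME_test.pkl \
  --model_dir EXPERIMENTS/models/models_DATE_NAME/ \
  --smooth-predictions --smooth-tracks --num-samples 3 --render-window
```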
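For reference, the paths used throughout these steps imply roughly the following layout (NAME and DATE are placeholders; a sketch, not an enforced structure):

```
../DATASETS/NAME/
├── *.mp4                     # fixed-camera footage (step 1)
├── calibration.json          # camera matrix, lens distortion, image dimensions (step 2)
└── homography.json           # homography (step 2)
EXPERIMENTS/
├── config.json               # Trajectron training config (step 5)
├── raw/NAME/                 # tracker output from --save-for-training (step 3)
├── trajectron-data/          # processed data and NAME_*.pkl dicts (step 4)
└── models/models_DATE_NAME/  # trained Trajectron models (step 5)
```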