Art installation with TRAjectory Prediction

Trajectory Prediction Video installation

Install

  • Run bash build_opencv_with_gstreamer.sh to build OpenCV with GStreamer support
  • Use uv to install the Python dependencies
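
In practice the install amounts to something like this (a minimal sketch; uv sync assumes the dependencies are declared in pyproject.toml/uv.lock, which this repo includes):

```bash
# Build python-opencv with GStreamer support (script included in this repo).
bash build_opencv_with_gstreamer.sh

# Install the Python dependencies from pyproject.toml / uv.lock.
uv sync
```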

How to

See also the sibling repo traptools for the camera calibration and homography tools that this repo requires. Additionally, laserspace is used to map the shapes (which are generated by stage.py) to the lasers, so that specific optimization techniques can be applied to the paths before they are sent to the DAC.

These are roughly the steps to go from data gathering to training (a condensed end-to-end sketch of steps 3-6 follows the list):

  1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
    • Recording can be done with ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4
  2. Follow the steps in the auxiliary traptools repository to obtain (1) camera matrix, lens distortion, image dimensions, and (2+3) homography
  3. Run the tracker, e.g. uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/
    • Note: You can run this right off the camera stream: uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/. Each run adds a new file to the raw folder.
  4. Parse tracker data to Trajectron format: uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME. Optionally, smooth tracks with --smooth-tracks.
    • Optionally, add a map: ideally an RGB PNG with 3 channels of values 0-255:
      • uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png
  5. Train the Trajectron model: uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data
  6. Then run:
    • uv run supervisord
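
For reference, here are steps 3-6 condensed into a single shell sketch (NAME is a placeholder dataset name; the commands and flags are copied verbatim from the steps above):

```bash
NAME=NAME  # placeholder dataset name

# Step 3: run the tracker on recorded footage, saving raw tracks for training.
uv run tracker --detector ultralytics \
  --homography ../DATASETS/$NAME/homography.json \
  --calibration ../DATASETS/$NAME/calibration.json \
  --video-src ../DATASETS/$NAME/*.mp4 \
  --save-for-training EXPERIMENTS/raw/$NAME/

# Step 4: convert raw tracks to Trajectron format (smoothing and map enabled).
uv run process_data --src-dir EXPERIMENTS/raw/$NAME \
  --dst-dir EXPERIMENTS/trajectron-data/ --name $NAME \
  --smooth-tracks --camera-fps 12 \
  --homography ../DATASETS/$NAME/homography.json \
  --calibration ../DATASETS/$NAME/calibration.json \
  --filter-displacement 2 --map-img-path ../DATASETS/$NAME/map.png

# Step 5: train the Trajectron model.
uv run trajectron_train --eval_every 10 --vis_every 1 \
  --train_data_dict ${NAME}_train.pkl --eval_data_dict ${NAME}_val.pkl \
  --offline_scene_graph no --preprocess_workers 8 \
  --log_dir EXPERIMENTS/models --log_tag _$NAME --train_epochs 100 \
  --conf EXPERIMENTS/config.json --batch_size 256 \
  --data_dir EXPERIMENTS/trajectron-data

# Step 6: run the installation.
uv run supervisord
```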

Testnight 2025-06-13

Step-by-step plan:

  • Hang the lasers, connect all cables, etc.
  • DISPLAY=:0 cargo run --example laser_frame_stream_gui
    • Use the number keys to pick a nice shape, and use it to make sure both lasers cover the right area. (If there is no output, flip some switches in the GUI; the laser output should then start.)
  • In the trap folder: uv run supervisorctl start video
  • In the laserspace folder: DISPLAY=:0 cargo run --bin render_lines_gui, and use the GUI to draw and tweak the projection area
    • Use the save button to store the configuration.
  • In the trap folder: DISPLAY=:0 uv run trap_laser_calibration
    • Follow the instructions:
      • camera points: press 1-9 or use the cursor keys to create/select/move points
      • move laser: vim movement keys (hjkl); hold Shift to move faster
      • c to calibrate; the resulting matrix is printed to the CLI
      • q to quit
    • The calibration is saved to laser_calib.json; copy the H field to trap_rust/src/trap/laser.rs (e.g. to TMP_STUDIO_CM_8)
  • Restart render_lines_gui with new homographies
    • DISPLAY=:0 cargo run --bin render_lines_gui
  • Change the video source in supervisord.conf and run uv run supervisorctl update to switch.
    • If tracking is slow and there is no prediction, verify that CUDA is available:
      • uv run python -c "import torch;print(torch.cuda.is_available())"
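
A slightly more verbose check (standard PyTorch calls only, wrapped in a heredoc so it runs in the project environment):

```bash
uv run python - <<'EOF'
import torch

# Report the PyTorch build and whether a CUDA device is usable.
print("torch", torch.__version__)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the first GPU, i.e. the cuda:0 device passed to the tracker.
    print("device:", torch.cuda.get_device_name(0))
EOF
```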