
Trajectory Prediction Video installation

Install

  • Run bash build_opencv_with_gstreamer.sh to build opencv with gstreamer support
  • Use uv to install

How to

See also the sibling repo traptools for the camera calibration and homography tools needed by this repo. Additionally, laserspace is used to map the shapes generated by stage.py to the lasers, so that specific optimization techniques can be applied to the paths before sending them to the DAC.
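
For intuition, the two files produced by traptools are combined to map image pixels onto the ground plane. Below is a minimal sketch of that mapping, assuming calibration.json holds the intrinsics under camera_matrix and dist_coefficients and homography.json holds a 3x3 matrix under H; check the actual traptools output for the real keys.

```python
# Sketch only: how calibration.json and homography.json are typically combined
# to project detections from pixel space onto the ground plane. The JSON key
# names used here are assumptions; verify them against the traptools output.
import json

import cv2
import numpy as np

with open("../DATASETS/NAME/calibration.json") as f:
    calib = json.load(f)
with open("../DATASETS/NAME/homography.json") as f:
    H = np.array(json.load(f)["H"], dtype=np.float64)  # 3x3, image -> ground plane

camera_matrix = np.array(calib["camera_matrix"], dtype=np.float64)  # 3x3 intrinsics
dist_coeffs = np.array(calib["dist_coefficients"], dtype=np.float64)  # lens distortion


def image_to_world(points_px: np.ndarray) -> np.ndarray:
    """Map (N, 2) pixel coordinates to (N, 2) ground-plane coordinates."""
    pts = points_px.reshape(-1, 1, 2).astype(np.float64)
    # Undistort first, re-projecting with the intrinsics so the homography
    # (estimated on undistorted footage) can be applied directly.
    undistorted = cv2.undistortPoints(pts, camera_matrix, dist_coeffs, P=camera_matrix)
    return cv2.perspectiveTransform(undistorted, H).reshape(-1, 2)


print(image_to_world(np.array([[960.0, 540.0]])))  # centre pixel of a 1920x1080 frame
```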

These are roughly the steps to go from data gathering to training:

  1. Make sure to have some recordings with a fixed camera. [UPDATE: no longer needed, except for calibration & homography footage]
    • Recording can be done with ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4
  2. Follow the steps in the auxiliary traptools repository to obtain (1) the camera matrix, lens distortion, and image dimensions, and (2+3) the homography
  3. Run the tracker, e.g. uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/
    • Note: you can run this right off the camera stream: uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/. Each recording adds a new file to the raw folder.
  4. Parse tracker data to Trajectron format: uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME. Optionally, smooth tracks with --smooth-tracks (see the sketch after this list).
    • Optionally, add a map: ideally an RGB PNG (3 channels, values 0-255):
      • uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png
  5. Train the Trajectron model: uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data
  6. Then run:
    • uv run supervisord
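
For intuition on the optional cleanup flags in step 4: conceptually, --smooth-tracks smooths each track's coordinates and --filter-displacement drops tracks that barely move. The sketch below illustrates the idea only; it is not the process_data implementation, and the filter choice and displacement metric are assumptions.

```python
# Conceptual sketch of track smoothing and displacement filtering, not the
# actual process_data code. Filter parameters and the displacement metric
# (end-to-end distance in metres, post-homography) are assumptions.
import numpy as np
from scipy.signal import savgol_filter


def smooth_track(xy: np.ndarray, window: int = 7, order: int = 2) -> np.ndarray:
    """Smooth an (N, 2) track with a Savitzky-Golay filter."""
    if len(xy) < window:
        return xy  # too short to smooth
    return savgol_filter(xy, window_length=window, polyorder=order, axis=0)


def keep_track(xy: np.ndarray, min_displacement: float = 2.0) -> bool:
    """Drop near-stationary tracks, analogous to --filter-displacement 2."""
    return float(np.linalg.norm(xy[-1] - xy[0])) >= min_displacement
```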

Testnight 2025-06-13

Step-by-step plan:

  • Hang the lasers. Connect all cables, etc.
  • DISPLAY=:0 cargo run --example laser_frame_stream_gui
    • Use the number keys to pick a nice shape. Use this to make sure both lasers cover the right area. (If there is no output, flip some switches in the GUI; the laser output should then start.)
  • In the trap folder: uv run supervisorctl start video
  • In the laserspace folder: DISPLAY=:0 cargo run --bin render_lines_gui, then use the GUI to draw and tweak the projection area
    • Use the save button to store the configuration
  • In the trap folder: DISPLAY=:0 uv run trap_laser_calibration
    • Follow the instructions:
      • Camera points: 1-9 or the cursor to create/select/move points
      • Move the laser: vim movement keys (hjkl); hold shift to move faster
      • c to calibrate; the matrix is output to the CLI
      • q to quit
    • Results are saved to laser_calib.json; copy the H field to trap_rust/src/trap/laser.rs (e.g. to TMP_STUDIO_CM_8). A quick sanity check of the matrix is sketched after this list.
  • Restart render_lines_gui with the new homographies
    • DISPLAY=:0 cargo run --bin render_lines_gui
  • Change the video source in supervisord.conf and run uv run supervisorctl update to switch (see the example fragment after this list)
    • If tracking is slow and there's no prediction, verify that CUDA is available:
      • uv run python -c "import torch;print(torch.cuda.is_available())"
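
Before pasting the matrix into laser.rs, the calibration can be sanity-checked from Python. A sketch, assuming laser_calib.json stores the 3x3 matrix under the H key mentioned above:

```python
# Sketch only: check where a camera-space point lands in laser coordinates
# using the H matrix written by trap_laser_calibration. Assumes the matrix
# is stored under the "H" key, as referenced in the calibration step above.
import json

import cv2
import numpy as np

with open("laser_calib.json") as f:
    H = np.array(json.load(f)["H"], dtype=np.float64)

camera_pt = np.array([[[320.0, 240.0]]])  # one camera-space point, shape (1, 1, 2)
laser_pt = cv2.perspectiveTransform(camera_pt, H)
print(laser_pt)  # should fall inside the laser's output range
```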
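
The video-source switch above happens in supervisord.conf. A hypothetical fragment for orientation; only the program name video is taken from the supervisorctl commands above, the command line itself is an assumption, so check the actual config:

```ini
; Hypothetical supervisord.conf fragment; the real command line may differ.
[program:video]
command=uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json
autostart=false
autorestart=true
```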