# Trajectory Prediction Video Installation

## Install

* Run `bash build_opencv_with_gstreamer.sh` to build OpenCV with GStreamer support.
* Use `uv` to install the Python dependencies.
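* To check that the resulting build actually picked up GStreamer, a quick sanity check (hypothetical, not part of the repo):

  ```python
  # Print the GStreamer lines from OpenCV's build information.
  import cv2

  info = cv2.getBuildInformation()
  print([line.strip() for line in info.splitlines() if "GStreamer" in line])
  ```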
## How to

> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo. Also, [laserspace](https://git.rubenvandeven.com/security_vision/laserspace) is used to map the shapes (which are generated by `stage.py`) to lasers, so as to apply specific optimization techniques to the paths before sending them to the DAC.

These are roughly the steps to go from data gathering to training:
1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
   * Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) the camera matrix, lens distortion and image dimensions, and (2+3) the homography (a sketch of how the homography can be applied follows below).
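   * The homography maps undistorted image coordinates to world coordinates. A minimal sketch of how it could be applied, assuming `homography.json` holds a 3×3 matrix (the repo's own tooling handles this loading):

     ```python
     # Hypothetical sketch: project an image point to the ground plane,
     # assuming homography.json contains a 3x3 matrix.
     import json

     import cv2
     import numpy as np

     H = np.array(json.load(open("homography.json"))).reshape(3, 3)
     point = np.array([[[960.0, 540.0]]])  # one image point (x, y)
     print(cv2.perspectiveTransform(point, H))  # world coordinates
     ```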
3. Track lidar or video data:
   1. Video: run the video source & video tracker nodes:
      * `uv run trap_video_source --homography ../DATASETS/hof4-test-angle/homography.json --video-src gige://../DATASETS/hof4-test-angle/gige_config.json --calibration ../DATASETS/hof4-test-angle/calibration.json` (optionally, use recorded video with `--video-src videos/render-source-2025-10-19T21\:09.mp4 --video-offset 300`)
      * `uv run trap_tracker --smooth-tracks --eval_device cuda:0 --detector ultralytics`
   2. Lidar: `uv run trap_lidar --min-box-area 0 --pi LOCAL_IP --smooth-tracks`
4. Save the tracks emitted by the video or lidar tracker: `uv run trap_track_writer --output-dir EXPERIMENTS/raw/hof-lidar`
   * Each recording adds a new `.txt` file to the `raw` folder.
5. Parse tracker data to Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`
   * Optionally, smooth tracks: `--smooth-tracks`
   * Optionally, add variations with noise: `--noise-tracks 2` (creates 2 variations; see the sketch below)
   * Optionally, add variations at a random offset: `--offset-tracks 2` (creates 2 variations)
   * Optionally, add a map: ideally an RGB PNG, i.e. 3 channels with values 0-255:
     * `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
     * See [[tests/trajectron_maps.ipynb]] for more info on how to do so (e.g. the homography map/scale settings, which are also set in `process_data`)
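   * Conceptually, the noise and offset variations duplicate each track with small perturbations, making the trained model less sensitive to sensor jitter and absolute position. A rough sketch of the idea (the actual implementation in `process_data` may differ):

     ```python
     # Conceptual sketch of the track augmentation done by process_data;
     # the actual implementation may differ. A track is an (N, 2) array.
     import numpy as np

     def noise_variation(track: np.ndarray, sigma: float = 0.1) -> np.ndarray:
         # Jitter every point independently with Gaussian noise.
         return track + np.random.normal(0.0, sigma, track.shape)

     def offset_variation(track: np.ndarray, max_offset: float = 2.0) -> np.ndarray:
         # Shift the whole track by a single random translation.
         return track + np.random.uniform(-max_offset, max_offset, (1, 2))
     ```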
6. Train the Trajectron model: `uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
   * For faster training, disable edges:
     `uv run trajectron_train --eval_every 200 --train_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_train.pkl --eval_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/models --log_tag _dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11 --train_epochs 100 --conf /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/trajectron.json --data_dir SETTINGS/2025-11-dortmund/trajectron --map_encoding --no_edge_encoding --dynamic_edges yes --edge_influence_combine_method max --batch_size 512`
7. Then run!
   * `uv run supervisord`
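   * Individual nodes can then be inspected or restarted with the standard supervisor tooling, e.g. `uv run supervisorctl status` (the process names are the ones defined in `supervisord.conf`).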
<!-- * On a video file (you can use a wildcard): `DISPLAY=:1 uv run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on a local monitor)
* Or on the RTSP stream, which uses GStreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
* To have just a single trajectory pulled from the distribution, use `--full-dist`. Also try `--z_mode`. -->
## Testnight 2025-06-13

Step-by-step plan:
* Hang the lasers. Connect all cables etc.
* `DISPLAY=:0 cargo run --example laser_frame_stream_gui`
  * Use the number keys to pick a nice shape. Use this to make sure both lasers cover the right area. (If it doesn't work, flip some switches in the GUI; the laser output should then start.)
* In the trap folder: `uv run supervisorctl start video`
* In the laserspace folder: `DISPLAY=:0 cargo run --bin render_lines_gui` and use the GUI to draw and tweak the projection area
  * Use the save button to store the configuration
<!--
* In the trap folder: `DISPLAY=:0 uv run trap_laser_calibration`
* Follow the instructions:
  * Camera points: 1-9 or cursor to create/select/move points
  * Move the laser: vim movement keys (hjkl); use shift to move faster
  * `c` to calibrate. The matrix is output to the CLI.
  * `q` to quit
  * Saved to `laser_calib.json`; copy the H field to `trap_rust/src/trap/laser.rs` (to e.g. TMP_STUDIO_CM_8)
* Restart `render_lines_gui` with the new homographies
  * `DISPLAY=:0 cargo run --bin render_lines_gui`
-->
* Change the video source in `supervisord.conf` and run `uv run supervisorctl update` to switch (a sketch of such a config entry follows below).
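  * A hypothetical sketch of such an entry; the `video` program name matches the `supervisorctl start video` command above, but the command itself is an assumption, so check the actual file:

    ```ini
    ; Hypothetical supervisord program entry; adjust to the actual config.
    [program:video]
    command=uv run trap_video_source --homography ../DATASETS/NAME/homography.json --video-src videos/render-source-2025-10-19T21:09.mp4 --calibration ../DATASETS/NAME/calibration.json
    autorestart=true
    ```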
* **If tracking is slow and there's no prediction:**
  * Check that CUDA is available: `uv run python -c "import torch;print(torch.cuda.is_available())"`
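  * A slightly fuller diagnostic, using only standard PyTorch calls (nothing repo-specific):

    ```python
    # Report the PyTorch build and the visible CUDA device, if any.
    import torch

    print("torch", torch.__version__, "| cuda build:", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
    ```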