# Trajectory Prediction Video installation
## Install
* Run `bash build_opencv_with_gstreamer.sh` to build OpenCV with GStreamer support (see the check below)
* Use `uv` to install the Python dependencies (typically `uv sync`)
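
To confirm that the custom build is the one actually being picked up, here is a minimal check (a sketch; it assumes the GStreamer-enabled `cv2` is installed in the active environment):

```python
import cv2

# OpenCV reports its compiled-in capabilities in the build information;
# the GStreamer line should read "YES" for the custom build.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())  # e.g. "GStreamer:  YES (1.20.x)"
```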
## How to
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for the camera calibration and homography tools that are needed for this repo. Also, [laserspace](https://git.rubenvandeven.com/security_vision/laserspace) is used to map the shapes (which are generated by `stage.py`) to the lasers, so that specific optimization techniques can be applied to the paths before they are sent to the DAC.
These are, roughly, the steps to go from data gathering to training:
1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
* Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) the camera matrix, lens distortion, and image dimensions, and (2+3) the homography
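
   As an illustration of what the homography provides (a mapping from image pixels to ground-plane coordinates), a minimal sketch; it assumes `homography.json` stores a flat 3×3 matrix, so check the traptools output for the actual schema:

   ```python
   import json

   import cv2
   import numpy as np

   # Assumption: homography.json holds a 3x3 matrix (row-major).
   with open("../DATASETS/NAME/homography.json") as f:
       H = np.array(json.load(f), dtype=np.float64).reshape(3, 3)

   # Project an image pixel (x, y) onto the ground plane.
   pixel = np.array([[[960.0, 540.0]]], dtype=np.float32)
   print(cv2.perspectiveTransform(pixel, H))
   ```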
3. Run the tracker, e.g. `uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
* Note: You can run this straight off the camera stream: `uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`; each recording adds a new file to the `raw` folder.
4. Parse tracker data into the Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`. Optionally, smooth tracks with `--smooth-tracks`.
* Optionally, add a map: ideally an RGB PNG with 3 channels of 0-255 values (a format check is sketched below)
* `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
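
   A quick format check for the map image, as a sketch (the path follows the example dataset layout above):

   ```python
   import cv2

   # The map should be an 8-bit, 3-channel (RGB) image with values 0-255.
   img = cv2.imread("../DATASETS/NAME/map.png", cv2.IMREAD_UNCHANGED)
   assert img is not None, "map.png not found or unreadable"
   assert img.dtype == "uint8", f"expected 8-bit channels, got {img.dtype}"
   assert img.ndim == 3 and img.shape[2] == 3, f"expected 3 channels, got {img.shape}"
   print("map OK:", img.shape)
   ```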
5. Train the Trajectron model: `uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
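
   Before starting a long training run, it can help to sanity-check the processed data. A minimal sketch, assuming the pickles follow Trajectron++'s dill-serialized `Environment` format (adjust to whatever `process_data` actually writes):

   ```python
   import dill

   # Load the processed training split and report its scenes.
   with open("EXPERIMENTS/trajectron-data/NAME_train.pkl", "rb") as f:
       env = dill.load(f)
   print(type(env).__name__, "with", len(env.scenes), "scenes")
   ```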
6. Then run!
* `uv run supervisord`
<!-- * On a video file (you can use a wildcard): `DISPLAY=:1 uv run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on the local monitor)
* or on the RTSP stream, which uses GStreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
* To have just a single trajectory pulled from the distribution, use `--full-dist`. Also try `--z_mode`. -->
## Testnight 2025-06-13
Step-by-step plan:
* Hang the lasers. Connect all cables, etc.
* `DISPLAY=:0 cargo run --example laser_frame_stream_gui`
* Use the number keys to pick a nice shape. Use this to make sure both lasers cover the right area. (If it doesn't work, flip some switches in the GUI; the laser output should then start.)
* In the trap folder: `uv run supervisorctl start video`
* In the laserspace folder: `DISPLAY=:0 cargo run --bin render_lines_gui`, then use the GUI to draw and tweak the projection area
* Use the save button to store the configuration
<!--
* In the trap folder: `DISPLAY=:0 uv run trap_laser_calibration`
* Follow the instructions:
* Camera points: `1`-`9` or the cursor keys to create/select/move points
* Move the laser: vim movement keys (`hjkl`); hold shift to move faster
* `c` to calibrate. The matrix is printed to the CLI.
* `q` to quit
* Calibration is saved to `laser_calib.json`; copy the `H` field to `trap_rust/src/trap/laser.rs` (e.g. to `TMP_STUDIO_CM_8`)
* Restart `render_lines_gui` with new homographies
* `DISPLAY=:0 cargo run --bin render_lines_gui`
-->
* Change the video source in `supervisord.conf` and run `uv run supervisorctl update` to switch
* **If tracking is slow and there's no prediction:**
* `uv run python -c "import torch;print(torch.cuda.is_available())"` should print `True`. If it prints `False`, PyTorch cannot see the GPU, which explains the slow tracking and missing predictions.