# Trajectory Prediction Video installation

An art installation with TRAjectory Prediction (trap).
## Install

- Run `bash build_opencv_with_gstreamer.sh` to build OpenCV with GStreamer support.
- Use `uv` to install the dependencies.
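
To verify that the resulting OpenCV build actually picked up GStreamer (a quick sanity check, not part of the install script), you can inspect the build information:

```python
import cv2

# Print the GStreamer line from OpenCV's Video I/O build section;
# it should read "GStreamer: YES (...)" after a successful build.
info = cv2.getBuildInformation()
print([line.strip() for line in info.splitlines() if "GStreamer" in line])
```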
## How to
See also the sibling repo traptools for the camera calibration and homography tools that are needed for this repo. Also, trap_rust is used to map the shapes generated by `stage.py` to the lasers, applying specific optimization techniques to the paths before sending them to the DAC.
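
For intuition, here is a minimal sketch of the kind of path optimization involved. The real implementation lives in trap_rust; the even-spacing resampling below, and the `resample_path` helper, are only an illustrative assumption:

```python
import numpy as np

def resample_path(points: np.ndarray, n: int) -> np.ndarray:
    """Resample an (N, 2) polyline to n evenly spaced points.

    Laser DACs trace points at a fixed rate, so evenly spaced
    samples keep the drawn brightness uniform along the path.
    """
    seg_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg_lengths)])  # cumulative arc length
    t_new = np.linspace(0.0, t[-1], n)
    return np.stack(
        [np.interp(t_new, t, points[:, 0]), np.interp(t_new, t, points[:, 1])],
        axis=1,
    )
```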
These are, roughly, the steps to go from data gathering to training:
- Make sure to have some recordings with a fixed camera. [UPDATE: no longer needed, except for calibration & homography footage.]
  - Recording can be done with:

    ```bash
    ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4
    ```
- Follow the steps in the auxiliary traptools repository to obtain (1) the camera matrix, lens distortion, and image dimensions, and (2+3) the homography.
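
  For intuition, a minimal sketch of how these files are later consumed. The JSON field names here are assumptions; check the files that traptools actually writes:

  ```python
  import json

  import cv2
  import numpy as np

  # Load calibration and homography produced by traptools.
  # The field names below are assumptions, not a documented schema.
  with open("../DATASETS/NAME/calibration.json") as f:
      calib = json.load(f)
  with open("../DATASETS/NAME/homography.json") as f:
      H = np.array(json.load(f), dtype=np.float64).reshape(3, 3)

  camera_matrix = np.array(calib["camera_matrix"], dtype=np.float64)
  dist_coeffs = np.array(calib["dist_coeffs"], dtype=np.float64)

  # Map one detected image point to ground-plane coordinates:
  # undistort with the camera intrinsics, then apply the homography.
  pt = np.array([[[640.0, 360.0]]], dtype=np.float64)  # shape (1, 1, 2)
  undistorted = cv2.undistortPoints(pt, camera_matrix, dist_coeffs, P=camera_matrix)
  world = cv2.perspectiveTransform(undistorted, H)
  print(world.ravel())
  ```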
- Run the tracker, e.g.:

  ```bash
  uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/
  ```

  - Note: you can run this right off the camera stream:

    ```bash
    uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/
    ```

    Each recording adds a new file to the `raw` folder.
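
  This is where the GStreamer-enabled OpenCV build pays off. A minimal sketch of opening such an RTSP stream directly (the pipeline string is an assumption, not the tracker's actual capture code):

  ```python
  import cv2

  # Hypothetical GStreamer pipeline for the RTSP camera used above;
  # the tracker's real capture pipeline may differ.
  pipeline = (
      "rtspsrc location=rtsp://USER:PW@ADDRESS/STREAM latency=0 ! "
      "decodebin ! videoconvert ! appsink"
  )
  cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
  ok, frame = cap.read()
  print(ok, None if frame is None else frame.shape)
  ```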
- Parse tracker data to Trajectron format:

  ```bash
  uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME
  ```

  Optionally, smooth tracks with `--smooth-tracks`.
  - Optionally, add a map: ideally an RGB PNG (3 channels of 0-255):

    ```bash
    uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png
    ```
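
    A quick sanity check for the map image (an illustrative snippet, not part of `process_data`):

    ```python
    import cv2

    img = cv2.imread("../DATASETS/NAME/map.png", cv2.IMREAD_UNCHANGED)
    # Expect a 3-channel, 8-bit image, i.e. shape (H, W, 3) and dtype uint8.
    assert img is not None, "map.png not found or unreadable"
    print(img.shape, img.dtype)
    ```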
- Train the Trajectron model:

  ```bash
  uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data
  ```
- Then run!

  ```bash
  uv run supervisord
  ```