Compare commits

..

No commits in common. "main" and "animation_window" have entirely different histories.

53 changed files with 5284 additions and 20472 deletions

View file

@ -2,12 +2,12 @@
"batch_size": 512,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.001,
"learning_rate": 0.01,
"min_learning_rate": 1e-05,
"learning_decay_rate": 0.9999,
"prediction_horizon": 60,
"prediction_horizon": 30,
"minimum_history_length": 5,
"maximum_history_length": 150,
"maximum_history_length": 50,
"map_encoder": {
"PEDESTRIAN": {
"heading_state_index": [2, 3],
@ -96,7 +96,7 @@
},
"pred_state": {
"PEDESTRIAN": {
"position": [
"velocity": [
"x",
"y"
]

View file

@ -3,63 +3,24 @@
## Install
* Run `bash build_opencv_with_gstreamer.sh` to build opencv with gstreamer support
* Use `uv` to install
* Use pyenv + poetry to install
## How to
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo. Also, [laserspace](https://git.rubenvandeven.com/security_vision/laserspace) is used to map the shapes (which are generated by `stage.py`) to lasers, so that specific optimization techniques can be applied to the paths before sending them to the DAC.
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo.
These are roughly the steps to go from data gathering to training:
1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
* Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) camera matrix, lens distortion, image dimensions, and (2+3) homography
3. Track lidar or video data:
1. Video: Run the video source & video tracker nodes:
* `uv run trap_video_source --homography ../DATASETS/hof4-test-angle/homography.json --video-src gige://../DATASETS/hof4-test-angle/gige_config.json --calibration ../DATASETS/hof4-test-angle/calibration.json` (Optionally, use recorded video with `--video-src videos/render-source-2025-10-19T21\:09.mp4 --video-offset 300`)
* `uv run trap_tracker --smooth-tracks --eval_device cuda:0 --detector ultralytics`
2. Lidar: `uv run trap_lidar --min-box-area 0 --pi LOCAL_IP --smooth-tracks`
4. Save the tracks emitted by the video or lidar tracker: `uv run trap_track_writer --output-dir EXPERIMENTS/raw/hof-lidar`
* Each recording adds a new txt file to the `raw` folder.
4. Parse tracker data to Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`
* Optionally, smooth tracks: `--smooth-tracks`
* Optionally, add variations with noise: `--noise-tracks 2` (creates 2 variations)
* Optionally, add variations at a random offset: `--offset-tracks 2` (creates 2 variations; see the sketch after this list)
3. Run the tracker, e.g. `poetry run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
* Note: You can run this right off the camera stream: `poetry run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`, each recording adding a new file to the `raw` folder.
4. Parse tracker data to Trajectron format: `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME` Optionally, smooth tracks: `--smooth-tracks`
* Optionally, add a map: ideally an RGB PNG with 3 channels of 0-255
* `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
* See [[tests/trajectron_maps.ipynb]] for more info on how to do so (e.g. the homography map/scale settings, which are also set in process_data)
5. Train Trajectron model `uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data `
* For faster training, disable edges:
` uv run trajectron_train --eval_every 200 --train_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_train.pkl --eval_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/models --log_tag _dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11 --train_epochs 100 --conf /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/trajectron.json --data_dir SETTINGS/2025-11-dortmund/trajectron --map_encoding --no_edge_encoding --dynamic_edges yes --no_edge_encoding --edge_influence_combine_method max --batch_size 512`
* `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
5. Train Trajectron model `poetry run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data `
6. Then run:
* `uv run supervisord`
<!-- * On a video file (you can use a wildcard) `DISPLAY=:1 uv run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on the local monitor)
* On a video file (you can use a wildcard) `DISPLAY=:1 poetry run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on the local monitor)
* or on the RTSP stream, which uses gstreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
* To just have a single trajectory pulled from distribution use `--full-dist`. Also try `--z_mode`. -->
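For reference, the `--noise-tracks` / `--offset-tracks` variations from step 4 boil down to generating jittered and shifted copies of each track. A minimal sketch with hypothetical helper names (not the actual `process_data` implementation):

```python
import numpy as np

def noisy_variations(track, n=2, sigma=0.05, seed=None):
    """Return n copies of an (N, 2) world-space track with Gaussian jitter added."""
    rng = np.random.default_rng(seed)
    return [track + rng.normal(0.0, sigma, size=track.shape) for _ in range(n)]

def offset_variations(track, n=2, max_offset=1.0, seed=None):
    """Return n copies of the track, each shifted by a single random (dx, dy) offset."""
    rng = np.random.default_rng(seed)
    return [track + rng.uniform(-max_offset, max_offset, size=(1, 2)) for _ in range(n)]

track = np.cumsum(np.full((30, 2), 0.1), axis=0)  # a simple straight walk
print(len(noisy_variations(track)), len(offset_variations(track)))  # 2 2
```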
## Testnight 2025-06-13
Step-by-step plan:
* Hang lasers. Connect all cables etc.
* `DISPLAY=:0 cargo run --example laser_frame_stream_gui`
* Use numbers to pick a nice shape. Use this to make sure both lasers cover the right area. (If it doesn't work, flip some switches in the GUI; the laser output should then start.)
* In trap folder: `uv run supervisorctl start video`
* In laserspace folder: `DISPLAY=:0 cargo run --bin render_lines_gui` and use gui to draw and tweak projection area
* Use the save button to store configuration
/*
* in trap folder: `DISPLAY=:0 uv run trap_laser_calibration`
* follow instructions:
* camera points: 1-9 or cursor to create/select/move points
* move laser: vim movement keys : hjkl, use shift to move faster
* `c` to calibrate. Matrix is output to cli.
* `q` to quit
* saved to `laser_calib.json`, copy H field to `trap_rust/src/trap/laser.rs` (to e.g. TMP_STUDIO_CM_8)
* Restart `render_lines_gui` with new homographies
* `DISPLAY=:0 cargo run --bin render_lines_gui`
*/
* change video source in `supervisord.conf` and run `uv run supervisorctl update` to switch
* **If tracking is slow and there's no prediction:**
* `uv run python -c "import torch;print(torch.cuda.is_available())"`
* To just have a single trajectory pulled from distribution use `--full-dist`. Also try `--z_mode`.

View file

@ -1,130 +0,0 @@
{
"batch_size": 512,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.001,
"min_learning_rate": 1e-05,
"learning_decay_rate": 0.9999,
"prediction_horizon": 60,
"minimum_history_length": 5,
"maximum_history_length": 150,
"map_encoder": {
"PEDESTRIAN": {
"heading_state_index": [2, 3],
"patch_size": [
50,
10,
50,
90
],
"map_channels": 3,
"hidden_channels": [
10,
20,
5,
1
],
"output_size": 32,
"masks": [
5,
5,
5,
5
],
"strides": [
1,
1,
1,
1
],
"dropout": 0.5
}
},
"k": 1,
"k_eval": 1,
"kl_min": 0.07,
"kl_weight": 100.0,
"kl_weight_start": 0,
"kl_decay_rate": 0.99995,
"kl_crossover": 400,
"kl_sigmoid_divisor": 4,
"rnn_kwargs": {
"dropout_keep_prob": 0.75
},
"MLP_dropout_keep_prob": 0.9,
"enc_rnn_dim_edge": 1,
"enc_rnn_dim_edge_influence": 1,
"enc_rnn_dim_history": 32,
"enc_rnn_dim_future": 32,
"dec_rnn_dim": 128,
"q_z_xy_MLP_dims": null,
"p_z_x_MLP_dims": 32,
"GMM_components": 1,
"log_p_yt_xz_max": 6,
"N": 1,
"K": 25,
"tau_init": 2.0,
"tau_final": 0.05,
"tau_decay_rate": 0.997,
"use_z_logit_clipping": true,
"z_logit_clip_start": 0.05,
"z_logit_clip_final": 5.0,
"z_logit_clip_crossover": 300,
"z_logit_clip_divisor": 5,
"dynamic": {
"PEDESTRIAN": {
"name": "SingleIntegrator",
"distribution": true,
"limits": {}
}
},
"state": {
"PEDESTRIAN": {
"position": [
"x",
"y"
],
"velocity": [
"x",
"y"
],
"acceleration": [
"x",
"y"
]
}
},
"pred_state": {
"PEDESTRIAN": {
"position": [
"x",
"y"
]
}
},
"log_histograms": false,
"dynamic_edges": "yes",
"edge_state_combine_method": "sum",
"edge_influence_combine_method": "max",
"edge_addition_filter": [
0.25,
0.5,
0.75,
1.0
],
"edge_removal_filter": [
1.0,
0.0
],
"offline_scene_graph": "yes",
"incl_robot_node": false,
"node_freq_mult_train": false,
"node_freq_mult_eval": false,
"scene_freq_mult_train": false,
"scene_freq_mult_eval": false,
"scene_freq_mult_viz": false,
"edge_encoding": false,
"use_map_encoding": true,
"augment": false,
"override_attention_radius": []
}
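A minimal sketch (not the Trajectron++ loader itself) of reading this hyperparameter file and overriding a value, similar in spirit to passing `--conf EXPERIMENTS/config.json --batch_size 256` to the training command:

```python
import json
from pathlib import Path

def load_hparams(path, **overrides):
    """Load the hyperparameter JSON shown above and apply command-line style overrides."""
    hparams = json.loads(Path(path).read_text())
    hparams.update(overrides)
    return hparams

hparams = load_hparams("EXPERIMENTS/config.json", batch_size=256)
print(hparams["prediction_horizon"], hparams["pred_state"]["PEDESTRIAN"])
```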

View file

@ -2,10 +2,10 @@
# Default YOLO tracker settings for ByteTrack tracker https://github.com/ifzhang/ByteTrack
tracker_type: bytetrack # tracker type, ['botsort', 'bytetrack']
track_high_thresh: 0.000001 # threshold for the first association
track_low_thresh: 0.000001 # threshold for the second association
new_track_thresh: 0.000001 # threshold for init new track if the detection does not match any tracks
track_buffer: 10 # buffer to calculate the time when to remove tracks
match_thresh: 0.99 # threshold for matching tracks
track_high_thresh: 0.0001 # threshold for the first association
track_low_thresh: 0.0001 # threshold for the second association
new_track_thresh: 0.0001 # threshold for init new track if the detection does not match any tracks
track_buffer: 50 # buffer to calculate the time when to remove tracks
match_thresh: 0.95 # threshold for matching tracks
fuse_score: True # Whether to fuse confidence scores with the iou distances before matching
# min_box_area: 10 # threshold for min box areas(for tracker evaluation, not used for now)
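A quick way to sanity-check these thresholds is to load the YAML and inspect it. The snippet below assumes PyYAML is installed and uses a hypothetical file name; it is not part of the trap codebase:

```python
import yaml

with open("bytetrack.yaml") as fp:  # hypothetical path to the settings shown above
    cfg = yaml.safe_load(fp)

assert cfg["tracker_type"] == "bytetrack"
# With association thresholds this low, nearly every detection is matched or spawns
# a track, so match_thresh and track_buffer end up doing most of the filtering.
print(cfg["track_high_thresh"], cfg["new_track_thresh"], cfg["track_buffer"])
```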

poetry.lock (generated, 3930 lines)

File diff suppressed because it is too large

View file

@ -1,52 +1,11 @@
[project]
[tool.poetry]
name = "trap"
version = "0.1.0"
description = "Art installation with trajectory prediction"
authors = [{ name = "Ruben van de Ven", email = "git@rubenvandeven.com" }]
requires-python = "~=3.10.4"
authors = ["Ruben van de Ven <git@rubenvandeven.com>"]
readme = "README.md"
dependencies = [
"trajectron-plus-plus",
"torch==1.12.1",
"torchvision==0.13.1",
"deep-sort-realtime>=1.3.2,<2",
"ultralytics~=8.3",
"ffmpeg-python>=0.2.0,<0.3",
"torchreid>=0.2.5,<0.3",
"gdown>=4.7.1,<5",
"pandas-helper-calc",
"tsmoothie>=1.0.5,<2",
"pyglet>=2.1.8,<3",
"pyglet-cornerpin>=0.3.0,<0.4",
"opencv-python",
"setproctitle>=1.3.3,<2",
"bytetracker",
"jsonlines>=4.0.0,<5",
"tensorboardx>=2.6.2.2,<3",
"shapely>=2.1",
#"shapely>=1,<2",
"baumer-neoapi",
"qrcode~=8.0",
"pyusb>=1.3.1,<2",
"ipywidgets>=8.1.5,<9",
"foucault",
"python-statemachine>=2.5.0",
"facenet-pytorch>=2.5.3",
"simplification>=0.7.12",
"supervisor>=4.2.5",
"superfsmon>=1.2.3",
"noise>=1.2.2",
"svgpathtools>=1.7.1",
"velodyne-decoder>=3.1.0",
"open3d>=0.19.0",
"nptyping>=2.5.0",
"py-to-proto>=0.6.0",
"grpcio-tools>=1.76.0",
"dearpygui>=2.1.0",
]
[project.scripts]
start = "trap.conductofconduct:run"
[tool.poetry.scripts]
trapserv = "trap.plumber:start"
tracker = "trap.tools:tracker_preprocess"
compare = "trap.tools:tracker_compare"
@ -54,36 +13,37 @@ process_data = "trap.process_data:main"
blacklist = "trap.tools:blacklist_tracks"
rewrite_tracks = "trap.tools:rewrite_raw_track_files"
model_train = "trap.models.train:train"
[tool.poetry.dependencies]
python = "^3.10,<3.12,"
trap_video_source = "trap.frame_emitter:FrameEmitter.parse_and_start"
trap_video_writer = "trap.frame_writer:FrameWriter.parse_and_start"
trap_tracker = "trap.tracker:Tracker.parse_and_start"
trap_track_writer = "trap.track_writer:TrackWriter.parse_and_start"
trap_lidar = "trap.lidar_tracker:Lidar.parse_and_start"
trap_stage = "trap.stage:Stage.parse_and_start"
trap_render_stage = "trap.stage_renderer:StageRenderer.parse_and_start"
trap_prediction = "trap.prediction_server:PredictionServer.parse_and_start"
trap_render_cv = "trap.cv_renderer:CvRenderer.parse_and_start"
trap_monitor = "trap.monitor:Monitor.parse_and_start" # migrate timer
trap_laser_calibration = "trap.laser_calibration:LaserCalibration.parse_and_start" # migrate timer
trap_settings = "trap.settings:Settings.parse_and_start" # migrate timer
trajectron-plus-plus = { path = "../Trajectron-plus-plus/", develop = true }
#trajectron-plus-plus = { git = "https://git.rubenvandeven.com/security_vision/Trajectron-plus-plus/" }
torch = [
{ version="1.12.1" },
# { url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp38-cp38-linux_x86_64.whl", markers = "python_version ~= '3.8' and sys_platform == 'linux'" },
{ url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl", markers = "python_version ~= '3.10' and sys_platform == 'linux'" },
]
[tool.uv]
[tool.uv.sources]
trajectron-plus-plus = { path = "../Trajectron-plus-plus/", editable = true }
torch = [{ url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl", marker = "python_version ~= '3.10' and sys_platform == 'linux'" }]
torchvision = [{ url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp310-cp310-linux_x86_64.whl", marker = "python_version ~= '3.10' and sys_platform == 'linux'" }]
pandas-helper-calc = { git = "https://github.com/scls19fr/pandas-helper-calc" }
bytetracker = { git = "https://github.com/rubenvandeven/bytetrack-pip" }
baumer-neoapi = { path = "../../Downloads/Baumer_neoAPI_1.5.0_lin_x86_64_python/wheel/baumer_neoapi-1.5.0-cp34.cp35.cp36.cp37.cp38.cp39.cp310.cp311.cp312-none-linux_x86_64.whl" }
foucault = { git = "https://git.rubenvandeven.com/r/conductofconduct" }
opencv-python = {path="./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl"}
[tool.uv.workspace]
members = ["CenterTrack"]
torchvision = [
{ version="0.13.1" },
# { url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp38-cp38-linux_x86_64.whl", markers = "python_version ~= '3.8' and sys_platform == 'linux'" },
{ url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp310-cp310-linux_x86_64.whl", markers = "python_version ~= '3.10' and sys_platform == 'linux'" },
]
deep-sort-realtime = "^1.3.2"
ultralytics = "^8.3"
ffmpeg-python = "^0.2.0"
torchreid = "^0.2.5"
gdown = "^4.7.1"
pandas-helper-calc = {git = "https://github.com/scls19fr/pandas-helper-calc"}
tsmoothie = "^1.0.5"
pyglet = "^2.0.15"
pyglet-cornerpin = "^0.3.0"
opencv-python = {file="./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl"}
setproctitle = "^1.3.3"
bytetracker = { git = "https://github.com/rubenvandeven/bytetrack-pip" }
jsonlines = "^4.0.0"
tensorboardx = "^2.6.2.2"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View file

@ -1,97 +0,0 @@
[inet_http_server]
port = *:8293
# username = user
# password = 123
[supervisord]
nodaemon = false
; The rpcinterface:supervisor section must remain in the config file for
; RPC (supervisorctl/web interface) to work. Additional interfaces may be
; added by defining them in separate [rpcinterface:x] sections.
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl = http://localhost:8293
[program:monitor]
command=uv run trap_monitor
numprocs=1
directory=%(here)s
autostart=false
[program:video]
# command=uv run trap_video_source --homography ../DATASETS/hof3/homography.json --video-src ../DATASETS/hof3/hof3-cam-demo-twoperson.mp4 --calibration ../DATASETS/hof3/calibration.json --video-loop
command=uv run trap_video_source --homography ../DATASETS/hof3-cam-baumer-cropped/homography.json --video-src gige://../DATASETS/hof3-cam-baumer-cropped/gige_config.json --calibration ../DATASETS/hof3-cam-baumer-cropped/calibration.json
directory=%(here)s
[program:tracker]
command=uv run trap_tracker --smooth-tracks
# command=uv run trap_lidar --min-box-area 0 --viz --smooth-tracks
# environment=DISPLAY=":0"
directory=%(here)s
autostart=false
[program:lidar]
command=uv run trap_lidar --min-box-area 0.1 --viz
environment=DISPLAY=":0"
directory=%(here)s
autostart=false
[program:track_writer]
command=uv run trap_track_writer --output-dir EXPERIMENTS/raw/hof-lidar
# environment=DISPLAY=":0"
directory=%(here)s
autostart=false
stopwaitsecs=60
[program:stage]
# command=uv run trap_stage
command=uv run trap_stage --verbose --camera-fps 12 --homography ../DATASETS/hof3/homography.json --calibration ../DATASETS/hof3/calibration.json --cache-path /tmp/history_cache-hof3.pcl --tracker-output-dir EXPERIMENTS/raw/hof3/
directory=%(here)s
[program:settings]
command=uv run trap_settings
autostart=true
environment=DISPLAY=":0"
directory=%(here)s
[program:predictor]
# command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof3-m2-ud-split-nostep-conv12-f2.0-map-2024-12-29_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
command=uv run trap_prediction --eval_device cuda:0 --model_dir SETTINGS/2025-11-dortmund/models/models_20251111_19_06_29_dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11/ --num-samples 1 --map_encoding --eval_data_dict SETTINGS/2025-11-dortmund/trajectron/dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-12_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode --conf SETTINGS/2025-11-dortmund/trajectron.json
# command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20251106_11_51_00_hof-lidar-m2-ud-nostep-kalsmooth-noise2-offsets2-f2.0-map-2025-11-06/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof-lidar-m2-ud-nostep-kalsmooth-noise2-offsets2-f2.0-map-2025-11-06_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
# uv run trajectron_train --continue_training_from EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --eval_every 5 --train_data_dict hof3-nostep-conv12-f2.0-map-2024-12-27_train.pkl --eval_data_dict hof3-nostep-conv12-f2.0-map-2024-12-27_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _hof3-conv12-f2.0-map-2024-12-27 --train_epochs 10 --conf EXPERIMENTS/config.json --data_dir EXPERIMENTS/trajectron-data --map_encoding
directory=%(here)s
[program:render_cv]
command=uv run trap_render_cv
directory=%(here)s
environment=DISPLAY=":0"
autostart=false
; can be long to quit if rendering to video file
stopwaitsecs=60
[program:laserspace]
command=cargo run --release tcp://127.0.0.1:99174 ../trap/SETTINGS/2025-11-dortmund/laserspace.json
directory=%(here)s/../laserspace
environment=DISPLAY=":0"
autostart=false
; can be long to quit if rendering to video file
stopwaitsecs=60
# during development auto restart some services when the code changes
[program:superfsmon]
command=superfsmon trap/stage.py stage
directory=%(here)s
autostart=false
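Because the config exposes `[inet_http_server]` on port 8293, supervisord can also be driven over XML-RPC, in addition to `uv run supervisorctl`. A minimal sketch:

```python
from xmlrpc.client import ServerProxy

# Supervisord serves its XML-RPC interface at /RPC2 on the configured HTTP port.
server = ServerProxy("http://localhost:8293/RPC2")
for proc in server.supervisor.getAllProcessInfo():
    print(proc["name"], proc["statename"])
# server.supervisor.startProcess("video")  # equivalent to `supervisorctl start video`
```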

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -84,7 +84,7 @@ class AnimationRenderer:
config = pyglet.gl.Config(sample_buffers=1, samples=4)
# , fullscreen=self.config.render_window
display = pyglet.display.get_display()
display = pyglet.canvas.get_display()
idx = -1 if self.config.render_window else 0
screen = display.get_screens()[idx]
print(display.get_screens())
@ -170,6 +170,9 @@ class AnimationRenderer:
self.init_shapes()
self.init_labels()
@ -197,6 +200,52 @@ class AnimationRenderer:
)
# return process
def init_shapes(self):
'''
Due to an error when running headless, we need to configure options before extending the shapes class
'''
class GradientLine(shapes.Line):
def __init__(self, x, y, x2, y2, width=1, color1=[255,255,255], color2=[255,255,255], batch=None, group=None):
# print('colors!', colors)
# assert len(colors) == 6
r, g, b, *a = color1
self._rgba1 = (r, g, b, a[0] if a else 255)
r, g, b, *a = color2
self._rgba2 = (r, g, b, a[0] if a else 255)
# print('rgba', self._rgba)
super().__init__(x, y, x2, y2, width, color1, batch=None, group=None)
# <pyglet.graphics.vertexdomain.VertexList
# pyglet.graphics.vertexdomain
# print(self._vertex_list)
def _create_vertex_list(self):
'''
copy of super()._create_vertex_list but with additional colors'''
self._vertex_list = self._group.program.vertex_list(
6, self._draw_mode, self._batch, self._group,
position=('f', self._get_vertices()),
colors=('Bn', self._rgba1+ self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 +self._rgba1 ),
translation=('f', (self._x, self._y) * self._num_verts))
def _update_colors(self):
self._vertex_list.colors[:] = self._rgba1+ self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 +self._rgba1
def color1(self, color):
r, g, b, *a = color
self._rgba1 = (r, g, b, a[0] if a else 255)
self._update_colors()
def color2(self, color):
r, g, b, *a = color
self._rgba2 = (r, g, b, a[0] if a else 255)
self._update_colors()
self.gradientLine = GradientLine
def init_labels(self):
base_color = COLOR_PRIMARY
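For reference, a tiny standalone illustration of how `GradientLine` packs its six per-vertex colours: pyglet's `shapes.Line` is drawn as two triangles (six vertices), and the ordering below matches `_create_vertex_list()` and `_update_colors()` above:

```python
rgba1 = (255, 0, 0, 255)  # colour at (x, y)
rgba2 = (0, 0, 255, 255)  # colour at (x2, y2)
colors = rgba1 + rgba2 + rgba2 + rgba1 + rgba2 + rgba1
assert len(colors) == 6 * 4  # six vertices, RGBA each
print(colors)
```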

View file

@ -1,181 +0,0 @@
from __future__ import annotations
import logging
from typing import List
import numpy as np
from trap.base import ProjectedTrack
from trap.lines import AppendableLine, Coordinate, DeltaT, ProceduralChain, RenderableLines, SrgbaColor, StaticLine
logger = logging.getLogger('anomaly')
def calc_anomaly(segments: List[DiffSegment], window: int = 3):
"""Calculate anomaly score based on provided segments
considering a sliding window of the last n items
"""
relevant_segments = segments[-window:]
scores = [s.avg_score() for s in relevant_segments]
s = list(filter(lambda x: x is not None,scores))
return np.average(s)
class DiffSegment():
"""
A segment of a prediction track, that can be diffed
with a track. The track is continuously updated.
If a new prediction comes in, the diff is marked as
finished. After which it is animated and added to the
Scenario's anomaly score.
"""
DRAW_DECAY_SPEED = 25
POINT_INTERVAL = 4
def __init__(self, prediction: ProjectedTrack):
self.ptrack = prediction
self._last_diff_frame_idx: int = 0
self.finished = False
self.line = StaticLine()
self.points: List[Coordinate] = []
self._drawn_points = []
self._target_track = prediction
self.score = 0
def finish(self):
self.finished = True
def nr_of_passed_points(self) -> int:
if not self._last_diff_frame_idx:
return 0
return self._last_diff_frame_idx - self.ptrack.frame_index
# if isinstance(self.line, AppendableLine):
# return self.line.nr_of_passed_points() * self.POINT_INTERVAL
# else:
# return len(self.points) * self.POINT_INTERVAL
def avg_score(self):
frames_passed = self.nr_of_passed_points()
if not frames_passed:
return None
else:
return self.score/frames_passed
# run on each track update received
def update_track(self, track: ProjectedTrack):
self._target_track = track
if self.finished:
# don't add new points if finished
return
# migrate SceneraioScene function
start_frame_idx = max(self.ptrack.frame_index, self._last_diff_frame_idx)
traj_diff_steps_back = track.frame_index - start_frame_idx # positive value
pred_diff_steps_forward = start_frame_idx - self.ptrack.frame_index # positive value
if traj_diff_steps_back < 0 or len(track.history) < traj_diff_steps_back:
logger.warning("Track history doesn't reach prediction start. Should not be possible. Skip")
# elif len(ptrack.predictions[0]) < pred_diff_steps_back:
# logger.warning("Prediction does not reach prediction start. Should not be possible. Skip")
else:
trajectory = track.projected_history
# from start to as far as it gets
trajectory_range = trajectory[-1*traj_diff_steps_back:]
prediction_range = self.ptrack.predictions[0][pred_diff_steps_forward:] # in world coordinate space
line = []
for i, (p1, p2) in enumerate(zip(trajectory_range, prediction_range)):
diff = (p1[0]-p2[0], p1[1]-p2[1])
self.score += np.linalg.norm(diff)
offset_from_start = (pred_diff_steps_forward + i)
if offset_from_start % self.POINT_INTERVAL == 0:
self.line.extend([p1, p2])
self.points.extend([p1, p2])
self._last_diff_frame_idx = track.frame_index
# # run each render tick
# def update_drawn_positions(self, dt: DeltaT):
# if isinstance(self.line, AppendableLine):
# if self.finished and self.line.ready:
# # convert when fully drawn
# # print(self, "CONVERT LINE")
# self.line = ProceduralChain.from_appendable_line(self.line)
# if isinstance(self.line, ProceduralChain):
# self.line.target = self._target_track.projected_history[-1]
# # if not self.finished or not self.line.ready:
# self.line.update_drawn_positions(dt)
def as_renderable(self) -> RenderableLines:
color = SrgbaColor(0,0,1,1)
# if not self.finished or not self.line.ready:
return self.line.as_renderable(color)
# return self.line.as_renderable(color)
def calculate_loitering_scores(track: ProjectedTrack, min_duration_to_linger, linger_factor, velocity_threshold, window = None):
"""
Calculates a loitering score (0-1) for each track.
Args:
tracks: A list of tracks, where each track is a list of (frame_id, x, y, width, height).
min_duration_to_linger: Minimum number of frames to start considering a segment as lingering.
linger_factor: Divide number of lingering frames by 'linger_factor' to get a score 0-1
velocity_threshold: Maximum velocity (meters/frame) to consider as lingering.
Returns:
A generator providing loitering scores for each frame
"""
total_frames = len(track.projected_history)
if total_frames < 2:
return 0.0 # Not enough data
offset = window * -1 if window is not None else 0
x_coords = [t[0] for t in track.projected_history[offset:]]
y_coords = [t[1] for t in track.projected_history[offset:]]
# Calculate velocities
velocities = np.sqrt(np.diff(x_coords)**2 + np.diff(y_coords)**2)
# Calculate distances
# distances = np.diff(x_coords)
# distances_y = np.diff(y_coords)
# distances_total = np.sqrt(distances**2 + distances_y**2)
linger_duration = 0
linger_frames = 0
for i in range(len(velocities)):
if velocities[i] < velocity_threshold:
linger_duration += 1
if linger_duration >= min_duration_to_linger:
linger_frames +=1
else:
# decay if moving faster
linger_duration = max(linger_duration - 1.5, 0)
linger_frames = max(linger_frames - 1.5, 0)
# Calculate loitering score
if total_frames > 0:
loitering_score = min(1, max(0, linger_frames / linger_factor))
else:
loitering_score = 0.0
yield loitering_score
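A toy illustration of `calc_anomaly`'s sliding window, using stub segments instead of the real `DiffSegment`: only the last `window` segments contribute, and segments that have not accumulated a score yet (`avg_score()` returns `None`) are filtered out before averaging:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class StubSegment:
    score: Optional[float]
    def avg_score(self) -> Optional[float]:
        return self.score

def calc_anomaly(segments, window=3):
    relevant = segments[-window:]
    scores = [s.avg_score() for s in relevant]
    return np.average([s for s in scores if s is not None])

segments = [StubSegment(0.2), StubSegment(0.8), StubSegment(None), StubSegment(0.5)]
print(calc_anomaly(segments))  # 0.65: averages 0.8 and 0.5 from the last three segments
```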

View file

@ -1,794 +0,0 @@
from __future__ import annotations
from abc import ABC, abstractmethod
import argparse
from collections import defaultdict
from copy import deepcopy
from enum import IntFlag
from itertools import cycle
import json
import logging
from pathlib import Path
import time
import types
from typing import Iterable, Optional, Tuple, Union, List
import cv2
from dataclasses import dataclass, field
import dataclasses
from nptyping import Float64, NDArray, Shape
import numpy as np
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
from bytetracker.byte_tracker import STrack as ByteTrackTrack
from bytetracker.basetrack import TrackState as ByteTrackTrackState
import pandas as pd
from shapely import Point
from trap.utils import get_bins, inv_lerp, lerp
from trajectron.environment import Environment, Node, Scene
from urllib.parse import urlparse
from cv2.typing import MatLike
logger = logging.getLogger('trap.base')
class UrlOrPath():
"""
Some video sources are on a path (files), others a url (some cameras).
Provide some utilities to easily deal with either.
"""
def __init__(self, string):
self.url = urlparse(str(string))
def __str__(self) -> str:
return self.url.geturl()
def is_url(self) -> bool:
return len(self.url.netloc) > 0
def path(self) -> Path:
if self.is_url():
return Path(self.url.path)
return Path(self.url.geturl()) # can include scheme, such as C:/
class Space(IntFlag):
Image = 1 # As detected in the image
Undistorted = 2 # After applying lens undistortion
World = 4 # After lens undistort and homography
Render = 8 # View space of renderer
@dataclass
class Position:
x: float
y: float
conf: float
state: DetectionState
frame_nr: int
det_class: str
class DetectionState(IntFlag):
Tentative = 1 # state before n_init (see DeepsortTrack)
Confirmed = 2 # after tentative
Lost = 4 # lost when DeepsortTrack.time_since_update > 0 but not Deleted
Interpolated = 8 # A position estimated through interpolation of adjacent detections
# Interpolated = 8 # A position estimated through interpolation of adjacent detections
@classmethod
def from_deepsort_track(cls, track: DeepsortTrack):
if track.state == DeepsortTrackState.Tentative:
return cls.Tentative
if track.state == DeepsortTrackState.Confirmed:
if track.time_since_update > 0:
return cls.Lost
return cls.Confirmed
raise RuntimeError("Should not run into Deleted entries here")
@classmethod
def from_bytetrack_track(cls, track: ByteTrackTrack):
if track.state == ByteTrackTrackState.New:
return cls.Tentative
if track.state == ByteTrackTrackState.Removed:
return cls.Lost
# if track.time_since_update > 0:
if track.state == ByteTrackTrackState.Tracked:
return cls.Confirmed
if track.state == ByteTrackTrackState.Lost:
return cls.Tentative
raise RuntimeError("Should not run into Deleted entries here")
def H_from_path(path: Path):
if path.suffix == '.json':
with path.open('r') as fp:
H = np.array(json.load(fp))
else:
H = np.loadtxt(path, delimiter=',')
return H
PointList = List[Tuple[float, float]] | np.ndarray | cv2.typing.MatLike
def scale_homography(H: cv2.Mat, scale: float):
"""Transform the given matrix so that it immediately converts
the points to img space"""
new_H = H.copy()
new_H[:2] = H[:2] * scale
return new_H
class DistortedCamera(ABC):
@abstractmethod
def undistort_img(self, img: MatLike):
return cv2.remap(img, self.map1, self.map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
def project_img(self, undistorted_img: MatLike, scale: float = 1.0):
w, h = undistorted_img.shape[1], undistorted_img.shape[0]
if scale != 1:
H = scale_homography(self.H, scale)
else:
H = self.H
return cv2.warpPerspective(undistorted_img, H,(w, h))
def img_to_world(self, img: MatLike, scale = 1.):
img = self.undistort_img(img)
return self.project_img(img, scale)
@abstractmethod
def undistort_points(self, distorted_points: PointList):
pass
def project_point(self, point):
return self.project_points([point])[0]
def project_points(self, points: PointList, scale: float = 1.0):
if scale != 1:
H = scale_homography(self.H, scale)
else:
H = self.H
coords = cv2.perspectiveTransform(np.array([points]),H)
# if coords.shape[1:] == (1,2):
coords = np.reshape(coords, (len(points), 2))
return coords
@classmethod
def from_calibfile(cls, calibration_path, H, fps):
with calibration_path.open('r') as fp:
data = json.load(fp)
camera = cls.from_calibdata(data, H, fps)
return camera
@classmethod
def from_paths(cls, calibration_path: Path, h_path: Path, fps: float):
H = H_from_path(h_path)
with calibration_path.open('r') as fp:
calibdata = json.load(fp)
if 'type' in calibdata and calibdata['type'] == 'fisheye':
camera = FisheyeCamera.from_calibdata(calibdata, H, fps)
elif 'type' in calibdata and calibdata['type'] == 'undistorted':
camera = UndistortedCamera(calibdata['fps'])
else:
camera = Camera.from_calibdata(calibdata, H, fps)
return camera
# return cls.from_calibfile(calibration_path, H, fps)
def points_img_to_world(self, points: PointList, scale = 1.):
# undistort & project
coords = self.undistort_points(points)
coords = self.project_points(coords, scale)
return coords
class FisheyeCamera(DistortedCamera):
def __init__(self, dim1, dim2, dim3, K, D, new_K, scaled_K, balance, H, fps):
# dimensions as per: https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-part-2-13990f1b157f
self.dim1 = dim1 # original image
self.dim2 = dim2 # dimension of the box you want to keep after un-distorting the image. Influenced by balance
self.dim3 = dim3 # Dimension of the final box where OpenCV will put the undistorted image.
self.K = K
self.D = D
self.new_K = new_K
self.scaled_K = scaled_K
self.balance = balance
self.H = H # Homography
self._R = np.eye(3)
self.fps = fps
self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(self.scaled_K, self.D, self._R, self.new_K, self.dim3, cv2.CV_16SC2)
# self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(self.scaled_K, self.D, self._R, self.new_K, self.dim3, cv2.CV_32FC1)
def undistort_img(self, img: MatLike):
# map1, map2 = adjust_remap_maps(self.map1, self.map2, 2, (0,0))
# this only works on the undistort, but screws up when doing subsequent homography,
# there needs to be a way to combine both this remap and warpPerspective into a
# single remap call...
# scale = 0.3
# cx = self.dim3[0] / 2
# cy = self.dim3[1] / 2
# map1 = (self.map1 - cx) / scale + cx
# map2 = (self.map2 - cy) / scale + cy
# map1 += 900 #translate x (>0 left, <0 right)
# map2 += 1500 #translate y (>0 up, <0 down)
return cv2.remap(img, self.map1, self.map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
def undistort_points(self, distorted_points: PointList):
points = cv2.fisheye.undistortPoints (np.array([distorted_points]).astype(np.float32), K=self.scaled_K, D=self.D, R=self._R, P=self.new_K)
return points[0]
@property
def projected_w(self):
return self.dim3[0]
@property
def projected_h(self):
return self.dim3[1]
@classmethod
def from_calibdata(cls, data, H, fps):
return cls(
data['dim1'],
data['dim2'],
data['dim3'],
np.array(data['K']),
np.array(data['D']),
np.array(data['new_K']),
np.array(data['scaled_K']),
data['balance'],
H, fps)
class UndistortedCamera(DistortedCamera):
def __init__(self, fps = 10):
self.fps = fps
self.H = np.eye(3,3)
def undistort_img(self, img: MatLike):
return deepcopy(img)
def undistort_points(self, distorted_points: PointList):
return deepcopy(distorted_points)
class Camera(DistortedCamera):
def __init__(self, mtx: cv2.Mat, dist: cv2.Mat, w: float, h: float, H: cv2.Mat, fps: float):
self.mtx = mtx
self.dist = dist
self.w = w
self.h = h
self.H = H
self.fps = fps
self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (self.w,self.h), 1, (self.w,self.h))
@classmethod
def from_calibdata(cls, data, H, fps):
return cls(
np.array(data['camera_matrix']),
np.array(data['dist_coeff']),
data['dim']['width'],
data['dim']['height'],
H, fps)
@property
def projected_w(self):
return self.w
@property
def projected_h(self):
return self.h
def undistort_img(self, img: MatLike):
return cv2.undistort(img, self.mtx, self.dist, None, self.newcameramtx)
def undistort_points(self, distorted_points: PointList):
points = cv2.undistortPoints(np.array([distorted_points]).astype('float32'), self.mtx, self.dist, None, self.newcameramtx)
# print(points.reshape())
return points.reshape(points.shape[0], 2)
@dataclass
class Detection:
track_id: str # deepsort track id association
l: int # left - image space
t: int # top - image space
w: int # width - image space
h: int # height - image space
conf: float # object detector probability
state: DetectionState
frame_nr: int
det_class: str
def get_foot_coords(self) -> list[float, float]:
return [self.l + 0.5 * self.w, self.t+self.h]
@classmethod
def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf or 0, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)
@classmethod
def from_bytetrack(cls, bstrack: ByteTrackTrack, frame_nr: int):
return cls(bstrack.track_id, *bstrack.tlwh, bstrack.score, DetectionState.from_bytetrack_track(bstrack), frame_nr, bstrack.cls)
def get_scaled(self, scale: float = 1):
if scale == 1:
return self
return Detection(
self.track_id,
self.l*scale,
self.t*scale,
self.w*scale,
self.h*scale,
self.conf,
self.state,
self.frame_nr,
self.det_class)
def to_ltwh(self):
return (int(self.l), int(self.t), int(self.w), int(self.h))
def to_ltrb(self):
return (int(self.l), int(self.t), int(self.l+self.w), int(self.t+self.h))
# Proxy'd Track, which caches projected history
class ProjectedTrack(object):
def __init__(self, track: Track, camera: Camera):
self._track = track
self.camera = camera # keep to wrap other calls
self.projected_history = track.get_projected_history(camera=camera)
# TODO wrap functions of Track()
def __getattr__(self, attr):
return getattr(self._track, attr)
@dataclass
class Track:
"""A bit of an haphazardous wrapper around the 'real' tracker to provide
a history, with which the predictor can work, as we then can deduce velocity
and acceleration.
"""
track_id: str = None
history: List[Detection] = field(default_factory=list)
predictor_history: Optional[list] = None # in image space
predictions: Optional[list] = None
fps: int = 12 # TODO)) convert this to camera? That way, incorporates H and dist, alternatively, each track is as a whole attached to a space
source: Optional[int] = None # to keep track of processed tracks
lost: bool = False
created_at: Optional[float] = None
frame_index: int = 0
updated_at: Optional[float] = None
def __post_init__(self):
if not self.created_at:
self.created_at = time.time()
if not self.updated_at:
self.updated_at = time.time()
def track_age(self) -> float:
return time.time() - self.created_at
def track_update_dt(self) -> float:
return time.time() - self.updated_at
def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[DistortedCamera]= None) -> NDArray[Shape["*, 2"], Float64]:
foot_coordinates = [d.get_foot_coords() for d in self.history]
# TODO)) Undistort points before perspective transform
if len(foot_coordinates):
if camera:
coords = camera.points_img_to_world(foot_coordinates)
return coords
# coords = cv2.undistortPoints(np.array([foot_coordinates]).astype('float32'), camera.mtx, camera.dist, None, camera.newcameramtx)
# coords = cv2.perspectiveTransform(np.array(coords),camera.H)
# return coords.reshape((coords.shape[0],2))
else:
coords = cv2.perspectiveTransform(np.array([foot_coordinates]),H)
return coords[0]
return np.empty(shape=(0,2)) #np.array([], shape)
def get_projected_history_as_dict(self, H, camera: Optional[DistortedCamera]= None) -> dict:
coords = self.get_projected_history(H, camera)
return [{"x":c[0], "y":c[1]} for c in coords]
def get_with_interpolated_history(self) -> Track:
# new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
# new_track = Track(track.track_id, new_history, track.predictor_history, track.predictions)
new_history = []
for j in range(len(self.history)):
a = self.history[j]
new_history.append(Detection(a.track_id, a.l, a.t, a.w, a.h, a.conf, a.state, a.frame_nr, a.det_class))
if j+1 >= len(self.history):
break
b = self.history[j+1]
gap = b.frame_nr - a.frame_nr
if gap < 1:
logger.error(f"WARNING, gap between frames {a.frame_nr} -> {b.frame_nr} is negative?")
if gap > 1:
for g in range(1, gap):
l = lerp(a.l, b.l, g/gap)
t = lerp(a.t, b.t, g/gap)
w = lerp(a.w, b.w, g/gap)
h = lerp(a.h, b.h, g/gap)
conf = 0
state = DetectionState.Lost
frame_nr = a.frame_nr + g
new_history.append(Detection(a.track_id, l, t, w, h, conf, state, frame_nr, a.det_class))
return self.get_with_new_history(new_history)
def get_with_new_history(self, new_history: List[Detection]):
return Track(
self.track_id,
new_history,
self.predictor_history,
self.predictions,
self.fps,
self.source,
self.lost,
self.created_at,
self.frame_index,
self.updated_at)
def is_complete(self):
diffs = [(b.frame_nr - a.frame_nr) for a,b in zip(self.history[:-1], self.history[1:])]
return any([d != 1 for d in diffs])
def get_sampled(self, step_size = 1, offset=0):
"""Get copy of track, with every n-th frame"""
if not self.is_complete():
t = self.get_with_interpolated_history()
else:
t = self
return Track(
t.track_id,
t.history[offset::step_size],
t.predictor_history,
t.predictions,
t.fps/step_size,
self.source,
self.lost,
self.created_at,
self.frame_index,
self.updated_at)
def get_simplified_history(self, distance: float, camera: Camera) -> list[tuple[float, float]]:
# TODO)) Simplify to get a point every n-th meter
# useful for both predicting and rendering with the laser
# raise RuntimeError("Not Implemented Yet")
if len(self.history) < 1:
return []
path = self.get_projected_history(H=None, camera=camera)
new_path: List[dict] = [path[0]]
lengths = np.sqrt(np.sum(np.diff(path, axis=0)**2, axis=1))
cum_lengths = np.cumsum(lengths)
pos = distance
for a, b, l_a, l_b in zip(path[:-1], path[1:], cum_lengths[:-1], cum_lengths[1:]):
# check if segment has our next point (pos)
# because running sequentially, this is if point b
# is lower than our target position
if l_b <= pos:
continue
relative_t = inv_lerp(l_a, l_b, pos)
x = lerp(a[0], b[0], relative_t)
y = lerp(a[1], b[1], relative_t)
new_path.append([x,y])
pos += distance
return new_path
def get_simplified_history_with_absolute_distance(self, distance: float, camera: Camera) -> list[tuple[float, float]]:
# Similar to get_simplified_history, but with absolute world-space distance
# not the distance of the track length
if len(self.history) < 1:
return []
path = self.get_projected_history(H=None, camera=camera)
new_path: List[dict] = [path[0]]
distance_sq = distance**2
for a, b in zip(path[:-1], path[1:]):
# check if segment has our next point (pos)
# because running sequentially, this is if point b
# is lower than our target position
b_distance_sq = ((b[0]-new_path[0])**2 + (b[1]-new_path[1])**2)
if b_distance_sq <= distance_sq:
continue
a_distance_sq = ((a[0]-new_path[0])**2 + (a[1]-new_path[1])**2)
relative_t = inv_lerp(a_distance_sq, b_distance_sq, distance_sq)
x = lerp(a[0], b[0], relative_t)
y = lerp(a[1], b[1], relative_t)
new_path.append([x,y])
return new_path
def get_binned(self, bin_size, camera: Camera, bin_start=True):
"""
For an experiment: what if we predict using only concrete positions, by mapping
dx,dy to a grid. Thus prediction can be for 8 moves, or rather headings
see ~/notes/attachments example svg
"""
history = self.get_projected_history_as_dict(H=None, camera=camera)
def round_to_grid_precision(x):
factor = 1/bin_size
return round(x * factor) / factor
new_history: List[dict] = []
for i, (det0, det1) in enumerate(zip(history[:-1], history[1:])):
if i == 0:
new_history.append({
'x': round_to_grid_precision(det0['x']),
'y': round_to_grid_precision(det0['y'])
} if bin_start else det0)
continue
if abs(det1['x'] - new_history[-1]['x']) < bin_size and abs(det1['y'] - new_history[-1]['y']) < bin_size:
continue
# det1 falls outside of the box [-bin_size:+bin_size] around last detection
# 1. Interpolate exact point between det0 and det1 that this happens
if abs(det1['x'] - new_history[-1]['x']) >= bin_size:
if det1['x'] - new_history[-1]['x'] >= bin_size:
# det1 left of last
x = new_history[-1]['x'] + bin_size
f = inv_lerp(det0['x'], det1['x'], x)
elif new_history[-1]['x'] - det1['x'] >= bin_size:
# det1 left of last
x = new_history[-1]['x'] - bin_size
f = inv_lerp(det0['x'], det1['x'], x)
y = lerp(det0['y'], det1['y'], f)
if abs(det1['y'] - new_history[-1]['y']) >= bin_size:
if det1['y'] - new_history[-1]['y'] >= bin_size:
# det1 left of last
y = new_history[-1]['y'] + bin_size
f = inv_lerp(det0['y'], det1['y'], y)
elif new_history[-1]['y'] - det1['y'] >= bin_size:
# det1 left of last
y = new_history[-1]['y'] - bin_size
f = inv_lerp(det0['y'], det1['y'], y)
x = lerp(det0['x'], det1['x'], f)
# 2. Find closest point on rectangle (rectangle's four corners, or 4 midpoints)
points = get_bins(bin_size)
points = [[new_history[-1]['x']+p[0], new_history[-1]['y'] + p[1]] for p in points]
distances = [np.linalg.norm([p[0] - x, p[1]-y]) for p in points]
closest = np.argmin(distances)
point = points[closest]
new_history.append({'x': point[0], 'y':point[1]})
# todo Offsets to points:[ history for in points]
return new_history
def to_dataframe(self, camera: Camera) -> pd.DataFrame:
positions = self.get_projected_history(None, camera)
velocity = np.gradient(positions, 1/self.fps, axis=0)
acceleration = np.gradient(velocity, 1/self.fps, axis=0)
# # we can calculate heading based on the velocity components
# heading = (np.arctan2(velocity[:,1], velocity[:,0]) * 180 / np.pi) % 360
# # and derive it to get the rate of change of the heading
# d_heading = np.gradient(heading, 1/self.fps, axis=0)
data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
# data_columns = data_columns.append(pd.MultiIndex.from_tuples([('heading', '°'), ('heading', 'd°')]))
# vx = derivative_of(x, scene.dt)
# vy = derivative_of(y, scene.dt)
# ax = derivative_of(vx, scene.dt)
# ay = derivative_of(vy, scene.dt)
data_dict = {
('position', 'x'): positions[:,0],
('position', 'y'): positions[:,1],
('velocity', 'x'): velocity[:,0],
('velocity', 'y'): velocity[:,1],
('acceleration', 'x'): acceleration[:,0],
('acceleration', 'y'): acceleration[:,1],
# ('heading', '°'): heading,
# ('heading', 'd°'): d_heading,
}
return pd.DataFrame(data_dict, columns=data_columns)
def to_flat_dataframe(self, camera: Camera) -> pd.DataFrame:
positions = self.get_projected_history(None, camera)
data = pd.DataFrame(positions, columns=['x', 'y'])
data['dx'] = data['x'].diff()
data['dy'] = data['y'].diff()
return data.bfill()
def to_trajectron_node(self, camera: Camera, env: Environment) -> Node:
node_data = self.to_dataframe(camera)
new_first_idx = self.history[0].frame_nr
return Node(node_type=env.NodeType.PEDESTRIAN, node_id=self.track_id, data=node_data, first_timestep=new_first_idx)
@dataclass
class Frame:
index: int
img: np.array
time: float= field(default_factory=lambda: time.time())
tracks: Optional[dict[str, Track]] = None
H: Optional[np.array] = None
camera: Optional[Camera] = None
maps: Optional[List[cv2.Mat]] = None
log: dict = field(default_factory=lambda: {}) # settings used during processing. All intermediate nodes can store their config here
def aslist(self) -> List[dict]:
return { t.track_id:
{
'id': t.track_id,
'history': t.get_projected_history(self.H).tolist(),
'det_conf': t.history[-1].conf,
# 'det_conf': trajectory_data[node.id]['det_conf'],
# 'bbox': trajectory_data[node.id]['bbox'],
# 'history': history.tolist(),
'predictions': t.predictions
} for t in self.tracks.values()
}
def without_img(self):
return Frame(self.index, None, self.time, self.tracks, self.H, self.camera, self.maps)
class DataclassJSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, np.ndarray):
return o.tolist()
# if isinstance(o, np.float32):
# return "float32!{o}"
if dataclasses.is_dataclass(o):
if isinstance(o, Frame):
tracks = {}
for track_id, track in o.tracks.items():
track_obj = dataclasses.asdict(track)
track_obj['history'] = track.get_projected_history(None, o.camera)
tracks[track_id] = track_obj
d = {
'index': o.index,
'time': o.time,
'tracks': tracks,
'camera': dataclasses.asdict(o.camera),
}
else:
d = dataclasses.asdict(o)
# if isinstance(o, Frame):
# # Don't send images over JSON
# del d['img']
return d
return super().default(o)
def video_src_from_config(config) -> Iterable[UrlOrPath]:
"""deprecated, now in video_source"""
if config.video_loop:
video_srcs: Iterable[UrlOrPath] = cycle(config.video_src)
else:
video_srcs: Iterable[UrlOrPath] = config.video_src
return video_srcs
@dataclass
class Trajectory:
# TODO)) Replace history and predictions in Track with Trajectory
space: Space
fps: int = 12
points: List[Detection] = field(default_factory=list)
def __iter__(self):
for d in self.points:
yield d
class HomographyAction(argparse.Action):
def __init__(self, option_strings, dest, nargs=None, **kwargs):
if nargs is not None:
raise ValueError("nargs not allowed")
super().__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values: Path, option_string=None):
if values.suffix == '.json':
with values.open('r') as fp:
H = np.array(json.load(fp))
else:
H = np.loadtxt(values, delimiter=',')
setattr(namespace, self.dest, values)
setattr(namespace, 'H', H)
class CameraAction(argparse.Action):
def __init__(self, option_strings, dest, nargs=None, **kwargs):
if nargs is not None:
raise ValueError("nargs not allowed")
super().__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values, option_string=None):
if values is None:
setattr(namespace, self.dest, None)
else:
values = Path(values)
with values.open('r') as fp:
data = json.load(fp)
if 'type' in data and data['type'] == 'fisheye':
camera = FisheyeCamera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
elif 'type' in data and data['type'] == 'undistorted':
camera = UndistortedCamera(namespace.camera_fps)
else:
camera = Camera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
# # print(data)
# # print(data['camera_matrix'])
# # camera = {
# # 'camera_matrix': np.array(data['camera_matrix']),
# # 'dist_coeff': np.array(data['dist_coeff']),
# # }
# camera = Camera(np.array(data['camera_matrix']), np.array(data['dist_coeff']), data['dim']['width'], data['dim']['height'], namespace.H, namespace.camera_fps)
setattr(namespace, 'camera', camera)
class LambdaParser(argparse.ArgumentParser):
"""Execute lambda functions
"""
def parse_args(self, args=None, namespace=None):
args = super().parse_args(args, namespace)
for key in vars(args):
f = args.__dict__[key]
if type(f) == types.LambdaType:
print(f'Getting default value for {key}')
args.__dict__[key] = f()
return args
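A standalone sketch of the img→world projection these camera classes perform: undistorted image points are mapped through the homography `H` with `cv2.perspectiveTransform`, the same call `DistortedCamera.project_points` makes. The homography values here are made up, not a real calibration:

```python
import cv2
import numpy as np

H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.01, -1.8],
              [0.0, 0.0, 1.0]])  # example homography, not from a real calibration

points = np.array([[[320.0, 460.0], [500.0, 455.0]]], dtype=np.float32)  # shape (1, N, 2)
world = cv2.perspectiveTransform(points, H).reshape(-1, 2)
print(world)  # foot positions mapped into world coordinates
```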

View file

@ -6,10 +6,23 @@ import json
from trap.tracker import DETECTORS, TRACKER_BYTETRACK, TRACKERS
from trap.frame_emitter import Camera
from trap.base import CameraAction, HomographyAction, LambdaParser
from pyparsing import Optional
from trap.frame_emitter import UrlOrPath
class LambdaParser(argparse.ArgumentParser):
"""Execute lambda functions
"""
def parse_args(self, args=None, namespace=None):
args = super().parse_args(args, namespace)
for key in vars(args):
f = args.__dict__[key]
if type(f) == types.LambdaType:
print(f'Getting default value for {key}')
args.__dict__[key] = f()
return args
parser = LambdaParser()
# parser.parse_args()
@ -40,6 +53,44 @@ frame_emitter_parser = parser.add_argument_group('Frame emitter')
tracker_parser = parser.add_argument_group('Tracker')
render_parser = parser.add_argument_group('Renderer')
class HomographyAction(argparse.Action):
def __init__(self, option_strings, dest, nargs=None, **kwargs):
if nargs is not None:
raise ValueError("nargs not allowed")
super().__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values: Path, option_string=None):
if values.suffix == '.json':
with values.open('r') as fp:
H = np.array(json.load(fp))
else:
H = np.loadtxt(values, delimiter=',')
setattr(namespace, self.dest, values)
setattr(namespace, 'H', H)
class CameraAction(argparse.Action):
def __init__(self, option_strings, dest, nargs=None, **kwargs):
if nargs is not None:
raise ValueError("nargs not allowed")
super().__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values, option_string=None):
if values is None:
setattr(namespace, self.dest, None)
else:
camera = Camera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
# values = Path(values)
# with values.open('r') as fp:
# data = json.load(fp)
# # print(data)
# # print(data['camera_matrix'])
# # camera = {
# # 'camera_matrix': np.array(data['camera_matrix']),
# # 'dist_coeff': np.array(data['dist_coeff']),
# # }
# camera = Camera(np.array(data['camera_matrix']), np.array(data['dist_coeff']), data['dim']['width'], data['dim']['height'], namespace.H, namespace.camera_fps)
setattr(namespace, 'camera', camera)
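Illustrative only (hypothetical paths, assuming the `HomographyAction` defined above is in scope): the custom action stores both the raw path and the parsed matrix on the namespace:

```python
import argparse
from pathlib import Path

p = argparse.ArgumentParser()
p.add_argument("--homography", type=Path, action=HomographyAction)
args = p.parse_args(["--homography", "../DATASETS/NAME/homography.json"])
print(args.homography)  # the Path that was passed in
print(args.H)           # the 3x3 numpy array loaded by HomographyAction
```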
inference_parser.add_argument("--step-size",
# TODO)) Make dataset/model metadata
@ -186,14 +237,6 @@ connection_parser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
connection_parser.add_argument('--zmq-face-addr',
help='Manually specify communication addr for the face detector messages',
type=str,
default="ipc:///tmp/feeds_faces")
connection_parser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
connection_parser.add_argument('--zmq-camera-stream-addr',
help='Manually specify communication addr for the camera stream messages',
@ -236,10 +279,6 @@ frame_emitter_parser.add_argument("--video-offset",
help="Start playback from given frame. Note that when src is an array, this applies to all videos individually.",
default=None,
type=int)
frame_emitter_parser.add_argument("--video-end",
help="End (or loop) playback at given frame.",
default=None,
type=int)
#TODO: camera as source
frame_emitter_parser.add_argument("--video-loop",
@ -248,6 +287,7 @@ frame_emitter_parser.add_argument("--video-loop",
#TODO: camera as source
# Tracker
tracker_parser.add_argument("--camera-fps",
help="Camera FPS",
@ -263,8 +303,6 @@ tracker_parser.add_argument("--calibration",
# type=Path,
default=None,
action=CameraAction)
# Tracker
tracker_parser.add_argument("--save-for-training",
help="Specify the path in which to save",
type=Path,
@ -306,9 +344,6 @@ render_parser.add_argument("--render-window",
render_parser.add_argument("--render-animation",
help="Render animation (pyglet)",
action='store_true')
render_parser.add_argument("--render-laser",
help="Render laser (Helios DAC)",
action='store_true')
render_parser.add_argument("--render-debug-shapes",
help="Lines and points for debugging/mapping",
action='store_true')
@ -318,9 +353,6 @@ render_parser.add_argument("--render-hide-stats",
render_parser.add_argument("--full-screen",
help="Set Window full screen",
action='store_true')
render_parser.add_argument("--render-clusters",
help="renders arrowd clusters instead of individual predictions",
action='store_true')
render_parser.add_argument("--render-url",
help="""Stream renderer on given URL. Two easy approaches:

View file

@ -1,124 +0,0 @@
import collections
from gc import is_finalized
import logging
import statistics
import threading
import time
from typing import MutableSequence
import zmq
logger = logging.getLogger('counter')
class CounterSender:
def __init__(self, address = "ipc:///tmp/trap-counters2"):
# self.name = name
self.context = zmq.Context()
self.sock = self.context.socket(zmq.PUB)
self.sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
# self.sock.sndhwm = 1
self.sock.connect(address)
def set(self, name:str, value:float):
try:
# we cannot use send_multipart in combination with conflate
self.sock.send_pyobj([name, value], flags=zmq.NOBLOCK)
except zmq.ZMQError as e:
logger.warning(f"No space in que to count {name} as {value}")
class CounterFpsSender():
def __init__(self, name:str , sender: CounterSender):
self.name = name
self.sender = sender
self.tocs: MutableSequence[(float, int)] = collections.deque(maxlen=5)
self.iterations: int = 0
# threading.Event.wait()
# TODO: turn the thread into a daemonic loop so it stops automatically
self.thread = threading.Thread(target=self.interval, daemon=True)
self.is_finished = threading.Event()
def tick(self):
"""
Returns the time delta (dt) since the previous tick.
"""
self.iterations += 1
self.snapshot()
if len(self.tocs) > 1:
return float(self.tocs[-1][0] - self.tocs[-2][0])
else:
return 0.
def snapshot(self):
self.tocs.append((time.perf_counter(), self.iterations))
self.sender.set(self.name, self.fps)
@property
def fps(self):
if len(self.tocs) < 2:
return 0
dt = self.tocs[-1][0] - self.tocs[0][0]
di = self.tocs[-1][1] - self.tocs[0][1]
return di/dt
def interval(self):
while True:
self.is_finished.wait(.5)
if self.is_finished.is_set():
break
self.snapshot()
# timer = threading.Timer(.5, self.interval)
# timer.start()
class CounterLog():
def __init__(self, history = 20):
self.history: MutableSequence[(float, float)] = collections.deque(maxlen=history)
def add(self, value):
self.history.append((time.perf_counter(), value))
def value(self):
return self.history[-1][1]
def has_value(self):
if not len(self.history):
return False
if (time.perf_counter() - self.history[-1][0]) > 4:
# no update in 4s: very slow. Dead thread?
return False
return True
def avg(self):
if not len(self.history):
return 0.
return statistics.fmean([h[1] for h in self.history])
class CounterListerner():
def __init__(self, address = "ipc:///tmp/trap-counters2"):
self.context = zmq.Context()
self.sock = self.context.socket(zmq.SUB)
self.sock.bind(address)
self.sock.subscribe( b'')
self.values: collections.defaultdict[str, CounterLog] = collections.defaultdict(lambda: CounterLog())
def snapshot(self):
messages = []
while self.sock.poll(0) == zmq.POLLIN:
msg = self.sock.recv_pyobj()
# print(msg)
name, value = msg
# name, value = name.decode('utf8'),float(value.decode('utf8'))
self.values[name].add(float(value))
def get_latest(self):
self.snapshot()
return self.values
def to_string(self):
strs = [(f"{k}: {v.value():.2f} ({v.avg():.2f})" if v.has_value() else f"{k}: --") for (k,v) in self.values.items()]
return " ".join(strs)

View file

@ -1,50 +1,71 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
import datetime
import json
import logging
from pathlib import Path
import time
from argparse import ArgumentParser, Namespace
from multiprocessing.synchronize import Event as BaseEvent
from typing import Dict, List, Optional
from charset_normalizer import detect
import cv2
import ffmpeg
from argparse import Namespace
import datetime
import logging
from multiprocessing import Event
from multiprocessing.synchronize import Event as BaseEvent
import cv2
import numpy as np
import json
import pyglet
import pyglet.event
import zmq
from pyglet import shapes
import tempfile
from pathlib import Path
import shutil
import math
from typing import Dict, Iterable, Optional
from trap.base import Detection, UndistortedCamera
from trap.counter import CounterListerner
from trap.frame_emitter import Frame, Track
from trap.lines import load_lines_from_svg
from trap.node import Node
from pyglet import shapes
from PIL import Image
from trap.frame_emitter import DetectionState, Frame, Track, Camera
from trap.preview_renderer import FrameWriter
from trap.tools import draw_track_predictions, draw_track_projected, to_point
from trap.utils import convert_world_points_to_img_points
from trap.tools import draw_track, draw_track_predictions, draw_track_projected, draw_trackjectron_history, to_point
from trap.utils import convert_world_points_to_img_points, convert_world_space_to_img_space
logger = logging.getLogger("trap.simple_renderer")
class CvRenderer(Node):
def setup(self):
self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
self.tracker_sock = self.sub(self.config.zmq_trajectory_addr)
self.detector_sock = self.sub(self.config.zmq_detection_addr)
self.frame_sock = self.sub(self.config.zmq_frame_addr)
class CvRenderer:
def __init__(self, config: Namespace, is_running: BaseEvent):
self.config = config
self.is_running = is_running
# self.H = self.config.H
# self.inv_H = np.linalg.pinv(self.H)
context = zmq.Context()
self.prediction_sock = context.socket(zmq.SUB)
self.prediction_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
# self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
self.prediction_sock.connect(config.zmq_prediction_addr)
self.tracker_sock = context.socket(zmq.SUB)
self.tracker_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.tracker_sock.setsockopt(zmq.SUBSCRIBE, b'')
self.tracker_sock.connect(config.zmq_trajectory_addr)
self.frame_sock = context.socket(zmq.SUB)
self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
self.frame_sock.connect(config.zmq_frame_addr)
self.H = self.config.H
self.inv_H = np.linalg.pinv(self.H)
# TODO: get FPS from frame_emitter
# self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
self.fps = 60
self.frame_size = None # configure on first frame recv
# self.frame_size = (self.config.camera.projected_w,self.config.camera.projected_h)
self.frame_size = (self.config.camera.w,self.config.camera.h)
self.hide_stats = False
self.out_writer = self.start_writer() if self.config.render_file else None
self.streaming_process = self.start_streaming() if self.config.render_url else None
@ -52,15 +73,85 @@ class CvRenderer(Node):
self.frame: Frame|None= None
self.tracker_frame: Frame|None = None
self.prediction_frame: Frame|None = None
self.detections: List[Detection]|None = None
self.tracks: Dict[str, Track] = {}
self.predictions: Dict[str, Track] = {}
self.scale = 100
self.debug_lines = debug_lines = load_lines_from_svg(self.config.debug_map, self.scale, '') if self.config.debug_map else []
# self.init_shapes()
# self.init_labels()
def init_shapes(self):
'''
Due to an error when running headless, we need to configure options before extending the shapes class
'''
class GradientLine(shapes.Line):
def __init__(self, x, y, x2, y2, width=1, color1=[255,255,255], color2=[255,255,255], batch=None, group=None):
# print('colors!', colors)
# assert len(colors) == 6
r, g, b, *a = color1
self._rgba1 = (r, g, b, a[0] if a else 255)
r, g, b, *a = color2
self._rgba2 = (r, g, b, a[0] if a else 255)
# print('rgba', self._rgba)
super().__init__(x, y, x2, y2, width, color1, batch=None, group=None)
# <pyglet.graphics.vertexdomain.VertexList
# pyglet.graphics.vertexdomain
# print(self._vertex_list)
def _create_vertex_list(self):
'''
copy of super()._create_vertex_list but with additional colors'''
self._vertex_list = self._group.program.vertex_list(
6, self._draw_mode, self._batch, self._group,
position=('f', self._get_vertices()),
colors=('Bn', self._rgba1+ self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 +self._rgba1 ),
translation=('f', (self._x, self._y) * self._num_verts))
def _update_colors(self):
self._vertex_list.colors[:] = self._rgba1+ self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 +self._rgba1
def color1(self, color):
r, g, b, *a = color
self._rgba1 = (r, g, b, a[0] if a else 255)
self._update_colors()
def color2(self, color):
r, g, b, *a = color
self._rgba2 = (r, g, b, a[0] if a else 255)
self._update_colors()
self.gradientLine = GradientLine
def init_labels(self):
base_color = (255,)*4
color_predictor = (255,255,0, 255)
color_info = (255,0, 255, 255)
color_tracker = (0,255, 255, 255)
options = []
for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
options.append(f"{option}: {self.config.__dict__[option]}")
self.labels = {
'waiting': pyglet.text.Label("Waiting for prediction"),
'frame_idx': pyglet.text.Label("", x=20, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
'tracker_idx': pyglet.text.Label("", x=90, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
'pred_idx': pyglet.text.Label("", x=110, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
'frame_time': pyglet.text.Label("t", x=140, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
'frame_latency': pyglet.text.Label("", x=235, y=self.window.height - 17, color=color_info, batch=self.batch_overlay),
'tracker_time': pyglet.text.Label("", x=300, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
'pred_time': pyglet.text.Label("", x=360, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
'track_len': pyglet.text.Label("", x=800, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
'options1': pyglet.text.Label(options.pop(-1), x=20, y=30, color=base_color, batch=self.batch_overlay),
'options2': pyglet.text.Label(" | ".join(options), x=20, y=10, color=base_color, batch=self.batch_overlay),
}
def refresh_labels(self, dt: float):
"""Every frame"""
@ -79,6 +170,130 @@ class CvRenderer(Node):
self.labels['pred_time'].text = f"{self.prediction_frame.time - time.time():.3f}s"
# self.labels['track_len'].text = f"{len(self.prediction_frame.tracks)} tracks"
# cv2.putText(img, f"{frame.index:06d}", (20,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# cv2.putText(img, f"{frame.time - first_time:.3f}s", (120,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# if prediction_frame:
# # render Δt and Δ frames
# cv2.putText(img, f"{prediction_frame.index - frame.index}", (90,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
# cv2.putText(img, f"{prediction_frame.time - time.time():.2f}s", (200,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
# cv2.putText(img, f"{len(prediction_frame.tracks)} tracks", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# cv2.putText(img, f"h: {np.average([len(t.history or []) for t in prediction_frame.tracks.values()]):.2f}", (580,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
# cv2.putText(img, f"ph: {np.average([len(t.predictor_history or []) for t in prediction_frame.tracks.values()]):.2f}", (660,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
# cv2.putText(img, f"p: {np.average([len(t.predictions or []) for t in prediction_frame.tracks.values()]):.2f}", (740,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
# options = []
# for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
# options.append(f"{option}: {config.__dict__[option]}")
# cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
def check_frames(self, dt):
new_tracks = False
try:
self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
if not self.first_time:
self.first_time = self.frame.time
img = cv2.GaussianBlur(self.frame.img, (15, 15), 0)
img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
img = pyglet.image.ImageData(self.frame_size[0], self.frame_size[1], 'RGB', img.tobytes())
# don't draw in batch, so that it is the background
self.video_sprite = pyglet.sprite.Sprite(img=img, batch=self.batch_bg)
self.video_sprite.opacity = 100
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
try:
self.prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
new_tracks = True
except zmq.ZMQError as e:
pass
try:
self.tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
new_tracks = True
except zmq.ZMQError as e:
pass
def on_key_press(self, symbol, modifiers):
print('A key was pressed: f toggles fullscreen, h hides the stats')
if symbol == ord('f'):
self.window.set_fullscreen(not self.window.fullscreen)
if symbol == ord('h'):
self.hide_stats = not self.hide_stats
def check_running(self, dt):
if not self.is_running.is_set():
self.window.close()
self.event_loop.exit()
def on_close(self):
self.is_running.clear()
def on_refresh(self, dt: float):
# update shapes
# self.bg =
for track_id, track in self.drawn_tracks.items():
track.update_drawn_positions(dt)
self.refresh_labels(dt)
# self.shape1 = shapes.Circle(700, 150, 100, color=(50, 0, 30), batch=self.batch_anim)
# self.shape3 = shapes.Circle(800, 150, 100, color=(100, 225, 30), batch=self.batch_anim)
pass
def on_draw(self):
self.window.clear()
self.batch_bg.draw()
for track in self.drawn_tracks.values():
for shape in track.shapes:
shape.draw() # for some reason the batches don't work
for track in self.drawn_tracks.values():
for pred_shapes in track.pred_shapes:
for shape in pred_shapes:
shape.draw()
# self.batch_anim.draw()
self.batch_overlay.draw()
# pyglet.graphics.draw(3, pyglet.gl.GL_LINE, ("v2i", (100,200, 600,800)), ('c3B', (255,255,255, 255,255,255)))
if not self.hide_stats:
self.fps_display.draw()
# if streaming, capture buffer and send
try:
if self.streaming_process or self.out_writer:
buf = pyglet.image.get_buffer_manager().get_color_buffer()
img_data = buf.get_image_data()
data = img_data.get_data() # alternative: .get_data("RGBA", image_data.pitch)
img = np.asanyarray(data).reshape((img_data.height, img_data.width, 4))
img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGB)
img = np.flip(img, 0)
# img = cv2.flip(img, 0)
# cv2.imshow('frame', img)
# cv2.waitKey(1)
if self.streaming_process:
self.streaming_process.stdin.write(img.tobytes())
if self.out_writer:
self.out_writer.write(img)
except Exception as e:
logger.exception(e)
def start_writer(self):
if not self.config.output_dir.exists():
raise FileNotFoundError("Path does not exist")
@ -87,16 +302,16 @@ class CvRenderer(Node):
filename = self.config.output_dir / f"render_predictions-{date_str}-{self.config.detector}.mp4"
logger.info(f"Write to {filename}")
return FrameWriter(str(filename), self.fps, None)
return FrameWriter(str(filename), self.fps, self.frame_size)
# fourcc = cv2.VideoWriter_fourcc(*'vp09')
fourcc = cv2.VideoWriter_fourcc(*'vp09')
# return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)
return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)
def start_streaming(self, frame_size=(1920,1080)):
def start_streaming(self):
return (
ffmpeg
.input('pipe:', format='rawvideo',codec="rawvideo", pix_fmt='bgr24', s='{}x{}'.format(*frame_size))
.input('pipe:', format='rawvideo',codec="rawvideo", pix_fmt='bgr24', s='{}x{}'.format(*self.frame_size))
.output(
self.config.render_url,
#codec = "copy", # use same codecs of the original video
@ -116,8 +331,11 @@ class CvRenderer(Node):
)
# return process
def run(self):
self.frame = None
def run(self, timer_counter):
frame = None
prediction_frame = None
tracker_frame = None
@ -126,15 +344,14 @@ class CvRenderer(Node):
cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
# https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
cv2.moveWindow("frame", 0, -1)
if self.config.full_screen:
cv2.setWindowProperty("frame",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.moveWindow("frame", 1920, -1)
cv2.setWindowProperty("frame",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.setMouseCallback('frame',self.click_print_position)
# bgsub = cv2.createBackgroundSubtractorMOG2(120, 50, detectShadows=True)
while self.is_running.is_set():
i+=1
with timer_counter.get_lock():
timer_counter.value+=1
while self.run_loop():
i += 1
# zmq_ev = self.frame_sock.poll(timeout=2000)
# if not zmq_ev:
@ -142,7 +359,7 @@ class CvRenderer(Node):
# continue
try:
self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
@ -151,12 +368,10 @@ class CvRenderer(Node):
# logger.debug(f'new video frame {frame.index}')
if self.frame is None and i < 100:
if frame is None:
# might need to wait a few iterations before first frame comes available
time.sleep(.1)
continue
else:
self.frame = Frame(i, np.zeros((1920,1080,3)), camera=UndistortedCamera(12))
try:
prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
@ -174,43 +389,28 @@ class CvRenderer(Node):
except zmq.ZMQError as e:
logger.debug(f'reuse tracks')
try:
self.detections = self.detector_sock.recv_pyobj(zmq.NOBLOCK)
# print('detections')
except zmq.ZMQError as e:
# print('no detections')
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
if first_time is None:
first_time = self.frame.time
first_time = frame.time
# img = frame.img
# save_file = Path("videos/snap.png")
# if not save_file.exists():
# img = frame.camera.img_to_world(frame.img, 100)
# cv2.imwrite(save_file, img)
img = decorate_frame(frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions)
img = decorate_frame(self.frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions, self.detections, self.config.render_clusters, self.debug_lines, self.scale)
logger.debug(f"write frame {self.frame.time - first_time:.3f}s")
logger.debug(f"write frame {frame.time - first_time:.3f}s")
if self.out_writer:
self.out_writer.write(img)
if self.streaming_process:
self.streaming_process.stdin.write(img.tobytes())
if not self.config.no_window:
if self.config.render_window:
cv2.imshow('frame',cv2.resize(img, (1920, 1080)))
# cv2.imshow('frame',img)
cv2.waitKey(10)
cv2.waitKey(1)
# clear out old tracks & predictions:
for track_id, track in list(self.tracks.items()):
if get_animation_position(track, self.frame) == 1:
if get_animation_position(track, frame) == 1:
self.tracks.pop(track_id)
for prediction_id, track in list(self.predictions.items()):
if get_animation_position(track, self.frame) == 1:
if get_animation_position(track, frame) == 1:
self.predictions.pop(prediction_id)
logger.info('Stopping')
@ -226,71 +426,6 @@ class CvRenderer(Node):
self.streaming_process.wait()
logger.info('stopped')
@classmethod
def arg_parser(cls):
render_parser = ArgumentParser()
render_parser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
render_parser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
render_parser.add_argument('--zmq-detection-addr',
help='Manually specify communication addr for the detection messages',
type=str,
default="ipc:///tmp/feeds_dets")
render_parser.add_argument('--zmq-prediction-addr',
help='Manually specify communication addr for the prediction messages',
type=str,
default="ipc:///tmp/feeds_preds")
render_parser.add_argument("--render-file",
help="Render a video file previewing the prediction, and its delay compared to the current frame",
action='store_true')
render_parser.add_argument("--no-window",
help="Disable a previewing to a window",
action='store_true')
render_parser.add_argument("--full-screen",
help="Set Window full screen",
action='store_true')
render_parser.add_argument("--render-clusters",
help="renders arrowd clusters instead of individual predictions",
action='store_true')
render_parser.add_argument("--render-url",
help="""Stream renderer on given URL. Two easy approaches:
- using zmq wrapper one can specify the LISTENING ip. To listen to any incoming connection: zmq:tcp://0.0.0.0:5556
- alternatively, using e.g. UDP one needs to specify the IP of the client. E.g. udp://100.69.123.91:5556/stream
Note that with ZMQ you can have multiple clients connecting simultaneously. E.g. using `ffplay zmq:tcp://100.109.175.82:5556`
When using udp, connecting can be done using `ffplay udp://100.109.175.82:5556/stream`
""",
type=str,
default=None)
render_parser.add_argument('--debug-map',
help='specify a map (svg-file) from which to load lines which will be overlayed',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
return render_parser
def click_print_position(self, event,x,y,flags,param):
# if event == cv2.EVENT_LBUTTONDBLCLK:
if event == cv2.EVENT_LBUTTONUP:
if not self.frame:
return
scale = 100
print("click position:", x/scale, y/scale)
# self.frame.camera.points_img_to_world([[x, y]], 1)
# cv2.circle(img,(x,y),100,(255,0,0),-1)
mouseX,mouseY = x,y
# colorset = itertools.product([0,255], repeat=3) # but remove white
# colorset = [(0, 0, 0),
# (0, 0, 255),
@ -320,20 +455,16 @@ def get_animation_position(track: Track, current_frame: Frame):
def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], detections: Optional[List[Detection]], as_clusters = True, debug_lines = [], scale: float = 100) -> np.array:
# Deprecated
def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track]) -> np.array:
# TODO: replace opencv with QPainter to support alpha? https://doc.qt.io/qtforpython-5/PySide2/QtGui/QPainter.html#PySide2.QtGui.PySide2.QtGui.QPainter.drawImage
# or https://github.com/pygobject/pycairo?tab=readme-ov-file
# or https://pyglet.readthedocs.io/en/latest/programming_guide/shapes.html
# and use http://code.astraw.com/projects/motmot/pygarrayimage.html or https://gist.github.com/nkymut/1cb40ea6ae4de0cf9ded7332f1ca0d55
# or https://api.arcade.academy/en/stable/index.html (supports gradient color in line -- "Arcade is built on top of Pyglet and OpenGL.")
dst_img = frame.camera.img_to_world(frame.img, scale)
# mask = bg_subtractor.apply(dst_img)
# mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2RGB).astype(float) / 255
# dst_img = dst_img * mask
# undistorted_img = cv2.undistort(frame.img, config.camera.mtx, config.camera.dist, None, config.camera.newcameramtx)
# dst_img = cv2.warpPerspective(undistorted_img,convert_world_space_to_img_space(config.camera.H),(config.camera.w,config.camera.h))
undistorted_img = cv2.undistort(frame.img, config.camera.mtx, config.camera.dist, None, config.camera.newcameramtx)
dst_img = cv2.warpPerspective(undistorted_img,convert_world_space_to_img_space(config.camera.H),(config.camera.w,config.camera.h))
# dst_img2 = cv2.warpPerspective(undistorted_img,convert_world_space_to_img_space(config.camera.H), None)
# cv2.imwrite('/home/ruben/suspicion/DATASETS/hof3/camera2.png', dst_img2)
@ -354,55 +485,12 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
# cv2.imwrite(str(self.config.output_dir / "orig.png"), warpedFrame)
cv2.rectangle(img, (0,0), (img.shape[1],25), (0,0,0), -1)
if detections:
for detection in detections:
points = [
detection.get_foot_coords(),
[detection.l, detection.t],
[detection.l + detection.w, detection.t + detection.h],
]
points = tracker_frame.camera.points_img_to_world(points, scale)
points = [to_point(p) for p in points] # to int
w = points[1][0]-points[2][0]
feet = [int(points[2][0] + .5 * w), points[2][1]]
cv2.rectangle(img, points[1], points[2], (255,255,0), 2)
cv2.circle(img, points[0], 5, (255,255,0), 2)
cv2.putText(img, f"{detection.conf:.02f}", (points[0][0], points[0][1]+20), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
def conversion(points):
return convert_world_points_to_img_points(points, scale)
if not tracker_frame:
cv2.putText(img, f"and track", (650,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
else:
for track_id, track in tracks.items():
inv_H = np.linalg.pinv(tracker_frame.H)
draw_track_projected(img, track, int(track_id), tracker_frame.camera, conversion)
for line in debug_lines:
for rp1, rp2 in zip(line.points, line.points[1:]):
p1 = (
int(rp1.position[0]*scale),
int(rp1.position[1]*scale),
)
p2 = (
int(rp2.position[0]*scale),
int(rp2.position[1]*scale),
)
cv2.line(img, p1, p2, (255,0,0), 2)
# points = [(int(point[0]*scale), int(point[1]*scale)) for point in points]
# for num, points in enumerate(frame.camera.debug_lines):
# cv2.line(img, points[0], points[1], (255,0,0), 2)
# if hasattr(frame.camera, 'debug_points'):
# for num, point in enumerate(frame.camera.debug_points):
# cv2.circle(img, (int(point[0]*scale), int(point[1]*scale)), 5, (255,0,0), 2)
# cv2.putText(img, f"{num}", (int(point[0]*scale)+20, int(point[1]*scale)), cv2.FONT_HERSHEY_PLAIN, 1, (255,0,0), 1)
draw_track_projected(img, track, int(track_id), config.camera, convert_world_points_to_img_points)
if not prediction_frame:
cv2.putText(img, f"Waiting for prediction...", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
@ -410,11 +498,10 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
else:
for track_id, track in predictions.items():
inv_H = np.linalg.pinv(prediction_frame.H)
# For debugging:
# draw_trackjectron_history(img, track, int(track.track_id), conversion)
# anim_position = get_animation_position(track, frame)
anim_position = 1
draw_track_predictions(img, track, int(track.track_id)+1, prediction_frame.camera, conversion, anim_position=anim_position, as_clusters=as_clusters)
# draw_track(img, track, int(track_id))
draw_trackjectron_history(img, track, int(track.track_id), convert_world_points_to_img_points)
anim_position = get_animation_position(track, frame)
draw_track_predictions(img, track, int(track.track_id)+1, config.camera, convert_world_points_to_img_points, anim_position=anim_position)
cv2.putText(img, f"{len(track.predictor_history) if track.predictor_history else 'none'}", to_point(track.history[0].get_foot_coords()), cv2.FONT_HERSHEY_COMPLEX, 1, (255,255,255), 1)
if prediction_frame.maps:
for i, m in enumerate(prediction_frame.maps):
@ -444,8 +531,6 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
cv2.putText(img, f"{frame.time - first_time: >10.2f}s", (150,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, f"{frame.time - time.time():.2f}s", (250,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
options = []
if prediction_frame:
# render Δt and Δ frames
cv2.putText(img, f"{tracker_frame.index - frame.index}", (90,17), cv2.FONT_HERSHEY_PLAIN, 1, tracker_color, 1)
@ -456,14 +541,14 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
cv2.putText(img, f"h: {np.average([len(t.history or []) for t in prediction_frame.tracks.values()]):.2f}", (700,17), cv2.FONT_HERSHEY_PLAIN, 1, tracker_color, 1)
cv2.putText(img, f"ph: {np.average([len(t.predictor_history or []) for t in prediction_frame.tracks.values()]):.2f}", (780,17), cv2.FONT_HERSHEY_PLAIN, 1, predictor_color, 1)
cv2.putText(img, f"p: {np.average([len(t.predictions or []) for t in prediction_frame.tracks.values()]):.2f}", (860,17), cv2.FONT_HERSHEY_PLAIN, 1, predictor_color, 1)
options = []
for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
options.append(f"{option}: {config.__dict__[option]}")
for option, value in prediction_frame.log['predictor'].items():
options.append(f"{option}: {value}")
if len(options):
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
return img
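
The `scale = 100` used throughout `decorate_frame` (and in `click_print_position`, which divides pixel coordinates by the same factor) implies a fixed mapping of 100 rendered pixels per world-space metre. A rough sketch of what `convert_world_points_to_img_points` is assumed to do under that convention; the real helper lives in `trap.utils`:

```python
import numpy as np

def world_to_img_points_sketch(points, scale: float = 100):
    """Assumed behaviour: world coordinates in metres -> integer pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    return np.rint(pts * scale).astype(int)

# world point (3.2 m, 1.5 m) -> pixel (320, 150) at scale 100
print(world_to_img_points_sketch([[3.2, 1.5]]))
```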

View file

@ -1,170 +0,0 @@
from argparse import Namespace
from collections import defaultdict
import csv
from dataclasses import dataclass, field
import json
import logging
from math import nan
from multiprocessing import Event
import multiprocessing
from pathlib import Path
import pickle
import time
from typing import DefaultDict, Dict, Optional, List
import jsonlines
import numpy as np
import torch
import torchvision
import ultralytics
import zmq
import cv2
from facenet_pytorch import InceptionResnetV1, MTCNN
from trap.base import Frame
logger = logging.getLogger('trap.face_detector')
class FaceDetector:
def __init__(self, config: Namespace):
self.config = config
self.context = zmq.Context()
self.frame_sock = self.context.socket(zmq.SUB)
self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
self.frame_sock.connect(self.config.zmq_frame_addr)
self.face_socket = self.context.socket(zmq.PUB)
self.face_socket.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
self.face_socket.bind(self.config.zmq_face_addr)
# # TODO: config device
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def track(self, is_running: Event, timer_counter: int = 0):
"""
Live tracking of frames coming in over zmq
"""
self.is_running = is_running
prev_frame_i = -1
# For a model pretrained on CASIA-Webface
# model = InceptionResnetV1(pretrained='casia-webface').eval().to(self.device)
# mtcnn = MTCNN(
# image_size=160, margin=0, min_face_size=10,
# thresholds=[0.3, 0.3, 0.3], factor=0.709, post_process=True,
# device=self.device, keep_all=True
# )
# modelpath = Path("face_detection_yunet_2023mar_int8bq.onnx")
modelpath = Path("face_detection_yunet_2023mar_int8.onnx")
# model = YuNet(modelPath=args.model,
# inputSize=[320, 320],
# confThreshold=args.conf_threshold,
# nmsThreshold=args.nms_threshold,
# topK=args.top_k,
# backendId=backend_id,
# targetId=target_id)
detector = cv2.FaceDetectorYN.create(
str(modelpath),
"",
(320, 320),
.3,
.3,
5000,
cv2.dnn.DNN_BACKEND_CUDA,
target_id=cv2.dnn.DNN_TARGET_CUDA
)
while self.is_running.is_set():
with timer_counter.get_lock():
timer_counter.value += 1
poll_time = time.time()
zmq_ev = self.frame_sock.poll(timeout=2000)
if not zmq_ev:
logger.warning('skip poll after 2000ms')
# when there's no data after timeout, loop so that is_running is checked
continue
start_time = time.time()
frame: Frame = self.frame_sock.recv_pyobj() # frame delivery in current setup: 0.012-0.03s
# print(time.time()- frame.time)
if frame.index > (prev_frame_i+1):
logger.warning(f"Dropped {frame.index - prev_frame_i - 1} frames ({frame.index=}, {prev_frame_i=}) -- poll time {start_time-poll_time:.5f}")
height, width, channels = frame.img.shape
detector.setInputSize((width//2, height//2))
img = cv2.resize(frame.img, (width//2, height//2))
faces = detector.detect(img)
prev_frame_i = frame.index
# print(f"send to {self.trajectory_socket}, {self.config.zmq_trajectory_addr}")
self.face_socket.send_pyobj(faces) # ditch image for faster passthrough
logger.info('Stopping')
def run_detector(config: Namespace, is_running: Event, timer_counter):
router = FaceDetector(config)
router.track(is_running, timer_counter)
def run():
# Frame emitter
import argparse
argparser = argparse.ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
argparser.add_argument("--save-for-training",
help="Specify the path in which to save",
type=Path,
default=None)
argparser.add_argument("--detector",
help="Specify the detector to use",
type=str,
default=DETECTOR_YOLOv8,
choices=DETECTORS)
argparser.add_argument("--tracker",
help="Specify the detector to use",
type=str,
default=TRACKER_BYTETRACK,
choices=TRACKERS)
argparser.add_argument("--smooth-tracks",
help="Smooth the tracker tracks before sending them to the predictor",
action='store_true')
config = argparser.parse_args()
is_running = multiprocessing.Event()
is_running.set()
timer_counter = timer.Timer('frame_emitter')
router = Tracker(config)
router.track(is_running, timer_counter.iterations)
is_running.clear()

View file

@ -1,128 +1,560 @@
from __future__ import annotations
from argparse import Namespace
from dataclasses import dataclass, field
import dataclasses
from enum import IntFlag
from itertools import cycle
import json
import logging
import pickle
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
from pathlib import Path
import pickle
import sys
import time
from typing import Iterable, List, Optional
import numpy as np
import cv2
import pandas as pd
import zmq
import os
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
from bytetracker.byte_tracker import STrack as ByteTrackTrack
from bytetracker.basetrack import TrackState as ByteTrackTrackState
from trajectron.environment import Environment, Node, Scene
from urllib.parse import urlparse
from trap import node
from trap.base import *
from trap.base import LambdaParser
from trap.gemma import ImgMovementFilter
from trap.preview_renderer import FrameWriter
from trap.video_sources import get_video_source
from trap.utils import get_bins
from trap.utils import inv_lerp, lerp
logger = logging.getLogger('trap.frame_emitter')
class DataclassJSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, np.ndarray):
return o.tolist()
if dataclasses.is_dataclass(o):
if isinstance(o, Frame):
tracks = {}
for track_id, track in o.tracks.items():
track_obj = dataclasses.asdict(track)
track_obj['history'] = track.get_projected_history(None, o.camera)
tracks[track_id] = track_obj
d = {
'index': o.index,
'time': o.time,
'tracks': tracks,
'camera': dataclasses.asdict(o.camera),
}
else:
d = dataclasses.asdict(o)
# if isinstance(o, Frame):
# # Don't send images over JSON
# del d['img']
return d
return super().default(o)
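
A short sketch of using the encoder above. ndarray values are handled via `tolist()`; for a full `Frame` (assuming one with tracks and a camera attached), the encoder also injects each track's projected world-space history:

```python
import json

import numpy as np

# Runs as-is: ndarrays are converted by the encoder
print(json.dumps({"H": np.eye(3)}, cls=DataclassJSONEncoder))

# For a Frame instance `frame` (hypothetical here), the same call works:
# json.dumps(frame, cls=DataclassJSONEncoder)
```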
class UrlOrPath():
def __init__(self, string):
self.url = urlparse(str(string))
def __str__(self) -> str:
return self.url.geturl()
def is_url(self) -> bool:
return len(self.url.netloc) > 0
def path(self) -> Path:
if self.is_url():
return Path(self.url.path)
return Path(self.url.geturl()) # can include scheme, such as C:/
class Space(IntFlag):
Image = 1 # As detected in the image
Undistorted = 2 # After applying lens undistortion
World = 4 # After lens undistort and homography
Render = 8 # View space of renderer
class DetectionState(IntFlag):
Tentative = 1 # state before n_init (see DeepsortTrack)
Confirmed = 2 # after tentative
Lost = 4 # lost when DeepsortTrack.time_since_update > 0 but not Deleted
Interpolated = 8 # A position estimated through interpolation of adjacent detections
@classmethod
def from_deepsort_track(cls, track: DeepsortTrack):
if track.state == DeepsortTrackState.Tentative:
return cls.Tentative
if track.state == DeepsortTrackState.Confirmed:
if track.time_since_update > 0:
return cls.Lost
return cls.Confirmed
raise RuntimeError("Should not run into Deleted entries here")
@classmethod
def from_bytetrack_track(cls, track: ByteTrackTrack):
if track.state == ByteTrackTrackState.New:
return cls.Tentative
if track.state == ByteTrackTrackState.Lost:
return cls.Lost
# if track.time_since_update > 0:
if track.state == ByteTrackTrackState.Tracked:
return cls.Confirmed
raise RuntimeError("Should not run into Deleted entries here")
def H_from_path(path: Path):
if path.suffix == '.json':
with path.open('r') as fp:
H = np.array(json.load(fp))
else:
H = np.loadtxt(path, delimiter=',')
return H
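
A hedged sketch of using `H_from_path` together with `cv2.perspectiveTransform`; the file name and the point are placeholders:

```python
import cv2
import numpy as np

# .json files hold a nested list; anything else is read as comma-separated text
H = H_from_path(Path("homography.json"))  # placeholder path

foot = np.array([[[640.0, 720.0]]], dtype="float32")  # one image-space point, shape (1, N, 2)
world = cv2.perspectiveTransform(foot, H)
print(world.reshape(-1, 2))  # corresponding world-space coordinates
```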
@dataclass
class Camera:
mtx: cv2.Mat
dist: cv2.Mat
w: float
h: float
H: cv2.Mat # homography
newcameramtx: cv2.Mat = field(init=False)
roi: cv2.typing.Rect = field(init=False)
fps: float
def __post_init__(self):
self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (self.w,self.h), 1, (self.w,self.h))
@classmethod
def from_calibfile(cls, calibration_path, H, fps):
with calibration_path.open('r') as fp:
data = json.load(fp)
# print(data)
# print(data['camera_matrix'])
# camera = {
# 'camera_matrix': np.array(data['camera_matrix']),
# 'dist_coeff': np.array(data['dist_coeff']),
# }
return cls(
np.array(data['camera_matrix']),
np.array(data['dist_coeff']),
data['dim']['width'],
data['dim']['height'],
H, fps)
@classmethod
def from_paths(cls, calibration_path, h_path, fps):
H = H_from_path(h_path)
return cls.from_calibfile(calibration_path, H, fps)
# def __init__(self, mtx, dist, w, h, H):
# self.mtx = mtx
# self.dist = dist
# self.w = w
# self.h = h
# self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
# self.H = H # homography
@dataclass
class Position:
x: float
y: float
conf: float
state: DetectionState
frame_nr: int
det_class: str
@dataclass
class Detection:
track_id: str # deepsort track id association
l: int # left - image space
t: int # top - image space
w: int # width - image space
h: int # height - image space
conf: float # object detector probability
state: DetectionState
frame_nr: int
det_class: str
def get_foot_coords(self) -> list[float, float]:
return [self.l + 0.5 * self.w, self.t+self.h]
@classmethod
def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)
class FrameEmitter(node.Node):
@classmethod
def from_bytetrack(cls, bstrack: ByteTrackTrack, frame_nr: int):
return cls(bstrack.track_id, *bstrack.tlwh, bstrack.score, DetectionState.from_bytetrack_track(bstrack), frame_nr, bstrack.cls)
def get_scaled(self, scale: float = 1):
if scale == 1:
return self
return Detection(
self.track_id,
self.l*scale,
self.t*scale,
self.w*scale,
self.h*scale,
self.conf,
self.state,
self.frame_nr,
self.det_class)
def to_ltwh(self):
return (int(self.l), int(self.t), int(self.w), int(self.h))
def to_ltrb(self):
return (int(self.l), int(self.t), int(self.l+self.w), int(self.t+self.h))
@dataclass
class Trajectory:
# TODO)) Replace history and predictions in Track with Trajectory
space: Space
fps: int = 12
points: List[Detection] = field(default_factory=list)
def __iter__(self):
for d in self.points:
yield d
@dataclass
class Track:
"""A bit of an haphazardous wrapper around the 'real' tracker to provide
a history, with which the predictor can work, as we then can deduce velocity
and acceleration.
"""
track_id: str = None
history: List[Detection] = field(default_factory=list)
predictor_history: Optional[list] = None # in image space
predictions: Optional[list] = None
fps: int = 12 # TODO)) convert this to camera? That way it incorporates H and dist; alternatively, attach each track as a whole to a space
source: Optional[int] = None # to keep track of processed tracks
def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[Camera]= None) -> np.array:
foot_coordinates = [d.get_foot_coords() for d in self.history]
# TODO)) Undistort points before perspective transform
if len(foot_coordinates):
if camera:
coords = cv2.undistortPoints(np.array([foot_coordinates]).astype('float32'), camera.mtx, camera.dist, None, camera.newcameramtx)
coords = cv2.perspectiveTransform(np.array(coords),camera.H)
return coords.reshape((coords.shape[0],2))
else:
coords = cv2.perspectiveTransform(np.array([foot_coordinates]),H)
return coords[0]
return np.array([])
def get_projected_history_as_dict(self, H, camera: Optional[Camera]= None) -> dict:
coords = self.get_projected_history(H, camera)
return [{"x":c[0], "y":c[1]} for c in coords]
def get_with_interpolated_history(self) -> Track:
# new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
# new_track = Track(track.track_id, new_history, track.predictor_history, track.predictions)
new_history = []
for j in range(len(self.history)):
a = self.history[j]
new_history.append(Detection(a.track_id, a.l, a.t, a.w, a.h, a.conf, a.state, a.frame_nr, a.det_class))
if j+1 >= len(self.history):
break
b = self.history[j+1]
gap = b.frame_nr - a.frame_nr
if gap < 1:
logger.error(f"WARNING, gap between frames {a.frame_nr} -> {b.frame_nr} is negative?")
if gap > 1:
for g in range(1, gap):
l = lerp(a.l, b.l, g/gap)
t = lerp(a.t, b.t, g/gap)
w = lerp(a.w, b.w, g/gap)
h = lerp(a.h, b.h, g/gap)
conf = 0
state = DetectionState.Lost
frame_nr = a.frame_nr + g
new_history.append(Detection(a.track_id, l, t, w, h, conf, state, frame_nr, a.det_class))
return Track(
self.track_id,
new_history,
self.predictor_history,
self.predictions,
self.fps)
def is_complete(self):
diffs = [(b.frame_nr - a.frame_nr) for a,b in zip(self.history[:-1], self.history[1:])]
return all(d == 1 for d in diffs)
def get_sampled(self, step_size = 1, offset=0):
if not self.is_complete():
t = self.get_with_interpolated_history()
else:
t = self
return Track(
t.track_id,
t.history[offset::step_size],
t.predictor_history,
t.predictions,
t.fps/step_size)
def get_binned(self, bin_size, camera: Camera, bin_start=True):
"""
For an experiment: what if we predict using only discrete positions, by mapping
dx,dy to a grid? Prediction is then limited to 8 moves, or rather headings;
see the example SVG in ~/notes/attachments.
"""
history = self.get_projected_history_as_dict(H=None, camera=camera)
def round_to_grid_precision(x):
factor = 1/bin_size
return round(x * factor) / factor
new_history: List[dict] = []
for i, (det0, det1) in enumerate(zip(history[:-1], history[1:])):
if i == 0:
new_history.append({
'x': round_to_grid_precision(det0['x']),
'y': round_to_grid_precision(det0['y'])
} if bin_start else det0)
continue
if abs(det1['x'] - new_history[-1]['x']) < bin_size and abs(det1['y'] - new_history[-1]['y']) < bin_size:
continue
# det1 falls outside of the box [-bin_size:+bin_size] around last detection
# 1. Interpolate the exact point between det0 and det1 where this happens
if abs(det1['x'] - new_history[-1]['x']) >= bin_size:
if det1['x'] - new_history[-1]['x'] >= bin_size:
# det1 beyond last in +x (right)
x = new_history[-1]['x'] + bin_size
f = inv_lerp(det0['x'], det1['x'], x)
elif new_history[-1]['x'] - det1['x'] >= bin_size:
# det1 left of last
x = new_history[-1]['x'] - bin_size
f = inv_lerp(det0['x'], det1['x'], x)
y = lerp(det0['y'], det1['y'], f)
if abs(det1['y'] - new_history[-1]['y']) >= bin_size:
if det1['y'] - new_history[-1]['y'] >= bin_size:
# det1 beyond last in +y
y = new_history[-1]['y'] + bin_size
f = inv_lerp(det0['y'], det1['y'], y)
elif new_history[-1]['y'] - det1['y'] >= bin_size:
# det1 beyond last in -y
y = new_history[-1]['y'] - bin_size
f = inv_lerp(det0['y'], det1['y'], y)
x = lerp(det0['x'], det1['x'], f)
# 2. Find closest point on rectangle (rectangle's four corners, or 4 midpoints)
points = get_bins(bin_size)
points = [[new_history[-1]['x']+p[0], new_history[-1]['y'] + p[1]] for p in points]
distances = [np.linalg.norm([p[0] - x, p[1]-y]) for p in points]
closest = np.argmin(distances)
point = points[closest]
new_history.append({'x': point[0], 'y':point[1]})
# todo Offsets to points:[ history for in points]
return new_history
def to_trajectron_node(self, camera: Camera, env: Environment) -> Node:
positions = self.get_projected_history(None, camera)
velocity = np.gradient(positions, 1/self.fps, axis=0)
acceleration = np.gradient(velocity, 1/self.fps, axis=0)
new_first_idx = self.history[0].frame_nr
data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
# vx = derivative_of(x, scene.dt)
# vy = derivative_of(y, scene.dt)
# ax = derivative_of(vx, scene.dt)
# ay = derivative_of(vy, scene.dt)
data_dict = {
('position', 'x'): positions[:,0],
('position', 'y'): positions[:,1],
('velocity', 'x'): velocity[:,0],
('velocity', 'y'): velocity[:,1],
('acceleration', 'x'): acceleration[:,0],
('acceleration', 'y'): acceleration[:,1]
}
node_data = pd.DataFrame(data_dict, columns=data_columns)
return Node(node_type=env.NodeType.PEDESTRIAN, node_id=self.track_id, data=node_data, first_timestep=new_first_idx)
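
`to_trajectron_node` derives velocity and acceleration from the projected positions with `np.gradient`, using the frame interval `1/fps` as the step. A small standalone check of that step, with made-up values:

```python
import numpy as np

fps = 12
positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.3, 0.0], [0.6, 0.0]])  # metres, one row per frame

velocity = np.gradient(positions, 1 / fps, axis=0)      # m/s
acceleration = np.gradient(velocity, 1 / fps, axis=0)   # m/s^2

print(velocity[:, 0])      # x-velocity per frame
print(acceleration[:, 0])  # x-acceleration per frame
```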
@dataclass
class Frame:
index: int
img: np.array
time: float= field(default_factory=lambda: time.time())
tracks: Optional[dict[str, Track]] = None
H: Optional[np.array] = None
camera: Optional[Camera] = None
maps: Optional[List[cv2.Mat]] = None
def aslist(self) -> dict:
return { t.track_id:
{
'id': t.track_id,
'history': t.get_projected_history(self.H).tolist(),
'det_conf': t.history[-1].conf,
# 'det_conf': trajectory_data[node.id]['det_conf'],
# 'bbox': trajectory_data[node.id]['bbox'],
# 'history': history.tolist(),
'predictions': t.predictions
} for t in self.tracks.values()
}
def without_img(self):
return Frame(self.index, None, self.time, self.tracks, self.H, self.camera, self.maps)
def video_src_from_config(config) -> UrlOrPath:
if config.video_loop:
video_srcs: Iterable[UrlOrPath] = cycle(config.video_src)
else:
video_srcs: Iterable[UrlOrPath] = config.video_src
return video_srcs
class FrameEmitter:
'''
Emit frames in a separate thread so they can be throttled,
or thrown away when the rest of the system cannot keep up
'''
def setup(self) -> None:
self.frame_sock = self.pub(self.config.zmq_frame_addr)
self.frame_noimg_sock = self.pub(self.config.zmq_frame_noimg_addr)
def __init__(self, config: Namespace, is_running: Event) -> None:
self.config = config
self.is_running = is_running
context = zmq.Context()
# TODO: to make things faster, a multiprocessing.Array might be a tad faster: https://stackoverflow.com/a/65201859
self.frame_sock = context.socket(zmq.PUB)
self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. make sure to set BEFORE connect/bind
self.frame_sock.bind(config.zmq_frame_addr)
self.frame_noimg_sock = context.socket(zmq.PUB)
self.frame_noimg_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. make sure to set BEFORE connect/bind
self.frame_noimg_sock.bind(config.zmq_frame_noimg_addr)
logger.info(f"Connection socket {self.config.zmq_frame_addr}")
logger.info(f"Connection socket {self.config.zmq_frame_noimg_addr}")
logger.info(f"Connection socket {config.zmq_frame_addr}")
logger.info(f"Connection socket {config.zmq_frame_noimg_addr}")
self.video_srcs = self.config.video_src
self.video_srcs = video_src_from_config(self.config)
def run(self):
offset = int(self.config.video_offset or 0)
source = get_video_source(self.video_srcs, self.config.camera, offset, self.config.video_end, self.config.video_loop)
video_gen = enumerate(source, start = offset)
def emit_video(self, timer_counter):
i = 0
delay_generation = False
for video_path in self.video_srcs:
logger.info(f"Play from '{str(video_path)}'")
if str(video_path).isdigit():
# numeric input is a CV camera
video = cv2.VideoCapture(int(str(video_path)))
# TODO: make config variables
video.set(cv2.CAP_PROP_FRAME_WIDTH, int(self.config.camera.w))
video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(self.config.camera.h))
print("exposure!", video.get(cv2.CAP_PROP_AUTO_EXPOSURE))
video.set(cv2.CAP_PROP_FPS, 5)
fps=5
elif video_path.url.scheme == 'rtsp':
gst = f"rtspsrc location={video_path} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=0 drop=true"
logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
fps=12
else:
# os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"
video = cv2.VideoCapture(str(video_path))
delay_generation = True
fps = video.get(cv2.CAP_PROP_FPS)
target_frame_duration = 1./fps
logger.info(f"Emit frames at {fps} fps")
# writer = FrameWriter(self.config.record, None, None) if self.config.record else nullcontext
writer = FrameWriter(str(self.config.record), None, None) if self.config.record else None
try:
processor = ImgMovementFilter()
while self.run_loop():
if self.config.video_offset:
logger.info(f"Start at frame {self.config.video_offset}")
video.set(cv2.CAP_PROP_POS_FRAMES, self.config.video_offset)
i = self.config.video_offset
try:
i, img = next(video_gen)
except StopIteration as e:
logger.info("Video source ended")
# if '-' in video_path.path().stem:
# path_stem = video_path.stem[:video_path.stem.rfind('-')]
# else:
# path_stem = video_path.stem
# path_stem += "-homography"
# homography_path = video_path.with_stem(path_stem).with_suffix('.txt')
# logger.info(f'check homography file {homography_path}')
# if homography_path.exists():
# logger.info(f'Found custom homography file! Using {homography_path}')
# video_H = np.loadtxt(homography_path, delimiter=',')
# else:
# video_H = None
video_H = self.config.camera.H
prev_time = time.time()
while self.is_running.is_set():
with timer_counter.get_lock():
timer_counter.value += 1
ret, img = video.read()
# seek to 0 if video has finished. Infinite loop
if not ret:
# now loading multiple files
break
# video.set(cv2.CAP_PROP_POS_FRAMES, 0)
# ret, img = video.read()
# assert ret is not False # not really error proof...
frame = Frame(i, img=img, H=self.config.camera.H, camera=self.config.camera)
# frame.img = processor.apply(frame.img)
if "DATASETS/hof/" in str(video_path):
# hack to mask out area
cv2.rectangle(img, (0,0), (800,200), (0,0,0), -1)
frame = Frame(index=i, img=img, H=self.config.H, camera=self.config.camera)
# TODO: this is very dirty, need to find another way.
# perhaps multiprocessing Array?
self.frame_noimg_sock.send(pickle.dumps(frame.without_img()))
self.frame_sock.send(pickle.dumps(frame))
if writer:
writer.write(frame.img)
finally:
if writer:
writer.release()
# only delay consuming the next frame when using a file.
# Otherwise, go ASAP
if delay_generation:
# defer next loop
now = time.time()
time_diff = (now - prev_time)
if time_diff < target_frame_duration:
time.sleep(target_frame_duration - time_diff)
now += target_frame_duration - time_diff
prev_time = now
i += 1
if not self.is_running.is_set():
# if not running, also break out of infinite generator loop
break
logger.info("Stopping")
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = LambdaParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-frame-noimg-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame2")
argparser.add_argument("--video-src",
help="source video to track from can be either a relative or absolute path, or a url, like an RTSP resource, or use gige://RELATIVE_PATH_TO_GIGE_CONFIG_JSON",
type=UrlOrPath,
nargs='+',
default=lambda: [UrlOrPath(p) for p in Path('../DATASETS/VIRAT_subset_0102x/').glob('*.mp4')])
argparser.add_argument("--video-offset",
help="Start playback from given frame. Note that when src is an array, this applies to all videos individually.",
default=0,
type=int)
argparser.add_argument("--video-end",
help="End (or loop) playback at given frame.",
default=None,
type=int)
argparser.add_argument("--record",
help="Record source video to given filename",
default=None,
type=Path)
argparser.add_argument("--video-loop",
help="By default it emitter will run only once. This allows it to loop the video file to keep testing.",
action='store_true')
argparser.add_argument("--camera-fps",
help="Camera FPS",
type=int,
default=12)
argparser.add_argument("--homography",
help="File with homography params [Deprecated]",
type=Path,
default='../DATASETS/VIRAT_subset_0102x/VIRAT_0102_homography_img2world.txt',
action=HomographyAction)
argparser.add_argument("--calibration",
help="File with camera intrinsics and lens distortion params (calibration.json)",
# type=Path,
required=True,
# default=None,
action=CameraAction)
return argparser
def run_frame_emitter(config: Namespace, is_running: Event, timer_counter: int):
router = FrameEmitter(config, is_running)
router.run(timer_counter)
is_running.clear()
router.emit_video(timer_counter)
is_running.clear()

View file

@ -1,97 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
import datetime
import logging
import time
from argparse import ArgumentParser
from pathlib import Path
import zmq
from trap.frame_emitter import Frame
from trap.node import Node
from trap.preview_renderer import FrameWriter as CvFrameWriter
logger = logging.getLogger("trap.simple_renderer")
class FrameWriter(Node):
def setup(self):
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.out_writer = self.start_writer()
def start_writer(self):
if not self.config.output_dir.exists():
raise FileNotFoundError("Path does not exist")
date_str = datetime.datetime.now().isoformat(timespec="minutes")
filename = self.config.output_dir / f"render-source-{date_str}.mp4"
logger.info(f"Write to {filename}")
return CvFrameWriter(str(filename), None, None)
# fourcc = cv2.VideoWriter_fourcc(*'vp09')
# return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)
def run(self):
i=0
try:
while self.run_loop():
i += 1
# zmq_ev = self.frame_sock.poll(timeout=2000)
# if not zmq_ev:
# # when no data comes in, loop so that is_running is checked
# continue
try:
frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
# else:
# logger.debug(f'new video frame {frame.index}')
if frame is None:
# might need to wait a few iterations before first frame comes available
time.sleep(.1)
continue
self.logger.debug(f"write frame {frame.time:.3f}")
self.out_writer.write(frame.img)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
except KeyboardInterrupt as e:
print('stopping on interrupt')
self.logger.info('Stopping')
# if i>2:
if self.out_writer:
self.out_writer.release()
self.logger.info(f'Wrote to {self.out_writer.filename}')
self.logger.info('stopped')
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument("--output-dir",
help="Directory to save the video in",
required=True,
type=Path)
return argparser

View file

@ -1,631 +0,0 @@
# code by phar: https://github.com/phar/heliospy
import usb.core
import usb.util
import struct
import time
import queue
from trap.hersey import *
from threading import Thread
import matplotlib.pyplot as plt
import numpy as np
HELIOS_VID = 0x1209
HELIOS_PID = 0xE500
EP_BULK_OUT = 0x02
EP_BULK_IN = 0x81
EP_INT_OUT = 0x06
EP_INT_IN = 0x83
INTERFACE_INT = 0
INTERFACE_BULK = 1
INTERFACE_ISO = 2
HELIOS_MAX_POINTS = 0x1000
HELIOS_MAX_RATE = 0xFFFF
HELIOS_MIN_RATE = 7
HELIOS_SUCCESS = 1
# Functions return negative values if something went wrong
# Attempted to perform an action before calling OpenDevices()
HELIOS_ERROR_NOT_INITIALIZED =-1
# Attempted to perform an action with an invalid device number
HELIOS_ERROR_INVALID_DEVNUM = -2
# WriteFrame() called with null pointer to points
HELIOS_ERROR_NULL_POINTS = -3
# WriteFrame() called with a frame containing too many points
HELIOS_ERROR_TOO_MANY_POINTS = -4
# WriteFrame() called with pps higher than maximum allowed
HELIOS_ERROR_PPS_TOO_HIGH = -5
# WriteFrame() called with pps lower than minimum allowed
HELIOS_ERROR_PPS_TOO_LOW = -6
# Errors from the HeliosDacDevice class begin at -1000
# Attempted to perform an operation on a closed DAC device
HELIOS_ERROR_DEVICE_CLOSED = -1000
# Attempted to send a new frame with HELIOS_FLAGS_DONT_BLOCK before previous DoFrame() completed
HELIOS_ERROR_DEVICE_FRAME_READY = -1001
# Operation failed because SendControl() failed (if the operation failed because of a libusb_interrupt_transfer failure, the error code will be a libusb error instead)
HELIOS_ERROR_DEVICE_SEND_CONTROL = -1002
# Received an unexpected result from a call to SendControl()
HELIOS_ERROR_DEVICE_RESULT = -1003
# Attempted to call SendControl() with a null buffer pointer
HELIOS_ERROR_DEVICE_NULL_BUFFER = -1004
# Attempted to call SendControl() with a control signal that is too long
HELIOS_ERROR_DEVICE_SIGNAL_TOO_LONG = -1005
HELIOS_ERROR_LIBUSB_BASE = -5000
HELIOS_FLAGS_DEFAULT = 0
HELIOS_FLAGS_START_IMMEDIATELY = (1 << 0)
HELIOS_FLAGS_SINGLE_MODE = (1 << 1)
HELIOS_FLAGS_DONT_BLOCK = (1 << 2)
HELIOS_CMD_STOP =0x0001
HELIOS_CMD_SHUTTER =0x0002
HELIOS_CMD_GET_STATUS =0x0003
HELIOS_GET_FWVERSION =0x0004
HELIOS_CMD_GET_NAME =0x0005
HELIOS_CMD_SET_NAME =0x0006
HELIOS_SET_SDK_VERSION =0x0007
HELIOS_CMD_ERASE_FIRMWARE =0x00de
HELIOS_SDK_VERSION = 6
class HeliosPoint():
def __init__(self,x,y,c = 0xff0000,i= 255,blank=False):
self.x = x
self.y = y
# accept either a packed 0xRRGGBB int or an (r, g, b) tuple; the original
# hard-coded self.c = 0x010203 here, silently discarding the argument
self.c = ((c[0] << 16) | (c[1] << 8) | c[2]) if isinstance(c, (tuple, list)) else c
self.i = i
self.blank = blank
def __str__(self):
return "HeliosPoint(%d, %d,0x%0x,%d,%d)" % (self.x, self.y, self.c,self.i, self.blank)
class HeliosDAC():
def __init__(self,queuethread=True, debug=0):
self.debug=debug
self.closed = 1
self.frameReady = 0
self.framebuffer = ""
self.threadqueue = queue.Queue(maxsize=20)
self.nextframebuffer = ""
self.adcbits = 12
self.dev = usb.core.find(idVendor=HELIOS_VID, idProduct=HELIOS_PID)
if self.dev is None:
# fail early with a clear error before touching the configuration
raise ValueError('Helios DAC not found')
self.cfg = self.dev.get_active_configuration()
self.intf = self.cfg[(0,1,2)]
self.dev.reset()
self.palette = [( 0, 0, 0 ), # Black/blanked (fixed)
( 255, 255, 255 ), # White (fixed)
( 255, 0, 0 ), # Red (fixed)
( 255, 255, 0 ), # Yellow (fixed)
( 0, 255, 0 ), # Green (fixed)
( 0, 255, 255 ), # Cyan (fixed)
( 0, 0, 255 ), # Blue (fixed)
( 255, 0, 255 ), # Magenta (fixed)
( 255, 128, 128 ), # Light red
( 255, 140, 128 ),
( 255, 151, 128 ),
( 255, 163, 128 ),
( 255, 174, 128 ),
( 255, 186, 128 ),
( 255, 197, 128 ),
( 255, 209, 128 ),
( 255, 220, 128 ),
( 255, 232, 128 ),
( 255, 243, 128 ),
( 255, 255, 128 ), # Light yellow
( 243, 255, 128 ),
( 232, 255, 128 ),
( 220, 255, 128 ),
( 209, 255, 128 ),
( 197, 255, 128 ),
( 186, 255, 128 ),
( 174, 255, 128 ),
( 163, 255, 128 ),
( 151, 255, 128 ),
( 140, 255, 128 ),
( 128, 255, 128 ), # Light green
( 128, 255, 140 ),
( 128, 255, 151 ),
( 128, 255, 163 ),
( 128, 255, 174 ),
( 128, 255, 186 ),
( 128, 255, 197 ),
( 128, 255, 209 ),
( 128, 255, 220 ),
( 128, 255, 232 ),
( 128, 255, 243 ),
( 128, 255, 255 ), # Light cyan
( 128, 243, 255 ),
( 128, 232, 255 ),
( 128, 220, 255 ),
( 128, 209, 255 ),
( 128, 197, 255 ),
( 128, 186, 255 ),
( 128, 174, 255 ),
( 128, 163, 255 ),
( 128, 151, 255 ),
( 128, 140, 255 ),
( 128, 128, 255 ), # Light blue
( 140, 128, 255 ),
( 151, 128, 255 ),
( 163, 128, 255 ),
( 174, 128, 255 ),
( 186, 128, 255 ),
( 197, 128, 255 ),
( 209, 128, 255 ),
( 220, 128, 255 ),
( 232, 128, 255 ),
( 243, 128, 255 ),
( 255, 128, 255 ), # Light magenta
( 255, 128, 243 ),
( 255, 128, 232 ),
( 255, 128, 220 ),
( 255, 128, 209 ),
( 255, 128, 197 ),
( 255, 128, 186 ),
( 255, 128, 174 ),
( 255, 128, 163 ),
( 255, 128, 151 ),
( 255, 128, 140 ),
( 255, 0, 0 ), # Red (cycleable)
( 255, 23, 0 ),
( 255, 46, 0 ),
( 255, 70, 0 ),
( 255, 93, 0 ),
( 255, 116, 0 ),
( 255, 139, 0 ),
( 255, 162, 0 ),
( 255, 185, 0 ),
( 255, 209, 0 ),
( 255, 232, 0 ),
( 255, 255, 0 ), #Yellow (cycleable)
( 232, 255, 0 ),
( 209, 255, 0 ),
( 185, 255, 0 ),
( 162, 255, 0 ),
( 139, 255, 0 ),
( 116, 255, 0 ),
( 93, 255, 0 ),
( 70, 255, 0 ),
( 46, 255, 0 ),
( 23, 255, 0 ),
( 0, 255, 0 ), # Green (cycleable)
( 0, 255, 23 ),
( 0, 255, 46 ),
( 0, 255, 70 ),
( 0, 255, 93 ),
( 0, 255, 116 ),
( 0, 255, 139 ),
( 0, 255, 162 ),
( 0, 255, 185 ),
( 0, 255, 209 ),
( 0, 255, 232 ),
( 0, 255, 255 ), # Cyan (cycleable)
( 0, 232, 255 ),
( 0, 209, 255 ),
( 0, 185, 255 ),
( 0, 162, 255 ),
( 0, 139, 255 ),
( 0, 116, 255 ),
( 0, 93, 255 ),
( 0, 70, 255 ),
( 0, 46, 255 ),
( 0, 23, 255 ),
( 0, 0, 255 ), # Blue (cycleable)
( 23, 0, 255 ),
( 46, 0, 255 ),
( 70, 0, 255 ),
( 93, 0, 255 ),
( 116, 0, 255 ),
( 139, 0, 255 ),
( 162, 0, 255 ),
( 185, 0, 255 ),
( 209, 0, 255 ),
( 232, 0, 255 ),
( 255, 0, 255 ), # Magenta (cycleable)
( 255, 0, 232 ),
( 255, 0, 209 ),
( 255, 0, 185 ),
( 255, 0, 162 ),
( 255, 0, 139 ),
( 255, 0, 116 ),
( 255, 0, 93 ),
( 255, 0, 70 ),
( 255, 0, 46 ),
( 255, 0, 23 ),
( 128, 0, 0 ), # Dark red
( 128, 12, 0 ),
( 128, 23, 0 ),
( 128, 35, 0 ),
( 128, 47, 0 ),
( 128, 58, 0 ),
( 128, 70, 0 ),
( 128, 81, 0 ),
( 128, 93, 0 ),
( 128, 105, 0 ),
( 128, 116, 0 ),
( 128, 128, 0 ), # Dark yellow
( 116, 128, 0 ),
( 105, 128, 0 ),
( 93, 128, 0 ),
( 81, 128, 0 ),
( 70, 128, 0 ),
( 58, 128, 0 ),
( 47, 128, 0 ),
( 35, 128, 0 ),
( 23, 128, 0 ),
( 12, 128, 0 ),
( 0, 128, 0 ), # Dark green
( 0, 128, 12 ),
( 0, 128, 23 ),
( 0, 128, 35 ),
( 0, 128, 47 ),
( 0, 128, 58 ),
( 0, 128, 70 ),
( 0, 128, 81 ),
( 0, 128, 93 ),
( 0, 128, 105 ),
( 0, 128, 116 ),
( 0, 128, 128 ), # Dark cyan
( 0, 116, 128 ),
( 0, 105, 128 ),
( 0, 93, 128 ),
( 0, 81, 128 ),
( 0, 70, 128 ),
( 0, 58, 128 ),
( 0, 47, 128 ),
( 0, 35, 128 ),
( 0, 23, 128 ),
( 0, 12, 128 ),
( 0, 0, 128 ), # Dark blue
( 12, 0, 128 ),
( 23, 0, 128 ),
( 35, 0, 128 ),
( 47, 0, 128 ),
( 58, 0, 128 ),
( 70, 0, 128 ),
( 81, 0, 128 ),
( 93, 0, 128 ),
( 105, 0, 128 ),
( 116, 0, 128 ),
( 128, 0, 128 ), # Dark magenta
( 128, 0, 116 ),
( 128, 0, 105 ),
( 128, 0, 93 ),
( 128, 0, 81 ),
( 128, 0, 70 ),
( 128, 0, 58 ),
( 128, 0, 47 ),
( 128, 0, 35 ),
( 128, 0, 23 ),
( 128, 0, 12 ),
( 255, 192, 192 ), # Very light red
( 255, 64, 64 ), # Light-medium red
( 192, 0, 0 ), # Medium-dark red
( 64, 0, 0 ), # Very dark red
( 255, 255, 192 ), # Very light yellow
( 255, 255, 64 ), # Light-medium yellow
( 192, 192, 0 ), # Medium-dark yellow
( 64, 64, 0 ), # Very dark yellow
( 192, 255, 192 ), # Very light green
( 64, 255, 64 ), # Light-medium green
( 0, 192, 0 ), # Medium-dark green
( 0, 64, 0 ), # Very dark green
( 192, 255, 255 ), # Very light cyan
( 64, 255, 255 ), # Light-medium cyan
( 0, 192, 192 ), # Medium-dark cyan
( 0, 64, 64 ), # Very dark cyan
( 192, 192, 255 ), # Very light blue
( 64, 64, 255 ), # Light-medium blue
( 0, 0, 192 ), # Medium-dark blue
( 0, 0, 64 ), # Very dark blue
( 255, 192, 255 ), # Very light magenta
( 255, 64, 255 ), # Light-medium magenta
( 192, 0, 192 ), # Medium-dark magenta
( 64, 0, 64 ), # Very dark magenta
( 255, 96, 96 ), # Medium skin tone
( 255, 255, 255 ), # White (cycleable)
( 245, 245, 245 ),
( 235, 235, 235 ),
( 224, 224, 224 ), # Very light gray (7/8 intensity)
( 213, 213, 213 ),
( 203, 203, 203 ),
( 192, 192, 192 ), # Light gray (3/4 intensity)
( 181, 181, 181 ),
( 171, 171, 171 ),
( 160, 160, 160 ), # Medium-light gray (5/8 int.)
( 149, 149, 149 ),
( 139, 139, 139 ),
( 128, 128, 128 ), # Medium gray (1/2 intensity)
( 117, 117, 117 ),
( 107, 107, 107 ),
( 96, 96, 96 ), # Medium-dark gray (3/8 int.)
( 85, 85, 85 ),
( 75, 75, 75 ),
( 64, 64, 64 ), # Dark gray (1/4 intensity)
( 53, 53, 53 ),
( 43, 43, 43 ),
( 32, 32, 32 ), # Very dark gray (1/8 intensity)
( 21, 21, 21 ),
( 11, 11, 11 )] # Black
self.dev.set_interface_altsetting(interface = 0, alternate_setting = 1)
if self.dev.is_kernel_driver_active(0) is True:
self.dev.detach_kernel_driver(0)
# claim the device
usb.util.claim_interface(self.dev, 0)
if self.dev is None:
raise ValueError('Device not found')
else:
if self.debug:
print(self.dev)
try:
transferResult = self.intf[0].read(32,1)
except usb.core.USBError:
if self.debug:
print("no lingering data")
if self.debug:
print(self.GetName())
print(self.getHWVersion())
self.setSDKVersion()
self.closed = False
if queuethread:
self.runQueueThread()
def runQueueThread(self):
worker = Thread(target=self.doframe_thread_loop)
worker.daemon = True
worker.start()
def doframe_thread_loop(self):
while not self.closed:
self.DoFrame()
def getHWVersion(self):
self.intf[1].write(struct.pack("<H",HELIOS_GET_FWVERSION))
transferResult = self.intf[0].read(32)
if transferResult[0] == 0x84:
return struct.unpack("<L", transferResult[1:5])[0] # firmware version is a 32-bit little-endian value after the 0x84 marker
else:
return None
def setSDKVersion(self, version = HELIOS_SDK_VERSION):
self.intf[1].write(struct.pack("<H",(version << 8) | HELIOS_SET_SDK_VERSION))
return
def setShutter(self, shutter=False):
self.SendControl(struct.pack("<H",(shutter << 8) | HELIOS_CMD_SHUTTER))
return
def setName(self, name):
self.SendControl(struct.pack("<H", HELIOS_CMD_SET_NAME) + name[:30] + b"\x00")
return
def newFrame(self,pps, pntobjlist, flags = HELIOS_FLAGS_DEFAULT):
if self.closed:
return HELIOS_ERROR_DEVICE_CLOSED;
if ( len(pntobjlist) > HELIOS_MAX_POINTS):
return HELIOS_ERROR_TOO_MANY_POINTS
if (pps > HELIOS_MAX_RATE):
return HELIOS_ERROR_PPS_TOO_HIGH
if (pps < HELIOS_MIN_RATE):
return HELIOS_ERROR_PPS_TOO_LOW
#this is a bug workaround, the mcu won't correctly receive transfers with these sizes
ppsActual = pps;
numOfPointsActual = len(pntobjlist)
if (((len(pntobjlist)-45) % 64) == 0):
numOfPointsActual-=1
ppsActual = int((pps * numOfPointsActual / len(pntobjlist) + 0.5))
pntobjlist = pntobjlist[:numOfPointsActual]
nextframebuffer = b""
for pnt in pntobjlist:
# pack the 12-bit x/y coordinates into three bytes, followed by r, g, b, i
xy_hi = (pnt.x >> 4) & 0xff
xy_mid = ((pnt.x & 0x0F) << 4) | (pnt.y >> 8)
xy_lo = pnt.y & 0xFF
if pnt.blank == False:
red = (pnt.c & 0xff0000) >> 16
green = (pnt.c & 0xff00) >> 8
blue = (pnt.c & 0xff)
i = pnt.i
else:
red = 0
green = 0
blue = 0
i = 0
# NB: the original reused the name `b` for both the middle coordinate byte and
# the blue channel, corrupting the packed y coordinate; renamed to avoid that
nextframebuffer += struct.pack("BBBBBBB", xy_hi, xy_mid, xy_lo, red, green, blue, i)
nextframebuffer += struct.pack("BBBBB", (ppsActual & 0xFF),(ppsActual >> 8) ,(len(pntobjlist) & 0xFF),(len(pntobjlist) >> 8),flags)
self.threadqueue.put(nextframebuffer)
def DoFrame(self):
if (self.closed):
return HELIOS_ERROR_DEVICE_CLOSED;
self.nextframebuffer = self.threadqueue.get(block=True)
self.intf[3].write(self.nextframebuffer)
t = time.time()
while(self.getStatus()[1] == 0): #wait for the laser
pass
return self.getStatus()
def GetName(self):
self.SendControl(struct.pack("<H",HELIOS_CMD_GET_NAME))
x = self.intf[0].read(32)[:16]
if x[0] == 0x85:
return "".join([chr(t) for t in x[1:]])
else:
return None
def SendControl(self, buffer):
if (buffer == None):
return HELIOS_ERROR_DEVICE_NULL_BUFFER;
if (len(buffer) > 32):
return HELIOS_ERROR_DEVICE_SIGNAL_TOO_LONG;
self.intf[1].write(buffer)
def stop(self):
self.SendControl(struct.pack("<H", HELIOS_CMD_STOP)) # SendControl() takes only the buffer; the stray second argument raised a TypeError
time.sleep(.1)
return
def getStatus(self):
self.SendControl(struct.pack("<H",0x0003))
ret = self.intf[0].read(32)
if self.debug:
print(ret)
return ret
def generateText(self,text,xpos,ypos,cindex=0,scale=1.0):
pointstream = []
ctr = 0
for c in text:
lastx = xpos
lasty = ypos
blank = True
for x,y in HERSHEY_FONT[ord(c)-32]:
if (x == -1) and (y == -1):
# pointstream.append(HeliosPoint(lastx,lasty,blank=blank))
blank = True
else:
lastx = int((x + (ctr * HERSHEY_WIDTH)) * scale)
lasty = int(y * scale)
blank = False
pointstream.append(HeliosPoint(lastx,lasty,self.palette[cindex],blank=blank))
ctr += 1
return pointstream
def loadILDfile(self,filename, xscale=1.0, yscale=1.0):
f = open(filename,"rb")
headerstruct = ">4s3xB8s8sHHHBx"
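# ILDA section header layout (big-endian), matching the format string above:
# 4-byte "ILDA" magic, 3 reserved bytes, 1-byte format code, 8-byte frame name,
# 8-byte company name, record count, frame/palette number, total frame count,
# projector id, and 1 reserved byte.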
moreframes = True
frames = []
while moreframes:
(magic, format, fname, cname, rcnt, num, total_frames, projectorid) = struct.unpack(headerstruct,f.read(struct.calcsize(headerstruct)))
if magic == b"ILDA":
pointlist = []
palette = []
x = y = z = red = green = blue = 0
blank = 1
lastpoint = 0
if rcnt > 0:
for i in range(rcnt):
if format in [0,1,4,5]:
if format == 0:
fmt = ">hhhBB"
(x,y,z,status,cindex) = struct.unpack(fmt,f.read(struct.calcsize(fmt)))
elif format == 1:
fmt = ">hhBB"
(x,y,status,cindex) = struct.unpack(fmt,f.read(struct.calcsize(fmt)))
elif format == 4:
# 3D true-colour record: x, y, z, status and three colour bytes
fmt = ">hhhBBBB"
(x,y,z,status,red,green,blue) = struct.unpack(fmt,f.read(struct.calcsize(fmt)))
elif format == 5:
# 2D true-colour record: x, y, status and three colour bytes
fmt = ">hhBBBB"
(x,y,status,red,green,blue) = struct.unpack(fmt,f.read(struct.calcsize(fmt)))
blank = (status & 0x40) > 0
lastpoint = (status & 0x80) > 0
lessadcbits = (16 - self.adcbits)
x = int((x >> lessadcbits) * xscale)
y = int((y >> lessadcbits) * yscale)
if format in [0,1]:
color = self.palette[cindex]
else:
# true-colour formats carry their own colour instead of a palette index
color = (red << 16) | (green << 8) | blue
pointlist.append(HeliosPoint(x,y,color,blank=blank))
elif format == 2:
fmt = ">BBB"
(r,g,b) = struct.unpack(fmt,f.read(struct.calcsize(fmt)))
palette.append((r<<16) | (g<<8) | b)
if format == 2:
frames.append((("palette",fname,cname, num),palette))
else:
frames.append((("frame",fname,cname,num),pointlist))
else:
moreframes = 0
else:
moreframes = 0
return frames
def plot(self, pntlist):
fig, ax = plt.subplots() # Create a figure containing a single axes.
xlst = []
ylst = []
for p in pntlist:
if p.blank == False:
xlst.append(p.x)
ylst.append(p.y)
ax.plot(xlst,ylst)
plt.show()
if __name__ == "__main__":
a = HeliosDAC()
# a.runQueueThread()
# cal = a.generateText("hello World", 20,20,scale=10)
## print(cal)
# a.plot(cal)
#
# while(1):
# a.newFrame(2000,cal)
# a.DoFrame()
# cal = a.generateText("hello World", 0, 0,scale=10)
# pps = 20000
# while(1):
# a.newFrame(pps,cal)
# a.DoFrame()
# cal = a.loadILDfile("ildatest.ild")
# while(1):
# for (t,n1,n2,c),f in cal:
# print("playing %s,%s, %d" % (n1,n2,c))
# a.newFrame(5000,f)
# a.DoFrame()
# a.plot(f)
pps = 200
while(1):
a.newFrame(pps,[HeliosPoint(0,200, c=(255,255,255)), #draw a square
HeliosPoint(200,200, c=(255,255,255)),
HeliosPoint(200,0, c=(255,255,255)),
HeliosPoint(0,0, c=(255,255,255))])
a.DoFrame()
# while(1):
## a.newFrame(1000,[HeliosPoint(16000,16000)])
# a.newFrame(100,[HeliosPoint(16000-2500,16000),HeliosPoint(16000,16000),HeliosPoint(16000+2500,16000),HeliosPoint(16000,16000),HeliosPoint(16000,16000+2500),HeliosPoint(16000,16000),HeliosPoint(16000,16000-2500),HeliosPoint(16000,16000)])
# a.DoFrame()
# while(1):
# a.newFrame(1000,[HeliosPoint(0,200),
# HeliosPoint(200,200),
# HeliosPoint(200,0),
# HeliosPoint(0,0),
# ])
# a.DoFrame()

View file

@@ -1,253 +0,0 @@
# -*- coding: utf-8 -*-
"""
Example for using Helios DAC libraries in python (using C library with ctypes)
NB: If you haven't set up udev rules you need to use sudo to run the program for it to detect the DAC.
"""
from __future__ import annotations
import ctypes
import json
import math
from typing import Optional
import cv2
import numpy as np
def lerp(a: float, b: float, t: float) -> float:
"""Linear interpolate on the scale given by a to b, using t as the point on that scale.
Examples
--------
50 == lerp(0, 100, 0.5)
4.2 == lerp(1, 5, 0.8)
"""
return (1 - t) * a + t * b
class LaserFrame():
def __init__(self, paths: list[LaserPath]):
self.paths = paths
# def closest_path(cls, point, paths):
# distances = [min(p.last()-)]
# def optimise_paths_lazy(self, last_point = None):
# """Quick way to optimise order of paths
# last_point can be the ending point of previous frame.
# """
# ordered_paths = []
# if not last_point:
# ordered_paths.append(self.paths.pop(0))
# last_point = endpoint
# pass
def get_points_interpolated_by_distance(self, point_interval, last_point: Optional[LaserPoint] = None) -> list[LaserPoint]:
"""
Interpolate the gaps between paths (NOT THE PATHS THEMSELVES)
point_interval is the maximum interval at which a new point should be added
"""
points: list[LaserPoint] = []
for path in self.paths:
if last_point:
a = last_point
b = path.first()
dx = b.x - a.x
dy = b.y - a.y
distance = np.linalg.norm([dx,dy])
steps = int(distance // point_interval)
for step in range(steps+1): # have both 0 and 1 in the lerp for empty points
t = step/(steps+1)
x = int(lerp(a.x, b.x, t))
y = int(lerp(a.y, b.y, t))
points.append(LaserPoint(x,y, (0,0,0), 0, True))
# print('append', steps)
points.extend(path.points)
last_point = path.last()
return points
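# --- Hypothetical sketch (not in the original file) ----------------------------
# Illustrates what get_points_interpolated_by_distance() does: the jump from the
# end of one path to the start of the next is bridged with blanked points, at
# most `point_interval` units apart, so the galvos can travel the gap without drawing.
def _example_gap_interpolation():
    red, green = (255, 0, 0), (0, 255, 0)
    a = LaserPath([LaserPoint(0, 0, red), LaserPoint(100, 0, red)])
    b = LaserPath([LaserPoint(100, 300, green), LaserPoint(0, 300, green)])
    frame = LaserFrame([a, b])
    points = frame.get_points_interpolated_by_distance(point_interval=50)
    blanked = sum(1 for p in points if p.blank)
    # 4 drawn points plus a handful of blanked travel points across the 300-unit gap
    print(f"{len(points)} points total, {blanked} blanked")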
class LaserPath():
def __init__(self, points: Optional[list[LaserPoint]] = None):
# if len(points) < 1:
# raise RuntimeError("LaserPath should have some points")
# default to a fresh list to avoid the shared-mutable-default pitfall
self.points = points if points is not None else []
def last(self):
return self.points[-1]
def first(self):
return self.points[0]
class LaserPoint():
def __init__(self,x,y,c: Color = (255,0,0),i= 255,blank=False):
self.x = x
self.y = y
self.c = c
self._i = i
self.blank = blank
@property
def color(self):
if self.blank: return (0,0,0)
return self.c
@property
def i(self):
return 0 if self.blank else self._i
def circle_points(cx, cy, r, c: Color):
# r = 100
steps = r
pointlist: list[LaserPoint] = []
for i in range(steps):
x = int(cx + math.cos(i * (2*math.pi)/steps) * r)
y = int(cy + math.sin(i * (2*math.pi)/steps)* r)
pointlist.append(LaserPoint(x, y, c, blank=(i==(steps-1)or i==0)))
return pointlist
def cross_points(cx, cy, r, c: Color):
# r = 100
steps = r
pointlist: list[LaserPoint] = []
for i in range(steps):
x = int(cx)
y = int(cy + r - i * 2 * r/steps)
pointlist.append(LaserPoint(x, y, c, blank=(i==(steps-1)or i==0)))
path = LaserPath(pointlist)
pointlist: list[LaserPoint] = []
for i in range(steps):
y = int(cy)
x = int(cx + r - i * 2 * r/steps)
pointlist.append(LaserPoint(x, y, c, blank=(i==(steps-1)or i==0)))
path2 = LaserPath(pointlist)
return [path, path2]
Color = tuple[int, int, int]
#Define point structure
class HeliosPoint(ctypes.Structure):
#_pack_=1
_fields_ = [('x', ctypes.c_uint16),
('y', ctypes.c_uint16),
('r', ctypes.c_uint8),
('g', ctypes.c_uint8),
('b', ctypes.c_uint8),
('i', ctypes.c_uint8)]
#Load and initialize library
HeliosLib = ctypes.cdll.LoadLibrary("./libHeliosDacAPI.so")
numDevices = HeliosLib.OpenDevices()
print("Found ", numDevices, "Helios DACs")
# #Create sample frames
# frames = [0 for x in range(100)]
# frameType = HeliosPoint * 1000
# x = 0
# y = 0
# for i in range(100):
# y = round(i * 0xFFF / 100)
# # y = round(50*0xFFF/100)
# frames[i] = frameType()
# for j in range(1000):
# if (j < 500):
# x = round(j * 0xFFF / 500)
# offset = 0
# else:
# offset = 0
# x = round(0xFFF - ((j - 500) * 0xFFF / 500))
# # frames[i][j] = HeliosPoint(int(x),int(y+offset),0,(x%155),0,255)
# frames[i][j] = HeliosPoint(int(x),int(y+offset),0,100,0,255)
pct =0xfff/100
r=50
# TODO)) scriptje met sliders
paths = [
# LaserPath(circle_points(10*pct, 45*pct, r, (100,0,100))),
# *cross_points(10*pct, 45*pct, r, (100,0,100)), # magenta
*cross_points(13.7*pct, 38.9*pct, r, (100,0,100)), # magenta # punt 10
*cross_points(44.3*pct, 47.0*pct, r, (0,100,0)), # groen # punt 0
*cross_points(82.5*pct, 12.7*pct, r, (100,100,100)), # wit # punt 4
*cross_points(89*pct, 49*pct, r, (0,100,100)), # cyan # punt 2
*cross_points(36*pct, 81.7*pct, r, (100,100,0)), # geel # punt 7
]
calibration_points = [
(13.7*pct, 38.9*pct, 10,),
(44.3*pct, 47.0*pct, 0),
(82.5*pct, 12.7*pct, 4),
(89*pct, 49*pct, 2),
(36*pct, 81.7*pct, 7),
]
with open('/home/ruben/suspicion/DATASETS/hof3/irl_points.json') as fp:
irl_points = json.load(fp)
src_points = []
dst_points=[]
for x, y, index in calibration_points:
src_points.append(irl_points[index])
dst_points.append([x,y])
print(src_points)
H, status = cv2.findHomography(np.array(src_points), np.array(dst_points))
print("LASER HOMOGRAPHY MATRIX")
print(H)
dst_img_points = cv2.perspectiveTransform(np.array([[irl_points[1]]]), H)
print(dst_img_points)
paths.extend([
*cross_points(dst_img_points[0][0][0], dst_img_points[0][0][1], r, (100,100,0)), # geel # punt 7
])
frame = LaserFrame(paths)
pointlist = frame.get_points_interpolated_by_distance(3)
print(len(pointlist))
#Play frames on DAC
i=0
while True:
frameType = HeliosPoint * len(pointlist)
frame = frameType()
# print(len(pointlist), last_laser_point.x, last_laser_point.y)
for j, point in enumerate(pointlist):
frame[j] = HeliosPoint(point.x, point.y, point.color[0],point.color[1], point.color[2], point.i)
# Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
statusAttempts=0
while (statusAttempts < 512 and HeliosLib.GetStatus(0) != 1):
statusAttempts += 1
HeliosLib.WriteFrame(0, 50000, 0, ctypes.pointer(frame), len(pointlist))
# for i in range(250):
# i+=1
# for j in range(numDevices):
# statusAttempts = 0
# # Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
# while (statusAttempts < 512 and HeliosLib.GetStatus(j) != 1):
# statusAttempts += 1
# HeliosLib.WriteFrame(j, 50000, 0, ctypes.pointer(frames[i % 100]), 1000) #Send the frame
HeliosLib.CloseDevices()

View file

@@ -1,196 +0,0 @@
# part of heliospy, see helios.py
HERSHEY_HEIGHT = 28
HERSHEY_WIDTH = 28
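# Layout note: each glyph below is a list of (x, y) pairs. The first pair appears to
# carry glyph metadata (roughly a vertex count and the advance width) rather than a
# coordinate, and (-1, -1) marks a pen-up / stroke break; generateText() in helios.py
# treats (-1, -1) as the signal to blank the beam between strokes.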
HERSHEY_FONT = [
#Ascii 32
[(0,16),(-1, -1)],
#Ascii 33
[(8,10),(5, 21),(5, 7),(-1, -1),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
#Ascii 34
[(5,16),(4, 21),(4, 14),(-1, -1),(12, 21),(12, 14),(-1, -1)],
#Ascii 35
[(11,21),(11, 25),(4, -7),(-1, -1),(17, 25),(10, -7),(-1, -1),(4, 12),(18, 12),(-1, -1),(3, 6),(17, 6),(-1, -1)],
#Ascii 36
[(26,20),(8, 25),(8, -4),(-1, -1),(12, 25),(12, -4),(-1, -1),(17, 18),(15, 20),(12, 21),(8, 21),(5, 20),(3, 18),(3, 16),(4, 14),(5, 13),(7, 12),(13, 10),(15, 9),(16, 8),(17, 6),(17, 3),(15, 1),(12, 0),(8, 0),(5, 1),(3, 3),(-1, -1)],
#Ascii 37
[(31,24),(21, 21),(3, 0),(-1, -1),(8, 21),(10, 19),(10, 17),(9, 15),(7, 14),(5, 14),(3, 16),(3, 18),(4, 20),(6, 21),(8, 21),(10, 20),(13, 19),(16, 19),(19, 20),(21, 21),(-1, -1),(17, 7),(15, 6),(14, 4),(14, 2),(16, 0),(18, 0),(20, 1),(21, 3),(21, 5),(19, 7),(17, 7),(-1, -1)],
#Ascii 38
[(34,26),(23, 12),(23, 13),(22, 14),(21, 14),(20, 13),(19, 11),(17, 6),(15, 3),(13, 1),(11, 0),(7, 0),(5, 1),(4, 2),(3, 4),(3, 6),(4, 8),(5, 9),(12, 13),(13, 14),(14, 16),(14, 18),(13, 20),(11, 21),(9, 20),(8, 18),(8, 16),(9, 13),(11, 10),(16, 3),(18, 1),(20, 0),(22, 0),(23, 1),(23, 2),(-1, -1)],
#Ascii 39
[(7,10),(5, 19),(4, 20),(5, 21),(6, 20),(6, 18),(5, 16),(4, 15),(-1, -1)],
#Ascii 40
[(10,14),(11, 25),(9, 23),(7, 20),(5, 16),(4, 11),(4, 7),(5, 2),(7, -2),(9, -5),(11, -7),(-1, -1)],
#Ascii 41
[(10,14),(3, 25),(5, 23),(7, 20),(9, 16),(10, 11),(10, 7),(9, 2),(7, -2),(5, -5),(3, -7),(-1, -1)],
#Ascii 42
[(8,16),(8, 21),(8, 9),(-1, -1),(3, 18),(13, 12),(-1, -1),(13, 18),(3, 12),(-1, -1)],
#Ascii 43
[(5,26),(13, 18),(13, 0),(-1, -1),(4, 9),(22, 9),(-1, -1)],
#Ascii 44
[(8,10),(6, 1),(5, 0),(4, 1),(5, 2),(6, 1),(6, -1),(5, -3),(4, -4),(-1, -1)],
#Ascii 45
[(2,26),(4, 9),(22, 9),(-1, -1)],
#Ascii 46
[(5,10),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
#Ascii 47
[(2,22),(20, 25),(2, -7),(-1, -1)],
#Ascii 48
[(17,20),(9, 21),(6, 20),(4, 17),(3, 12),(3, 9),(4, 4),(6, 1),(9, 0),(11, 0),(14, 1),(16, 4),(17, 9),(17, 12),(16, 17),(14, 20),(11, 21),(9, 21),(-1, -1)],
#Ascii 49
[(4,20),(6, 17),(8, 18),(11, 21),(11, 0),(-1, -1)],
#Ascii 50
[(14,20),(4, 16),(4, 17),(5, 19),(6, 20),(8, 21),(12, 21),(14, 20),(15, 19),(16, 17),(16, 15),(15, 13),(13, 10),(3, 0),(17, 0),(-1, -1)],
#Ascii 51
[(15,20),(5, 21),(16, 21),(10, 13),(13, 13),(15, 12),(16, 11),(17, 8),(17, 6),(16, 3),(14, 1),(11, 0),(8, 0),(5, 1),(4, 2),(3, 4),(-1, -1)],
#Ascii 52
[(6,20),(13, 21),(3, 7),(18, 7),(-1, -1),(13, 21),(13, 0),(-1, -1)],
#Ascii 53
[(17,20),(15, 21),(5, 21),(4, 12),(5, 13),(8, 14),(11, 14),(14, 13),(16, 11),(17, 8),(17, 6),(16, 3),(14, 1),(11, 0),(8, 0),(5, 1),(4, 2),(3, 4),(-1, -1)],
#Ascii 54
[(23,20),(16, 18),(15, 20),(12, 21),(10, 21),(7, 20),(5, 17),(4, 12),(4, 7),(5, 3),(7, 1),(10, 0),(11, 0),(14, 1),(16, 3),(17, 6),(17, 7),(16, 10),(14, 12),(11, 13),(10, 13),(7, 12),(5, 10),(4, 7),(-1, -1)],
#Ascii 55
[(5,20),(17, 21),(7, 0),(-1, -1),(3, 21),(17, 21),(-1, -1)],
#Ascii 56
[(29,20),(8, 21),(5, 20),(4, 18),(4, 16),(5, 14),(7, 13),(11, 12),(14, 11),(16, 9),(17, 7),(17, 4),(16, 2),(15, 1),(12, 0),(8, 0),(5, 1),(4, 2),(3, 4),(3, 7),(4, 9),(6, 11),(9, 12),(13, 13),(15, 14),(16, 16),(16, 18),(15, 20),(12, 21),(8, 21),(-1, -1)],
#Ascii 57
[(23,20),(16, 14),(15, 11),(13, 9),(10, 8),(9, 8),(6, 9),(4, 11),(3, 14),(3, 15),(4, 18),(6, 20),(9, 21),(10, 21),(13, 20),(15, 18),(16, 14),(16, 9),(15, 4),(13, 1),(10, 0),(8, 0),(5, 1),(4, 3),(-1, -1)],
#Ascii 58
[(11,10),(5, 14),(4, 13),(5, 12),(6, 13),(5, 14),(-1, -1),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
#Ascii 59
[(14,10),(5, 14),(4, 13),(5, 12),(6, 13),(5, 14),(-1, -1),(6, 1),(5, 0),(4, 1),(5, 2),(6, 1),(6, -1),(5, -3),(4, -4),(-1, -1)],
#Ascii 60
[(3,24),(20, 18),(4, 9),(20, 0),(-1, -1)],
#Ascii 61
[(5,26),(4, 12),(22, 12),(-1, -1),(4, 6),(22, 6),(-1, -1)],
#Ascii 62
[(3,24),(4, 18),(20, 9),(4, 0),(-1, -1)],
#Ascii 63
[(20,18),(3, 16),(3, 17),(4, 19),(5, 20),(7, 21),(11, 21),(13, 20),(14, 19),(15, 17),(15, 15),(14, 13),(13, 12),(9, 10),(9, 7),(-1, -1),(9, 2),(8, 1),(9, 0),(10, 1),(9, 2),(-1, -1)],
#Ascii 64
[(55,27),(18, 13),(17, 15),(15, 16),(12, 16),(10, 15),(9, 14),(8, 11),(8, 8),(9, 6),(11, 5),(14, 5),(16, 6),(17, 8),(-1, -1),(12, 16),(10, 14),(9, 11),(9, 8),(10, 6),(11, 5),(-1, -1),(18, 16),(17, 8),(17, 6),(19, 5),(21, 5),(23, 7),(24, 10),(24, 12),(23, 15),(22, 17),(20, 19),(18, 20),(15, 21),(12, 21),(9, 20),(7, 19),(5, 17),(4, 15),(3, 12),(3, 9),(4, 6),(5, 4),(7, 2),(9, 1),(12, 0),(15, 0),(18, 1),(20, 2),(21, 3),(-1, -1),(19, 16),(18, 8),(18, 6),(19, 5),(8, 18),(-1,-1)],
#Ascii 65
[(8,18), (9,21), (1, 0),(-1,-1), (9,21),(17, 0),(-1,-1),( 4, 7),(14, 7),(-1,-1)],
#Ascii 66
[(23,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 15),(17, 13),(16, 12),(13, 11),(-1, -1),(4, 11),(13, 11),(16, 10),(17, 9),(18, 7),(18, 4),(17, 2),(16, 1),(13, 0),(4, 0),(-1, -1)],
#Ascii 67
[(18,21),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(-1, -1)],
#Ascii 68
[(15,21),(4, 21),(4, 0),(-1, -1),(4, 21),(11, 21),(14, 20),(16, 18),(17, 16),(18, 13),(18, 8),(17, 5),(16, 3),(14, 1),(11, 0),(4, 0),(-1, -1)],
#Ascii 69
[(11,19),(4, 21),(4, 0),(-1, -1),(4, 21),(17, 21),(-1, -1),(4, 11),(12, 11),(-1, -1),(4, 0),(17, 0),(-1, -1)],
#Ascii 70
[(8,18),(4, 21),(4, 0),(-1, -1),(4, 21),(17, 21),(-1, -1),(4, 11),(12, 11),(-1, -1)],
#Ascii 71
[(22,21),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(18, 8),(-1, -1),(13, 8),(18, 8),(-1, -1)],
#Ascii 72
[(8,22),(4, 21),(4, 0),(-1, -1),(18, 21),(18, 0),(-1, -1),(4, 11),(18, 11),(-1, -1)],
#Ascii 73
[(2,8),(4, 21),(4, 0),(-1, -1)],
#Ascii 74
[(10,16),(12, 21),(12, 5),(11, 2),(10, 1),(8, 0),(6, 0),(4, 1),(3, 2),(2, 5),(2, 7),(-1, -1)],
#Ascii 75
[(8,21),(4, 21),(4, 0),(-1, -1),(18, 21),(4, 7),(-1, -1),(9, 12),(18, 0),(-1, -1)],
#Ascii 76
[(5,17),(4, 21),(4, 0),(-1, -1),(4, 0),(16, 0),(-1, -1)],
#Ascii 77
[(11,24),(4, 21),(4, 0),(-1, -1),(4, 21),(12, 0),(-1, -1),(20, 21),(12, 0),(-1, -1),(20, 21),(20, 0),(-1, -1)],
#Ascii 78
[(8,22),(4, 21),(4, 0),(-1, -1),(4, 21),(18, 0),(-1, -1),(18, 21),(18, 0),(-1, -1)],
#Ascii 79
[(21,22),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(19, 8),(19, 13),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(-1, -1)],
#Ascii 80
[(13,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 14),(17, 12),(16, 11),(13, 10),(4, 10),(-1, -1)],
#Ascii 81
[(24,22),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(19, 8),(19, 13),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(-1, -1),(12, 4),(18, -2),(-1, -1)],
#Ascii 82
[(16,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 15),(17, 13),(16, 12),(13, 11),(4, 11),(-1, -1),(11, 11),(18, 0),(-1, -1)],
#Ascii 83
[(20,20),(17, 18),(15, 20),(12, 21),(8, 21),(5, 20),(3, 18),(3, 16),(4, 14),(5, 13),(7, 12),(13, 10),(15, 9),(16, 8),(17, 6),(17, 3),(15, 1),(12, 0),(8, 0),(5, 1),(3, 3),(-1, -1)],
#Ascii 84
[(5,16),(8, 21),(8, 0),(-1, -1),(1, 21),(15, 21),(-1, -1)],
#Ascii 85
[(10,22),(4, 21),(4, 6),(5, 3),(7, 1),(10, 0),(12, 0),(15, 1),(17, 3),(18, 6),(18, 21),(-1, -1)],
#Ascii 86
[(5,18),(1, 21),(9, 0),(-1, -1),(17, 21),(9, 0),(-1, -1)],
#Ascii 87
[(11,24),(2, 21),(7, 0),(-1, -1),(12, 21),(7, 0),(-1, -1),(12, 21),(17, 0),(-1, -1),(22, 21),(17, 0),(-1, -1)],
#Ascii 88
[(5,20),(3, 21),(17, 0),(-1, -1),(17, 21),(3, 0),(-1, -1)],
#Ascii 89
[(6,18),(1, 21),(9, 11),(9, 0),(-1, -1),(17, 21),(9, 11),(-1, -1)],
#Ascii 90
[(8,20),(17, 21),(3, 0),(-1, -1),(3, 21),(17, 21),(-1, -1),(3, 0),(17, 0),(-1, -1)],
#Ascii 91
[(11,14),(4, 25),(4, -7),(-1, -1),(5, 25),(5, -7),(-1, -1),(4, 25),(11, 25),(-1, -1),(4, -7),(11, -7),(-1, -1)],
#Ascii 92
[(2,14),(0, 21),(14, -3),(-1, -1)],
#Ascii 93
[(11,14),(9, 25),(9, -7),(-1, -1),(10, 25),(10, -7),(-1, -1),(3, 25),(10, 25),(-1, -1),(3, -7),(10, -7),(-1, -1)],
#Ascii 94
[(10,16),(6, 15),(8, 18),(10, 15),(-1, -1),(3, 12),(8, 17),(13, 12),(-1, -1),(8, 17),(8, 0),(-1, -1)],
#Ascii 95
[(2,16),(0, -2),(16, -2),(-1, -1)],
#Ascii 96
[(7,10),(6, 21),(5, 20),(4, 18),(4, 16),(5, 15),(6, 16),(5, 17),(-1, -1)],
#Ascii 97
[(17,19),(15, 14),(15, 0),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
#Ascii 98
[(17,19),(4, 21),(4, 0),(-1, -1),(4, 11),(6, 13),(8, 14),(11, 14),(13, 13),(15, 11),(16, 8),(16, 6),(15, 3),(13, 1),(11, 0),(8, 0),(6, 1),(4, 3),(-1, -1)],
#Ascii 99
[(14,18),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
#Ascii 100
[(17,19),(15, 21),(15, 0),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
#Ascii 101
[(17,18),(3, 8),(15, 8),(15, 10),(14, 12),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
#Ascii 102
[(8,12),(10, 21),(8, 21),(6, 20),(5, 17),(5, 0),(-1, -1),(2, 14),(9, 14),(-1, -1)],
#Ascii 103
[(22,19),(15, 14),(15, -2),(14, -5),(13, -6),(11, -7),(8, -7),(6, -6),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
#Ascii 104
[(10,19),(4, 21),(4, 0),(-1, -1),(4, 10),(7, 13),(9, 14),(12, 14),(14, 13),(15, 10),(15, 0),(-1, -1)],
#Ascii 105
[(8,8),(3, 21),(4, 20),(5, 21),(4, 22),(3, 21),(-1, -1),(4, 14),(4, 0),(-1, -1)],
#Ascii 106
[(11,10),(5, 21),(6, 20),(7, 21),(6, 22),(5, 21),(-1, -1),(6, 14),(6, -3),(5, -6),(3, -7),(1, -7),(-1, -1)],
#Ascii 107
[(8,17),(4, 21),(4, 0),(-1, -1),(14, 14),(4, 4),(-1, -1),(8, 8),(15, 0),(-1, -1)],
#Ascii 108
[(2,8),(4, 21),(4, 0),(-1, -1),(18, 30),(-1,-1)],
#Ascii 109
[(18,30), (4,14),(4, 0),(-1,-1),(4,10),(7,13),(9,14),(12,14),(14,13),(15,10),(15, 0),(-1,-1),(15,10),(18,13),(20,14),(23,14),(25,13),(26,10),(26, 0),(-1,-1)],
#Ascii 110
[(10,19),(4, 14),(4, 0),(-1, -1),(4, 10),(7, 13),(9, 14),(12, 14),(14, 13),(15, 10),(15, 0),(-1, -1),(17, 19),(-1,-1)],
#Ascii 111
[(17,19),(8,14), (6,13), (4,11), (3, 8), (3, 6), (4, 3), (6, 1), (8, 0),(11, 0),(13, 1),(15, 3),(16,6),(16, 8),(15,11),(13,13),(11,14), (8,14), (-1,-1),(-1,-1)],
#Ascii 112
[(17,19),(4, 14),(4, -7),(-1, -1),(4, 11),(6, 13),(8, 14),(11, 14),(13, 13),(15, 11),(16, 8),(16, 6),(15, 3),(13, 1),(11, 0),(8, 0),(6, 1),(4, 3),(-1, -1),(17, 19),(-1,-1)],
#Ascii 113
[(17,19), (15,14),(15,-7),(-1,-1),(15,11),(13,13),(11,14), (8,14), (6,13), (4,11), (3, 8), (3, 6), (4,3), (6, 1), (8, 0),(11, 0),(13, 1),(15, 3), (-1,-1), (-1,-1)],
#Ascii 114
[(8,13),(4, 14),(4, 0),(-1, -1),(4, 8),(5, 11),(7, 13),(9, 14),(12, 14),(-1, -1)],
#Ascii 115
[(17,17),(14, 11),(13, 13),(10, 14),(7, 14),(4, 13),(3, 11),(4, 9),(6, 8),(11, 7),(13, 6),(14, 4),(14, 3),(13, 1),(10, 0),(7, 0),(4, 1),(3, 3),(-1, -1)],
#Ascii 116
[(8,12),(5, 21),(5, 4),(6, 1),(8, 0),(10, 0),(-1, -1),(2, 14),(9, 14),(-1, -1)],
#Ascii 117
[(10,19),(4, 14),(4, 4),(5, 1),(7, 0),(10, 0),(12, 1),(15, 4),(-1, -1),(15, 14),(15, 0),(-1, -1)],
#Ascii 118
[(5,16),(2, 14),(8, 0),(-1, -1),(14, 14),(8, 0),(-1, -1)],
#Ascii 119
[(11,22),(3, 14),(7, 0),(-1, -1),(11, 14),(7, 0),(-1, -1),(11, 14),(15, 0),(-1, -1),(19, 14),(15, 0),(-1, -1)],
#Ascii 120
[(5,17),(3, 14),(14, 0),(-1, -1),(14, 14),(3, 0),(-1, -1)],
#Ascii 121
[(9,16),(2, 14),(8, 0),(-1, -1),(14, 14),(8, 0),(6, -4),(4, -6),(2, -7),(1, -7),(-1, -1)],
#Ascii 122
[(8,17),(14, 14),(3, 0),(-1, -1),(3, 14),(14, 14),(-1, -1),(3, 0),(14, 0),(-1, -1)],
#Ascii 123
[(39,14),(9, 25),(7, 24),(6, 23),(5, 21),(5, 19),(6, 17),(7, 16),(8, 14),(8, 12),(6, 10),(-1, -1),(7, 24),(6, 22),(6, 20),(7, 18),(8, 17),(9, 15),(9, 13),(8, 11),(4, 9),(8, 7),(9, 5),(9, 3),(8, 1),(7, 0),(6, -2),(6, -4),(7, -6),(-1, -1),(6, 8),(8, 6),(8, 4),(7, 2),(6, 1),(5, -1),(5, -3),(6, -5),(7, -6),(9, -7),(-1, -1)],
#Ascii 124
[(2,8),(4, 25),(4, -7),(-1, -1)],
#Ascii 125
[(39,14),(5, 25),(7, 24),(8, 23),(9, 21),(9, 19),(8, 17),(7, 16),(6, 14),(6, 12),(8, 10),(-1, -1),(7, 24),(8, 22),(8, 20),(7, 18),(6, 17),(5, 15),(5, 13),(6, 11),(10, 9),(6, 7),(5, 5),(5, 3),(6, 1),(7, 0),(8, -2),(8, -4),(7, -6),(-1, -1),(8, 8),(6, 6),(6, 4),(7, 2),(8, 1),(9, -1),(9, -3),(8, -5),(7, -6),(5, -7),(-1, -1)],
#Ascii 126
[(23,24),(3, 6),(3, 8),(4, 11),(6, 12),(8, 12),(10, 11),(14, 8),(16, 7),(18, 7),(20, 8),(21, 10),(-1, -1),(3, 8),(4, 10),(6, 11),(8, 11),(10, 10),(14, 7),(16, 6),(18, 6),(20, 7),(21, 10),(21, 12),(-1, -1)]]

View file

@@ -1,292 +0,0 @@
from argparse import ArgumentParser
import enum
import json
from pathlib import Path
import time
from typing import Optional
import cv2
import numpy as np
from trap.base import DataclassJSONEncoder, DistortedCamera, Frame
from trap.lines import CoordinateSpace, RenderableLine, RenderableLines, RenderablePoint, RenderablePosition, SrgbaColor, cross_points
from trap.node import Node
from trap.stage import Coordinate
class Modes(enum.Enum):
POINTS = 1
TEST_LINE = 2
class LaserCalibration(Node):
"""
A calibrated camera can be used to reverse-map the points of the laser to world coordinates.
Note, it publishes on the address of the stage node, so they cannot run at the same time.
1. Draw points with the laser (use 1-9 to create/select, then position them with arrow keys)
2. Use cursor on camera stream to create an image point for.
- Locate nearby point to select and drag
3. Use image coordinate of point, undistort, homograph, gives world coordinate.
4. Perform homography on world coordinates + laser coordinates
"""
def setup(self):
# self.scenarios: List[DrawnScenario] = []
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.laser_sock = self.pub(self.config.zmq_stage_addr)
self.camera: Optional[DistortedCamera] = None
self._selected_point = None
self._is_dragging = False
self.laser_points = {}
self.image_points = {}
self.mode = Modes.POINTS
self.H = None
self.img_size = (1920,1080)
self.frame_img_factor = (1,1)
if self.config.calibfile.exists():
with self.config.calibfile.open('r') as fp:
calibdata = json.load(fp)
self.laser_points = calibdata['laser_points']
self.image_points = calibdata['image_points']
self.H = np.array(calibdata['H']) # stored as a list in JSON; keep it as an ndarray so save() can call .tolist()
def run(self):
cv2.namedWindow("laser_calib", cv2.WINDOW_NORMAL)
# https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
# cv2.moveWindow("laser_calib", 0, -1)
cv2.setMouseCallback('laser_calib',self.mouse_event)
cv2.setWindowProperty("laser_calib",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
# arrow keys as returned by cv2.waitKey: left=81, up=82, right=83, down=84
frame = None
while self.run_loop_capped_fps(60):
if self.frame_sock.poll(0):
frame: Frame = self.frame_sock.recv_pyobj()
if not self.camera:
self.camera = frame.camera
if frame is None:
continue
self.frame_img_factor = frame.img.shape[1] / self.img_size[0], frame.img.shape[0] / self.img_size[1]
img = frame.img
img = cv2.resize(img, self.img_size)
cv2.putText(img, 'press 1-0 to create/edit points', (10,20), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,255))
if len(self.laser_points) < 4:
cv2.putText(img, 'add points to calculate homography', (10,40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,255))
else:
cv2.putText(img, 'press c to calculate homography', (10,40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,0))
cv2.putText(img, str(self.config.calibfile), (10,self.img_size[1]-30), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,0))
if self._selected_point:
color = (0,255,255)
cv2.putText(img, f'selected {self._selected_point}', (10,60), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
cv2.putText(img, 'press d to delete', (10,80), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
cv2.putText(img, 'use arrows to position laser for this point', (10,100), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
target = self.camera.points_img_to_world([self.image_points[self._selected_point]])[0].tolist()
target = round(target[0], 2), round(target[1], 2)
cv2.putText(img, f'map {self.laser_points[self._selected_point]} to {target} ({self.image_points[self._selected_point]})', (10,120), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
for k, coord in self.image_points.items():
color = (0,0,255) if self._selected_point == k else (255,0,0)
coord = int(coord[0] / self.frame_img_factor[0]), int(coord[1] / self.frame_img_factor[1])
cv2.circle(img, coord, 4, color, thickness=2)
cv2.putText(img, str(k), (coord[0]+10, coord[1]), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
key = cv2.waitKey(5) # or for arrows: full_key_code = cv2.waitKeyEx(0)
self.key_event(key)
# nr_keys = [ord(i) for i in range(10)] # select/add point
# cv2.
cv2.imshow('laser_calib', img)
lines = []
if self.mode == Modes.TEST_LINE:
lines.append(RenderableLine([
RenderablePoint((i,time.time()%18), SrgbaColor(0,1,0,1)) for i in range(-15, 40)
]))
# the test line is sent in world space, so it exercises the full world-to-laser mapping
rl = RenderableLines(lines, CoordinateSpace.WORLD)
self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)
else:
if self._selected_point:
point = self.laser_points[self._selected_point]
lines.extend(cross_points(point[0], point[1], .5, SrgbaColor(0,1,0,1)))
# render in laser space
rl = RenderableLines(lines, CoordinateSpace.LASER)
self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)
# print(json.dumps(rl, cls=DataclassJSONEncoder))
def key_event(self, key: int):
if key < 0:
return
if key == ord('q'):
exit()
if key == 27: #esc
self._selected_point = None
if key == ord('c'):
self.calculate_homography()
self.save()
if key == ord('d') and self._selected_point:
self.delete_point(self._selected_point)
if key == ord('t'):
self.mode = Modes.TEST_LINE if self.mode == Modes.POINTS else Modes.POINTS
print(self.mode)
# arrow keys as returned by cv2.waitKey: left=81, up=82, right=83, down=84
if self._selected_point and key in [81, 84, 82, 83,
ord('h'), ord('j'), ord('k'), ord('l'),
ord('H'), ord('J'), ord('K'), ord('L'),
]:
diff = [0,0]
if key in [81, ord('h')]:
diff[0] -= 1
if key == ord('H'):
diff[0] -= 10
if key in [83, ord('l')]:
diff[0] += 1
if key == ord('L'):
diff[0] += 10
if key in [82, ord('k')]:
diff[1] += 1
if key == ord('K'):
diff[1] += 10
if key in [84, ord('j')]:
diff[1] -= 1
if key == ord('J'):
diff[1] -= 10
self.laser_points[self._selected_point] = (
self.laser_points[self._selected_point][0] + diff[0],
self.laser_points[self._selected_point][1] + diff[1],
)
nr_keys = [ord(str(i)) for i in range(10)]
if key in nr_keys:
select = str(nr_keys.index(key))
self.create_or_select(select)
def mouse_event(self, event,x,y,flags,param):
x *= self.frame_img_factor[0]
y *= self.frame_img_factor[1]
if event == cv2.EVENT_MOUSEMOVE:
if not self._is_dragging or not self._selected_point:
return
self.image_points[self._selected_point] = (x, y)
if event == cv2.EVENT_LBUTTONDOWN:
# select or create
self._selected_point = None
for i, p in self.image_points.items():
d = (p[0]-x)**2 + (p[1]-y)**2
if d < 30:
self._selected_point = i
break
if self._selected_point is None:
self._selected_point = self.new_point((x,y), None)
self._is_dragging = True
if event == cv2.EVENT_LBUTTONUP:
self._is_dragging = False
# ... point stays selected to tweak laser
def create_or_select(self, nr: str):
if nr not in self.image_points:
self.new_point(None, None, nr)
self._selected_point = nr
return nr
def new_point(self, img_coord: Optional[Coordinate], laser_coord: Optional[Coordinate], nr: Optional[str]=None):
if nr:
new_nr = nr
else:
new_nr = None
for i in range(100):
k = str(i)
if k not in self.image_points:
new_nr = k
break
if not new_nr:
new_nr = 0 # cover unlikely case
self.image_points[new_nr] = img_coord or (100,100)
self.laser_points[new_nr] = laser_coord or (100,100)
return new_nr
def delete_point(self, point: str):
del self.image_points[point]
del self.laser_points[point]
self._selected_point = None
def calculate_homography(self):
if len(self.image_points) < 4:
return
world_points = self.camera.points_img_to_world(list(self.image_points.values()))
laser_points = np.array(list(self.laser_points.values()))
print('from', world_points)
print('to', laser_points)
self.H, status = cv2.findHomography(world_points, laser_points)
print('Found')
print(self.H)
def save(self):
with self.config.calibfile.open('w') as fp:
json.dump({
'laser_points': self.laser_points,
'image_points': self.image_points,
'H': self.H.tolist()
}, fp)
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
argparser.add_argument('--calibfile',
help='specify file to save & load points with',
type=Path,
default=Path("./laser_calib.json"))
return argparser
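# --- Hypothetical end-to-end sketch (not part of the original file) -----------
# Condenses steps 3 and 4 of the class docstring: clicked image points are lifted
# to world coordinates through the calibrated camera, a world-to-laser homography
# is fitted, and any world coordinate can then be mapped into laser space. The
# `camera` argument and its points_img_to_world() helper are assumptions based on
# how calculate_homography() above uses them.
def _example_world_to_laser(camera, image_points: dict, laser_points: dict):
    import cv2
    import numpy as np

    # image -> world for every calibration point
    world_pts = np.array(camera.points_img_to_world(list(image_points.values())), dtype=np.float64)
    laser_pts = np.array(list(laser_points.values()), dtype=np.float64)
    # fit the world -> laser homography (needs at least 4 correspondences)
    H, _ = cv2.findHomography(world_pts, laser_pts)
    # map an arbitrary world coordinate into laser coordinates
    world_xy = np.array([[[1.5, 3.0]]], dtype=np.float64)
    laser_xy = cv2.perspectiveTransform(world_xy, H)[0][0]
    return H, laser_xy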

View file

@@ -1,693 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
import time
import ffmpeg
from argparse import Namespace
import datetime
import logging
from multiprocessing import Event
from multiprocessing.synchronize import Event as BaseEvent
import cv2
import numpy as np
import json
import pyglet
import pyglet.event
import zmq
import tempfile
from pathlib import Path
import shutil
import math
from typing import Dict, Iterable, Optional
from pyglet import shapes
from PIL import Image
# from trap.scenarios import TrackScenario
from trap.counter import CounterSender
from trap.frame_emitter import DetectionState, Frame, Track, Camera
# from trap.helios import HeliosDAC, HeliosPoint
from trap.preview_renderer import PROJECTION_MAP, DrawnTrack, FrameWriter
from trap.tools import draw_track, draw_track_predictions, draw_track_projected, draw_trackjectron_history, drawntrack_predictions_to_lines, to_point, track_predictions_to_lines
from trap.utils import convert_world_points_to_img_points, convert_world_space_to_img_space, lerp
logger = logging.getLogger("trap.laser_renderer")
import ctypes
class LaserFrame():
def __init__(self, paths: list[LaserPath]):
self.paths = paths
def point_count(self):
return sum([len(p.points) for p in self.paths])
# def closest_path(cls, point, paths):
# distances = [min(p.last()-)]
# def optimise_paths_lazy(self, last_point = None):
# """Quick way to optimise order of paths
# last_point can be the ending point of previous frame.
# """
# ordered_paths = []
# if not last_point:
# ordered_paths.append(self.paths.pop(0))
# last_point = endpoint
# pass
def as_cropped_to_projector(self):
paths = []
for path in self.paths:
p = path.as_cropped_to_projector()
if len(p.points):
paths.append(p)
return LaserFrame(paths)
def get_points_interpolated_by_distance(self, point_interval, last_point: Optional[LaserPoint] = None) -> list[LaserPoint]:
"""
Interpolate the gaps between paths (NOT THE PATHS THEMSELVES)
point_interval is the maximum interval at which a new point should be added
"""
points: list[LaserPoint] = []
for path in self.paths:
if last_point:
a = last_point
b = path.first()
dx = b.x - a.x
dy = b.y - a.y
distance = np.linalg.norm([dx,dy])
steps = int(distance // point_interval)
for step in range(steps+1): # have both 0 and 1 in the lerp for empty points
t = step/(steps+1)
t = 1 # just asap to starting point of next shape
x = int(lerp(a.x, b.x, t))
y = int(lerp(a.y, b.y, t))
points.append(LaserPoint(x,y, (0,0,0), 0, True))
# print('append', steps)
points.extend(path.points)
last_point = path.last()
return points
class LaserPath():
def __init__(self, points: Optional[list[LaserPoint]] = None):
# if len(points) < 1:
# raise RuntimeError("LaserPath should have some points")
# default to a fresh list to avoid the shared-mutable-default pitfall
self.points = points if points is not None else []
def last(self):
return self.points[-1]
def first(self):
return self.points[0]
def as_array(self):
return np.array([[p.x, p.y] for p in self.points])
def as_cropped_to_projector(self):
"""Make sure all points fall within range of laser"""
points = [p for p in self.points if p.x >= 0 and p.y >= 0 and p.x < 0xFFF and p.y < 0xFFF ]
return LaserPath(points)
def simplyfied_path(self, start_v= 10., max_v= 20., a = 2):
"""walk over the path with specific velocity,
continuously accelerate (a) until max_v is reached
place point at each step
(see also tools.transition_path_points() )
"""
if len(self.points) < 1:
return self.points
path = self.as_array()
# new_path = np.array([])
lengths = np.sqrt(np.sum(np.diff(path, axis=0)**2, axis=1))
cum_lenghts = np.cumsum(lengths)
# distance = cum_lenghts[-1] * t
# ts = np.concatenate((np.array([0.]), cum_lenghts / cum_lenghts[-1]))
# print(cum_lenghts[-1])
# DRAW_SPEED = 35 # fixed speed (independent of lenght) TODO)) make variable
# ts = np.concatenate((np.array([0.]), cum_lenghts / DRAW_SPEED))
new_path = [path[0]]
position = 0
v = start_v
next_pos = position + v
for seg_start, seg_end, pos in zip(path[:-1], path[1:], cum_lenghts):
# TODO)) unfinished: once `pos` passes `next_pos`, interpolate a point on this
# segment (a complete stand-alone version follows after this class)
if pos < (next_pos):
continue
v = min(v + a, max_v)
next_pos = position + v
# for a, b, t_a, t_b in zip(path[:-1], path[1:], ts[:-1], ts[1:]):
# if t_b < t:
# new_path.append(b)
# continue
# # interpolate
# relative_t = inv_lerp(t_a, t_b, t)
# x = lerp(a[0], b[0], relative_t)
# y = lerp(a[1], b[1], relative_t)
# new_path.append([x,y])
# break
# return np.array(new_path)
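# --- Hypothetical sketch (not part of the original file) ----------------------
# A compact, working version of what the unfinished simplyfied_path() above
# describes: walk along the polyline with a velocity that accelerates from
# start_v towards max_v, dropping a point every `v` units of arc length.
# Pure numpy; independent of the classes in this module.
def _example_velocity_resample(path_xy, start_v=10.0, max_v=20.0, accel=2.0):
    import numpy as np

    path = np.asarray(path_xy, dtype=float)
    seg_lengths = np.sqrt(np.sum(np.diff(path, axis=0) ** 2, axis=1))
    cum_lengths = np.concatenate(([0.0], np.cumsum(seg_lengths)))
    total = cum_lengths[-1]

    new_path = [path[0]]
    position, v = 0.0, start_v
    while position + v < total:
        position += v
        v = min(v + accel, max_v)
        # locate the segment containing `position` and interpolate linearly on it
        i = int(np.searchsorted(cum_lengths, position, side="right")) - 1
        t = (position - cum_lengths[i]) / max(seg_lengths[i], 1e-9)
        new_path.append(path[i] * (1 - t) + path[i + 1] * t)
    new_path.append(path[-1])
    return np.array(new_path)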
class LaserPoint():
def __init__(self,x,y,c: Color = (255,0,0),i= 255,blank=False):
self.x = x
self.y = y
self.c = c
self._i = i
self.blank = blank
@property
def color(self):
if self.blank: return (0,0,0)
return self.c
@property
def i(self):
return 0 if self.blank else self._i
#Define point structure
class CHeliosPoint(ctypes.Structure):
#_pack_=1
_fields_ = [('x', ctypes.c_uint16),
('y', ctypes.c_uint16),
('r', ctypes.c_uint8),
('g', ctypes.c_uint8),
('b', ctypes.c_uint8),
('i', ctypes.c_uint8)]
class LaserRenderer:
def __init__(self, config: Namespace, is_running: BaseEvent):
self.config = config
self.is_running = is_running
context = zmq.Context()
self.prediction_sock = context.socket(zmq.SUB)
self.prediction_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
# self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
self.prediction_sock.connect(config.zmq_prediction_addr)
self.tracker_sock = context.socket(zmq.SUB)
self.tracker_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.tracker_sock.setsockopt(zmq.SUBSCRIBE, b'')
self.tracker_sock.connect(config.zmq_trajectory_addr)
self.H = self.config.H
self.inv_H = np.linalg.pinv(self.H)
# TODO: get FPS from frame_emitter
# self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
self.fps = 60
self.frame_size = (self.config.camera.w,self.config.camera.h)
self.first_time: float|None = None
self.frame: Frame|None= None
self.tracker_frame: Frame|None = None
self.prediction_frame: Frame|None = None
self.tracks: Dict[str, Track] = {}
# self.scenarios: Dict[str, TrackScenario] = {}
self.predictions: Dict[str, Track] = {}
self.drawn_tracks: Dict[str, DrawnTrack] = {}
self.helios = ctypes.cdll.LoadLibrary("./trap/helios_dac/libHeliosDacAPI.so")
numDevices = self.helios.OpenDevices()
logger.info(f"Found {numDevices} Helios DACs")
# self.dac = HeliosDAC(debug=False)
# logger.info(f"{self.dac.dev}")
# logger.info(f"{self.dac.GetName()}")
# logger.info(f"{self.dac.getHWVersion()}")
# logger.info(f"Helios version: {self.dac.getHWVersion()}")
# self.init_shapes()
# self.init_labels()
def check_frames(self, dt):
new_tracks = False
try:
self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
if not self.first_time:
self.first_time = self.frame.time
img = cv2.GaussianBlur(self.frame.img, (15, 15), 0)
img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
img = pyglet.image.ImageData(self.frame_size[0], self.frame_size[1], 'RGB', img.tobytes())
# don't draw in batch, so that it is the background
self.video_sprite = pyglet.sprite.Sprite(img=img, batch=self.batch_bg)
self.video_sprite.opacity = 100
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
try:
self.prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
new_tracks = True
except zmq.ZMQError as e:
pass
try:
self.tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
new_tracks = True
except zmq.ZMQError as e:
pass
def run(self, timer_counter):
frame = None
prediction_frame = None
tracker_frame = None
i=0
first_time = None
kpps = 50000
# frames = [0 for x in range(30)]
# frameTr= CHeliosPoint(int(x),int(y),20,20,20,255)
# print(frames)
pointlist_test = []
# pointlist_test.append(HeliosPoint(10,10, blank=False))
for i in range(30):
if i < 15:
# y = int(i*0xfff/500)
y = int(i*10 + 0xfff/2)
else:
# y = int((15-i)*0xfff/500)
y = int((15-i)*10 + 0xfff/2)
pointlist_test.append(LaserPoint(int(0),0xfff-y, blank=False))
# pointlist_test.append(HeliosPoint(10,0xfff, blank=False))
# pointlist_test.append(HeliosPoint(8000,8000, blank=False))
# pointlist_test.append(HeliosPoint(8000,10, blank=False))
# pointlist_test.append(HeliosPoint(10,10, blank=True))
# frameType = CHeliosPoint * len(pointlist_test)
# frame = frameType()
# for j, point in enumerate(pointlist_test):
# frame[j] = CHeliosPoint(point.x, point.y, 0,40,0,0 if point.blank else 255)
counter = CounterSender()
print(f"RENDER DAC\n\n\n")
last_laser_point = None
# for i in range(150):
while self.is_running.is_set():
# Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
# statusAttempts=0
# while (statusAttempts < 512 and self.helios.GetStatus(0) != 1):
# statusAttempts += 1
# self.helios.WriteFrame(0, kpps, 0, ctypes.pointer(frame), len(pointlist))
# continue
i+=1
with timer_counter.get_lock():
timer_counter.value+=1
try:
prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
for track_id, track in prediction_frame.tracks.items():
prediction_id = f"{track_id}-{track.history[-1].frame_nr}"
self.predictions[prediction_id] = track
# TODO)) also for tracks:
if track_id not in self.drawn_tracks:
self.drawn_tracks[track_id] = DrawnTrack(track_id, track, self, prediction_frame.camera.H, PROJECTION_MAP, prediction_frame.camera)
elif self.drawn_tracks[track_id].update_predictions_at < (time.time() - .5): # TODO)) only update predictions every n frames. configure
# self.drawn_tracks[track_id].pred_track
self.drawn_tracks[track_id].set_predictions(track)
# if track_id in self.scenarios:
# self.scenarios[track_id].set_prediction(track)
# self.drawn_predictions[track_id] = track
except zmq.ZMQError as e:
logger.debug(f'reuse prediction')
try:
tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
for track_id, track in tracker_frame.tracks.items():
self.tracks[track_id] = track
# if not track_id in self.scenarios:
# self.scenarios[track_id] = TrackScenario(track)
# else:
# self.scenarios[track_id].set_track(track)
# self.scenarios[track_id].receive_track(track)
except zmq.ZMQError as e:
logger.debug(f'reuse tracks')
# if tracker_frame is None:
# # might need to wait a few iterations before first frame comes available
# time.sleep(.1)
# continue
if first_time is None and tracker_frame is not None:
first_time = tracker_frame.time
# print('-------')
paths = render_frame_to_pathlist( tracker_frame, prediction_frame, self.drawn_tracks, first_time, self.config, self.tracks, self.predictions, self.config.render_clusters)
counter.set('paths', len(paths))
counter.set('points', sum([len(p.points) for p in paths]))
if self.prediction_frame:
counter.set('pred_render_latency', time.time() - self.prediction_frame.time)
if self.tracker_frame:
counter.set('track_render_latency', time.time() - self.tracker_frame.time)
# print(f"Paths: {len(paths)} ... points {sum([len(p.points) for p in paths])}")
laserframe = LaserFrame(paths)
laserframe_cropped = laserframe.as_cropped_to_projector()
counter.set('laser.removed', laserframe.point_count() - laserframe_cropped.point_count())
if laserframe.point_count() > laserframe_cropped.point_count():
# logger.warning("Removed laser points out of frame!")
laserframe = laserframe_cropped
# pointlist=pointlist_test
# print([(p.x, p.y) for p in pointlist])
# pointlist.extend(pointlist_test)
pointlist = laserframe.get_points_interpolated_by_distance(30, last_laser_point)
# pointlist_cropped =
# pointlist = pointlist[::2]
# print('decimated', len(pointlist))
if len(pointlist):
last_laser_point = pointlist[-1]
frameType = CHeliosPoint * len(pointlist)
frame = frameType()
# print(len(pointlist)) #, last_laser_point.x, last_laser_point.y)
for j, point in enumerate(pointlist):
frame[j] = CHeliosPoint(int(point.x), int(point.y), point.color[0],point.color[1], point.color[2], point.i)
# Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
statusAttempts=0
while (statusAttempts < 512 and self.helios.GetStatus(0) != 1):
statusAttempts += 1
self.helios.WriteFrame(0, kpps, 0, ctypes.pointer(frame), len(pointlist))
# continue
# self.helios.WriteFrame(0, kpps, 0, ctypes.pointer(frame), len(pointlist))
# self.dac.newFrame(50000, pointlist)
# clear out old tracks & predictions:
for track_id, track in list(self.tracks.items()):
# TODO)) Migrate to using time() instead of framenr, to detach the two
if get_animation_position(track, tracker_frame) == 1:
self.tracks.pop(track_id)
for prediction_id, track in list(self.predictions.items()):
if get_animation_position(track, tracker_frame) == 1:
self.predictions.pop(prediction_id)
for track_id in list(self.drawn_tracks.keys()):
# TODO make delay configurable
if self.drawn_tracks[track_id].update_at < time.time() - 5:
# TODO fade out
del self.drawn_tracks[track_id]
logger.info('Stopping')
self.helios.CloseDevices()
# if i>2:
logger.info('stopped')
# colorset = itertools.product([0,255], repeat=3) # but remove white
# colorset = [(0, 0, 0),
# (0, 0, 255),
# (0, 255, 0),
# (0, 255, 255),
# (255, 0, 0),
# (255, 0, 255),
# (255, 255, 0)
# ]
colorset = [
(255,255,100),
(255,100,255),
(100,255,255),
]
# colorset = [
# (0,0,0),
# ]
def get_animation_position(track: Track, current_frame: Frame) -> float:
fade_duration = current_frame.camera.fps * 2
diff = current_frame.index - track.history[-1].frame_nr
return max(0, min(1, diff / fade_duration))
# track.history[-1].frame_nr < (current_frame.index - current_frame.camera.fps * 3)
# track.history[-1].frame_nr < (current_frame.index - current_frame.camera.fps * 3)
def circle_points(cx, cy, r, c: Color):
# r = r
steps = 30
pointlist: list[LaserPoint] = []
for i in range(steps):
x = int(cx + math.cos(i * (2*math.pi)/steps) * r)
y = int(cy + math.sin(i * (2*math.pi)/steps)* r)
pointlist.append(LaserPoint(x, y, c, blank=(i==(steps-1)or i==0)))
return pointlist
Color = tuple[int, int, int]
# derived with trap/helios_dac/calibration_points.py
# set points in the script to points from hof3/irl_points.json
laser_H =np.array([[ 2.47442963e+02, -7.01714050e+01, -9.71749119e+01],
[ 1.02328119e+01, 1.47185254e+02, 1.96295638e+02],
[-1.20921986e-03, -3.32735973e-02, 1.00000000e+00]])
def world_points_to_laser_points(points):
return cv2.perspectiveTransform(np.array([points]), laser_H)
# Deprecated
def render_frame_to_pathlist(tracker_frame: Optional[Frame], prediction_frame: Optional[Frame], drawn_tracks: Optional[Dict[str, DrawnTrack]], first_time: Optional[float], config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], as_clusters = True):
# TODO: replace opencv with QPainter to support alpha? https://doc.qt.io/qtforpython-5/PySide2/QtGui/QPainter.html#PySide2.QtGui.PySide2.QtGui.QPainter.drawImage
# or https://github.com/pygobject/pycairo?tab=readme-ov-file
# or https://pyglet.readthedocs.io/en/latest/programming_guide/shapes.html
# and use http://code.astraw.com/projects/motmot/pygarrayimage.html or https://gist.github.com/nkymut/1cb40ea6ae4de0cf9ded7332f1ca0d55
# or https://api.arcade.academy/en/stable/index.html (supports gradient color in line -- "Arcade is built on top of Pyglet and OpenGL.")
# pointlist: list[LaserPoint] = []
# frame = LaserFrame()
paths: list[LaserPath] = []
# pointlist.append(HeliosPoint(x,y, dac.palette[cindex],blank=blank))
# all not working:
# if i == 1:
# # thanks to GpG for fixing scaling issue: https://stackoverflow.com/a/39668864
# scale_factor = 1./20 # from 10m to 1000px
# S = np.array([[scale_factor, 0,0],[0,scale_factor,0 ],[ 0,0,1 ]])
# new_H = S * self.H * np.linalg.inv(S)
# warpedFrame = cv2.warpPerspective(img, new_H, (1000,1000))
# cv2.imwrite(str(self.config.output_dir / "orig.png"), warpedFrame)
# cv2.rectangle(img, (0,0), (img.shape[1],25), (0,0,0), -1)
intensity = 39 # range 0-255
test_r = 100
base_c = (0,0, intensity)
# base_c = (0,intensity, intensity)
track_c = (intensity,0,0)
pred_c = (0,intensity,0)
if not tracker_frame and not prediction_frame:
paths.append(
LaserPath(circle_points(0xFFF/2, 0xFFF/2, test_r, base_c))
)
# c = (0,intensity,0)#, dac.palette[4] # Green
# r = 100
# steps = 100
# for i in range(steps):
# x = int(0xFFF/2 + math.cos(i * (2*math.pi)/steps) * r)
# y = int(0xFFF/2 + math.sin(i * (2*math.pi)/steps)* r)
# pointlist.append(HeliosPoint(x, y, c, blank=i==99))
# pointlist.append(HeliosPoint(10,10, c,blank=False))
# pointlist.append(HeliosPoint(10,100, c,blank=False))
# pointlist.append(HeliosPoint(10,200, c,blank=False))
# pointlist.append(HeliosPoint(100,200, c,blank=False))
# pointlist.append(HeliosPoint(200,200, c,blank=False))
# pointlist.append(HeliosPoint(200,100, c,blank=False))
# pointlist.append(HeliosPoint(200,10, c,blank=False))
# pointlist.append(HeliosPoint(100,10, c,blank=False))
# pointlist.append(HeliosPoint(10,10, c,blank=True))
# return pointlist
# print(not tracker_frame, not prediction_frame)
if not tracker_frame:
paths.append(
LaserPath(circle_points(0xFFF/2+2*test_r, 0xFFF/2, test_r, track_c))
)
else:
# if not len(tracks):
# paths.append(
# LaserPath(circle_points(0xFFF/2+4*test_r, 0xFFF/2, test_r/2, pred_c))
# )
for track_id, track in tracks.items():
inv_H = np.linalg.pinv(tracker_frame.H)
# track = track.get_sampled(4)
projected_history = track.get_projected_history(camera=config.camera)
history_for_laser = world_points_to_laser_points(projected_history)[0]
# point_color = bgr_colors[color_index % len(bgr_colors)]
points = np.rint(history_for_laser.reshape((-1,1,2))).astype(np.int32)
# print('point len',len(points))
laserpoints = []
for i, point in enumerate(points):
laserpoints.append(LaserPoint(point[0][0], point[0][1], track_c, blank=False))
path = LaserPath(laserpoints)
paths.append(path)
paths.append(
LaserPath(circle_points(history_for_laser[-1][0], history_for_laser[-1][1], 20, track_c))
)
# draw_track_projected(img, track, int(track_id), config.camera, convert_world_points_to_img_points)
if not prediction_frame:
paths.append(
LaserPath(circle_points(0xFFF/2+4*test_r, 0xFFF/2, test_r, pred_c))
)
# cv2.putText(img, f"Waiting for prediction...", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
# continue
# elif True:
# pass
elif drawn_tracks:
inv_H = np.linalg.pinv(prediction_frame.H)
for track_id, drawn_track in drawn_tracks.items():
drawn_track.update_drawn_positions(dt=None, no_shapes=True)
# For debugging:
# draw_trackjectron_history(img, track, int(track.track_id), convert_world_points_to_img_points)
anim_position = 1 # TODO)) calculate without video frame: get_animation_position(track, tracker_frame)
lines = drawntrack_predictions_to_lines(drawn_track, config.camera, anim_position)
# if lines:
# lines.extend(get_prediction_text(drawn_track))
if not lines:
continue
# draw in a single pass
# line_points = line_points.reshape((1, -1,1,2))
for line in lines:
# print('prediction line')
line = world_points_to_laser_points(line)[0]
# line = convert_world_points_to_img_points(line)
line = np.rint(line).astype(np.int32)
laserpoints = []
for i, point in enumerate(line):
laserpoints.append(LaserPoint(point[0], point[1], pred_c, blank=False))
path = LaserPath(laserpoints)
paths.append(path)
# draw_track_predictions(img, track, int(track.track_id)+1, config.camera, convert_world_points_to_img_points, anim_position=anim_position, as_clusters=as_clusters)
# cv2.putText(img, f"{len(track.predictor_history) if track.predictor_history else 'none'}", to_point(track.history[0].get_foot_coords()), cv2.FONT_HERSHEY_COMPLEX, 1, (255,255,255), 1)
# print(len(paths))
return paths
def get_prediction_text(drawn_track: DrawnTrack) -> list[np.ndarray]:
position_index = 20
if not drawn_track.drawn_predictions:
return []
if len(drawn_track.drawn_predictions[0]) < position_index:
logger.warning("prediction to short!")
return []
# draw only for first prediction
draw_pos = drawn_track.drawn_predictions[0][position_index-1]
current_pos = drawn_track.drawn_positions[-1]
angle = np.arctan2(draw_pos[0]-current_pos[0], draw_pos[1]-current_pos[1]) + np.pi
# print('angle', angle)
text_paths = []
with open("your_future_points_test.json", 'r') as fp:
lines = json.load(fp)
for i, line in enumerate(lines):
if i != 0:
continue
points = np.array(line)
avg_x = np.average(points[:,0])
avg_y = np.average(points[:,1])
minx, maxx = np.min(points[:,0]), np.max(points[:,0])
miny, maxy = np.min(points[:,1]), np.max(points[:,1])
sx = maxx-minx
sy = maxy-miny
points[:,0] -= avg_x
points[:,1] -= avg_y - i/2
points /= sx # scale to 1
points @= rotateMatrix(angle)
points += draw_pos
text_paths.append(points)
return text_paths
def rotateMatrix(a):
return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
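# Hedged worked example: rotateMatrix(a) is the standard 2D rotation matrix; because the
# points above are row vectors, `points @ rotateMatrix(a)` maps [x, y] to
# [x*cos(a) + y*sin(a), -x*sin(a) + y*cos(a)], e.g.
# np.array([[1., 0.]]) @ rotateMatrix(np.pi/2) is approximately [[0., -1.]].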
def run_laser_renderer(config: Namespace, is_running: BaseEvent, timer_counter):
renderer = LaserRenderer(config, is_running)
renderer.run(timer_counter)

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -1,65 +0,0 @@
from argparse import ArgumentParser
import time
from trap.counter import CounterListerner
from trap.node import Node
class Monitor(Node):
"""
Periodically snapshot the CounterListerner and log its statistics, so the
throughput of the other nodes (e.g. the tracker or the renderers) can be
monitored from the console or a remote logger.
"""
FPS = 1
def setup(self):
# self.scenarios: List[DrawnScenario] = []
self.counter_listener = CounterListerner()
def run(self):
prev_time = time.perf_counter()
while self.is_running.is_set():
# self.tick() # don't polute it with own data
self.counter_listener.snapshot()
stats = self.counter_listener.to_string()
if len(stats):
self.logger.info(stats)
# else:
# self.logger.info("no stats")
# for i, (k, v) in enumerate(self.counter_listener.get_latest().items()):
# print(k,v)
# cv2.putText(img, f"{k} {v.value()}", (20,img.shape[0]-(40*i)-40), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# 3) calculate latency for desired FPS
now = time.perf_counter()
time_diff = (now - prev_time)
if time_diff < 1/self.FPS:
# print(f"sleep {1/self.FPS - time_diff}")
time.sleep(1/self.FPS - time_diff)
now += 1/self.FPS - time_diff
prev_time = now
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = ArgumentParser()
# argparser.add_argument('--zmq-trajectory-addr',
# help='Manually specity communication addr for the trajectory messages',
# type=str,
# default="ipc:///tmp/feeds_traj")
# argparser.add_argument('--zmq-prediction-addr',
# help='Manually specity communication addr for the prediction messages',
# type=str,
# default="ipc:///tmp/feeds_preds")
# argparser.add_argument('--zmq-stage-addr',
# help='Manually specity communication addr for the stage messages (the rendered lines)',
# type=str,
# default="tcp://0.0.0.0:99174")
return argparser

View file

@ -1,217 +0,0 @@
from collections import defaultdict
import logging
from logging.handlers import QueueHandler, QueueListener, SocketHandler
import multiprocessing
from multiprocessing.synchronize import Event as BaseEvent
from argparse import ArgumentParser, Namespace
import time
from typing import Any, Optional
import zmq
from trap.counter import CounterFpsSender, CounterSender
from trap.timer import Timer
class Node():
def __init__(self, config: Namespace, is_running: BaseEvent, fps_counter: CounterFpsSender):
self.node_id = self.__class__.__name__.lower()
self.config = config
self.is_running = is_running
self.fps_counter = fps_counter
self.zmq_context = zmq.Context()
self.logger = self._logger()
self._prev_loop_time = 0
self.dt_since_last_tick = 0
self.config_sock = self.sub(self.config.zmq_config_addr)
self.config_init_sock = self.push(self.config.zmq_config_init_addr) # a sending sub
self.settings = defaultdict(None)
self.refresh_settings()
self.setup()
@classmethod
def _logger(cls):
return logging.getLogger(f"trap.{cls.__name__}")
def tick(self):
self.dt_since_last_tick = self.fps_counter.tick()
# with self.fps_counter.get_lock():
# self.fps_counter.value+=1
def setup(self):
raise RuntimeError("Not implemented setup()")
def run(self):
raise RuntimeError("Not implemented run()")
def stop(self):
"""
Called when runloop is stopped. Override to clean up what was initiated in start() and run() methods
"""
pass
def refresh_settings(self):
try:
self.config_init_sock.send_string(self.node_id, zmq.NOBLOCK)
except Exception as e:
self.logger.warning('No settings socket available')
self.logger.exception(e)
def run_loop(self):
"""Use in run(), to check if it should keep looping
Takes care of tick()'ing the iterations/second counter
"""
self.tick()
self.check_config()
return self.is_running.is_set()
def check_config(self):
while True:
try:
config = self.config_sock.recv_json(zmq.NOBLOCK)
for field, value in config.items():
self.settings[field] = value
except zmq.ZMQError as e:
# no msgs
break
def get_setting(self, name: str, default: Any):
if name in self.settings:
return self.settings[name]
return default
def run_loop_capped_fps(self, max_fps: float, warn_below_fps: float = 0.):
"""Use in run(), to check if it should keep looping
Takes care of tick()'ing the iterations/second counter
"""
now = time.perf_counter()
time_diff = (now - self._prev_loop_time)
if warn_below_fps > 0 and time_diff > 1/warn_below_fps:
self.logger.warning(f"Running below {warn_below_fps} FPS: measured {1/time_diff} FPS")
if time_diff < 1/max_fps:
# print(f"sleep {1/max_fps - time_diff}")
time.sleep(1/max_fps - time_diff)
now += 1/max_fps - time_diff
self._prev_loop_time = now
return self.run_loop()
@classmethod
def arg_parser(cls) -> ArgumentParser:
raise RuntimeError("Not implemented arg_parser()")
@classmethod
def _get_arg_parser(cls) -> ArgumentParser:
parser = cls.arg_parser()
# add some defaults
parser.add_argument(
'--verbose',
'-v',
help="Increase verbosity. Add multiple times to increase further.",
action='count', default=0
)
parser.add_argument(
'--remote-log-addr',
help="Connect to a remote logger like cutelog. Specify the ip",
type=str,
default="100.72.38.82"
)
parser.add_argument(
'--remote-log-port',
help="Connect to a remote logger like cutelog. Specify the port",
type=int,
default=19996
)
parser.add_argument('--zmq-config-addr',
help='Manually specify communication addr for the config messages',
type=str,
default="ipc:///tmp/feeds_config")
parser.add_argument('--zmq-config-init-addr',
help='Manually specify communication addr for req-rep config messages',
type=str,
default="ipc:///tmp/feeds_config_rr")
return parser
def sub(self, addr: str):
"Default zmq sub configuration"
sock = self.zmq_context.socket(zmq.SUB)
sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
sock.setsockopt(zmq.SUBSCRIBE, b'')
sock.connect(addr)
return sock
def pub(self, addr: str):
"Default zmq pub configuration"
sock = self.zmq_context.socket(zmq.PUB)
sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
sock.bind(addr)
return sock
def push(self, addr: str):
"push-pull pair"
sock = self.zmq_context.socket(zmq.PUSH)
# sock.setsockopt(zmq.LINGER, 0)
sock.connect(addr)
return sock
def pull(self, addr: str):
"Push-pull pair"
sock = self.zmq_context.socket(zmq.PULL)
sock.bind(addr)
return sock
@classmethod
def start(cls, config: Namespace, is_running: BaseEvent, timer_counter: Optional[Timer]):
instance = cls(config, is_running, timer_counter)
try:
instance.run()
except Exception as e:
instance.logger.exception(f"{e}")
instance.stop()
instance.logger.info("Stopping")
@classmethod
def parse_and_start(cls):
"""To start the node from CLI/supervisor"""
config = cls._get_arg_parser().parse_args()
setup_logging(config) # running from cli, we need to setup logging
is_running = multiprocessing.Event()
is_running.set()
statsender = CounterSender()
counter = CounterFpsSender(f"trap.{cls.__name__}", statsender)
# timer_counter = Timer(cls.__name__)
cls.start(config, is_running, counter)
def setup_logging(config: Namespace):
loglevel = logging.NOTSET if config.verbose > 1 else logging.DEBUG if config.verbose > 0 else logging.INFO
stream_handler = logging.StreamHandler()
log_handlers = [stream_handler]
if config.remote_log_addr:
logging.captureWarnings(True)
# root_logger.setLevel(logging.NOTSET) # to send all records to cutelog
socket_handler = SocketHandler(config.remote_log_addr, config.remote_log_port)
# print(socket_handler.host, socket_handler.port)
socket_handler.setLevel(logging.NOTSET)
log_handlers.append(socket_handler)
logging.basicConfig(
level=loglevel,
handlers=log_handlers, # [queue_handler]
format="%(asctime)s %(levelname)s:%(name)s:%(message)s",
datefmt="%H:%M:%S"
)
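# Usage sketch (hypothetical ExampleNode, not part of node.py), showing how the helpers
# above are meant to be combined: setup() opens sockets via pub()/sub(), run() loops with
# run_loop_capped_fps(), and parse_and_start() wires up config, logging and the FPS
# counter when launched from the CLI. The socket address is an assumption.
class ExampleNode(Node):
    def setup(self):
        self.out_sock = self.pub("ipc:///tmp/feeds_example")

    def run(self):
        while self.run_loop_capped_fps(max_fps=10, warn_below_fps=2):
            # one unit of work per iteration; tick() and config refresh are handled by run_loop()
            self.out_sock.send_string("ping")

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        return ArgumentParser()

if __name__ == "__main__":
    ExampleNode.parse_and_start()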

View file

@ -7,16 +7,12 @@ import signal
import sys
import time
from trap.config import parser
from trap.counter import CounterListerner
from trap.cv_renderer import run_cv_renderer
from trap.face_detector import run_detector
from trap.frame_emitter import run_frame_emitter
from trap.laser_renderer import run_laser_renderer
from trap.prediction_server import run_prediction_server
from trap.preview_renderer import run_preview_renderer
from trap.animation_renderer import run_animation_renderer
from trap.socket_forwarder import run_ws_forwarder
from trap.stage import Stage
from trap.timer import TimerCollection
from trap.tracker import run_tracker
@ -91,16 +87,12 @@ def start():
timers = TimerCollection()
timer_fe = timers.new('frame_emitter')
timer_tracker = timers.new('tracker')
timer_faces = timers.new('faces')
timer_stage = timers.new('stage')
# instantiating process with arguments
procs = [
# ExceptionHandlingProcess(target=run_ws_forwarder, kwargs={'config': args, 'is_running': isRunning}, name='forwarder'),
ExceptionHandlingProcess(target=run_frame_emitter, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_fe.iterations}, name='frame_emitter'),
ExceptionHandlingProcess(target=run_tracker, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_tracker.iterations}, name='tracker'),
# ExceptionHandlingProcess(target=run_detector, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_faces.iterations}, name='detector'),
ExceptionHandlingProcess(target=Stage.start, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_stage.iterations}, name='stage'),
]
# if args.render_file or args.render_url or args.render_window:
@ -114,10 +106,6 @@ def start():
procs.append(
ExceptionHandlingProcess(target=run_animation_renderer, kwargs={'config': args, 'is_running': isRunning}, name='renderer')
)
if args.render_laser:
procs.append(
ExceptionHandlingProcess(target=run_laser_renderer, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_preview.iterations}, name='renderer')
)
if not args.bypass_prediction:
timer_predict = timers.new('predict')
@ -126,14 +114,10 @@ def start():
)
def timer_process(timers: TimerCollection, is_running: Event):
counter_listener = CounterListerner()
while is_running.is_set():
time.sleep(1)
timers.snapshot()
counter_listener.snapshot()
print(timers.to_string(), counter_listener.to_string())
print(timers.to_string())
procs.append(
ExceptionHandlingProcess(target=timer_process, kwargs={'is_running':isRunning, 'timers': timers}, name='timer'),

View file

@ -1,31 +1,33 @@
# adapted from Trajectron++ online_server.py
import json
from argparse import Namespace
import logging
from multiprocessing import Event, Queue
import os
import pathlib
import pickle
import random
import sys
import time
from typing import List
import json
import traceback
import warnings
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
import dill
import numpy as np
import shapely
import pandas as pd
import torch
import zmq
from trajectron.environment import Environment, Scene, GeometricMap
from trajectron.model.model_registrar import ModelRegistrar
from trajectron.model.online.online_trajectron import OnlineTrajectron
import dill
import random
import pathlib
import numpy as np
from trajectron.environment.data_utils import derivative_of
from trajectron.utils import prediction_output_to_trajectories
from trajectron.model.online.online_trajectron import OnlineTrajectron
from trajectron.model.model_registrar import ModelRegistrar
from trajectron.environment import Environment, Scene
from trajectron.environment.node import Node
from trajectron.environment.node_type import NodeType
import matplotlib.pyplot as plt
import zmq
from trap.frame_emitter import DataclassJSONEncoder, Frame
from trap.lines import load_lines_from_svg
from trap.node import Node
from trap.tracker import Smoother
from trap.utils import ImageMap
from trap.tracker import Track, Smoother
logger = logging.getLogger("trap.prediction")
@ -54,21 +56,19 @@ def create_online_env(env, hyperparams, scene_idx, init_timestep):
init_timestep + 1),
state=hyperparams['state'])
online_scene.robot = test_scene.robot
radius = {k: 0 for k,v in env.attention_radius.items()}
online_scene.calculate_scene_graph(attention_radius=radius,
online_scene.calculate_scene_graph(attention_radius=env.attention_radius,
edge_addition_filter=hyperparams['edge_addition_filter'],
edge_removal_filter=hyperparams['edge_removal_filter'])
return Environment(node_type_list=env.node_type_list,
standardization=env.standardization,
scenes=[online_scene],
attention_radius=radius,
attention_radius=env.attention_radius,
robot_type=env.robot_type)
def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
scene_maps: List[ImageMap] = list()
def get_maps_for_input(input_dict, scene, hyperparams, device):
scene_maps = list()
scene_pts = list()
heading_angles = list()
patch_sizes = list()
@ -90,11 +90,9 @@ def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
else:
heading_angle = None
scene_map: ImageMap = scene.map[node.type]
scene_map.set_bounds() # update old pickled maps
scene_map = scene.map[node.type]
# map_point = x[-1, :2]
map_point = x[:2]
# map_point = x[:2].clip(0) # prevent crash for out of map point.
patch_size = hyperparams['map_encoder'][node.type]['patch_size']
@ -110,17 +108,11 @@ def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
heading_angles = torch.Tensor(heading_angles)
# print(scene_maps, patch_sizes, heading_angles)
# print(scene_pts)
try:
maps = scene_maps[0].get_cropped_maps_from_scene_map_batch(scene_maps,
scene_pts=torch.Tensor(scene_pts),
patch_size=patch_sizes[0],
rotation=heading_angles,
device='cpu')
except Exception as e:
# print(scene_maps)
logger.warning(f"Crash on getting maps for points: {scene_pts=} {heading_angles=} {patch_size=}")
raise e
maps = scene_maps[0].get_cropped_maps_from_scene_map_batch(scene_maps,
scene_pts=torch.Tensor(scene_pts),
patch_size=patch_sizes[0],
rotation=heading_angles,
device='cpu')
maps_dict = {node: maps[[i]].to(device) for i, node in enumerate(nodes_with_maps)}
return maps_dict
@ -154,27 +146,27 @@ def offset_trajectron_dict(source, x, y):
source[t][node][:,1] += y
return source
class PredictionServer(Node):
def setup(self):
class PredictionServer:
def __init__(self, config: Namespace, is_running: Event):
self.config = config
self.is_running = is_running
if self.config.eval_device == 'cpu':
logger.warning("Running on CPU. Specifying --eval_device cuda:0 should dramatically speed up prediction")
if self.config.smooth_predictions:
self.smoother = Smoother(window_len=12, convolution=True) # convolution seems fine for predictions
self.trajectory_socket = self.sub(self.config.zmq_trajectory_addr)
self.prediction_socket = self.pub(self.config.zmq_prediction_addr)
self.external_predictions = not self.config.zmq_prediction_addr.startswith("ipc://")
self.cutoff_shape = None
if self.config.cutoff_map:
self.cutoff_line = load_lines_from_svg(self.config.cutoff_map, 100, '')[0]
self.cutoff_shape = shapely.Polygon([p.position for p in self.cutoff_line.points])
logger.info(f"{self.cutoff_shape}")
context = zmq.Context()
self.trajectory_socket: zmq.Socket = context.socket(zmq.SUB)
self.trajectory_socket.setsockopt(zmq.SUBSCRIBE, b'')
self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep last msg. Set BEFORE connect!
self.trajectory_socket.connect(config.zmq_trajectory_addr)
self.prediction_socket: zmq.Socket = context.socket(zmq.PUB)
self.prediction_socket.bind(config.zmq_prediction_addr)
self.external_predictions = not self.config.zmq_prediction_addr.startswith("ipc://")
# print(self.prediction_socket)
def send_frame(self, frame: Frame):
if self.external_predictions:
@ -183,7 +175,8 @@ class PredictionServer(Node):
else:
self.prediction_socket.send_pyobj(frame)
def run(self):
def run(self, timer_counter):
print(self.config)
if self.config.seed is not None:
random.seed(self.config.seed)
np.random.seed(self.config.seed)
@ -197,8 +190,7 @@ class PredictionServer(Node):
# model_dir = 'models/models_04_Oct_2023_21_04_48_eth_vel_ar3'
# Load hyperparameters from json
# config_file = os.path.join(self.config.model_dir, self.config.conf)
config_file = self.config.conf
config_file = os.path.join(self.config.model_dir, self.config.conf)
if not os.path.exists(config_file):
raise ValueError('Config json not found!')
with open(config_file, 'r') as conf_json:
@ -238,9 +230,6 @@ class PredictionServer(Node):
logger.info(f"Basing online env on {eval_scene=} -- loaded from {self.config.eval_data_dict}")
online_env = create_online_env(eval_env, hyperparams, scene_idx, init_timestep)
print("overriding attention radius")
online_env.attention_radius = {(online_env.NodeType.PEDESTRIAN, online_env.NodeType.PEDESTRIAN): 0.1}
# auto-find highest iteration
model_registrar = ModelRegistrar(self.config.model_dir, self.config.eval_device)
model_iterations = pathlib.Path(self.config.model_dir).glob('model_registrar-*.pt')
@ -261,9 +250,17 @@ class PredictionServer(Node):
trajectron.set_environment(online_env, init_timestep)
timestep = init_timestep + 1
while self.run_loop():
prev_run_time = 0
while self.is_running.is_set():
timestep += 1
with timer_counter.get_lock():
timer_counter.value+=1
# this_run_time = time.time()
# logger.debug(f'test {prev_run_time - this_run_time}')
# time.sleep(max(0, prev_run_time - this_run_time + .5))
# prev_run_time = time.time()
# TODO: see process_data.py on how to create a node, the provide nodes + incoming data columns
# data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
@ -286,6 +283,7 @@ class PredictionServer(Node):
if self.config.predict_training_data:
input_dict = eval_scene.get_clipped_input_dict(timestep, hyperparams['state'])
else:
# print('await', self.config.zmq_trajectory_addr)
zmq_ev = self.trajectory_socket.poll(timeout=2000)
if not zmq_ev:
# on no data loop so that is_running is checked
@ -296,14 +294,6 @@ class PredictionServer(Node):
data = self.trajectory_socket.recv()
# print('recv tracker frame')
frame: Frame = pickle.loads(data)
# add settings to log
frame.log['predictor'] = {}
for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
frame.log['predictor'][option] = self.config.__dict__[option]
# print('indexrecv', [frame.tracks[t].frame_index for t in frame.tracks])
# trajectory_data = {t.track_id: t.get_projected_history_as_dict(frame.H) for t in frame.tracks.values()}
# trajectory_data = json.loads(data)
# logger.debug(f"Receive {frame.index}")
@ -314,7 +304,6 @@ class PredictionServer(Node):
input_dict = {}
for identifier, track in frame.tracks.items():
# if len(trajectory['history']) < 7:
# # TODO: these trajectories should still be in the output, but without predictions
# continue
@ -331,16 +320,7 @@ class PredictionServer(Node):
if len(track.history) < 2:
continue
node = track.to_trajectron_node(frame.camera, online_env)
if self.cutoff_shape:
position = shapely.Point(node.data.data[-1][:2])
if not shapely.contains(self.cutoff_shape, position):
# logger.debug(f"Skip position {position}")
continue
node = track.to_trajectron_node(self.config.camera, online_env)
# print(node.data.data[-1])
input_dict[node] = np.array(object=node.data.data[-1])
# print("history", node.data.data[-10:])
@ -379,7 +359,6 @@ class PredictionServer(Node):
# )
# input_dict[node] = np.array(object=[x[-1],y[-1],vx[-1],vy[-1],ax[-1],ay[-1]])
# break # only on
# print(input_dict)
@ -396,11 +375,9 @@ class PredictionServer(Node):
continue
maps = None
start_maps = time.time()
if hyperparams['use_map_encoding']:
maps = get_maps_for_input(input_dict, eval_scene, hyperparams, device=self.config.eval_device)
# print(maps)
# robot_present_and_future = None
@ -428,8 +405,7 @@ class PredictionServer(Node):
gmm_mode=self.config.gmm_mode, # "If True: The mode of the Gaussian Mixture Model (GMM) is sampled (see trajectron.model.mgcvae.py)"
z_mode=self.config.z_mode # "Predictions from the models most-likely high-level latent behavior mode" (see trajecton.models.components.discrete_latent:sample_p(most_likely_z=z_mode))
)
print(len(dists), len (preds))
intermediate = time.time()
# unsure what this bit from online_prediction.py does:
# detailed_preds_dict = dict()
# for node in eval_scene.nodes:
@ -449,8 +425,8 @@ class PredictionServer(Node):
end = time.time()
logger.debug("took %.2f s (= %.2f Hz), maps: %.2f, forward: %.2f w/ %d nodes and %d edges -- init: %.2f s" % (end - start,
1. / (end - start), (start-start_maps)/(end - start), (intermediate-start)/(end - start), len(trajectron.nodes),
logger.debug("took %.2f s (= %.2f Hz) w/ %d nodes and %d edges -- init: %.2f s" % (end - start,
1. / (end - start), len(trajectron.nodes),
trajectron.scene_graph.get_num_edges(), start-t_init))
# if self.config.center_data:
@ -472,7 +448,7 @@ class PredictionServer(Node):
futures_dict = futures_dict[ts_key]
response = {}
# logger.debug(f"{histories_dict=}")
logger.debug(f"{histories_dict=}")
for node in histories_dict:
history = histories_dict[node]
# future = futures_dict[node] # ground truth dict
@ -480,9 +456,7 @@ class PredictionServer(Node):
# print('preds', len(predictions[0][0]))
if not len(history) or np.isnan(history[-1]).any():
logger.warning(f'skip for no history for {node} @ {ts_key} [{len(prediction_dict)=}, {len(histories_dict)=}, {len(futures_dict)=}]')
# logger.info(f"{preds=}")
logger.warning('skip for no history')
continue
# response[node.id] = {
@ -508,170 +482,9 @@ class PredictionServer(Node):
frame.maps = list([m.cpu().numpy() for m in maps.values()]) if maps else None
# print('index', [frame.tracks[t].frame_index for t in frame.tracks])
self.send_frame(frame)
logger.info('Stopping')
@classmethod
def arg_parser(cls) -> ArgumentParser:
inference_parser = ArgumentParser()
inference_parser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
inference_parser.add_argument('--zmq-prediction-addr',
help='Manually specify communication addr for the prediction messages',
type=str,
default="ipc:///tmp/feeds_preds")
inference_parser.add_argument("--step-size",
# TODO)) Make dataset/model metadata
help="sample step size (should be the same as for data processing and augmentation)",
type=int,
default=1,
)
inference_parser.add_argument("--model_dir",
help="directory with the model to use for inference",
type=str, # TODO: make into Path
default='../Trajectron-plus-plus/experiments/trap/models/models_18_Oct_2023_19_56_22_virat_vel_ar3/')
# default='../Trajectron-plus-plus/experiments/pedestrians/models/models_04_Oct_2023_21_04_48_eth_vel_ar3')
inference_parser.add_argument("--conf",
help="path to json config file for hyperparameters",
type=pathlib.Path,
default='EXPERIMENTS/config.json')
# Model Parameters (hyperparameters)
inference_parser.add_argument("--offline_scene_graph",
help="whether to precompute the scene graphs offline, options are 'no' and 'yes'",
type=str,
default='yes')
inference_parser.add_argument("--dynamic_edges",
help="whether to use dynamic edges or not, options are 'no' and 'yes'",
type=str,
default='yes')
inference_parser.add_argument("--edge_state_combine_method",
help="the method to use for combining edges of the same type",
type=str,
default='sum')
inference_parser.add_argument("--edge_influence_combine_method",
help="the method to use for combining edge influences",
type=str,
default='attention')
inference_parser.add_argument('--edge_addition_filter',
nargs='+',
help="what scaling to use for edges as they're created",
type=float,
default=[0.25, 0.5, 0.75, 1.0]) # We don't automatically pad left with 0.0, if you want a sharp
# and short edge addition, then you need to have a 0.0 at the
# beginning, e.g. [0.0, 1.0].
inference_parser.add_argument('--edge_removal_filter',
nargs='+',
help="what scaling to use for edges as they're removed",
type=float,
default=[1.0, 0.0]) # We don't automatically pad right with 0.0, if you want a sharp drop off like
# the default, then you need to have a 0.0 at the end.
inference_parser.add_argument('--incl_robot_node',
help="whether to include a robot node in the graph or simply model all agents",
action='store_true')
inference_parser.add_argument('--map_encoding',
help="Whether to use map encoding or not",
action='store_true')
inference_parser.add_argument('--no_edge_encoding',
help="Whether to use neighbors edge encoding",
action='store_true')
inference_parser.add_argument('--batch_size',
help='training batch size',
type=int,
default=512)
inference_parser.add_argument('--k_eval',
help='how many samples to take during evaluation',
type=int,
default=1)
# Data Parameters
inference_parser.add_argument("--eval_data_dict",
help="what file to load for evaluation data (WHEN NOT USING LIVE DATA)",
type=str,
default='../Trajectron-plus-plus/experiments/processed/eth_test.pkl')
inference_parser.add_argument("--output_dir",
help="what dir to save output (i.e., saved models, logs, etc) (WHEN NOT USING LIVE OUTPUT)",
type=pathlib.Path,
default='./OUT/test_inference')
# inference_parser.add_argument('--device',
# help='what device to perform training on',
# type=str,
# default='cuda:0')
inference_parser.add_argument("--eval_device",
help="what device to use during inference",
type=str,
default="cuda:0")
inference_parser.add_argument('--seed',
help='manual seed to use, default is 123',
type=int,
default=123)
inference_parser.add_argument('--predict_training_data',
help='Ignore tracker and predict data from the training dataset',
action='store_true')
inference_parser.add_argument("--smooth-predictions",
help="Smooth the predicted tracks",
action='store_true')
inference_parser.add_argument('--prediction-horizon',
help='Trajectron.incremental_forward parameter',
type=int,
default=30)
inference_parser.add_argument('--num-samples',
help='Trajectron.incremental_forward parameter',
type=int,
default=5)
inference_parser.add_argument("--full-dist",
help="Trajectron.incremental_forward parameter",
action='store_true')
inference_parser.add_argument("--gmm-mode",
help="Trajectron.incremental_forward parameter",
type=bool,
default=True)
inference_parser.add_argument("--z-mode",
help="Trajectron.incremental_forward parameter",
action='store_true')
inference_parser.add_argument('--cm-to-m',
help="Correct for homography that is in cm (i.e. {x,y}/100). Should also be used when processing data",
action='store_true')
inference_parser.add_argument('--center-data',
help="Center data around cx and cy. Should also be used when processing data",
action='store_true')
inference_parser.add_argument('--cutoff-map',
help='specify a map (svg-file) that specifies projection boundaries; positions outside of it are skipped for prediction',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
return inference_parser

View file

@ -24,7 +24,7 @@ from typing import List, Optional
from pyglet import shapes
from PIL import Image
from trap.utils import convert_world_points_to_img_points, exponentialDecay, relativePointToPolar, relativePolarToPoint
from trap.utils import convert_world_points_to_img_points
from trap.frame_emitter import DetectionState, Frame, Track, Camera
@ -45,6 +45,18 @@ class FrameAnimation:
def done(self):
return (time.time() - self.start_time) > 5
def exponentialDecay(a, b, decay, dt):
"""Exponential decay as alternative to Lerp
Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
"""
return b + (a-b) * math.exp(-decay * dt)
def relativePointToPolar(origin, point) -> tuple[float, float]:
x, y = point[0] - origin[0], point[1] - origin[1]
return np.sqrt(x**2 + y**2), np.arctan2(y, x)
def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
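# Hedged worked examples for the helpers above (values chosen for illustration):
# exponentialDecay(0, 1, decay=1, dt=1.0) == 1 - math.exp(-1), roughly 0.632, and repeated
# calls converge towards b at a rate independent of the frame time dt, which is why it is
# preferred over a plain lerp here. relativePointToPolar and relativePolarToPoint are
# inverses: relativePolarToPoint(origin, *relativePointToPolar(origin, p)) recovers p.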
PROJECTION_IMG = 0
PROJECTION_UNDISTORT = 1
@ -55,8 +67,7 @@ class DrawnTrack:
def __init__(self, track_id, track: Track, renderer: PreviewRenderer, H, draw_projection = PROJECTION_IMG, camera: Optional[Camera] = None):
# self.created_at = time.time()
self.draw_projection = draw_projection
self.update_at = self.created_at = self.update_predictions_at = time.time()
self.last_update_t = time.perf_counter()
self.update_at = self.created_at = time.time()
self.track_id = track_id
self.renderer = renderer
self.camera = camera
@ -82,7 +93,6 @@ class DrawnTrack:
self.inv_H = np.linalg.pinv(self.H)
def set_predictions(self, track: Track, H = None):
self.update_predictions_at = time.time()
pred_coords = []
pred_history_coords = []
@ -102,7 +112,7 @@ class DrawnTrack:
# color = (128,0,128) if pred_i else (128,
def update_drawn_positions(self, dt: float|None, no_shapes=False) -> List:
def update_drawn_positions(self, dt) -> List:
'''
use dt to lerp the drawn positions in the direction of current prediction
'''
@ -112,11 +122,6 @@ class DrawnTrack:
"""quick wrapper to toggle int'ing"""
return v
# return int(v)
if dt is None:
t = time.perf_counter()
dt = t - self.last_update_t
self.last_update_t = t
# 1. track history
for i, pos in enumerate(self.drawn_positions):
@ -165,8 +170,7 @@ class DrawnTrack:
# finally: update shapes from coordinates
if not no_shapes: # to be used when not rendering to pyglet (e.g. laser renderer)
self.update_shapes(dt)
self.update_shapes(dt)
return self.drawn_positions
def update_shapes(self, dt):
@ -201,9 +205,7 @@ class DrawnTrack:
if draw_dot:
line = pyglet.shapes.Arc(x2, y2, 10, thickness=2, color=color, batch=self.renderer.batch_anim)
else:
# line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
line = pyglet.shapes.Line(x, y, x2, y2, 3, color, batch=self.renderer.batch_anim)
# line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
line.opacity = 20 if not for_laser else 255
self.shapes.append(line)
@ -298,9 +300,9 @@ class FrameWriter:
framerate.
See https://video.stackexchange.com/questions/25811/ffmpeg-make-video-with-non-constant-framerate-from-image-filenames
"""
def __init__(self, filename: str, fps: float, frame_size: Optional[tuple] = None) -> None:
def __init__(self, filename: str, fps: float, frame_size: tuple) -> None:
self.filename = filename
self._fps = fps
self.fps = fps
self.frame_size = frame_size
self.tmp_dir = tempfile.TemporaryDirectory(prefix="trap-output-")
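# A hedged sketch of the concat-demuxer approach the docstring links to (the actual
# write()/release() implementation is not shown here, and the file names are assumptions):
# each frame is written as an image into tmp_dir together with a list file that records
# its real display duration, e.g.
#
#   file 'frame_000001.png'
#   duration 0.0832
#   file 'frame_000002.png'
#   duration 0.0791
#
# which ffmpeg can then assemble at a variable frame rate:
#
#   ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr -pix_fmt yuv420p out.mp4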

View file

@ -8,18 +8,16 @@ import time
from xml.dom.pulldom import default_bufsize
from attr import dataclass
import cv2
import noise
import numpy as np
import pandas as pd
import dill
import tqdm
import argparse
from typing import Dict, List, Optional
from typing import List, Optional
from trap.base import Track
from trap.config import CameraAction, HomographyAction
from trap.frame_emitter import Camera
from trap.tracker import FinalDisplacementFilter, Noiser, RandomOffset, Smoother, TrackReader
from trap.tracker import FinalDisplacementFilter, Smoother, TrackReader
#sys.path.append("../../")
from trajectron.environment import Environment, Scene, Node
@ -74,29 +72,22 @@ class TrackIteration:
smooth: bool
step_size: int
step_offset: int
noisy: bool = False
offset: bool = False
@classmethod
def iteration_variations(cls, smooth = True, toggle_smooth=True, sample_step_size=1, noisy_variations=0, offset_variations=0):
def iteration_variations(cls, smooth = True, toggle_smooth=True, sample_step_size=1):
iterations: List[TrackIteration] = []
for i in range(sample_step_size):
for n in range(noisy_variations+1):
for f in range(offset_variations+1):
iterations.append(TrackIteration(smooth, sample_step_size, i, noisy=bool(n), offset=bool(f)))
if smooth and toggle_smooth:
iterations.append(TrackIteration(not smooth, sample_step_size, i, noisy=bool(n), offset=bool(f)))
iterations.append(TrackIteration(smooth, sample_step_size, i))
if toggle_smooth:
iterations.append(TrackIteration(not smooth, sample_step_size, i))
return iterations
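# Hedged example of the combinatorics in the new iteration_variations() above: with
# smooth=True, toggle_smooth=True, sample_step_size=1, noisy_variations=2 and
# offset_variations=2, every track yields 1 * (2+1) * (2+1) * 2 = 18 TrackIterations.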
# maybe_makedirs('trajectron-data')
# for desired_source in [ 'hof2', ]:# ,'hof-maskrcnn', 'hof-yolov8', 'VIRAT-0102-parsed', 'virat-resnet-keypoints-full']:
def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, noise_tracks: int, offset_tracks: int, center_data: bool, bin_positions: bool, camera: Camera, step_size: int, filter_displacement:float, map_img_path: Optional[Path]):
def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, cm_to_m: bool, center_data: bool, bin_positions: bool, camera: Camera, step_size: int, filter_displacement:float, map_img_path: Optional[Path]):
name += f"-nostep" if step_size == 1 else f"-step{step_size}"
# name += f"-conv{smooth_window}" if smooth_tracks else f"-nosmooth"
name += f"-kalsmooth" if smooth_tracks else f"-nosmooth"
name += f"-noise{noise_tracks}" if noise_tracks else f""
name += f"-offsets{offset_tracks}" if offset_tracks else f""
name += f"-conv{smooth_window}" if smooth_tracks else f"-nosmooth"
name += f"-f{filter_displacement}" if filter_displacement > 0 else ""
name += "-map" if map_img_path else "-nomap"
name += f"-{datetime.date.today()}"
@ -106,21 +97,15 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
if map_img_path:
if not map_img_path.exists():
raise RuntimeError(f"Map image does not exists {map_img_path}")
print(f"Using map {map_img_path}")
type_map = {}
# TODO)) For now, assume the map is a 100x scale of the world coordinates (i.e. 100px per meter)
# thus when we do a homography of 5px per meter, scale down by 20
map_H_path = map_img_path.with_suffix('.json')
if map_H_path.exists():
homography_matrix = np.loadtxt(map_H_path)
else:
homography_matrix = np.array([
[5, 0,0],
[0, 5,0],
[0,0,1],
]) # 100 scale
homography_matrix = np.array([
[5, 0,0],
[0, 5,0],
[0,0,1],
]) # 100 scale
img = cv2.imread(map_img_path)
img = cv2.resize(img, (img.shape[1]//20, img.shape[0]//20))
type_map['PEDESTRIAN'] = ImageMap(
@ -138,37 +123,21 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
skipped_for_error = 0
created = 0
# smoother = Smoother(window_len=smooth_window, convolution=True) if smooth_tracks else None
smoother = Smoother(convolution=False) if smooth_tracks else None
noiser = Noiser(amplitude=.1) if noise_tracks else None
smoother = Smoother(window_len=smooth_window, convolution=True) if smooth_tracks else None
reader = TrackReader(src_dir, camera.fps)
tracks = [t for t in reader]
print(f"Unfiltered total: {len(tracks)} tracks")
if filter_displacement > 0:
filter = FinalDisplacementFilter(filter_displacement)
tracks = filter.apply(tracks, camera)
print(f"Filtered: {len(tracks)} tracks")
skip_idxs = []
for idx, track in enumerate(tracks):
track_history = track.get_projected_history(camera=camera)
distances = np.sqrt(np.sum(np.diff(track_history, axis=0)**2, axis=1))
# print(trajectory_org)
# print(distances)
if any(distances > 3):
skip_idxs.append(idx)
for idx in sorted(skip_idxs, reverse=True): # pop from the back so the remaining indices stay valid
tracks.pop(idx)
print(f"Filtered {len(skip_idxs)} tracks which contained leaps")
total = len(tracks)
bar = tqdm.tqdm(total=total)
destinations = {
'train': int(total * .91),
'val': int(total * .08),
'test': int(total * .01), # I don't really care about this
'train': int(total * .8),
'val': int(total * .12),
'test': int(total * .08),
}
max_track = reader.get(str(max([int(k) for k in reader._tracks.keys()])))
@ -184,7 +153,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
dt3 = RollingAverage()
dt4 = RollingAverage()
sets: Dict[str, List[Track]] = {}
sets = {}
offset = 0
for data_class, nr in destinations.items():
# TODO)) think of a way to shuffle while keeping scenes
@ -194,9 +163,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
print(f"Camera FPS: {camera.fps}, actual fps: {camera.fps/step_size} (or {(1/camera.fps)*step_size})")
names: Dict[str, Path] = {}
max_pos = 0
for data_class, nr_of_items in destinations.items():
env = Environment(node_type_list=['PEDESTRIAN'], standardization=standardization)
attention_radius = dict()
@ -206,7 +172,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
scenes = []
split_id = f"{name}_{data_class}"
data_dict_path = dst_dir / (split_id + '.pkl')
names[data_class] = data_dict_path
# subpath = src_dir / data_class
@ -214,9 +179,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# scene = None
scene_nodes = defaultdict(lambda: [])
variations = TrackIteration.iteration_variations(smooth_tracks, True, step_size, noise_tracks, offset_tracks)
print(f"Create {len(variations)} variations")
variations = TrackIteration.iteration_variations(smooth_tracks, False, step_size)
for i, track in enumerate(sets[data_class]):
bar.update()
@ -244,20 +207,13 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
interpolated_track = track.get_with_interpolated_history()
b = time.time()
for variation_nr, iteration_settings in enumerate(variations):
track = interpolated_track
if iteration_settings.noisy:
track = noiser.apply_track(track)
if iteration_settings.offset:
offset = RandomOffset(amplitude=.1)
track = offset.apply_track(track)
if iteration_settings.smooth:
track = smoother.smooth_track(track)
track = smoother.smooth_track(interpolated_track)
# track = Smoother(smooth_window, False).smooth_track(track)
else:
track = interpolated_track # TODO)) Copy & move smooth outside iter loop
c = time.time()
if iteration_settings.step_size > 1:
@ -268,7 +224,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# track.get_projected_history(H=None, camera=self.config.camera)
node = track.to_trajectron_node(camera, env)
max_pos = max(node.data.data[0][0], max_pos)
data_class = time.time()
@ -330,8 +285,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# print(scene.nodes[0].first_timestep)
print(f'Processed {len(scenes)} scenes with {sum([len(s.nodes) for s in scenes])} nodes for data class {data_class}')
# print("MAXIMUM!!", max_pos)
print(f'Processed {len(scenes):.2f} scene for data class {data_class}')
env.scenes = scenes
@ -341,30 +295,9 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
with open(data_dict_path, 'wb') as f:
dill.dump(env, f, protocol=dill.HIGHEST_PROTOCOL)
bar.close()
# print(f"Linear: {l}")
# print(f"Non-Linear: {nl}")
print(f"error: {skipped_for_error}, used: {created}")
print("Run with")
target_model_dir = (dst_dir / "../models/").resolve()
target_config = (dst_dir / "../trajectron.json").resolve()
# set eval_every very high, because we're not interested in theoretical evaluations, and we don't mind overfitting
print(f"""
uv run trajectron_train --eval_every 200 \\
--train_data_dict {names['train'].name} \\
--eval_data_dict {names['val'].name} \\
--offline_scene_graph no --preprocess_workers 8 \\
--log_dir {target_model_dir} \\
--log_tag _{name} \\
--train_epochs 100 \\
--conf {target_config} \\
--data_dir {dst_dir} \\
{"--map_encoding" if map_img_path else ""} \\
--no_edge_encoding
""")
return names
def main():
parser = argparse.ArgumentParser()
@ -372,8 +305,6 @@ def main():
parser.add_argument("--dst-dir", "-d", type=Path, required=True, help="Destination directory to store parsed .pkl files (typically 'trajectron-data')")
parser.add_argument("--name", "-n", type=str, required=True, help="Identifier to prefix the output .pkl files with (result is NAME-train.pkl, NAME-test.pkl)")
parser.add_argument("--smooth-tracks", action='store_true', help=f"Enable smoother. Set to {smooth_window} frames")
parser.add_argument("--noise-tracks", type=int, default=0, help=f"Enable Noiser. provide number for how many noisy variations")
parser.add_argument("--offset-tracks", type=int, default=0, help=f"Enable Offset. provide number for how many random offset variations")
parser.add_argument("--cm-to-m", action='store_true', help=f"If homography is in cm, convert tracked points to meter for beter results")
parser.add_argument("--center-data", action='store_true', help=f"Normalise around center")
parser.add_argument("--bin-positions", action='store_true', help=f"Experiment to put round positions to a grid")
@ -411,8 +342,7 @@ def main():
args.dst_dir,
args.name,
args.smooth_tracks,
args.noise_tracks,
args.offset_tracks,
args.cm_to_m,
args.center_data,
args.bin_positions,
args.camera,

View file

@ -1,68 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# sources: renderable.proto
# plugin: python-betterproto
from dataclasses import dataclass
from typing import Dict, List
import betterproto
class CoordinateSpace(betterproto.Enum):
"""Enum for coordinate spaces"""
UNDEFINED = 0
CAMERA = 1
UNDISTORTED_CAMERA = 2
WORLD = 3
LASER = 4
RAW_LASER = 8
@dataclass
class RenderablePosition(betterproto.Message):
"""Message for RenderablePosition (Tuple[float, float])"""
x: float = betterproto.float_field(1)
y: float = betterproto.float_field(2)
@dataclass
class SrgbaColor(betterproto.Message):
"""Message for SrgbaColor"""
red: float = betterproto.float_field(1)
green: float = betterproto.float_field(2)
blue: float = betterproto.float_field(3)
alpha: float = betterproto.float_field(4)
@dataclass
class RenderablePoint(betterproto.Message):
"""Message for RenderablePoint"""
position: "RenderablePosition" = betterproto.message_field(1)
color: "SrgbaColor" = betterproto.message_field(2)
@dataclass
class RenderableLine(betterproto.Message):
"""Message for RenderableLine"""
points: List["RenderablePoint"] = betterproto.message_field(1)
@dataclass
class RenderableLines(betterproto.Message):
"""Message for RenderableLines"""
lines: List["RenderableLine"] = betterproto.message_field(1)
space: "CoordinateSpace" = betterproto.enum_field(2)
@dataclass
class RenderableLayers(betterproto.Message):
"""Message to represent RenderableLayers (Dict[int, RenderableLines])"""
layers: Dict[int, "RenderableLines"] = betterproto.map_field(
1, betterproto.TYPE_INT32, betterproto.TYPE_MESSAGE
)
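# A minimal sketch (not part of the generated module) of building and serializing a
# single-layer message with the dataclasses above; the coordinates are made up:
def _example_layers_message() -> bytes:
    color = SrgbaColor(red=1.0, green=0.0, blue=0.0, alpha=1.0)
    points = [
        RenderablePoint(position=RenderablePosition(x=float(i), y=0.5), color=color)
        for i in range(3)
    ]
    lines = RenderableLines(lines=[RenderableLine(points=points)], space=CoordinateSpace.WORLD)
    # betterproto messages serialize to wire format via bytes()
    return bytes(RenderableLayers(layers={1: lines}))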

View file

@ -1,50 +0,0 @@
syntax = "proto3";
package renderable;
// Enum for coordinate spaces
enum CoordinateSpace {
UNDEFINED=0;
CAMERA = 1;
UNDISTORTED_CAMERA = 2;
WORLD = 3;
LASER = 4;
RAW_LASER = 8;
}
// Message for RenderablePosition (Tuple[float, float])
message RenderablePosition {
float x = 1;
float y = 2;
}
// Message for SrgbaColor
message SrgbaColor {
float red = 1;
float green = 2;
float blue = 3;
float alpha = 4;
}
// Message for RenderablePoint
message RenderablePoint {
RenderablePosition position = 1;
SrgbaColor color = 2;
}
// Message for RenderableLine
message RenderableLine {
repeated RenderablePoint points = 1;
}
// Message for RenderableLines
message RenderableLines {
repeated RenderableLine lines = 1;
CoordinateSpace space = 2;
}
// Message to represent RenderableLayers (Dict[int, RenderableLines])
message RenderableLayers {
map<int32, RenderableLines> layers = 1;
}

View file

@ -1,41 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: renderable.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x10renderable.proto\x12\nrenderable\"*\n\x12RenderablePosition\x12\t\n\x01x\x18\x01 \x01(\x02\x12\t\n\x01y\x18\x02 \x01(\x02\"E\n\nSrgbaColor\x12\x0b\n\x03red\x18\x01 \x01(\x02\x12\r\n\x05green\x18\x02 \x01(\x02\x12\x0c\n\x04\x62lue\x18\x03 \x01(\x02\x12\r\n\x05\x61lpha\x18\x04 \x01(\x02\"j\n\x0fRenderablePoint\x12\x30\n\x08position\x18\x01 \x01(\x0b\x32\x1e.renderable.RenderablePosition\x12%\n\x05\x63olor\x18\x02 \x01(\x0b\x32\x16.renderable.SrgbaColor\"=\n\x0eRenderableLine\x12+\n\x06points\x18\x01 \x03(\x0b\x32\x1b.renderable.RenderablePoint\"h\n\x0fRenderableLines\x12)\n\x05lines\x18\x01 \x03(\x0b\x32\x1a.renderable.RenderableLine\x12*\n\x05space\x18\x02 \x01(\x0e\x32\x1b.renderable.CoordinateSpace\"\x98\x01\n\x10RenderableLayers\x12\x38\n\x06layers\x18\x01 \x03(\x0b\x32(.renderable.RenderableLayers.LayersEntry\x1aJ\n\x0bLayersEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12*\n\x05value\x18\x02 \x01(\x0b\x32\x1b.renderable.RenderableLines:\x02\x38\x01*Z\n\x0f\x43oordinateSpace\x12\r\n\tUNDEFINED\x10\x00\x12\n\n\x06\x43\x41MERA\x10\x01\x12\x16\n\x12UNDISTORTED_CAMERA\x10\x02\x12\t\n\x05WORLD\x10\x03\x12\t\n\x05LASER\x10\x04\x62\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'renderable_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_RENDERABLELAYERS_LAYERSENTRY._options = None
_RENDERABLELAYERS_LAYERSENTRY._serialized_options = b'8\001'
_COORDINATESPACE._serialized_start=579
_COORDINATESPACE._serialized_end=669
_RENDERABLEPOSITION._serialized_start=32
_RENDERABLEPOSITION._serialized_end=74
_SRGBACOLOR._serialized_start=76
_SRGBACOLOR._serialized_end=145
_RENDERABLEPOINT._serialized_start=147
_RENDERABLEPOINT._serialized_end=253
_RENDERABLELINE._serialized_start=255
_RENDERABLELINE._serialized_end=316
_RENDERABLELINES._serialized_start=318
_RENDERABLELINES._serialized_end=422
_RENDERABLELAYERS._serialized_start=425
_RENDERABLELAYERS._serialized_end=577
_RENDERABLELAYERS_LAYERSENTRY._serialized_start=503
_RENDERABLELAYERS_LAYERSENTRY._serialized_end=577
# @@protoc_insertion_point(module_scope)

View file

@ -1,175 +0,0 @@
from argparse import ArgumentParser
import json
import math
from pathlib import Path
from typing import Any, Dict
import zmq
from trap.node import Node
import dearpygui.dearpygui as dpg
class Settings(Node):
"""
Quick-and-dirty GUI to change some settings ad hoc.
No storage of values, no defaults. No detection of lost nodes, and no sending of config when they start.
"""
def setup(self):
self.config_sock.close() # setup by default for all nodes, but we want to publish
self.config_sock = self.pub(self.config.zmq_config_addr)
self.config_init_sock.close() # setup by default for all nodes, but we want to publish
self.config_init_sock = self.pull(self.config.zmq_config_init_addr)
self.settings_fields = {}
self.settings: Dict[str, Any] = {}
self.load()
dpg.create_context()
dpg.create_viewport(title='Trap settings', width=600, height=1200)
dpg.setup_dearpygui()
with dpg.window(label="General", pos=(0, 0)):
dpg.add_text(f"Settings from {self.config.settings_file}")
dpg.add_button(label="Save", callback=self.save)
with dpg.window(label="Renderer", pos=(0, 600)):
for i in range(8) :
self.register_setting(f'stagerenderer.layer.{i}', dpg.add_checkbox(label=f"layer {i}", default_value=self.get_setting(f'stagerenderer.layer.{i}', True), callback=self.on_change))
self.register_setting(f'stagerenderer.scale', dpg.add_slider_float(label="scale", default_value=self.get_setting(f'stagerenderer.scale', 1), max_value=3, callback=self.on_change))
self.register_setting(f'stagerenderer.dx', dpg.add_slider_int(label="dx", default_value=self.get_setting(f'stagerenderer.dx', 0), min_value=-300, max_value=300, callback=self.on_change))
self.register_setting(f'stagerenderer.dy', dpg.add_slider_int(label="dy", default_value=self.get_setting(f'stagerenderer.dy', 0), min_value=-300, max_value=300, callback=self.on_change))
self.register_setting(f'stagerenderer.fade', dpg.add_slider_float(label="fade factor", default_value=self.get_setting(f'stagerenderer.fade', 0.27), max_value=1, callback=self.on_change))
with dpg.window(label="Stage", pos=(150, 0)):
self.register_setting(f'stage.fps', dpg.add_slider_int(label="FPS cap", default_value=self.get_setting(f'stage.fps', 30), callback=self.on_change))
self.register_setting(f'stage.prediction_interval', dpg.add_slider_int(label="prediction interval", default_value=self.get_setting('stage.prediction_interval', 18), callback=self.on_change))
self.register_setting(f'stage.loitering_animation', dpg.add_checkbox(label="loitering_animation", default_value=self.get_setting('stage.loitering_animation', True), callback=self.on_change))
with dpg.window(label="Lidar", pos=(0, 100), autosize=True):
self.register_setting(f'lidar.crop_map_boundaries', dpg.add_checkbox(label="crop_map_boundaries", default_value=self.get_setting(f'lidar.crop_map_boundaries', True), callback=self.on_change))
self.register_setting(f'lidar.viz_cropping', dpg.add_checkbox(label="viz_cropping", default_value=self.get_setting(f'lidar.viz_cropping', True), callback=self.on_change))
# self.register_setting(f'lidar.voxel_downsample', dpg.add_checkbox(label="voxel_downsample", default_value=self.get_setting(f'lidar.voxel_downsample', True), callback=self.on_change))
self.register_setting(f'lidar.tracking_enabled', dpg.add_checkbox(label="tracking_enabled", default_value=self.get_setting(f'lidar.tracking_enabled', True), callback=self.on_change))
self.register_setting(f'lidar.kalman_factor', dpg.add_slider_float(label="kalman_factor", default_value=self.get_setting(f'lidar.kalman_factor', 1.3), max_value=3, callback=self.on_change))
dpg.add_separator(label="Clustering")
cluster_methods = ("birch", "optics", "dbscan")
self.register_setting('lidar.cluster.method', dpg.add_combo(label="Method", items=cluster_methods, default_value=self.get_setting('lidar.cluster.method', default='dbscan'), callback=self.on_change))
self.register_setting(f'lidar.eps', dpg.add_slider_float(label="DBSCAN epsilon", default_value=self.get_setting(f'lidar.eps', 0.3), max_value=1, callback=self.on_change))
self.register_setting(f'lidar.min_samples', dpg.add_slider_int(label="DBSCAN min_samples", default_value=self.get_setting(f'lidar.min_samples', 8), max_value=30, callback=self.on_change))
dpg.add_text("When using BIRCH, the resulting subclusters can be postprocessed by DBSCAN:")
self.register_setting('lidar.birch_process_subclusters', dpg.add_checkbox(label="Process subclusters", default_value=self.get_setting('lidar.birch_process_subclusters', True), callback=self.on_change))
self.register_setting('lidar.birch_threshold', dpg.add_slider_float(label="Threshold", default_value=self.get_setting('lidar.birch_threshold', 1), max_value=2.5, callback=self.on_change))
self.register_setting('lidar.birch_branching_factor', dpg.add_slider_int(label="Branching factor", default_value=self.get_setting('lidar.birch_branching_factor', 50), max_value=100, callback=self.on_change))
dpg.add_separator(label="Cluster filter")
self.register_setting(f'lidar.min_box_area', dpg.add_slider_float(label="min_box_area", default_value=self.get_setting(f'lidar.min_box_area', .1), min_value=0, max_value=1, callback=self.on_change))
self.register_setting(f'lidar.max_box_area', dpg.add_slider_float(label="max_box_area", default_value=self.get_setting(f'lidar.max_box_area', 5), min_value=.5, max_value=10, callback=self.on_change))
for i, lidar in enumerate(["192.168.1.16", "192.168.0.10"]):
name = lidar.replace(".", "_")
with dpg.window(label=f"Lidar {lidar}", pos=(i * 300, 450),autosize=True):
# dpg.add_text("test")
# dpg.add_input_text(label="string", default_value="Quick brown fox")
self.register_setting(f'lidar.{name}.enabled', dpg.add_checkbox(label="enabled", default_value=self.get_setting(f'lidar.{name}.enabled', True), callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_x', dpg.add_slider_float(label="rot_x", default_value=self.get_setting(f'lidar.{name}.rot_x', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_y', dpg.add_slider_float(label="rot_y", default_value=self.get_setting(f'lidar.{name}.rot_y', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_z', dpg.add_slider_float(label="rot_z", default_value=self.get_setting(f'lidar.{name}.rot_z', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_x', dpg.add_slider_float(label="trans_x", default_value=self.get_setting(f'lidar.{name}.trans_x', 0), min_value=-15, max_value=15, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_y', dpg.add_slider_float(label="trans_y", default_value=self.get_setting(f'lidar.{name}.trans_y', 0), min_value=-15, max_value=15, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_z', dpg.add_slider_float(label="trans_z", default_value=self.get_setting(f'lidar.{name}.trans_z', 0), min_value=-15, max_value=15, callback=self.on_change))
self.send_for_prefix("") # spread the defaults
dpg.show_viewport()
def stop(self):
dpg.destroy_context()
def check_config(self):
# override node function to disable it
pass
def refresh_settings(self):
# override node function to disable it
pass
def get_setting(self, name: str, default: Any):
"""
Automatically configure the value with the default when requesting it
"""
r = super().get_setting(name, default)
self.settings[name] = r
return r
def register_setting(self, name: str, field: int):
self.settings_fields[field] = name
def on_change(self, sender, value, user_data = None):
# print(sender, app_data, user_data)
setting = self.settings_fields[sender]
print(setting, value)
self.settings[setting] = value
self.config_sock.send_json({setting: value})
def send_for_prefix(self, prefix: str):
self.config_sock.send_json(self.get_by_prefix(prefix))
def save(self):
with self.config.settings_file.open('w') as fp:
self.logger.info(f"Save to {self.config.settings_file}")
json.dump(self.settings, fp)
def get_by_prefix(self, prefix: str) -> Dict[str, Any]:
return {key: value for key, value in self.settings.items() if key.startswith(prefix)}
def load(self) -> Dict[str, Any]:
if not self.config.settings_file.exists():
self.logger.info(f"No config at {self.config.settings_file}")
return {}
self.logger.info(f"Loading from {self.config.settings_file}")
with self.config.settings_file.open('r') as fp:
self.settings = json.load(fp)
def run(self):
# below replaces start_dearpygui()
while self.run_loop() and dpg.is_dearpygui_running():
# 1) receive init requests
try:
init_msg = self.config_init_sock.recv_string(zmq.NOBLOCK)
self.logger.info(f"Send init for {init_msg}")
print('init', init_msg)
self.send_for_prefix(init_msg)
except zmq.ZMQError as e:
# no msgs
pass
dpg.render_dearpygui_frame()
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--settings-file',
help='Where to store settings',
type=Path,
default=Path("./settings.json"))
return argparser

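As a reference for the settings node above: every registered control ends up in one flat dict keyed by its dotted name, which `save()` dumps to `settings.json` and `send_for_prefix()` pushes over the config socket. A minimal sketch of what that file could contain with only the defaults shown above (values are illustrative; the lidar keys follow the `lidar.replace(".", "_")` naming):

```python
import json

# Hypothetical settings.json contents, assuming only the defaults registered above.
example_settings = {
    "lidar.eps": 0.3,
    "lidar.min_samples": 8,
    "lidar.birch_process_subclusters": True,
    "lidar.birch_threshold": 1.0,
    "lidar.min_box_area": 0.1,
    "lidar.max_box_area": 5.0,
    "lidar.192_168_1_16.enabled": True,
    "lidar.192_168_1_16.rot_x": 0.0,
    "lidar.192_168_1_16.trans_x": 0.0,
    # ...and the same block for the second sensor, 192.168.0.10
}
print(json.dumps(example_settings, indent=2))

# get_by_prefix("lidar.192_168_1_16.") then returns only that sensor's extrinsics,
# which is what send_for_prefix() sends when a node requests an init.
```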
File diff suppressed because one or more lines are too long

View file

@@ -1,950 +0,0 @@
from __future__ import annotations
from abc import abstractmethod
from argparse import ArgumentParser
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum
from functools import partial
import json
import logging
from math import inf
import math
from pathlib import Path
import random
import time
import threading
from typing import Dict, Generator, List, Optional, Type, TypeVar
import numpy as np
import zmq
from trap.anomaly import DiffSegment, calc_anomaly, calculate_loitering_scores
from trap.base import CameraAction, DataclassJSONEncoder, Frame, HomographyAction, ProjectedTrack, Track
from trap.counter import CounterSender
from trap.lines import AppendableLine, AppendableLineAnimator, Coordinate, CoordinateSpace, CropAnimationLine, CropLine, DashedLine, DeltaT, FadeOutJitterLine, FadeOutLine, FadedEndsLine, FadedTailLine, LineAnimationStack, LineAnimator, NoiseLine, RenderableLayers, RenderableLine, RenderableLines, RotatingLine, SegmentLine, SimplifyLine, SimplifyMethod, SrgbaColor, StartFromClosestPoint, StaticLine, layers_to_message, load_lines_from_svg
from trap.node import Node
from trap.track_history import TrackHistory
from trap.utils import lerp
logger = logging.getLogger('trap.stage')
OPTION_RENDER_DEBUG = False
OPTION_POSITION_MARKER = False
OPTION_GROW_ANOMALY_CIRCLE = False
# OPTION_RENDER_DIFF_SEGMENT = True
OPTION_TRACK_NOISE = False
TRACK_ASSUMED_FPS = 12
LOST_FADEOUT = 2 # seconds
PREDICTION_INTERVAL: int|None = int(TRACK_ASSUMED_FPS * 1.2) # frames
PREDICTION_FADE_IN: float = 3
PREDICTION_FADE_SLOPE: float = -10
PREDICTION_FADE_AFTER_DURATION: float = 8 # seconds
PREDICTION_END_FADE = 2 #frames
# TRACK_MAX_POINTS = 100
TRACK_FADE_AFTER_DURATION = 9. # seconds
TRACK_END_FADE = 30 # points
TRACK_FADE_ASSUME_FPS = TRACK_ASSUMED_FPS
# LOITERING_WINDOW = 8 * TRACK_ASSUMED_FPS
# LOITERING_DISTANCE = 1 # meter diff in LOITERING_WINDOW time
# LOITERING_MEDIAN_FILTER = TRACK_ASSUMED_FPS // 3 # frames: smooth out velocity over n frames
LOITERING_VELOCITY_TRESHOLD = .5 # m/s
LOITERING_DURATION_TO_LINGER = TRACK_ASSUMED_FPS * 1 # start counting as lingering after this many frames
LOITERING_LINGER_FACTOR = TRACK_ASSUMED_FPS * 4 # number of frames to reach loitering score of 1 (+LOITERING_DURATION_TO_LINGER)
class DefaultDictKeyed(dict):
def __init__(self, factory):
self.factory = factory
def __missing__(self, key):
self[key] = self.factory(key)
return self[key]
@dataclass
class SceneInfo:
priority: int
description: str = ""
takeover_possible: bool = False # whether to allow for other scenarios to steal the stage
takeover_possible_after: float = -1
class ScenarioScene(Enum):
DETECTED = SceneInfo(4, "First detection")
TRACKED = SceneInfo(6, "Multiple detections")
PREDICTION_AVAILABLE = SceneInfo(10, "Prediction is ready")
UPDATED_PREDICTION = SceneInfo(11, "Multiple predictions")
LOITERING = SceneInfo(7, "Found to be loitering", takeover_possible=True, takeover_possible_after=10) # TODO: create "possible after"
PLAY = SceneInfo(7, description="After many predictions; just fooling around", takeover_possible=True, takeover_possible_after=10)
LOST = SceneInfo(-1, description="Track lost", takeover_possible=True, takeover_possible_after=0)
Time = float
class PrioritySlotItem():
TAKEOVER_FADEOUT = 3
def __init__(self, identifier):
self.identifier = identifier
self.start_time = 0.
self.take_over_at: Optional[Time] = None
def take_over(self):
if self.take_over_at:
return
self.take_over_at = time.perf_counter()
def taken_over(self):
self.is_running = False
self.take_over_at = None
def takenover_for(self):
if self.take_over_at:
return time.perf_counter() - self.take_over_at
return None
def takeover_factor(self):
l = self.takenover_for()
if not l:
return 0
return l/self.TAKEOVER_FADEOUT
def start(self):
# change when visible
logger.info(f"Start {self.identifier}: {self.get_state_name()}")
self.start_time = time.perf_counter()
self.is_running = True
def running_for(self):
return time.perf_counter() - self.start_time
@abstractmethod
def get_priority(self) -> int:
raise RuntimeError("Not implemented")
@abstractmethod
def get_state_name(self) -> str:
raise RuntimeError("Not implemented")
@abstractmethod
def can_be_taken_over(self):
raise RuntimeError("Not implemented")
class Scenario(PrioritySlotItem):
def __init__(self, track_id, stage: Stage):
super().__init__(track_id)
self.stage = stage
self.track_id = track_id
self.scene: ScenarioScene = ScenarioScene.DETECTED
self.current_time = 0
self.track: Optional[ProjectedTrack] = None
self.prediction_tracks: List[ProjectedTrack] = []
self._last_diff_frame_idx: Optional[int] = 0
self.prediction_diffs: List[DiffSegment] = []
self.state_change_at = None
self.is_running = False
self.loitering_factor = 0
logger.info(f"Found {self.track_id}: {self.scene.name}")
def get_state_name(self):
return self.scene.name
def get_priority(self) -> int:
# newer higher prio
distance = 0
# todo: check if last point is within bounds
if self.track and len(self.track.projected_history) > 5:
distance = np.linalg.norm(self.track.projected_history[-1] - self.track.projected_history[0])
return (self.scene.value.priority, distance)
def can_be_taken_over(self):
if self.scene.value.takeover_possible:
if time.perf_counter() - self.state_change_at > self.scene.value.takeover_possible_after:
return True
return False
def track_age(self):
if not self.track:
return 0
return time.time() - self.track.updated_at
def take_over(self):
if self.take_over_at:
return
self.take_over_at = time.perf_counter()
def taken_over(self):
self.is_running = False
self.take_over_at = None
def takenover_for(self):
if self.take_over_at:
return time.perf_counter() - self.take_over_at
return None
def takeover_factor(self):
l = self.takenover_for()
if not l:
return 0
return l/self.TAKEOVER_FADEOUT
def lost_for(self):
if self.scene is ScenarioScene.LOST:
return time.perf_counter() - self.state_change_at
return None
def lost_factor(self):
l = self.lost_for()
if not l:
return 0
return l/LOST_FADEOUT
def anomaly_factor(self):
return calc_anomaly(self.prediction_diffs)
def deactivate(self):
self.take_over_at = None
def update(self):
"""Animation tick, check state."""
# 1) lost_score: unlike other states, this runs for each rendering pass to handle crashing tracker
self.check_lost()
def set_scene(self, scene: ScenarioScene):
if self.scene is scene:
return False
logger.info(f"Changing scene for {self.track_id}: {self.scene.name} -> {scene.name}")
self.scene = scene
self.state_change_at = time.perf_counter()
return True
def update_state(self):
self.check_lost() or self.check_loitering() or self.check_track()
def check_lost(self):
if self.track and (self.track.lost or self.track.updated_at < time.time() - 5):
self.set_scene(ScenarioScene.LOST)
return True
return False
def check_loitering(self):
scores = [s for s in calculate_loitering_scores(self.track, LOITERING_DURATION_TO_LINGER, LOITERING_LINGER_FACTOR, LOITERING_VELOCITY_TRESHOLD/TRACK_ASSUMED_FPS, 150)]
if not len(scores):
logger.warning(f"No loitering score for {self.track_id}")
return False
self.loitering_factor = scores[-1]
if self.loitering_factor > .99:
self.set_scene(ScenarioScene.LOITERING)
return True
return False
def check_track(self):
predictions = len(self.prediction_tracks)
if predictions and self.running_for() < 20:
self.set_scene(ScenarioScene.PREDICTION_AVAILABLE)
return True
if predictions and self.running_for() > 60 * 5:
self.set_scene(ScenarioScene.PLAY)
return True
if predictions:
self.set_scene(ScenarioScene.UPDATED_PREDICTION)
return True
if self.track:
if len(self.track.projected_history) > TRACK_ASSUMED_FPS * 2:
self.set_scene(ScenarioScene.TRACKED)
else:
self.set_scene(ScenarioScene.DETECTED)
return True
return False
# the tracker track: replace
def recv_track(self, track: ProjectedTrack):
if self.track and self.track.created_at > track.created_at:
# ignore old track
return
self.track = track
self.update_prediction_diff()
self.update_state()
def update_prediction_diff(self):
"""
gather the diffs of the trajectory with the most recent prediction
"""
if len(self.prediction_diffs) == 0:
return
self.prediction_diffs[-1].update_track(self.track)
# receive new predictions: accumulate
def recv_prediction(self, track: ProjectedTrack):
if not self.track:
# in the unlikely event that the prediction was received sooner
self.recv_track(track)
interval = self.stage.get_setting('stage.prediction_interval', PREDICTION_INTERVAL)
if interval is not None and len(self.prediction_tracks) and (track.frame_index - self.prediction_tracks[-1].frame_index) < interval:
# just drop tracks if the predictions come in too quickly
return
if track._track.predictions is None or not len(track._track.predictions):
# don't count towards predictions if no prediction is set for the given track (e.g. young tracks, which are still passed along by the predictor)
return
self.prediction_tracks.append(track)
if len(self.prediction_diffs):
self.prediction_diffs[-1].finish() # existing diffing can end
# and create a new one
self.prediction_diffs.append(DiffSegment(track))
# self.prediction_diffs.append(DiffSegmentScan(track))
self.update_state()
def build_line_others():
others_color = SrgbaColor(1,1,0,1)
line_others = LineAnimationStack(StaticLine([], others_color))
# line_others.add(SegmentLine(line_others.tail, duration=3, anim_f=partial(SegmentLine.anim_grow, in_and_out=True, max_len=5)))
line_others.add(SimplifyLine(line_others.tail, 0.001)) # Simplify before effects, so they don't distort
line_others.add(CropAnimationLine(line_others.tail, 70, assume_fps=TRACK_ASSUMED_FPS*2)) # speed up
line_others.add(NoiseLine(line_others.tail, amplitude=0, t_factor=.3))
# line_others.add(DashedLine(line_others.tail, t_factor=4, loop_offset=True))
# line_others.get(DashedLine).skip = True
line_others.add(FadedEndsLine(line_others.tail, 30, 30))
line_others.add(FadeOutLine(line_others.tail))
line_others.get(FadeOutLine).set_alpha(0)
return line_others
class DrawnScenario(Scenario):
"""
Scenario contains the controls (scene, target positions)
DrawnScenario class does the actual drawing of points incl. transitions
This distinction is only for ordering the code
"""
MAX_HISTORY = 130 # points of history of trajectory to display (preventing too long lines)
CUT_GAP = 5 # when adding a new prediction, keep the existing prediction until that point + this CUT_GAP margin
def __init__(self, track_id, stage: Stage):
super().__init__(track_id, stage)
self.last_update_t = time.perf_counter()
self.active_ptrack: Optional[ProjectedTrack] = None
history_color = SrgbaColor(1.,0.,1.,1.)
history = StaticLine([], history_color)
self.line_history = LineAnimationStack(history)
self.line_history.add(AppendableLineAnimator(self.line_history.tail, draw_decay_speed=120, transition_in_on_init=False))
self.line_history.add(CropLine(self.line_history.tail, self.MAX_HISTORY))
self.line_history.add(SimplifyLine(self.line_history.tail, 0.002)) # Simplify before effects, so they don't distort
self.line_history.add(FadedTailLine(self.line_history.tail, TRACK_FADE_AFTER_DURATION * TRACK_ASSUMED_FPS, TRACK_END_FADE))
self.line_history.add(NoiseLine(self.line_history.tail, amplitude=0, t_factor=.3))
self.line_history.add(FadeOutJitterLine(self.line_history.tail, frequency=5, t_factor=.5))
self.prediction_color = SrgbaColor(0,1,0,1)
self.line_prediction = LineAnimationStack(StaticLine([], self.prediction_color))
self.line_prediction.add(CropLine(self.line_prediction.tail, start_offset=0))
self.line_prediction.add(StartFromClosestPoint(self.line_prediction.tail))
self.line_prediction.get(StartFromClosestPoint).skip=True
self.line_prediction.add(RotatingLine(self.line_prediction.tail, decay_speed=16))
self.line_prediction.get(RotatingLine).skip = False
self.line_prediction.add(SegmentLine(self.line_prediction.tail, duration=7 / 3, anim_f=SegmentLine.anim_follow_in_front))
self.line_prediction.get(SegmentLine).skip = False
self.line_prediction.add(SimplifyLine(self.line_prediction.tail, 0.002)) # Simplify before effects, so they don't distort
GAP_DURATION = 5
def dash_len(dt, t):
t=min(1, t/GAP_DURATION)
return lerp(.99, .6, t)
def gap_len(dt, t):
t=min(1, t/GAP_DURATION)
return lerp(0.01, .9, t)
self.line_prediction.add(DashedLine(self.line_prediction.tail, dash_len=dash_len, gap_len=gap_len, t_factor=2, loop_offset=True))
self.line_prediction.get(DashedLine).skip = True
self.line_prediction.add(FadeOutLine(self.line_prediction.tail))
# when rendering tracks from others similar/close to the current one
self.line_others = build_line_others()
self.tracks_to_self: Optional[Generator] = None
self.tracks_to_self_pos = None
self.tracks_to_self_fetched_at = None
# self.line_prediction_drawn = self.line_prediction_faded
def update(self):
super().update()
if self.track:
self.line_history.root.points = self.track.projected_history
lost_factor = self.lost_factor() # fade out when lost
start_factor = 0#1 - min(1, self.running_for()) # fade in when starting
# print(start_factor)
self.line_history.get(FadeOutJitterLine).set_alpha(1- lost_factor - start_factor)
self.line_prediction.get(FadeOutLine).set_alpha(1-lost_factor)
self.line_history.get(NoiseLine).amplitude = lost_factor * 1.8
if len(self.prediction_tracks):
# now_p = np.array(self.line_history.root.points[-1])
# prev_p = np.array(self.line_history.root.points[-1 * min(4, len(self.line_history.root.points))])
# diff = now_p - prev_p
self.line_prediction.get(StartFromClosestPoint).set_point(self.line_history.root.points[-1])
# print("set origin", self.line_history.root.points[-1])
# TODO: only when animation is ready for it? or collect lines
if self.is_running:
if not self.active_ptrack:
# draw the first prediction
self.active_ptrack = self.prediction_tracks[-1]
self.line_prediction.root.points = self.active_ptrack._track.predictions[0]
self.line_prediction.start() # reset positions
elif self.active_ptrack._track.updated_at < self.prediction_tracks[-1]._track.updated_at:
# stale prediction
# switch only if drawing animation is ready
# if self.line_prediction.is_ready():
self.active_ptrack = self.prediction_tracks[-1]
self.line_prediction.root.points = self.active_ptrack._track.predictions[0]
if self.line_prediction.is_ready() and self.line_prediction.get(DashedLine).skip == True:
self.line_prediction.get(SegmentLine).skip = True
self.line_prediction.get(DashedLine).skip = False
self.line_prediction.start() # reset positions
# self.line_prediction.get(SegmentLine).anim_f = partial(SegmentLine.anim_arrive, length=.3)
# self.line_prediction.get(SegmentLine).duration = .5
# self.line_prediction.get(DashedLine).skip = True
# # print('restart')
# self.line_prediction.start() # reset positions
# # print(self.line_prediction.get(SegmentLine).running_for())
# else:
# if self.line_prediction.is_ready():
# # little hack: check is dashedline skips, to only run this once per animation:
# if self.line_prediction.get(DashedLine).skip:
# # no new yet, but ready with anim, start stage 2
# self.line_prediction.get(SegmentLine).anim_f = partial(SegmentLine.anim_grow)
# self.line_prediction.get(SegmentLine).duration = 1
# # self.line_prediction.get(SegmentLine).skip = True
# self.line_prediction.get(DashedLine).skip = False
# self.line_prediction.start()
# elif self.line_prediction.get(SegmentLine).duration != 2: # hack to only play once
# self.line_prediction.get(SegmentLine).anim_f = partial(SegmentLine.anim_grow, reverse=True)
# self.line_prediction.get(SegmentLine).duration = 2
# self.line_prediction.get(SegmentLine).start()
if self.active_ptrack:
# TODO: this should crop by distance/length
self.line_prediction.get(CropLine).start_offset = self.track._track.frame_index - self.active_ptrack._track.frame_index
# self.line_prediction_dashed.set_offset_t(self.active_ptrack._track.track_update_dt() * 4)
# special case: LOITERING
if self.stage.get_setting('stage.loitering_animation', True) and self.scene is ScenarioScene.LOITERING: # or self.state_change_at:
# logger.info('loitering')
transition = min(1, (time.perf_counter() - self.state_change_at)/1.4)
# print('loitering factor', transition)
# TODO: transition fade, using to_alpha(), so it can fade back in again:
self.line_history.get(FadeOutJitterLine).set_alpha(1 - transition)
self.line_prediction.get(FadeOutLine).set_alpha(1 - transition)
current_position = self.track.projected_history[-1]
current_position_rounded = np.round(current_position*2) # cache per 1/2 meter
time_diff = inf if not self.tracks_to_self_fetched_at else time.perf_counter() - self.tracks_to_self_fetched_at
# print(transition > .999, self.is_running, current_position_rounded, time_diff)
if transition > .999 and self.is_running and not all(self.tracks_to_self_pos == current_position_rounded) and time_diff > 5: # only do these expensive calls when running
self.tracks_to_self_pos = current_position_rounded
self.tracks_to_self_fetched_at = time.perf_counter()
# fetch lines nearby
track_ids = self.stage.history.get_nearest_tracks(current_position, 30)
self.track_ids_to_self = iter(track_ids)
self.tracks_to_self = self.stage.history.ids_as_trajectory(track_ids)
self.stage.logger.info(f"Fetched similar tracks for {self.track_id}. (Took {time.perf_counter() - self.tracks_to_self_fetched_at}s)")
# if self.tracks_to_self and not len(self.line_others.root.points):
if self.tracks_to_self and not self.line_others.is_running():
try:
current_history = next(self.tracks_to_self)
current_history_id = next(self.track_ids_to_self)
self.line_others.get(CropAnimationLine).assume_fps = min(
self.line_others.get(CropAnimationLine).assume_fps + TRACK_ASSUMED_FPS*1.5 , # faster each time
TRACK_ASSUMED_FPS * 6 # capped at 6x
)
self.line_others.get(NoiseLine).amplitude = .05
logger.info(f"play history item: {current_history_id}")
self.line_others.get(FadeOutLine).set_alpha(1)
self.line_others.root.points = current_history
# print(self.line_others.root.points)
self.line_others.start()
except StopIteration as e:
pass
# logger.info("Exhausted similar tracks?")
else:
# reset loitering values
self.line_others.get(CropAnimationLine).assume_fps = TRACK_ASSUMED_FPS*2
self.line_others.get(NoiseLine).amplitude = 0
# special case: PLAY
if self.scene is ScenarioScene.PLAY:
pass
# if self.scene is ScenarioScene.CORRECTED_PREDICTION:
# self.line_prediction.get(DashedLine).skip = False
def to_renderable_lines(self, dt: DeltaT) -> RenderableLines:
# each scene is handled differently:
t1 = time.perf_counter()
# 1) history, fade out when lost
# self.line_history.get(StaticLine).color = SrgbaColor(1, 0, 1-self.anomaly_factor(), 1)
# fade out history after max duration, given in frames
track_age_in_frames = self.track_age() * TRACK_ASSUMED_FPS
self.line_history.get(FadedTailLine).set_frame_offset(track_age_in_frames)
t2 = time.perf_counter()
history_line = self.line_history.as_renderable_line(dt)
t3 = time.perf_counter()
prediction_line = self.line_prediction.as_renderable_line(dt)
t4 = time.perf_counter()
others_line = self.line_others.as_renderable_line(dt)
t5 = time.perf_counter()
# print(history_line)
# print(self.track_id, len(self.line_history.points), len(history_line))
timings = (t5-t4, t4-t3, t3-t2, t2-t1)
return RenderableLines([
history_line,
prediction_line,
others_line
]), timings
def set_scene(self, scene):
"""Create log message for the auxilary interface
"""
original = self.scene.name
changed = super().set_scene(scene)
if changed:
try:
self.stage.log_sock.send_string(f"Visitor {self.track_id}: {original} -> {self.scene.name}", zmq.NOBLOCK)
except Exception as e:
logger.warning("Not sent the scene change message, broken socket?")
return changed
class NoTracksScenario(PrioritySlotItem):
TAKEOVER_FADEOUT = 1 # override default to be faster
def __init__(self, stage: Stage, i: int):
super().__init__(f"screensaver_{i}")
self.stage = stage
self.line = build_line_others()
def get_priority(self):
# super low priority
return (-1, -1)
def can_be_taken_over(self):
return True
def get_state_name(self):
return "previewing"
def update(self, stage: Stage):
pass
def to_renderable_lines(self, dt: DeltaT):
timings = []
lines = RenderableLines([], CoordinateSpace.WORLD)
if not self.line.is_running():
track_id = random.choice(list(self.stage.history.state.tracks.keys()))
# print('track_id', track_id)
positions = self.stage.history.state.track_histories[track_id]
self.line.root.points = positions
self.line.start()
alpha = 1 - self.takeover_factor()
self.line.get(FadeOutLine).set_alpha(alpha)
lines.lines.append(
self.line.as_renderable_line(dt)
)
return lines, timings
class DebugDrawer():
def __init__(self, stage: Stage):
self.stage = stage
def positions_to_renderable_lines(self, dt: DeltaT):
lines = RenderableLines([], CoordinateSpace.WORLD)
past_color = SrgbaColor(1,0,1,1)
current_color = SrgbaColor(1,0,0,.6)
for scenario in self.stage.scenarios.values():
# lines.append(StaticLine(scenario.track.projected_history, past_color).as_renderable_line(dt).as_simplified(factor=.005))
center = scenario.track.projected_history[-1]
lines.append(StaticLine([[center[0], center[1]-.2], [center[0], center[1]+.2]], current_color).as_renderable_line(dt))
lines.append(StaticLine([[center[0]-.2, center[1]], [center[0]+.2, center[1]]], current_color).as_renderable_line(dt))
return lines
def predictions_to_renderable_lines(self, dt: DeltaT):
lines = RenderableLines([], CoordinateSpace.WORLD)
future_color = SrgbaColor(0,1,0,.6)
for scenario in self.stage.scenarios.values():
# lines.append(StaticLine(scenario.track.projected_history, past_color).as_renderable_line(dt).as_simplified(factor=.005))
if scenario.active_ptrack:
lines.append(StaticLine(scenario.active_ptrack._track.predictions[0], future_color).as_renderable_line(dt))
return lines
class DatasetDrawer():
def __init__(self, stage: Stage):
self.stage = stage
line_color = SrgbaColor(0,1,1,1)
self.track_line = LineAnimationStack(StaticLine([], line_color))
# self.track_line.add(SimplifyLine(self.track_line.tail, 0.004)) # Simplify before cropping, to get less noodling
self.track_line.add(SimplifyLine(self.track_line.tail, 0.002)) # no laser in dortmund
self.track_line.add(CropAnimationLine(self.track_line.tail, 50, assume_fps=TRACK_ASSUMED_FPS*20)) # speed up
# self.track_line.add(DashedLine(self.track_line.tail, t_factor=4, loop_offset=True))
# self.track_line.get(DashedLine).skip = True
# self.track_line.add(FadedEndsLine(self.track_line.tail, 10, 10))
self.track_line.add(FadeOutJitterLine(self.track_line.tail, t_factor=3))
# self.track_line.add(FadeOutLine(self.track_line.tail))
self.track_line.get(FadeOutJitterLine).set_alpha(np.random.random()*.3+.7)
def to_renderable_lines(self, dt: DeltaT):
lines = RenderableLines([], CoordinateSpace.WORLD)
if not self.track_line.is_running():
# print('update')
track_id = random.choice(list(self.stage.history.state.tracks.keys()))
# print('track_id', track_id)
positions = self.stage.history.state.track_histories[track_id]
self.track_line.root.points = positions
self.track_line.start()
# else:
# print('-')
lines.lines.append(
self.track_line.as_renderable_line(dt)
)
# print(lines)
return lines
class Stage(Node):
FALLBACK_FPS = 30 # we render to lasers, no need to go faster!
def setup(self):
self.active_scenarios: List[DrawnScenario] = [] # List of currently running Scenario instances
self.scenarios: Dict[str, DrawnScenario] = DefaultDictKeyed(lambda key: DrawnScenario(key, self))
self.frame_noimg_sock = self.sub(self.config.zmq_frame_noimg_addr)
self.trajectory_sock = self.sub(self.config.zmq_trajectory_addr)
self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
self.detection_sock = self.sub(self.config.zmq_detection_addr)
self.stage_sock = self.pub(self.config.zmq_stage_addr)
self.log_sock = self.push(self.config.zmq_log_addr)
# self.stage_py_sock = self.pub(self.config.zmq_stage_py_addr)
self.counter = CounterSender()
if self.config.debug_map:
debug_color = SrgbaColor(0.,0.,1.,1.)
self.debug_lines = RenderableLines(load_lines_from_svg(self.config.debug_map, 100, debug_color))
self.history = TrackHistory(self.config.tracker_output_dir, self.config.camera, self.config.cache_path)
self.auxilary = DatasetDrawer(self)
self.debug_drawer = DebugDrawer(self)
# 'screensavers'
self.notrack_scenarios = [] #[NoTracksScenario(self, i) for i in range(self.config.max_active_scenarios)]
def run(self):
while self.run_loop_capped_fps(self.get_setting('stage.fps', self.FALLBACK_FPS), warn_below_fps=10):
dt = max(1/ self.get_setting('stage.fps', self.FALLBACK_FPS), self.dt_since_last_tick) # never dt of 0
# t1 = time.perf_counter()
self.loop_receive()
# t2 = time.perf_counter()
self.loop_update_scenarios()
# t3 = time.perf_counter()
self.loop_render(dt)
# t4 = time.perf_counter()
# print(t2-t1, t3-t2, t4-t3)
def loop_receive(self):
# 1) receive predictions
try:
prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
for track_id, track in prediction_frame.tracks.items():
proj_track = ProjectedTrack(track, prediction_frame.camera)
self.scenarios[track_id].recv_prediction(proj_track)
except zmq.ZMQError as e:
# no msgs
pass
# 2) receive tracker tracks
try:
trajectory_frame: Frame = self.trajectory_sock.recv_pyobj(zmq.NOBLOCK)
for track_id, track in trajectory_frame.tracks.items():
proj_track = ProjectedTrack(track, trajectory_frame.camera)
self.scenarios[track_id].recv_track(proj_track)
except zmq.ZMQError as e:
pass
# self.logger.debug(f'reuse tracks')
def loop_update_scenarios(self):
"""Update active scenarios and handle pauses/completions."""
# 1) process timestep for all scenarios
for s in self.scenarios.values():
s.update()
# 2) Remove stale tracks and take-overs
for track_id, scenario in list(self.scenarios.items()):
if scenario.lost_factor() >= 1:
if scenario in self.active_scenarios:
self.active_scenarios = list(filter(scenario.__ne__, self.active_scenarios))
self.logger.info(f"rm lost track {track_id}")
del self.scenarios[track_id]
if scenario.takeover_factor() >= 1:
if scenario in self.active_scenarios:
self.active_scenarios = list(filter(scenario.__ne__, self.active_scenarios))
scenario.taken_over()
# 3) determine set of pending scenarios (all except running)
pending_scenarios = [s for s in list(self.scenarios.values()) + self.notrack_scenarios if s not in self.active_scenarios]
# ... highest priority first
pending_scenarios.sort(key=lambda s: s.get_priority(), reverse=True)
# 4) check if there's a slot free:
while len(self.active_scenarios) < self.config.max_active_scenarios and len(pending_scenarios):
scenario = pending_scenarios.pop(0)
self.active_scenarios.append(scenario)
scenario.start()
# 5) Takeover Logic: If no space, try to replace a lower-priority active scenario
# which is in a scene in which takeover is possible
eligible_active_scenarios = [
s for s in self.active_scenarios if s.can_be_taken_over()
]
eligible_active_scenarios.sort(key=lambda s: s.get_priority())
if eligible_active_scenarios and pending_scenarios:
lowest_priority_active = eligible_active_scenarios[0]
highest_priority_waiting = pending_scenarios[0]
if highest_priority_waiting.get_priority() > lowest_priority_active.get_priority():
# Takeover! Stop the active scenario
# will be cleaned up in update() loop after animation finishes
# automatically triggering the start of the highest priority scene
lowest_priority_active.take_over()
def loop_render(self, dt: DeltaT):
"""Draw all active scenarios onto the canvas."""
lines = RenderableLines([])
# TODO: sometimes very slow!
t1 = time.perf_counter()
training_lines = self.auxilary.to_renderable_lines(dt)
t2 = time.perf_counter()
active_positions = self.debug_drawer.positions_to_renderable_lines(dt)
all_predictions = self.debug_drawer.predictions_to_renderable_lines(dt)
t2b = time.perf_counter()
timings = []
for scenario in self.active_scenarios:
scenario_lines, timing = scenario.to_renderable_lines(dt)
lines.append_lines(scenario_lines)
timings.append(timing)
if not len(self.active_scenarios):
lines = training_lines
t2c = time.perf_counter()
# rl_scenario = lines.as_simplified(SimplifyMethod.RDP, .003) # or segmentise (see shapely)
# rl_training = training_lines.as_simplified(SimplifyMethod.RDP, .003) # or segmentise (see shapely)
self.counter.set("stage.lines", len(lines.lines))
# self.counter.set("stage.points_orig", lines.point_count())
self.counter.set("stage.points", lines.point_count())
t3 = time.perf_counter()
layers: RenderableLayers = {
1: lines,
2: self.debug_lines,
3: training_lines,
4: active_positions,
5: all_predictions,
}
t4 = time.perf_counter()
# msg = json.dumps(layers, cls=DataclassJSONEncoder).encode("utf8")
msg = layers_to_message(layers)
t5 = time.perf_counter()
self.stage_sock.send(msg)
# self.stage_sock.send_pyobj(layers)
# self.stage_sock.send_json(obj=layers, cls=DataclassJSONEncoder)
t6 = time.perf_counter()
t = (t2-t1, t2b-t2, t2c-t2b, t3-t2c, t2b-t2, t4-t3, t5-t4, t6-t5)
if sum(t) > .1:
print(t)
print(len(lines.lines))
print(lines.point_count())
print(len(msg))
print('scenario timings:', timings)
# print(msg)
# exit()
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = ArgumentParser()
argparser.add_argument('--zmq-frame-noimg-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame2")
argparser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
argparser.add_argument('--zmq-prediction-addr',
help='Manually specify communication addr for the prediction messages',
type=str,
default="ipc:///tmp/feeds_preds")
argparser.add_argument('--zmq-detection-addr',
help='Manually specify communication addr for the detection messages',
type=str,
default="ipc:///tmp/feeds_dets")
argparser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
argparser.add_argument('--zmq-log-addr',
help='Manually specify communication addr for the log messages',
type=str,
default="tcp://0.0.0.0:99188")
argparser.add_argument('--zmq-stage-py-addr',
help='Sometimes there is no need for protobuf',
type=str,
default="ipc:///tmp/feeds_stage")
argparser.add_argument('--debug-map',
help='specify a map (svg-file) from which to load lines that will be overlaid',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
argparser.add_argument('--cutoff-map',
help='specify a map (svg-file) that specifies projection boundaries. Within these, degrade the chance to be selected',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
argparser.add_argument('--max-active-scenarios',
help='Maximum number of active scenarios that can be drawn at once (to not overload the laser)',
type=int,
default=2)
# TODO: this should be subsumed to some sort of Track Dataset loader
historyargs = argparser.add_argument_group("Track History Loader")
historyargs.add_argument("--camera-fps",
help="Camera FPS",
type=int,
default=12)
historyargs.add_argument("--homography",
help="File with homography params [Deprecated]",
type=Path,
default='../DATASETS/VIRAT_subset_0102x/VIRAT_0102_homography_img2world.txt',
action=HomographyAction)
historyargs.add_argument("--calibration",
help="File with camera intrinsics and lens distortion params (calibration.json)",
# type=Path,
required=True,
# default=None,
action=CameraAction)
historyargs.add_argument("--cache-path",
help="Where to cache the Track History dataset",
type=Path,
required=True,
)
historyargs.add_argument("--tracker-output-dir",
help="Directory for the track reader (e.g. EXPERIMENT/raw/_name_)",
type=Path,
required=True,
)
return argparser

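A side note on the scheduling in `loop_update_scenarios()` above: `get_priority()` returns a tuple of (scene priority, distance travelled), so Python's tuple ordering breaks ties between scenes of equal priority by how far the track has moved. A small, self-contained illustration (the track ids and distances below are made up):

```python
# Illustrative only: how pending scenarios would be ordered for the free slots.
pending = [
    ("track_a", (6, 2.0)),    # TRACKED, moved 2.0 m
    ("track_b", (10, 0.5)),   # PREDICTION_AVAILABLE, moved 0.5 m
    ("track_c", (6, 7.5)),    # TRACKED, moved 7.5 m
]
pending.sort(key=lambda item: item[1], reverse=True)
print([track_id for track_id, _ in pending])
# ['track_b', 'track_c', 'track_a'] -> higher scene priority first, distance as tie-breaker
```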
File diff suppressed because it is too large

View file

@@ -1,276 +0,0 @@
from argparse import ArgumentParser
from collections import deque
import math
import re
from typing import List
import numpy as np
import pyglet
from torch import mul
import zmq
from trap.lines import RenderableLayers, message_to_layers
from trap.node import Node
BG_COLOR = (0,0,255)
class StageRenderer(Node):
def setup(self):
# self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
# self.tracker_sock = self.sub(self.config.zmq_trajectory_addr)
# self.detector_sock = self.sub(self.config.zmq_detection_addr)
# self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.stage_sock = self.sub(self.config.zmq_stage_addr)
self.log_sock = self.pull(self.config.zmq_log_addr)
# setup pyglet:
display = pyglet.display.get_display()
screens = display.get_screens()
# use configured montior, fall back to whatever is available
self.screen = sorted(screens, reverse=True, key=lambda s: s.get_monitor_name() == self.config.monitor)[0]
if self.screen.get_monitor_name() != self.config.monitor:
self.logger.warning(f"Not displaying on configured monitor. {self.screen.get_monitor_name()} instead of {self.config.monitor}")
# print(self.screen.get_modes())
config = pyglet.gl.Config(sample_buffers=1, samples=4)
# when the screen is in portrait, window mode here still expects a (larger x smaller) size.
# self.window.get_size() will be reported properly
wh = sorted((self.screen.width, self.screen.height), reverse=self.config.fullscreen)
self.window = pyglet.window.Window(width=wh[0], height=wh[1], config=config, fullscreen=self.config.fullscreen, screen=self.screen)
self.window.set_exclusive_keyboard(True)
self.window.set_exclusive_keyboard(False)
self.window.set_exclusive_mouse(True)
self.window.set_exclusive_mouse(False)
# self.window.set_size(1080, 1920)
window_size = self.window.get_size()
padding = 40
print(window_size)
self.window.set_handler('on_draw', self.on_draw)
# self.window.set_handler('on_close', self.on_close)
# pyglet.gl.glClearColor(81./255, 20/255, 46./255, 0)
pyglet.gl.glClearColor(0/255, 0/255, 255/255, 0)
self.fps_display = pyglet.window.FPSDisplay(window=self.window, color=(255,255,255,255))
self.fps_display.label.x = self.window.width - 50
self.fps_display.label.y = self.window.height - 17
self.fps_display.label.bold = False
self.fps_display.label.font_size = 10
self.current_layers: RenderableLayers = {}
self.lines: List[pyglet.shapes.Line] = []
self.lines_batch = pyglet.graphics.Batch()
self.text = pyglet.text.document.FormattedDocument("")
self.text_batch = pyglet.graphics.Batch()
self.text_layout = pyglet.text.layout.TextLayout(
self.text, padding, (self.window.get_size()[0]-padding*2) // 2 - 100,
width=self.window.get_size()[1] - 2*padding,
height=(self.window.get_size()[0] - padding) // 2,
multiline=True, wrap_lines=False, batch=self.text_batch)
max_len = 31
self.log_msgs = deque([], maxlen=max_len)
self.log_msgs.extend(["-"] * max_len)
translate = (10,-400)
# scale = 5
smallest_dimension = min(self.window.get_size())
max_x = 16.3
max_y = 14.3
scale = min(smallest_dimension / max_x, smallest_dimension/max_y)
self.logger.info(f"Use {scale=}")
self.transform = np.array([
[scale, 0,translate[0]],
[0,-scale,window_size[1]],
[0,0,1]
])
self.bg_image = pyglet.image.load(self.config.floorplan)
scale = (window_size[0] - padding*2) / (self.bg_image.width)
print('image_scale', scale, self.bg_image.width, self.bg_image.height)
# self.bg_image.height = int(self.bg_image.height / 3)
# self.bg_image.width = int(self.bg_image.width / 3)
img_y = window_size[1]-int(self.bg_image.height*scale)-padding*2
self.bg_sprite = pyglet.sprite.Sprite(img=self.bg_image, x=padding, y=img_y)
self.bg_sprite.scale = scale
clear_area = img_y
self.clear_transparent = pyglet.shapes.Rectangle(0, window_size[1]-clear_area, window_size[0], clear_area, color=(*BG_COLOR,255//70))
self.clear_fully= pyglet.shapes.Rectangle(0, 0, window_size[0], window_size[1]-clear_area, color=(*BG_COLOR,255))
self.window.clear()
def check_running(self, dt):
if not self.run_loop():
self.window.close()
self.event_loop.exit()
def run(self):
self.event_loop = pyglet.app.EventLoop()
pyglet.clock.schedule_interval(self.check_running, 0.1)
# pyglet.clock.schedule(self.receive)
self.event_loop.run()
def receive(self, dt):
try:
msg = self.stage_sock.recv(zmq.NOBLOCK)
self.current_layers = message_to_layers(msg)
self.update_lines()
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
while True:
try:
log_msg = self.log_sock.recv_string(zmq.NOBLOCK)
self.log_msgs.append(log_msg)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
break
self.update_msgs()
def update_lines(self):
"""
Render the renderable lines of selected layers
"""
additional_scale = self.get_setting('stagerenderer.scale', 1)
dx = self.get_setting('stagerenderer.dx', 0)
dy = self.get_setting('stagerenderer.dy', 0)
transform = self.transform.copy()
transform[0][0] *= additional_scale
transform[1][1] *= additional_scale
transform[0][2] += dx
transform[1][2] += dy
i = -1
for nr, lines in self.current_layers.items():
if not self.get_setting(f'stagerenderer.layer.{nr}', True):
continue
for line in lines.lines:
for p1, p2 in zip(line.points, line.points[1:]):
i += 1
pp1 = np.array([p1.position[0], p1.position[1], 1])
pp2 = np.array([p2.position[0], p2.position[1], 1])
pos1 = (transform@pp1)[:2].astype(int)
pos2 = (transform@pp2)[:2].astype(int)
color = (p2.color.as_array()*255).astype(int)
if i < len(self.lines):
shape = self.lines[i]
shape.x = pos1[0]
shape.y = pos1[1]
shape.x2 = pos2[0]
shape.y2 = pos2[1]
shape.color = color
else:
self.lines.append(pyglet.shapes.Line(pos1[0], pos1[1],
pos2[0],
pos2[1],
3,
color,
batch=self.lines_batch))
too_many = len(self.lines) - 1 - i
if too_many > 0:
# drop the now-unused tail shapes (everything beyond the last index used above)
for j in reversed(range(i + 1, i + 1 + too_many)):
self.lines[j].delete()
del self.lines[j]
def update_msgs(self):
text = "\n".join(self.log_msgs)
self.text.text = text
self.text.set_style(0, len(self.text.text), dict(
font_name='Arial', # change to a font installed on your system
font_size=18,
color=(255, 255, 255, 255),
))
colorsmap = {
'ANOMALOUS': (255, 0, 0, 255),
'LOITERING': (255, 255, 0, 255),
'DETECTED': (255, 0, 255, 255),
'SUBSTANTIAL': (255, 0, 255, 255),
'LOST': (0, 0, 0, 255),
}
matchtext = "".join(self.log_msgs) # find no newlines
for state,color in colorsmap.items():
for match in re.finditer(state, matchtext):
self.text.set_style(match.start(), match.end(), dict(
color=color
))
def on_draw(self):
self.receive(.1)
# self.window.clear()
self.clear_transparent.color = (*BG_COLOR, int(3))
self.clear_transparent.draw()
self.clear_fully.draw()
self.fps_display.draw()
self.bg_sprite.draw()
self.lines_batch.draw()
self.text_batch.draw()
@classmethod
def arg_parser(cls):
render_parser = ArgumentParser()
render_parser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
render_parser.add_argument('--zmq-log-addr',
help='Manually specify communication addr for the log messages',
type=str,
default="tcp://0.0.0.0:99188")
render_parser.add_argument("--fullscreen",
help="Set Window full screen",
action='store_true')
render_parser.add_argument('--floorplan',
help='specify a map (png-file) onto which the stage lines are overlaid',
type=str,
default="SETTINGS/2025-11-dortmund/space/floorplan.png")
render_parser.add_argument('--monitor',
help='Specify a screen on which to output (e.g. HDMI-0)',
type=str,
default="HDMI-0")
return render_parser

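For clarity on the projection used in `update_lines()` above: world coordinates (metres) are mapped to window pixels through a 3x3 homogeneous transform with a negative y scale, because pyglet's origin sits at the bottom-left. A standalone sketch with made-up numbers (the real scale is derived from the window size, the max_x/max_y extents, and the `stagerenderer.*` settings):

```python
import numpy as np

scale, translate_x, window_h = 60.0, 10, 1920   # illustrative values only
transform = np.array([
    [scale, 0.0, translate_x],   # world x (m) -> pixels, shifted right by translate_x
    [0.0, -scale, window_h],     # world y flipped relative to pyglet's bottom-left origin
    [0.0, 0.0, 1.0],
])
world_point = np.array([4.2, 7.0, 1.0])          # homogeneous world coordinate
pixel = (transform @ world_point)[:2].astype(int)
print(pixel)                                     # -> x=262, y=1500
```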
View file

@@ -1,13 +1,13 @@
import collections
from re import A
import time
from multiprocessing.sharedctypes import Value
from multiprocessing.sharedctypes import RawValue, Value, Array
from ctypes import c_double
from typing import MutableSequence
class Timer():
"""
Multiprocess timer. Count iterations in one process, while converting that
to fps in the other.
Measure 2 independent things: the frequency of tic, and the duration of tic->toc.
Note that these don't need to be equal.
"""
@@ -40,6 +40,7 @@ class Timer():
@property
def fps(self):
fpses = []
if len(self.tocs) < 2:
return 0
dt = self.tocs[-1][0] - self.tocs[0][0]

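The docstring above distinguishes two independent measurements: how often `tic` happens (frequency) versus how long each tic->toc span takes (duration). A toy, single-process illustration of that distinction (this is not the repo's multiprocess `Timer`):

```python
import time

tic_times, durations = [], []
for _ in range(5):
    tic = time.perf_counter()
    tic_times.append(tic)
    time.sleep(0.01)                     # the work being timed (tic -> toc)
    durations.append(time.perf_counter() - tic)
    time.sleep(0.05)                     # idle time only lowers the tic frequency

frequency = (len(tic_times) - 1) / (tic_times[-1] - tic_times[0])   # ~16 Hz here
mean_duration = sum(durations) / len(durations)                     # ~0.01 s
print(frequency, mean_duration)
```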
View file

@@ -1,6 +1,4 @@
from __future__ import annotations
from argparse import Namespace
from dataclasses import dataclass
import json
import math
from pathlib import Path
@@ -10,13 +8,10 @@ from tempfile import mktemp
import jsonlines
import numpy as np
import pandas as pd
import shapely
from shapely.ops import split
from trap.preview_renderer import DrawnTrack
import trap.tracker
from trap.config import parser
from trap.frame_emitter import Camera, Detection, DetectionState, video_src_from_config, Frame
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _ultralytics_track, Track, TrainingDataWriter, Tracker, read_tracks_json
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _yolov8_track, Track, TrainingDataWriter, Tracker, read_tracks_json
from collections import defaultdict
import logging
@@ -221,118 +216,25 @@ def transition_path_points(path: np.array, t: float):
break
return np.array(new_path)
from shapely.geometry import LineString
from shapely.geometry import Point
from sklearn.cluster import AgglomerativeClustering
@dataclass
class PointCluster:
point: np.ndarray
start: np.ndarray
source_points: List[np.ndarray]
probability: float
next_point_clusters: List[PointCluster]
def cluster_predictions_by_radius(start_point, lines: Iterable[np.ndarray] | LineString, radius = .5, p_factor = 1.) -> List[PointCluster]:
# start = lines[0][0]
p0 = Point(*start_point)
# print(lines[0][0], start_point)
circle = p0.buffer(radius).boundary
# print(lines)
# print([line.tolist() for line in lines])
intersections = []
remaining_lines = []
for line in lines:
linestring = line if type(line) is LineString else LineString(line.tolist())
intersection = circle.intersection(linestring)
if type(intersection) is LineString and intersection.is_empty:
# No intersection with circle, a dangling endpoint that we can skip
continue
if type(intersection) is not Point:
# with multiple intersections: use only the first one
intersection = intersection.geoms[0]
# set a buffer around the intersection to ensure a match is found on the line
split_line = split(linestring, intersection.buffer(.01))
remaining_line = split_line.geoms[2] if len(split_line.geoms) > 2 else None
# print(intersection, split_line)
intersections.append(intersection)
remaining_lines.append(remaining_line)
if len(intersections) < 1:
return []
# linestrings = [LineString(line.tolist()) for line in lines]
# intersections = [circle.intersection(line) for line in linestrings]
# dangling_lines = [(type(i) is LineString and i.is_empty) for i in intersections]
# intersections = [False if is_end else (p if type(p) is Point else p.geoms[0]) for p, is_end in zip(intersections, dangling_lines)]
# as all intersections are on the same circle we can estimate the angle by
# estimating distance, as the circumference is 2*pi*r, so distance is proportional to the radius.
if len(intersections) > 1:
clustering = AgglomerativeClustering(None, linkage="ward", distance_threshold=2*math.pi * radius / 6)
coords = np.asarray([i.coords for i in intersections]).reshape((-1,2))
assigned_clusters = clustering.fit_predict(coords)
else:
assigned_clusters = [0] # only one item
clusters = defaultdict(lambda: [])
cluster_remainders = defaultdict(lambda: [])
for point, line, c in zip(intersections, remaining_lines, assigned_clusters):
clusters[c].append(point)
cluster_remainders[c].append(line)
line_clusters = []
for c, points in clusters.items():
mean = np.mean(points, axis=0)
prob = p_factor * len(points) / len(assigned_clusters)
remaining_lines = cluster_remainders[c]
remaining_lines = list(filter(None, remaining_lines))
next_points = cluster_predictions_by_radius(mean, remaining_lines, radius, prob)
line_clusters.append(PointCluster(mean, start_point, points, prob, next_points))
# split_lines = [shapely.ops.split(line, point) for line, point in zip(linestrings, intersections)]
# remaining_lines = [l[1] for l in split_lines if len(l) > 1]
# print(line_clusters)
return line_clusters
# def cosine_similarity(point1, point2):
# dot_product = np.dot(point1, point2)
# norm1 = np.linalg.norm(point1)
# norm2 = np.linalg.norm(point2)
# return dot_product / (norm1 * norm2)
# p = Point(5,5)
# c = p.buffer(3).boundary
# l = LineString([(0,0), (10, 10)])
# i = c.intersection(l)
def track_predictions_to_lines(track: Track, camera:Camera, anim_position=.8):
def draw_track_predictions(img: cv2.Mat, track: Track, color_index: int, camera:Camera, convert_points: Optional[Callable], anim_position=.8):
"""
anim_position: 0-1
"""
if not track.predictions:
return
current_point = track.get_projected_history(camera=camera)[-1]
opacity = 1-min(1, max(0, inv_lerp(0.8, 1, anim_position))) # fade out
slide_t = min(1, max(0, inv_lerp(0, 0.8, anim_position))) # slide_position
# if convert_points:
# current_point = convert_points([current_point])[0]
lines = []
for pred_i, pred in enumerate(track.predictions):
pred_coords = pred #cv2.perspectiveTransform(np.array([pred]), inv_H)[0].tolist()
@@ -340,87 +242,23 @@ def track_predictions_to_lines(track: Track, camera:Camera, anim_position=.8):
line_points = np.concatenate(([current_point], pred_coords)) # 'current point' is a moving target
# print(pred_coords, current_point, line_points)
line_points = transition_path_points(line_points, slide_t)
lines.append(line_points)
# print("prediction line", len(line_points))
# break # TODO: only one
return lines
def drawntrack_predictions_to_lines(drawn_track: DrawnTrack, camera:Camera, anim_position=.8):
if not drawn_track.drawn_predictions:
return
# current_point = drawn_track.pred_track.get_projected_history(camera=camera)[-1] # not guaranteed to be up to date
current_point = drawn_track.drawn_predictions[0][0]
# print(current_point)
slide_t = min(1, max(0, inv_lerp(0, 0.8, anim_position))) # slide_position
lines = []
for pred_i, pred in enumerate(drawn_track.drawn_predictions):
pred_coords = pred #cv2.perspectiveTransform(np.array([pred]), inv_H)[0].tolist()
# line_points = pred_coords
line_points = np.concatenate(([current_point], pred_coords)) # 'current point' is a moving target
# print(pred_coords, current_point, line_points)
line_points = transition_path_points(line_points, slide_t)
lines.append(line_points)
# print("prediction line", len(line_points))
# break # TODO: only one
return lines
def draw_track_predictions(img: cv2.Mat, track: Track, color_index: int, camera:Camera, convert_points: Optional[Callable], anim_position=.8, as_clusters=False):
"""
anim_position: 0-1
"""
lines = track_predictions_to_lines(track, camera, anim_position)
if not lines:
return
opacity = 1-min(1, max(0, inv_lerp(0.8, 1, anim_position))) # fade out
# if convert_points:
# current_point = convert_points([current_point])[0]
color = bgr_colors[color_index % len(bgr_colors)]
color = tuple([int(c*opacity) for c in color])
if as_clusters:
clusters = cluster_predictions_by_radius(current_point, lines, 1.5)
def draw_cluster(img, cluster: PointCluster):
points = convert_points([cluster.start, cluster.point])
# cv2 only draws to integer coordinates
points = np.rint(points).astype(int)
thickness = max(1, int(cluster.probability * 6))
thickness=1
# if len(cluster.next_point_clusters) == 1:
# not a final point, nor a split:
cv2.line(img, points[0], points[1], color, thickness, lineType=cv2.LINE_AA)
# else:
# cv2.arrowedLine(img, points[0], points[1], color, thickness, cv2.LINE_AA)
for sub in cluster.next_point_clusters:
draw_cluster(img, sub)
# pass
# # cv2.circle(img, end, 2, color, 1, lineType=cv2.LINE_AA)
# print(clusters)
for cluster in clusters:
draw_cluster(img, cluster)
else:
# convert function (e.g. to project points to img space)
if convert_points:
lines = [convert_points(points) for points in lines]
# cv2 only draws to integer coordinates
lines = [np.rint(points).astype(int) for points in lines]
# draw in a single pass
# line_points = line_points.reshape((1, -1,1,2)) # TODO)) SEems to do nothing..
cv2.polylines(img, lines, False, color, 2, cv2.LINE_AA)
line_points = convert_points(line_points)
line_points = np.rint(line_points).astype(int)
# color = (128,0,128) if pred_i else (128,128,0)
color = bgr_colors[color_index % len(bgr_colors)]
color = tuple([int(c*opacity) for c in color])
line_points = line_points.reshape((-1,1,2))
lines.append(line_points)
# draw in a single pass
cv2.polylines(img, lines, False, color, 2, cv2.LINE_AA)
# for start, end in zip(line_points[:-1], line_points[1:]):
# cv2.line(img, start, end, color, 2, lineType=cv2.LINE_AA)
# pass
# # cv2.circle(img, end, 2, color, 1, lineType=cv2.LINE_AA)
def draw_trackjectron_history(img: cv2.Mat, track: Track, color_index: int, convert_points: Optional[Callable]):
if not track.predictor_history:
@@ -463,12 +301,9 @@ def draw_track_projected(img: cv2.Mat, track: Track, color_index: int, camera: C
for j in range(len(history)-1):
# a = history[j]
b = history[j+1]
detection = track.history[j+1]
color = point_color if detection.state == DetectionState.Confirmed else (100,100,100)
# cv2.line(img, to_point(a), to_point(b), point_color, 1)
cv2.circle(img, to_point(b), 3, color, 2)
cv2.circle(img, to_point(b), 3, point_color, 2)
def draw_track(img: cv2.Mat, track: Track, color_index: int):

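To make the recursive radius clustering above more concrete, here is a hypothetical call to `cluster_predictions_by_radius()`: three predicted paths fan out from the current point, and with a 0.5 m radius the two nearly parallel ones intersect the circle close together and merge into one `PointCluster`, while the third gets its own cluster (probabilities roughly 2/3 and 1/3):

```python
import numpy as np
# assumes cluster_predictions_by_radius from the tools module above is importable

start = np.array([0.0, 0.0])
predictions = [
    np.array([[0.0, 0.0], [0.0, 2.0]]),   # straight ahead
    np.array([[0.0, 0.0], [0.3, 2.0]]),   # slightly to the right, clusters with the first
    np.array([[0.0, 0.0], [2.0, 0.2]]),   # sharp right turn, its own cluster
]
clusters = cluster_predictions_by_radius(start, predictions, radius=0.5)
for cluster in clusters:
    # mean intersection point, share of paths, and the recursively clustered continuations
    print(cluster.point, cluster.probability, len(cluster.next_point_clusters))
```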
View file

@@ -1,186 +0,0 @@
from dataclasses import dataclass
import logging
from pathlib import Path
import pickle
from threading import Lock
import time
from typing import Dict, Iterable, List, Optional, Set
import numpy as np
from trap.base import Camera, Track
from trap.lines import Coordinate
from trap.tracker import FinalDisplacementFilter, Smoother, TrackReader
from scipy.spatial import KDTree
logger = logging.getLogger('history')
@dataclass
class TrackHistoryState():
"""
The lock of TrackHistory is not pickle-able, so the picklable data is kept in this separate state object
"""
tracks: List[Track]
track_histories: Dict[str, np.ndarray]
indexed_track_ids: List[str]
tree: KDTree
class TrackHistory():
def __init__(self, path: Path, camera: Camera, cache_path: Optional[Path]):
self.path = path
self.camera = camera
self.cache_path = cache_path
self.lock = Lock()
self.load_from_cache() or self.reload()
def load_from_cache(self):
if self.cache_path is None:
return False
if self.cache_path.exists():
logger.debug("Load history state from cache")
with self.cache_path.open('rb') as fp:
try:
state = pickle.load(fp)
if not isinstance(state, TrackHistoryState):
raise RuntimeError("Pickled data is not a trackhistorystate")
self.state = state
return True
except Exception as e:
logger.warning(f"Cannot read cache {self.cache_path}: {e}")
return False
def build_tree(self):
reader = TrackReader(self.path, self.camera.fps)
logger.debug(f'loaded {len(reader)} tracks')
track_filter = FinalDisplacementFilter(2)
tracks = track_filter.apply(reader, self.camera)
logger.debug(f'after filtering left with {len(tracks)} tracks')
tracks: List[Track] = [t.get_with_interpolated_history() for t in tracks]
logger.debug(f'interpolated {len(tracks)} tracks')
# use convolution here, because precision does not matter and it is _way_ faster
smoother = Smoother(convolution=True)
tracks = [smoother.smooth_track(t) for t in tracks]
logger.debug(f'smoothed')
tracks = {track.track_id: track for track in tracks}
track_histories = {t.track_id: t.get_projected_history(camera=self.camera) for t in tracks.values()}
downsampled_histories = {t_id: self.downsample_history(h) for t_id, h in track_histories.items()}
logger.debug(f'projected to world space')
# Sample data (coordinates and metadata)
# coordinates = [(1, 2, 'Point A'), (3, 4, 'Point B'), (5, 6, 'Point C'), (7, 8, 'Point D')]
all_points = []
indexed_track_ids: List[str] = []
for track_id, history in downsampled_histories.items():
all_points.extend([
[point[0], point[1]] for point in history
])
indexed_track_ids.extend([track_id] * len(history))
# self.flat_idx = self.flat_histories[:,2]
# Create the KD-Tree
tree = KDTree(all_points)
logger.debug('built tree')
return TrackHistoryState(
tracks, track_histories, indexed_track_ids, tree
)
def reload(self):
state = self.build_tree()
# aquire lock as brief as possible
with self.lock:
self.state = state
if self.cache_path:
with self.cache_path.open('wb') as fp:
logger.debug("Writing history to cache")
pickle.dump(self.state, fp)
def get_nearest_tracks(self, point: Coordinate, k:int, max_r: Optional[float] = np.inf):
with self.lock:
distances, indexes = self.state.tree.query(point, k, distance_upper_bound=max_r)
# filter out results for which no neighbour was found within max_r (distance == inf)
indexes = indexes[distances != np.inf]
track_ids: Set[str] = {self.state.indexed_track_ids[idx] for idx in indexes}
# nearby_indexes = self.tree.query_ball_point(point, r)
# track_ids = set([self.flat_idx[idx] for idx in nearby_indexes])
return track_ids
def ids_as_trajectory(self, track_ids: Iterable[str]):
for track_id in track_ids:
yield self.state.tracks[track_id].get_projected_history(camera=self.camera)
@classmethod
def downsample_history(cls, history, cell_size=.3):
if not len(history):
return []
positions = np.unique(np.round(history / cell_size), axis=0) * cell_size
return positions
if __name__ == "__main__":
path = Path("EXPERIMENTS/raw/hof3/")
logging.basicConfig(level=logging.DEBUG)
calibration_path = Path("../DATASETS/hof3/calibration.json")
homography_path = Path("../DATASETS/hof3/homography.json")
camera = Camera.from_paths(calibration_path, homography_path, 12)
# device = device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
s = time.time()
history = TrackHistory(path, camera, Path("/tmp/historystate_hof3.pcl"))
dt = time.time() - s
print(f'loaded {len(history.state.tracks)} tracks in {dt}s')
track = list(history.state.tracks.values())[25]
trajectory_crop = TrackHistory.downsample_history(history.state.track_histories[track.track_id])
trajectory_org = track.get_projected_history(camera=camera)
target_point = trajectory_org[len(trajectory_org)//2+90]
import matplotlib.pyplot as plt # Visualization
track_set = history.get_nearest_tracks(target_point, 10, max_r=np.inf)
plt.gca().set_aspect('equal')
plt.scatter(trajectory_crop[:,0], trajectory_crop[:,1], c='orange')
plt.plot(trajectory_org[:,0], trajectory_org[:,1], c='blue', alpha=1)
plt.scatter(target_point[0], target_point[1], c='red', alpha=1)
for track_id in track_set:
closeby = history.state.tracks[track_id].get_projected_history(camera=camera)
plt.plot(closeby[:,0], closeby[:,1], c='green', alpha=.1)
plt.show()

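The indexing trick in `build_tree()` above is worth spelling out: every downsampled point of every track goes into one flat KD-tree, with a parallel list mapping each tree index back to its track id, so a spatial query collapses into a set of nearby track ids. A toy version (not the repo's `TrackHistory`, and with made-up coordinates):

```python
import numpy as np
from scipy.spatial import KDTree

histories = {
    "track_1": np.array([[0.0, 0.0], [0.3, 0.3], [0.6, 0.6]]),
    "track_2": np.array([[5.0, 5.0], [5.3, 5.0]]),
}
points, owners = [], []
for track_id, pts in histories.items():
    points.extend(pts.tolist())              # flat list of all downsampled points
    owners.extend([track_id] * len(pts))     # tree index -> owning track id

tree = KDTree(points)
distances, indexes = tree.query([0.5, 0.5], k=3, distance_upper_bound=2.0)
nearby = {owners[i] for d, i in zip(distances, indexes) if d != np.inf}
print(nearby)   # {'track_1'}
```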
View file

@@ -1,74 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
from argparse import ArgumentParser
from pathlib import Path
import zmq
from trap.base import Track
from trap.frame_emitter import Frame
from trap.node import Node
from trap.tracker import TrainingDataWriter, TrainingTrackWriter
class TrackWriter(Node):
def setup(self):
self.track_sock = self.sub(self.config.zmq_lost_addr)
self.log_sock = self.push(self.config.zmq_log_addr)
def run(self):
with TrainingTrackWriter(self.config.output_dir) as writer:
try:
while self.run_loop():
zmq_ev = self.track_sock.poll(timeout=1000)
if not zmq_ev:
# when no data comes in, loop so that is_running is checked
continue
try:
track: Track = self.track_sock.recv_pyobj()
if len(track.history) < 20:
self.logger.debug(f"ignore short track {len(track.history)}")
continue
writer.add(track)
self.logger.info(f"Added track {track.track_id}")
try:
self.log_sock.send_string(f"Added track {track.track_id} to dataset, {len(track.history)} datapoints", zmq.NOBLOCK)
except Exception as e:
self.logger.warning("Not sent the message, broken socket?")
except zmq.ZMQError as e:
pass
except KeyboardInterrupt as e:
print('stopping on interrupt')
self.logger.info('Stopping')
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--zmq-log-addr',
help='Manually specify communication addr for the log messages',
type=str,
default="tcp://0.0.0.0:99188")
argparser.add_argument('--zmq-lost-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_lost")
argparser.add_argument("--output-dir",
help="Directory to save the video in",
required=True,
default=Path("EXPERIMENTS/raw/hof-lidar"),
type=Path)
return argparser

View file

@ -1,43 +1,36 @@
from abc import ABC, abstractmethod
import argparse
import csv
import json
import logging
import multiprocessing
import pickle
import time
from argparse import Namespace
from collections import defaultdict
from datetime import datetime, timedelta
import csv
from dataclasses import dataclass, field
import json
import logging
from math import nan
from multiprocessing import Event
from pathlib import Path
from typing import DefaultDict, Dict, List, Optional
import cv2
import pickle
import time
from typing import Dict, Optional, List
import jsonlines
import numpy as np
import torch
import torchvision
import zmq
from bytetracker import BYTETracker
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deepsort_tracker import DeepSort
from torchvision.models.detection import (FasterRCNN_ResNet50_FPN_V2_Weights,
KeypointRCNN_ResNet50_FPN_Weights,
MaskRCNN_ResNet50_FPN_V2_Weights,
fasterrcnn_resnet50_fpn_v2,
keypointrcnn_resnet50_fpn,
maskrcnn_resnet50_fpn_v2)
from tsmoothie.smoother import ConvolutionSmoother, KalmanSmoother
from ultralytics import YOLO, RTDETR
from ultralytics.engine.model import Model as UltralyticsModel
from ultralytics.engine.results import Results as UltralyticsResult
import cv2
from trap import timer
from trap.frame_emitter import (Camera, DataclassJSONEncoder, Detection,
DetectionState, Frame, Track)
from trap.gemma import ImgMovementFilter
from trap.node import Node
from torchvision.models.detection import retinanet_resnet50_fpn_v2, RetinaNet_ResNet50_FPN_V2_Weights, keypointrcnn_resnet50_fpn, KeypointRCNN_ResNet50_FPN_Weights, maskrcnn_resnet50_fpn_v2, MaskRCNN_ResNet50_FPN_V2_Weights, FasterRCNN_ResNet50_FPN_V2_Weights, fasterrcnn_resnet50_fpn_v2
from deep_sort_realtime.deepsort_tracker import DeepSort
from torchvision.models import ResNet50_Weights
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from ultralytics import YOLO
from ultralytics.engine.results import Results as YOLOResult
from trap.frame_emitter import Camera, DataclassJSONEncoder, DetectionState, Frame, Detection, Track
from bytetracker import BYTETracker
from tsmoothie.smoother import KalmanSmoother, ConvolutionSmoother
import tsmoothie.smoother
from datetime import datetime, timedelta
# Detection = [int, int, int, int, float, int]
# Detections = [Detection]
@ -54,32 +47,29 @@ DETECTOR_RETINANET = 'retinanet'
DETECTOR_MASKRCNN = 'maskrcnn'
DETECTOR_FASTERRCNN = 'fasterrcnn'
DETECTOR_YOLOv8 = 'ultralytics'
DETECTOR_RTDETR = 'rtdetr'
TRACKER_DEEPSORT = 'deepsort'
TRACKER_BYTETRACK = 'bytetrack'
DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8, DETECTOR_RTDETR]
DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8]
TRACKERS =[TRACKER_DEEPSORT, TRACKER_BYTETRACK]
TRACKER_CONFIDENCE_MINIMUM = .001
TRACKER_BYTETRACK_MINIMUM = .001 # bytetrack can track items with a lower threshold
TRACKER_CONFIDENCE_MINIMUM = .2
TRACKER_BYTETRACK_MINIMUM = .1 # bytetrack can track items with a lower threshold
NON_MAXIMUM_SUPRESSION = 1
RCNN_SCALE = .4 # seems to have no impact on detections in the corners
def _ultralytics_track(img: cv2.Mat, frame_idx: int, model: UltralyticsModel, **kwargs) -> List[Detection]:
results: List[UltralyticsResult] = list(model.track(img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.001, **kwargs))
def _yolov8_track(frame: Frame, model: YOLO, **kwargs) -> List[Detection]:
results: List[YOLOResult] = list(model.track(frame.img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.00001, **kwargs))
if results[0].boxes is None or results[0].boxes.id is None:
# work around https://github.com/ultralytics/ultralytics/issues/5968
return []
boxes = results[0].boxes.xywh.cpu()
confidence = results[0].boxes.conf.cpu().tolist()
track_ids = results[0].boxes.id.int().cpu().tolist()
classes = results[0].boxes.cls.int().cpu().tolist()
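# boxes come as centre-based xywh; convert to left/top/width/height for Detection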
return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], conf, DetectionState.Confirmed, frame_idx, class_id) for bbox, track_id, class_id, conf in zip(boxes, track_ids, classes, confidence)]
return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame.index, class_id) for bbox, track_id, class_id in zip(boxes, track_ids, classes)]
class Multifile():
def __init__(self, srcs: List[Path]):
@ -119,7 +109,6 @@ class FinalDisplacementFilter(TrackFilter):
def filter(self, track: Track, camera: Camera):
history = track.get_projected_history(H=None, camera=camera)
displacement = np.linalg.norm(history[0]-history[-1])
return displacement > self.min_displacement
@ -127,37 +116,14 @@ class TrackReader:
def __init__(self, path: Path, fps: int, include_blacklisted = False, exclude_whitelisted = False):
self.blacklist_file = path / "blacklist.jsonl"
self.whitelist_file = path / "whitelist.jsonl" # for skipping
# self.tracks_file = path / "tracks.pkl"
self.tracks_files = path.glob('tracks*.pkl')
self.tracks_file = path / "tracks.pkl"
# with self.tracks_file.open('r') as fp:
# tracks_dict: dict = json.load(fp)
tracks: Dict[str, Track] = {}
for tracks_file in self.tracks_files:
logger.info(f"Read {tracks_file}")
with tracks_file.open('rb') as fp:
while True:
# multiple tracks can be pickled separately
try:
trackset: Dict[str, Track] = pickle.load(fp)
for track_id, track in trackset.items():
if len(tracks) < 1:
max_item = 0
else:
max_item = max([int(t) for t in tracks.keys()])
with self.tracks_file.open('rb') as fp:
tracks: dict = pickle.load(fp)
if int(track.track_id) < max_item:
track_id = str(max_item+1)
else:
track_id = track.track_id
track.track_id = track_id
tracks[track.track_id] = track
except EOFError:
break
if self.blacklist_file.exists():
with jsonlines.open(self.blacklist_file, 'r') as reader:
@ -183,7 +149,7 @@ class TrackReader:
def __len__(self):
return len(self._tracks)
def get(self, track_id) -> Track:
def get(self, track_id):
return self._tracks[track_id]
# detection_values = self._tracks[track_id]
# history = []
@ -210,9 +176,6 @@ class TrackReader:
for track_id in self._tracks:
yield self.get(track_id)
def track_ids(self):
return list(self._tracks.keys())
def read_tracks_json(path: Path, fps):
"""
Reader for tracks.json produced by TrainingDataWriter
@ -278,50 +241,8 @@ class TrainingDataWriter:
self.training_fp.close()
rewrite_raw_track_files(self.path)
class TrainingTrackWriter:
"""
Supersedes TrainingDataWriter by writing full tracks."""
def __init__(self, training_path: Optional[Path]):
if training_path is None:
self.path = None
return
if not isinstance(training_path, Path):
raise ValueError("save-for-training should be a path")
if not training_path.exists():
logger.info(f"Making path for training data: {training_path}")
training_path.mkdir(parents=True, exist_ok=False)
else:
logger.warning(f"Path for training-data exists: {training_path}. Continuing assuming that's ok.")
self.path = training_path
def __enter__(self):
if self.path:
d = datetime.now().isoformat(timespec="minutes")
self.training_fp = open(self.path / f'tracks-{d}.pcl', 'wb')
logger.debug(f"Writing tracker data to {self.training_fp.name}")
# following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
# self.csv = csv.DictWriter(self.training_fp, fieldnames=FIELDNAMES, delimiter='\t', quoting=csv.QUOTE_NONE)
self.count = 0
return self
def add(self, track: Track):
self.count += 1;
pickle.dump(track, self.training_fp)
def __exit__(self, exc_type, exc_value, exc_tb):
# ... ignore exception (type, value, traceback)
if not self.path:
return
self.training_fp.close()
# rewrite_raw_track_files(self.path)
def rewrite_raw_track_files(path: Path):
source_files = list(sorted(path.glob("*.txt"))) # we loop twice, so need a list instead of generator
@ -362,7 +283,7 @@ def rewrite_raw_track_files(path: Path):
with file.open('w') as target_fp:
for i in range(line_nrs):
line = sources.readline().rstrip()
line = sources.readline()
current_file = sources.current_file
if prev_file != current_file:
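# presumably: shift ids from the next file past the highest id seen so far, keeping them unique across files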
offset: int = max_track_id
@ -451,25 +372,22 @@ class ByteTrackWrapper(TrackerWrapper):
detections = np.ndarray((0,0)) # needs to be 2-D
_ = self.tracker.update(detections)
removed_tracks = self.tracker.removed_stracks
active_tracks = [track for track in self.tracker.tracked_stracks if track.is_activated]
# TODO)) why was this in here:
# active_tracks = [track for track in active_tracks if track.start_frame < (self.tracker.frame_id - 5)]
active_tracks = [track for track in active_tracks if track.start_frame < (self.tracker.frame_id - 5)]
return [Detection.from_bytetrack(track, frame_idx) for track in active_tracks]
class Tracker(Node):
def setup(self):
class Tracker:
def __init__(self, config: Namespace):
self.config = config
# # TODO: config device
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.frame_preprocess = ImgMovementFilter()
# TODO: support removal
self.tracks: DefaultDict[str, Track] = defaultdict(lambda: Track())
self.tracks = defaultdict(lambda: Track())
logger.debug(f"Load tracker: {self.config.detector}")
@ -509,24 +427,14 @@ class Tracker(Node):
self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
elif self.config.detector == DETECTOR_YOLOv8:
# self.model = YOLO('EXPERIMENTS/yolov8x.pt')
# best from arsen:
# self.model = YOLO('./tracker/all_yolo11-2-20-15-41/weights')
# self.model = YOLO('tracker/all_yolo11-2-20-15-41/weights/best.pt')
# self.model = YOLO('models/yolo11x-pose.pt')
# self.model = YOLO("models/yolo12l.pt")
# self.model = YOLO("models/yolo12x.pt", imgsz=self.config.imgsz) #see https://github.com/orgs/ultralytics/discussions/8812
self.model = YOLO("models/yolo12x.pt")
# NOTE: changing the model, also tweak imgsz in
elif self.config.detector == DETECTOR_RTDETR:
# self.model = RTDETR('models/rtdetr-x.pt') # drops frames
self.model = RTDETR('models/rtdetr-l.pt') # somewhat less good in corners, but less frame dropping == better tracking
self.model = YOLO('yolo11x.pt')
else:
raise RuntimeError(f"{self.config.detector} is not implemented yet. See --help")
# homography = list(source.glob('*img2world.txt'))[0]
# self.H = self.config.H
self.H = self.config.H
if self.config.smooth_tracks:
logger.info("Smoother enabled")
@ -536,50 +444,71 @@ class Tracker(Node):
logger.info("Smoother Disabled (enable with --smooth-tracks)")
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.trajectory_socket = self.pub(self.config.zmq_trajectory_addr)
self.detection_socket = self.pub(self.config.zmq_detection_addr)
logger.debug("Set up tracker")
def track_frame(self, frame: Frame):
det_img = frame.img
# det_img = self.frame_preprocess.apply(frame.img)
if self.config.detector in [DETECTOR_YOLOv8, DETECTOR_RTDETR]:
# both ultralytics
detections: List[Detection] = _ultralytics_track(det_img, frame.index, self.model, classes=[0, 15, 16], imgsz=self.config.imgsz)
if self.config.detector == DETECTOR_YOLOv8:
detections: List[Detection] = _yolov8_track(frame, self.model, classes=[0, 15, 16], imgsz=[1152, 640])
else :
detections: List[Detection] = self._resnet_track(det_img, frame.index, scale = RCNN_SCALE)
# emit raw detections
self.detection_socket.send_pyobj(detections)
detections: List[Detection] = self._resnet_track(frame, scale = RCNN_SCALE)
for detection in detections:
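# self.tracks is a defaultdict, so indexing with an unseen id creates a fresh Track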
track = self.tracks[detection.track_id]
track.track_id = detection.track_id # for new tracks
track.fps = frame.camera.fps
track.frame_index = frame.index
track.updated_at = time.time()
# track.fps = self.config.camera.fps # for new tracks
track.fps = self.config.camera.fps # for new tracks
track.history.append(detection) # add to history
return detections
def run(self):
def track(self, is_running: Event, timer_counter: int = 0):
"""
Live tracking of frames coming in over zmq
"""
self.is_running = is_running
context = zmq.Context()
self.frame_sock = context.socket(zmq.SUB)
self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
self.frame_sock.connect(self.config.zmq_frame_addr)
self.trajectory_socket = context.socket(zmq.PUB)
self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
self.trajectory_socket.bind(self.config.zmq_trajectory_addr)
prev_run_time = 0
# training_fp = None
# training_csv = None
# training_frames = 0
# if self.config.save_for_training is not None:
# if not isinstance(self.config.save_for_training, Path):
# raise ValueError("save-for-training should be a path")
# if not self.config.save_for_training.exists():
# logger.info(f"Making path for training data: {self.config.save_for_training}")
# self.config.save_for_training.mkdir(parents=True, exist_ok=False)
# else:
# logger.warning(f"Path for training-data exists: {self.config.save_for_training}. Continuing assuming that's ok.")
# training_fp = open(self.config.save_for_training / 'all.txt', 'w')
# # following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
# training_csv = csv.DictWriter(training_fp, fieldnames=['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state'], delimiter='\t', quoting=csv.QUOTE_NONE)
prev_frame_i = -1
with TrainingDataWriter(self.config.save_for_training) as writer:
end_time = None
tracker_dt = None
w_time = None
displacement_filter = FinalDisplacementFilter(.8)
displacement_filter = FinalDisplacementFilter(.2)
while self.is_running.is_set():
with timer_counter.get_lock():
timer_counter.value += 1
# this waiting for target_dt causes frame loss. E.g. with target_dt at .1, it
# skips exactly 1 frame on a 10 fps video (which it obviously should not do)
# so for now, timing should move to emitter
@ -591,12 +520,10 @@ class Tracker(Node):
poll_time = time.time()
zmq_ev = self.frame_sock.poll(timeout=2000)
if not zmq_ev:
logger.warning('no frame for 2000ms')
logger.warning('skip poll after 2000ms')
# when there's no data after timeout, loop so that is_running is checked
continue
self.tick() # only tick if something is actually received
start_time = time.time()
frame: Frame = self.frame_sock.recv_pyobj() # frame delivery in current setup: 0.012-0.03s
@ -610,15 +537,15 @@ class Tracker(Node):
prev_frame_i = frame.index
# load homography into frame (TODO: should this be done in emitter?)
if frame.H is None:
raise RuntimeError('Tracker no longer configures H')
# logger.warning('Falling back to default H')
# fallback: load configured H
# frame.H = self.H
frame.H = self.H
# logger.info(f"Frame delivery delay = {time.time()-frame.time}s")
detections: List[Detection] = self.track_frame(frame)
# Store detections into tracklets
projected_coordinates = []
# now in track_frame()
@ -648,18 +575,8 @@ class Tracker(Node):
active_track_ids = [d.track_id for d in detections]
active_tracks = {t.track_id: t.get_with_interpolated_history() for t in self.tracks.values() if t.track_id in active_track_ids}
active_tracks = displacement_filter.apply_to_dict(active_tracks, frame.camera)# a filter to remove just detecting static objects
# print(len(detections), len(active_tracks))
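# prune tracks whose last detection is older than ~5 seconds worth of frames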
removable_tracks = []
for track_id, track in self.tracks.items():
if not len(track.history):
continue
detection: Detection = track.history[-1]
if detection.frame_nr < (frame.index - frame.camera.fps * 5):
removable_tracks.append(track_id)
for track_id in removable_tracks:
del self.tracks[track_id]
# active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
# active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
# logger.info(f"{trajectories}")
@ -701,12 +618,13 @@ class Tracker(Node):
logger.info('Stopping')
def _resnet_track(self, img: cv2.Mat, frame_idx: int, scale: float = 1) -> List[Detection]:
def _resnet_track(self, frame: Frame, scale: float = 1) -> List[Detection]:
img = frame.img
if scale != 1:
dsize = (int(img.shape[1] * scale), int(img.shape[0] * scale))
img = cv2.resize(img, dsize)
detections = self._resnet_detect_persons(img)
tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame_idx)
tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame.index)
# active_tracks = [t for t in tracks if t.is_confirmed()]
return [d.get_scaled(1/scale) for d in tracks]
@ -756,119 +674,56 @@ class Tracker(Node):
different nesting
"""
return [([d[0], d[1], d[2]-d[0], d[3]-d[1]], d[4], d[5]) for d in detections]
@classmethod
def arg_parser(cls):
argparser = argparse.ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
argparser.add_argument('--zmq-detection-addr',
help='Manually specify communication addr for the detection messages',
type=str,
default="ipc:///tmp/feeds_dets")
argparser.add_argument("--save-for-training",
help="Specify the path in which to save",
type=Path,
default=None)
argparser.add_argument("--detector",
help="Specify the detector to use",
type=str,
default=DETECTOR_YOLOv8,
choices=DETECTORS)
argparser.add_argument("--tracker",
help="Specify the detector to use",
type=str,
default=TRACKER_BYTETRACK,
choices=TRACKERS)
argparser.add_argument("--smooth-tracks",
help="Smooth the tracker tracks before sending them to the predictor",
action='store_true')
argparser.add_argument("--imgsz",
help="Detector imgsz parameter (applicable to ultralytics detectors)",
type=int,
default=640)
return argparser
def run_tracker(config: Namespace, is_running: Event, timer_counter):
router = Tracker(config)
router.run(is_running, timer_counter)
def run():
# Frame emitter
import argparse
argparser = argparse.ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-trajectory-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
argparser.add_argument("--save-for-training",
help="Specify the path in which to save",
type=Path,
default=None)
argparser.add_argument("--detector",
help="Specify the detector to use",
type=str,
default=DETECTOR_YOLOv8,
choices=DETECTORS)
argparser.add_argument("--tracker",
help="Specify the detector to use",
type=str,
default=TRACKER_BYTETRACK,
choices=TRACKERS)
argparser.add_argument("--smooth-tracks",
help="Smooth the tracker tracks before sending them to the predictor",
action='store_true')
config = argparser.parse_args()
is_running = multiprocessing.Event()
is_running.set()
timer_counter = timer.Timer('frame_emitter')
router = Tracker(config)
router.run(is_running, timer_counter.iterations)
is_running.clear()
router.track(is_running, timer_counter)
class TrackPointFilter(ABC):
@abstractmethod
def apply(self, points: List[float]):
pass
class Smoother:
def apply_track(self, track: Track) -> Track:
def __init__(self, window_len=6, convolution=False):
# for some reason this smoother messes up the predictions. Probably skews the points too much?
if convolution:
self.smoother = ConvolutionSmoother(window_len=window_len, window_type='hanning', copy=None)
else:
# "Unlike Kalman filtering, which focuses on predicting and updating the current state using historical measurements, Kalman smoothing enhances the accuracy of past state values"
# see https://medium.com/@shahalkp1/kalman-smoothing-using-tsmoothie-0175260464e5
self.smoother = KalmanSmoother(component='level_trend', component_noise={'level':0.02, 'season': .01, 'trend':0.02},n_seasons = 2, copy=None)
def smooth(self, points: List[float]):
self.smoother.smooth(points)
return self.smoother.smooth_data[0]
def smooth_track(self, track: Track) -> Track:
ls = [d.l for d in track.history]
ts = [d.t for d in track.history]
ws = [d.w for d in track.history]
hs = [d.h for d in track.history]
ls = self.apply(ls)
ts = self.apply(ts)
ws = self.apply(ws)
hs = self.apply(hs)
self.smoother.smooth(ls)
ls = self.smoother.smooth_data[0]
self.smoother.smooth(ts)
ts = self.smoother.smooth_data[0]
self.smoother.smooth(ws)
ws = self.smoother.smooth_data[0]
self.smoother.smooth(hs)
hs = self.smoother.smooth_data[0]
new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
return track.get_with_new_history(new_history)
def apply_to_frame_tracks(self, frame: Frame) -> Frame:
return Track(track.track_id, new_history, track.predictor_history, track.predictions, track.fps)
def smooth_frame_tracks(self, frame: Frame) -> Frame:
new_tracks = []
for track in frame.tracks.values():
new_track = self.apply_track(track)
new_track = self.smooth_track(track)
new_tracks.append(new_track)
frame.tracks = {t.track_id: t for t in new_tracks}
return frame
def apply_to_frame_predictions(self, frame: Frame) -> Frame:
def smooth_frame_predictions(self, frame) -> Frame:
for track in frame.tracks.values():
new_predictions = []
@ -879,69 +734,14 @@ class TrackPointFilter(ABC):
xs = [d[0] for d in prediction]
ys = [d[1] for d in prediction]
xs = self.apply(xs)
ys = self.apply(ys)
self.smoother.smooth(xs)
xs = self.smoother.smooth_data[0]
self.smoother.smooth(ys)
ys = self.smoother.smooth_data[0]
filtered_prediction = [[x,y] for x, y in zip(xs, ys)]
smooth_prediction = [[x,y] for x, y in zip(xs, ys)]
new_predictions.append(filtered_prediction)
new_predictions.append(smooth_prediction)
track.predictions = new_predictions
return frame
class Smoother(TrackPointFilter):
def __init__(self, window_len=6, convolution=False):
# for some reason this smoother messes up the predictions. Probably skews the points too much?
if convolution:
self.smoother = ConvolutionSmoother(window_len=window_len, window_type='hanning', copy=None)
else:
# "Unlike Kalman filtering, which focuses on predicting and updating the current state using historical measurements, Kalman smoothing enhances the accuracy of past state values"
# see https://medium.com/@shahalkp1/kalman-smoothing-using-tsmoothie-0175260464e5
# self.smoother = KalmanSmoother(component='level_trend', component_noise={'level':0.02, 'season': .01, 'trend':0.02},n_seasons = 2, copy=False)
self.smoother = KalmanSmoother(component='level', component_noise={'level':0.01},observation_noise=.3, n_seasons = 0, copy=False)
def apply(self, points: List[float]):
self.smoother.smooth(points)
return self.smoother.smooth_data[0]
# aliases, for historical reasons
def smooth(self, points: List[float]):
return self.apply(points)
def smooth_track(self, track: Track) -> Track:
return self.apply_track(track)
def smooth_frame_tracks(self, frame: Frame) -> Frame:
return self.apply_to_frame_tracks(frame)
def smooth_frame_predictions(self, frame: Frame) -> Frame:
return self.apply_to_frame_predictions(frame)
class Noiser(TrackPointFilter):
def __init__(self, amplitude=.1):
self.amplitude = amplitude
def apply(self, points: List[float]):
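# jitter every coordinate with zero-mean gaussian noise of the given amplitude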
return np.random.normal(points, scale=self.amplitude).tolist()
class RandomOffset(TrackPointFilter):
"""
A somewhat hacky way to offset the whole track. Applies the same offset to x, y, w and h
"""
def __init__(self, amplitude=.1):
self.amplitude = np.random.normal(scale=amplitude)
def apply(self, points: List[float]):
return [p + self.amplitude for p in points]
return frame

View file

@ -1,14 +1,11 @@
# lerp & inverse lerp from https://gist.github.com/laundmo/b224b1f4c8ef6ca5fe47e132c8deab56
from collections import namedtuple
import linecache
import math
import os
from pathlib import Path
import tracemalloc
from typing import Iterable
import cv2
import numpy as np
import torch
from trajectron.environment.map import GeometricMap
def lerp(a: float, b: float, t: float) -> float:
@ -30,85 +27,24 @@ def inv_lerp(a: float, b: float, v: float) -> float:
"""
return (v - a) / (b - a)
def easeInOutQuad(t: float) -> float:
"""Quadratic easing in/out - smoothing the transition."""
if t < 0.5:
return 2 * t * t
else:
return 1 - np.power(-2 * t + 2, 2) / 2
def exponentialDecayRounded(a, b, decay, dt, abs_tolerance):
"""Exponential decay as alternative to Lerp
Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
"""
c = b + (a-b) * math.exp(-decay * dt)
if abs(b-c) < abs_tolerance:
return b
return c
def exponentialDecay(a, b, decay, dt):
"""Exponential decay as alternative to Lerp
Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
"""
return b + (a-b) * math.exp(-decay * dt)
def relativePointToPolar(origin, point) -> tuple[float, float]:
x, y = point[0] - origin[0], point[1] - origin[1]
return np.sqrt(x**2 + y**2), np.arctan2(y, x)
def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
# def line_intersection(line1, line2):
# xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
# ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])
# def det(a, b):
# return a[0] * b[1] - a[1] * b[0]
# div = det(xdiff, ydiff)
# if div == 0:
# return None
# d = (det(*line1), det(*line2))
# x = det(d, xdiff) / div
# y = det(d, ydiff) / div
# return x, y
# def polyline_intersection(poly1, poly2):
# for i, p1_first_point in enumerate(poly1[:-1]):
# p1_second_point = poly1[i + 1]
# for j, p2_first_point in enumerate(poly2[:-1]):
# p2_second_point = poly2[j + 1]
# intersection = line_intersection((p1_first_point, p1_second_point), (p2_first_point, p2_second_point))
# if intersection:
# return intersection # returns x,y
# return None
def get_bins(bin_size: float):
return [[bin_size, 0], [bin_size, bin_size], [0, bin_size], [-bin_size, bin_size], [-bin_size, 0], [-bin_size, -bin_size], [0, -bin_size], [bin_size, -bin_size]]
def convert_world_space_to_img_space(H: cv2.Mat, scale=100):
def convert_world_space_to_img_space(H: cv2.Mat):
"""Transform the given matrix so that it immediately converts
the points to img space"""
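# scaling only the first two rows scales the transformed x,y output; with scale=100 one world unit (presumably a meter) maps to 100 px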
new_H = H.copy()
new_H[:2] = H[:2] * scale
new_H[:2] = H[:2] * 100
return new_H
def convert_world_points_to_img_points(points: Iterable, scale=100):
def convert_world_points_to_img_points(points: Iterable):
"""Transform the given matrix so that it immediately converts
the points to img space"""
if isinstance(points, np.ndarray):
return np.array(points) * scale
return [[p[0]*scale, p[1]*scale] for p in points]
return np.array(points) * 100
return [[p[0]*100, p[1]*100] for p in points]
def display_top(snapshot: tracemalloc.Snapshot, key_type='lineno', limit=5):
snapshot = snapshot.filter_traces((
@ -136,7 +72,6 @@ def display_top(snapshot: tracemalloc.Snapshot, key_type='lineno', limit=5):
print("Total allocated size: %.1f KiB" % (total / 1024))
ImageMapBounds = namedtuple('ImageMapBounds', ['min_x', 'max_x', 'min_y', 'max_y'])
class ImageMap(GeometricMap): # TODO Implement for image maps -> watch flipped coordinate system
def __init__(self, img: cv2.Mat, H_world_to_map: cv2.Mat, description=None):
# homography_matrix = np.loadtxt('H.txt')
@ -153,56 +88,11 @@ class ImageMap(GeometricMap): # TODO Implement for image maps -> watch flipped
layers = layers.copy() # copy to apply negative stride
# layers =
#scale 255
#alternatively: morph image to world space with a scale, as in trajectron/experiments/nuscenes/process_data.py
super().__init__(layers, homography_matrix, description)
self.set_bounds()
def set_bounds(self):
"""
Use homography and image to calculate the limits of positions in world coordinates
"""
# print(self.data.shape)
max_x = self.data.shape[1]
max_y = self.data.shape[2]
# this assumes a map that is only scaled and translated, not skewed
points_in_map = np.array([
[0, 0],
[max_x, max_y],
])
# calculate bounds:
H_map_to_world = np.linalg.inv(self.homography)
# Convert points to homogeneous coordinates and Apply the transformation
homogeneous_points = np.hstack((points_in_map, np.ones((points_in_map.shape[0], 1))))
transformed_points = np.dot(homogeneous_points, H_map_to_world.T)
# Convert back to Cartesian coordinates
transformed_points = transformed_points[:, :2]
self.bounds = ImageMapBounds(
transformed_points[0][0],
transformed_points[1][0],
transformed_points[0][1],
transformed_points[1][1]
)
@classmethod
def get_cropped_maps_from_scene_map_batch(cls, maps, scene_pts, patch_size, rotation=None, device='cpu'):
min_bounds = [maps[0].bounds.min_x, maps[0].bounds.min_y]
max_bounds = [maps[0].bounds.max_x, maps[0].bounds.max_y]
if torch.is_tensor(scene_pts):
min_bounds = torch.Tensor(min_bounds)
max_bounds = torch.Tensor(max_bounds)
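# clamp query points to the map extent so patches are never sampled outside the image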
scene_pts = scene_pts.clip(min=min_bounds, max=max_bounds)
return super().get_cropped_maps_from_scene_map_batch(maps, scene_pts, patch_size, rotation, device)
def to_map_points(self, scene_pts):
org_shape = None

View file

@ -1,298 +0,0 @@
from dataclasses import dataclass
from itertools import cycle
import json
import logging
import math
from os import PathLike
from pathlib import Path
import time
from typing import Any, Generator, Iterable, List, Literal, Optional, Tuple
import neoapi
import cv2
import numpy as np
from trap.base import Camera, UrlOrPath
logger = logging.getLogger('video_source')
class VideoSource:
"""Video Frame generator
"""
def recv(self) -> Generator[Optional[cv2.typing.MatLike], Any, None]:
raise RuntimeError("Not implemented")
def __iter__(self):
for i in self.recv():
yield i
BinningValue = Literal[1, 2]
Coordinate = Tuple[int, int]
@dataclass
class GigEConfig:
identifier: Optional[str] = None
binning_h: BinningValue = 1
binning_v: BinningValue = 1
pixel_format: int = neoapi.PixelFormat_BayerRG8
# when changing these values, make sure you also tweak the calibration
width: int = 2448
height: int = 2048
# changing these _automatically changes calibration cx and cy_!!
offset_x: int = 0
offset_y: int = 0
post_crop_tl: Optional[Coordinate] = None
post_crop_br: Optional[Coordinate] = None
@classmethod
def from_file(cls, file: PathLike):
with open(file, 'r') as fp:
return cls(**json.load(fp))
class GigE(VideoSource):
def __init__(self, config=GigEConfig):
self.config = config
self.camera = neoapi.Cam()
# self.camera.Connect('-B127')
self.camera.Connect(self.config.identifier)
# Default buffer mode, streaming, always returns latest frame
self.camera.SetImageBufferCount(10)
# neoAPI docs: Setting the neoapi.Cam.SetImageBufferCycleCount() to one ensures that all buffers but one are given back to the neoAPI to be recycled and never given to the user by the neoapi.Cam.GetImage() method.
self.camera.SetImageBufferCycleCount(1)
self.setPixelFormat(self.config.pixel_format)
self.cam_is_configured = False
self.converter_settings = neoapi.ConverterSettings()
self.converter_settings.SetDebayerFormat('BGR8') # opencv
self.converter_settings.SetDemosaicingMethod(neoapi.ConverterSettings.Demosaicing_Baumer5x5)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Global)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Adaptive)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_ActiveNoiseReduction)
self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Off)
self.converter_settings.SetSharpeningFactor(1)
self.converter_settings.SetSharpeningSensitivityThreshold(2)
def configCam(self):
if self.camera.IsConnected():
self.setPixelFormat(self.config.pixel_format)
# self.camera.f.PixelFormat.Set(neoapi.PixelFormat_RGB8)
self.camera.f.BinningHorizontal.Set(self.config.binning_h)
self.camera.f.BinningVertical.Set(self.config.binning_v)
self.camera.f.Height.Set(self.config.height)
self.camera.f.Width.Set(self.config.width)
self.camera.f.OffsetX.Set(self.config.offset_x)
self.camera.f.OffsetY.Set(self.config.offset_y)
# print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(20000)) # shutter 1/50 (hence 1000000/shutter)
print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(60000)) # otherwise it becomes too blurry during movement
print('brightness target', self.camera.f.BrightnessAutoNominalValue.Get())
print('brightness target', self.camera.f.BrightnessAutoNominalValue.Set(value=35))
# print('brightness target', self.camera.f.Auto.Set(neoapi.BrightnessCorrection_On))
# print('brightness target', self.camera.f.BrightnessCorrection.Set(neoapi.BrightnessCorrection_On))
# print('brightness target', self.camera.f.BrightnessCorrection.Set(neoapi.BrightnessCorrection_On))
print('exposure time', self.camera.f.ExposureTime.Get())
print('LUTEnable', self.camera.f.LUTEnable.Get())
print('LUTEnable', self.camera.f.LUTEnable.Set(True))
# print('LUTEnable', self.camera.f.LUTEnable.Set(False))
print('Gamma', self.camera.f.Gamma.Set(0.45))
# neoapi.region
# self.camera.f.regeo
# print('LUT', self.camera.f.LUTIndex.Get())
# print('LUT', self.camera.f.LUTEnable.Get())
# print('exposure time max', self.camera.f.ExposureTimeGapMax.Get())
# print('exposure time min', self.camera.f.ExposureTimeGapMin.Get())
# self.pixfmt = self.camera.f.PixelFormat.Get()
self.cam_is_configured = True
def setPixelFormat(self, pixfmt):
self.pixfmt = pixfmt
self.camera.f.PixelFormat.Set(pixfmt)
# self.pixfmt = self.camera.f.PixelFormat.Get()
def recv(self):
while True:
# print('receive')
if not self.camera.IsConnected():
self.cam_is_configured = False
return
if not self.cam_is_configured:
self.configCam()
i = self.camera.GetImage(0)
if i.IsEmpty():
time.sleep(.01)
continue
# print(i.GetAvailablePixelFormats())
i = i.Convert(self.converter_settings)
if i.IsEmpty():
time.sleep(.01)
continue
img = i.GetNPArray()
# imgarray = i.GetNPArray()
# if self.pixfmt == neoapi.PixelFormat_BayerRG12:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
# elif self.pixfmt == neoapi.PixelFormat_BayerRG8:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
# else:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BGR2RGB)
# if img.dtype == np.uint16:
# img = cv2.convertScaleAbs(img, alpha=(255.0/65535.0))
img = self._crop(img)
yield img
def _crop(self, img):
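# crop to the configured region of interest; without post_crop_tl/post_crop_br the full frame is returned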
tl = self.config.post_crop_tl or (0,0)
br = self.config.post_crop_br or (img.shape[1], img.shape[0])
return img[tl[1]:br[1],tl[0]:br[0],:]
class SingleCvVideoSource(VideoSource):
def recv(self):
while True:
ret, img = self.video.read()
self.frame_idx+=1
# seek to 0 if video has finished. Infinite loop
if not ret:
# now loading multiple files
break
# frame = Frame(index=self.n, img=img, H=self.camera.H, camera=self.camera)
yield img
class RtspSource(SingleCvVideoSource):
def __init__(self, video_url: str | Path, camera: Camera = None):
# keep max 1 frame in app-buffer (0 = unlimited)
# With gstreamer 1.28, drop=true is deprecated; use leaky-type=2 to control which frame to drop: https://gstreamer.freedesktop.org/documentation/applib/gstappsrc.html?gi-language=c
gst = f"rtspsrc location={video_url} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true"
logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
self.video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
self.frame_idx = 0
class FilelistSource(SingleCvVideoSource):
def __init__(self, video_sources: Iterable[UrlOrPath], camera: Camera = None, delay = True, offset = 0, end: Optional[int] = None, loop=False):
# store current position
self.video_sources = video_sources if not loop else cycle(video_sources)
self.camera = camera
self.video_path = None
self.video_nr = None
self.frame_count = None
self.frame_idx = None
self.n = 0
self.delay_generation = delay
self.offset = offset
self.end = end
def recv(self):
prev_time = time.time()
for video_nr, video_path in enumerate(self.video_sources):
self.video_path = video_path
self.video_nr = video_nr
logger.info(f"Play from '{str(video_path)}'")
video = cv2.VideoCapture(str(video_path))
fps = video.get(cv2.CAP_PROP_FPS)
target_frame_duration = 1./fps
self.frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
if self.frame_count < 0:
self.frame_count = math.inf
self.frame_idx = 0
# TODO)) Video offset
if self.offset:
logger.info(f"Start at frame {self.offset}")
video.set(cv2.CAP_PROP_POS_FRAMES, self.offset)
self.frame_idx = self.offset
while True:
ret, img = video.read()
self.frame_idx+=1
self.n+=1
# seek to 0 if video has finished. Infinite loop
if not ret:
# now loading multiple files
break
if "DATASETS/hof/" in str(video_path):
# hack to mask out area
cv2.rectangle(img, (0,0), (800,200), (0,0,0), -1)
# frame = Frame(index=self.n, img=img, H=self.camera.H, camera=self.camera)
yield img
if self.end is not None and self.frame_idx >= self.end:
logger.info(f"Reached frame {self.end}")
break
if self.delay_generation:
# defer next loop
now = time.time()
time_diff = (now - prev_time)
if time_diff < target_frame_duration:
time.sleep(target_frame_duration - time_diff)
now += target_frame_duration - time_diff
prev_time = now
class CameraSource(SingleCvVideoSource):
def __init__(self, identifier: int, camera: Camera):
self.video = cv2.VideoCapture(identifier)
self.camera = camera
# TODO: make config variables
self.video.set(cv2.CAP_PROP_FRAME_WIDTH, int(self.camera.w))
self.video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(self.camera.h))
# print("exposure!", video.get(cv2.CAP_PROP_AUTO_EXPOSURE))
self.video.set(cv2.CAP_PROP_FPS, self.camera.fps)
self.frame_idx = 0
def get_video_source(video_sources: List[UrlOrPath], camera: Optional[Camera] = None, frame_offset=0, frame_end:Optional[int]=None, loop=False):
if str(video_sources[0]).isdigit():
# numeric input is a CV camera
if frame_offset:
logger.info("video-offset ignored for camera source")
return CameraSource(int(str(video_sources[0])), camera)
elif video_sources[0].url.scheme == 'rtsp':
# video_sources[0].url.hostname
if frame_offset:
logger.info("video-offset ignored for rtsp source")
return RtspSource(video_sources[0])
elif video_sources[0].url.scheme == 'gige':
if frame_offset:
logger.info("video-offset ignored for gige source")
config = GigEConfig.from_file(Path(video_sources[0].url.netloc + video_sources[0].url.path))
return GigE(config)
else:
return FilelistSource(video_sources, offset = frame_offset, end=frame_end, loop=loop)
# os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"
def get_video_source_from_str(video_sources: List[str]):
paths = [UrlOrPath(s) for s in video_sources]
return get_video_source(paths)

uv.lock

File diff suppressed because it is too large