Compare commits


No commits in common. "main" and "cluster_predictions" have entirely different histories.

39 changed files with 1065 additions and 9663 deletions

View file

@ -2,12 +2,12 @@
"batch_size": 512,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.001,
"learning_rate": 0.01,
"min_learning_rate": 1e-05,
"learning_decay_rate": 0.9999,
"prediction_horizon": 60,
"prediction_horizon": 30,
"minimum_history_length": 5,
"maximum_history_length": 150,
"maximum_history_length": 50,
"map_encoder": {
"PEDESTRIAN": {
"heading_state_index": [2, 3],

View file

@ -7,59 +7,21 @@
## How to
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo. Also, [laserspace](https://git.rubenvandeven.com/security_vision/laserspace) is used to map the shapes (which are generated by `stage.py`) to lasers, so as to use specific optimization techniques for the paths before sending them to the DAC.
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo.
These are roughly the steps to go from data gathering to training:
1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
* Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) camera matrix, lens distortion, image dimensions, and (2+3) homography
3. Track lidar or video data:
1. Video: Run the video source & video tracker nodes:
* `uv run trap_video_source --homography ../DATASETS/hof4-test-angle/homography.json --video-src gige://../DATASETS/hof4-test-angle/gige_config.json --calibration ../DATASETS/hof4-test-angle/calibration.json` (Optionally, use recorded video with `--video-src videos/render-source-2025-10-19T21\:09.mp4 --video-offset 300`)
* `uv run trap_tracker --smooth-tracks --eval_device cuda:0 --detector ultralytics`
2. Lidar: `uv run trap_lidar --min-box-area 0 --pi LOCAL_IP --smooth-tracks`
4. Save the tracks emitted by the video or lidar tracker: `uv run trap_track_writer --output-dir EXPERIMENTS/raw/hof-lidar`
* Each recording adds a new txt file to the `raw` folder.
4. Parse tracker data to Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`
* Optionally, smooth tracks: `--smooth-tracks`
* Optionally, add variations with noise: `--noise-tracks 2` (creates 2 variations; see the sketch after this list)
* Optionally, add variations at a random offset: `--offset-tracks 2` (creates 2 variations)
3. Run the tracker, e.g. `uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
* Note: You can run this right off the camera stream: `uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`, each recording adding a new file to the `raw` folder.
4. Parse tracker data to Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME` Optionally, smooth tracks: `--smooth-tracks`
* Optionally, add a map: ideally an RGB png: 3 layers of 0-255
* `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
* See [[tests/trajectron_maps.ipynb]] for more info on how to do so (e.g. the homography map/scale settings, which are also set in process_data)
5. Train Trajectron model `uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data `
* For faster training, disable edges:
` uv run trajectron_train --eval_every 200 --train_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_train.pkl --eval_data_dict dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/models --log_tag _dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11 --train_epochs 100 --conf /home/ruben/suspicion/trap/SETTINGS/2025-11-dortmund/trajectron.json --data_dir SETTINGS/2025-11-dortmund/trajectron --map_encoding --no_edge_encoding --dynamic_edges yes --no_edge_encoding --edge_influence_combine_method max --batch_size 512`
6. Then run!
* `uv run supervisord`
<!-- * On a video file (you can use a wildcard) `DISPLAY=:1 uv run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here when running over an SSH connection to display on the local monitor)
* or on the RTSP stream, which uses GStreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
* To just have a single trajectory pulled from distribution use `--full-dist`. Also try `--z_mode`. -->
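For reference, a rough sketch of what the `--noise-tracks` variations might look like. This is hypothetical: the actual implementation lives in `trap.process_data`, and the jitter scale and distribution here are assumptions.

```python
# Hypothetical sketch of "noise variations": copies of a track with small
# Gaussian jitter on every position. The real logic is in trap.process_data.
import numpy as np

def noise_variations(track: np.ndarray, n: int = 2, sigma: float = 0.05):
    """Yield n copies of an (N, 2) track with Gaussian jitter (in meters)."""
    rng = np.random.default_rng()
    for _ in range(n):
        yield track + rng.normal(0.0, sigma, size=track.shape)

track = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])
variants = list(noise_variations(track, n=2))  # what --noise-tracks 2 would add
```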
## Testnight 2025-06-13
Step-by-step plan:
* Hang lasers. Connect all cables etc.
* `DISPLAY=:0 cargo run --example laser_frame_stream_gui`
* Use numbers to pick a nice shape. Use this to make sure both lasers cover the right area. (If it doesn't work, flip some switches in the GUI; the laser output should then start.)
* In trap folder: `uv run supervisorctl start video`
* In laserspace folder: `DISPLAY=:0 cargo run --bin render_lines_gui` and use gui to draw and tweak projection area
* Use the save button to store configuration
/*
* in trap folder: `DISPLAY=:0 uv run trap_laser_calibration`
* follow instructions:
* camera points: 1-9 or cursor to create/select/move points
* move laser: vim movement keys : hjkl, use shift to move faster
* `c` to calibrate. Matrix is output to cli.
* `q` to quit
* saved to `laser_calib.json`, copy H field to `trap_rust/src/trap/laser.rs` (to e.g. TMP_STUDIO_CM_8)
* Restart `render_lines_gui` with new homographies
* `DISPLAY=:0 cargo run --bin render_lines_gui`
*/
* change video source in `supervisord.conf` and run `uv run supervisorctl update` to switch
* **If tracking is slow and there's no prediction**, check that CUDA is available (see also the sketch below):
* `uv run python -c "import torch;print(torch.cuda.is_available())"`
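A slightly fuller diagnostic, in case the availability check alone is not conclusive (a sketch; assumes PyTorch is installed in the uv environment, and the filename is a placeholder):

```python
# CUDA sanity check; run with: uv run python check_cuda.py (hypothetical filename)
import torch

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```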

View file

@ -1,130 +0,0 @@
{
"batch_size": 512,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.001,
"min_learning_rate": 1e-05,
"learning_decay_rate": 0.9999,
"prediction_horizon": 60,
"minimum_history_length": 5,
"maximum_history_length": 150,
"map_encoder": {
"PEDESTRIAN": {
"heading_state_index": [2, 3],
"patch_size": [
50,
10,
50,
90
],
"map_channels": 3,
"hidden_channels": [
10,
20,
5,
1
],
"output_size": 32,
"masks": [
5,
5,
5,
5
],
"strides": [
1,
1,
1,
1
],
"dropout": 0.5
}
},
"k": 1,
"k_eval": 1,
"kl_min": 0.07,
"kl_weight": 100.0,
"kl_weight_start": 0,
"kl_decay_rate": 0.99995,
"kl_crossover": 400,
"kl_sigmoid_divisor": 4,
"rnn_kwargs": {
"dropout_keep_prob": 0.75
},
"MLP_dropout_keep_prob": 0.9,
"enc_rnn_dim_edge": 1,
"enc_rnn_dim_edge_influence": 1,
"enc_rnn_dim_history": 32,
"enc_rnn_dim_future": 32,
"dec_rnn_dim": 128,
"q_z_xy_MLP_dims": null,
"p_z_x_MLP_dims": 32,
"GMM_components": 1,
"log_p_yt_xz_max": 6,
"N": 1,
"K": 25,
"tau_init": 2.0,
"tau_final": 0.05,
"tau_decay_rate": 0.997,
"use_z_logit_clipping": true,
"z_logit_clip_start": 0.05,
"z_logit_clip_final": 5.0,
"z_logit_clip_crossover": 300,
"z_logit_clip_divisor": 5,
"dynamic": {
"PEDESTRIAN": {
"name": "SingleIntegrator",
"distribution": true,
"limits": {}
}
},
"state": {
"PEDESTRIAN": {
"position": [
"x",
"y"
],
"velocity": [
"x",
"y"
],
"acceleration": [
"x",
"y"
]
}
},
"pred_state": {
"PEDESTRIAN": {
"position": [
"x",
"y"
]
}
},
"log_histograms": false,
"dynamic_edges": "yes",
"edge_state_combine_method": "sum",
"edge_influence_combine_method": "max",
"edge_addition_filter": [
0.25,
0.5,
0.75,
1.0
],
"edge_removal_filter": [
1.0,
0.0
],
"offline_scene_graph": "yes",
"incl_robot_node": false,
"node_freq_mult_train": false,
"node_freq_mult_eval": false,
"scene_freq_mult_train": false,
"scene_freq_mult_eval": false,
"scene_freq_mult_viz": false,
"edge_encoding": false,
"use_map_encoding": true,
"augment": false,
"override_attention_radius": []
}
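For reference, the `exp` learning-rate style with the values above presumably decays the rate multiplicatively down to `min_learning_rate`. A sketch of that schedule (the per-step application is an assumption; check Trajectron's optimizer code for the exact rule):

```python
# Sketch of the exponential LR schedule implied by the config above
# (per-step multiplicative decay, clipped at min_learning_rate, is assumed).
def lr_at_step(step: int, lr0: float = 0.001, decay: float = 0.9999,
               lr_min: float = 1e-05) -> float:
    return max(lr_min, lr0 * decay ** step)

print(lr_at_step(0))       # 0.001
print(lr_at_step(10_000))  # ~0.000368
```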

View file

@ -2,10 +2,10 @@
# Default YOLO tracker settings for ByteTrack tracker https://github.com/ifzhang/ByteTrack
tracker_type: bytetrack # tracker type, ['botsort', 'bytetrack']
track_high_thresh: 0.000001 # threshold for the first association
track_low_thresh: 0.000001 # threshold for the second association
new_track_thresh: 0.000001 # threshold for init new track if the detection does not match any tracks
track_buffer: 10 # buffer to calculate the time when to remove tracks
match_thresh: 0.99 # threshold for matching tracks
track_high_thresh: 0.0001 # threshold for the first association
track_low_thresh: 0.0001 # threshold for the second association
new_track_thresh: 0.0001 # threshold for init new track if the detection does not match any tracks
track_buffer: 50 # buffer to calculate the time when to remove tracks
match_thresh: 0.95 # threshold for matching tracks
fuse_score: True # Whether to fuse confidence scores with the iou distances before matching
# min_box_area: 10 # threshold for min box areas (for tracker evaluation, not used for now)
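This YAML is the tracker configuration consumed by Ultralytics' tracking mode; a minimal usage sketch (the weights file, video path, and config filename are placeholders):

```python
# Sketch: run YOLO tracking with the custom ByteTrack config above.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights
results = model.track(source="videos/test.mp4", tracker="custom_bytetrack.yaml")
```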

View file

@ -16,7 +16,7 @@ dependencies = [
"gdown>=4.7.1,<5",
"pandas-helper-calc",
"tsmoothie>=1.0.5,<2",
"pyglet>=2.1.8,<3",
"pyglet>=2.0.15,<3",
"pyglet-cornerpin>=0.3.0,<0.4",
"opencv-python",
"setproctitle>=1.3.3,<2",
@ -34,15 +34,6 @@ dependencies = [
"facenet-pytorch>=2.5.3",
"simplification>=0.7.12",
"supervisor>=4.2.5",
"superfsmon>=1.2.3",
"noise>=1.2.2",
"svgpathtools>=1.7.1",
"velodyne-decoder>=3.1.0",
"open3d>=0.19.0",
"nptyping>=2.5.0",
"py-to-proto>=0.6.0",
"grpcio-tools>=1.76.0",
"dearpygui>=2.1.0",
]
[project.scripts]
@ -54,20 +45,12 @@ process_data = "trap.process_data:main"
blacklist = "trap.tools:blacklist_tracks"
rewrite_tracks = "trap.tools:rewrite_raw_track_files"
model_train = "trap.models.train:train"
trap_video_source = "trap.frame_emitter:FrameEmitter.parse_and_start"
trap_video_writer = "trap.frame_writer:FrameWriter.parse_and_start"
trap_tracker = "trap.tracker:Tracker.parse_and_start"
trap_track_writer = "trap.track_writer:TrackWriter.parse_and_start"
trap_lidar = "trap.lidar_tracker:Lidar.parse_and_start"
trap_stage = "trap.stage:Stage.parse_and_start"
trap_render_stage = "trap.stage_renderer:StageRenderer.parse_and_start"
trap_prediction = "trap.prediction_server:PredictionServer.parse_and_start"
trap_render_cv = "trap.cv_renderer:CvRenderer.parse_and_start"
trap_monitor = "trap.monitor:Monitor.parse_and_start" # migrate timer
trap_laser_calibration = "trap.laser_calibration:LaserCalibration.parse_and_start" # migrate timer
trap_settings = "trap.settings:Settings.parse_and_start" # migrate timer
[tool.uv]
@ -81,9 +64,6 @@ baumer-neoapi = { path = "../../Downloads/Baumer_neoAPI_1.5.0_lin_x86_64_python/
foucault = { git = "https://git.rubenvandeven.com/r/conductofconduct" }
opencv-python = {path="./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl"}
[tool.uv.workspace]
members = ["CenterTrack"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
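Each `[project.scripts]` entry maps a console script to a `parse_and_start` classmethod (see `trap/node.py` further down in this diff). A sketch of what `uv run trap_tracker` effectively does:

```python
# Sketch of the entrypoint pattern behind [project.scripts]:
# "trap_tracker = trap.tracker:Tracker.parse_and_start" resolves to this call,
# which parses CLI args and enters the node's run loop.
from trap.tracker import Tracker

if __name__ == "__main__":
    Tracker.parse_and_start()
```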

View file

@ -4,7 +4,7 @@ port = *:8293
# password = 123
[supervisord]
nodaemon = false
nodaemon = True
; The rpcinterface:supervisor section must remain in the config file for
@ -20,78 +20,29 @@ serverurl = http://localhost:8293
command=uv run trap_monitor
numprocs=1
directory=%(here)s
autostart=false
[program:video]
# command=uv run trap_video_source --homography ../DATASETS/hof3/homography.json --video-src ../DATASETS/hof3/hof3-cam-demo-twoperson.mp4 --calibration ../DATASETS/hof3/calibration.json --video-loop
command=uv run trap_video_source --homography ../DATASETS/hof3-cam-baumer-cropped/homography.json --video-src gige://../DATASETS/hof3-cam-baumer-cropped/gige_config.json --calibration ../DATASETS/hof3-cam-baumer-cropped/calibration.json
command=uv run trap_video_source --homography ../DATASETS/hof3/homography.json --video-src ../DATASETS/hof3/hof3-cam-demo-twoperson.mp4 --calibration ../DATASETS/hof3/calibration.json --video-loop
# command=uv run trap_video_source --homography ../DATASETS/hof3-cam-baumer/homography.json --video-src gige://../DATASETS/hof3-cam-baumer/gige_config.json --calibration ../DATASETS/hof3-cam-baumer/calibration.json
directory=%(here)s
directory=%(here)s
[program:tracker]
command=uv run trap_tracker --smooth-tracks
# command=uv run trap_lidar --min-box-area 0 --viz --smooth-tracks
# environment=DISPLAY=":0"
command=uv run trap_tracker
directory=%(here)s
autostart=false
[program:lidar]
command=uv run trap_lidar --min-box-area 0.1 --viz
environment=DISPLAY=":0"
directory=%(here)s
autostart=false
[program:track_writer]
command=uv run trap_track_writer --output-dir EXPERIMENTS/raw/hof-lidar
# environment=DISPLAY=":0"
directory=%(here)s
autostart=false
stopwaitsecs=60
[program:stage]
# command=uv run trap_stage
command=uv run trap_stage --verbose --camera-fps 12 --homography ../DATASETS/hof3/homography.json --calibration ../DATASETS/hof3/calibration.json --cache-path /tmp/history_cache-hof3.pcl --tracker-output-dir EXPERIMENTS/raw/hof3/
directory=%(here)s
[program:settings]
command=uv run trap_settings
autostart=true
environment=DISPLAY=":0"
command=uv run trap_stage
directory=%(here)s
[program:predictor]
# command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof3-m2-ud-split-nostep-conv12-f2.0-map-2024-12-29_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
command=uv run trap_prediction --eval_device cuda:0 --model_dir SETTINGS/2025-11-dortmund/models/models_20251111_19_06_29_dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-11/ --num-samples 1 --map_encoding --eval_data_dict SETTINGS/2025-11-dortmund/trajectron/dortmund-nostep-nosmooth-noise2-offsets1-f2.0-map-2025-11-12_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode --conf SETTINGS/2025-11-dortmund/trajectron.json
# command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20251106_11_51_00_hof-lidar-m2-ud-nostep-kalsmooth-noise2-offsets2-f2.0-map-2025-11-06/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof-lidar-m2-ud-nostep-kalsmooth-noise2-offsets2-f2.0-map-2025-11-06_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
# uv run trajectron_train --continue_training_from EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --eval_every 5 --train_data_dict hof3-nostep-conv12-f2.0-map-2024-12-27_train.pkl --eval_data_dict hof3-nostep-conv12-f2.0-map-2024-12-27_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _hof3-conv12-f2.0-map-2024-12-27 --train_epochs 10 --conf EXPERIMENTS/config.json --data_dir EXPERIMENTS/trajectron-data --map_encoding
command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof3-m2-ud-split-nostep-conv12-f2.0-map-2024-12-29_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
directory=%(here)s
[program:render_cv]
command=uv run trap_render_cv
command=uv run trap_render_cv
directory=%(here)s
environment=DISPLAY=":0"
autostart=false
; can take a while to quit if rendering to a video file
stopwaitsecs=60
[program:render_cv]
command=uv run trap_render_cv
directory=%(here)s
environment=DISPLAY=":0"
autostart=false
; can take a while to quit if rendering to a video file
stopwaitsecs=60
[program:laserspace]
command=cargo run --release tcp://127.0.0.1:99174 ../trap/SETTINGS/2025-11-dortmund/laserspace.json
directory=%(here)s/../laserspace
environment=DISPLAY=":0"
autostart=false
; can take a while to quit if rendering to a video file
stopwaitsecs=60
# during development auto restart some services when the code changes
[program:superfsmon]
command=superfsmon trap/stage.py stage
directory=%(here)s
autostart=false
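Since the `[inet_http_server]` above listens on port 8293, the running supervisord can also be queried programmatically. A sketch using supervisor's standard XML-RPC interface (no auth is configured in this file; the process name is an example):

```python
# Sketch: query/control supervisord over XML-RPC (port from [inet_http_server]).
from xmlrpc.client import ServerProxy

server = ServerProxy("http://localhost:8293/RPC2")
for proc in server.supervisor.getAllProcessInfo():
    print(proc["name"], proc["statename"])
# e.g. start a service that has autostart=false:
# server.supervisor.startProcess("tracker")
```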

View file

@ -36,7 +36,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@ -151,7 +151,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@ -161,7 +161,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@ -187,7 +187,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@ -196,34 +196,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(tracks)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -1,181 +0,0 @@
from __future__ import annotations
import logging
from typing import List
import numpy as np
from trap.base import ProjectedTrack
from trap.lines import AppendableLine, Coordinate, DeltaT, ProceduralChain, RenderableLines, SrgbaColor, StaticLine
logger = logging.getLogger('anomaly')
def calc_anomaly(segments: List[DiffSegment], window: int = 3):
"""Calculate anomaly score based on provided segments
considering a sliding window of the last n items
"""
relevant_segments = segments[-window:]
scores = [s.avg_score() for s in relevant_segments]
s = [x for x in scores if x is not None]
return np.average(s) if len(s) else 0.
class DiffSegment():
"""
A segment of a prediction track that can be diffed
with a track. The track is continuously updated.
When a new prediction comes in, the diff is marked as
finished, after which it is animated and added to the
Scenario's anomaly score.
"""
DRAW_DECAY_SPEED = 25
POINT_INTERVAL = 4
def __init__(self, prediction: ProjectedTrack):
self.ptrack = prediction
self._last_diff_frame_idx: int = 0
self.finished = False
self.line = StaticLine()
self.points: List[Coordinate] = []
self._drawn_points = []
self._target_track = prediction
self.score = 0
def finish(self):
self.finished = True
def nr_of_passed_points(self) -> int:
if not self._last_diff_frame_idx:
return 0
return self._last_diff_frame_idx - self.ptrack.frame_index
# if isinstance(self.line, AppendableLine):
# return self.line.nr_of_passed_points() * self.POINT_INTERVAL
# else:
# return len(self.points) * self.POINT_INTERVAL
def avg_score(self):
frames_passed = self.nr_of_passed_points()
if not frames_passed:
return None
else:
return self.score/frames_passed
# run on each track update received
def update_track(self, track: ProjectedTrack):
self._target_track = track
if self.finished:
# don't add new points if finished
return
# migrate ScenarioScene function
start_frame_idx = max(self.ptrack.frame_index, self._last_diff_frame_idx)
traj_diff_steps_back = track.frame_index - start_frame_idx # positive value
pred_diff_steps_forward = start_frame_idx - self.ptrack.frame_index # positive value
if traj_diff_steps_back < 0 or len(track.history) < traj_diff_steps_back:
logger.warning("Track history doesn't reach prediction start. Should not be possible. Skip")
# elif len(ptrack.predictions[0]) < pred_diff_steps_back:
# logger.warning("Prediction does not reach prediction start. Should not be possible. Skip")
else:
trajectory = track.projected_history
# from start to as far as it gets
trajectory_range = trajectory[-1*traj_diff_steps_back:]
prediction_range = self.ptrack.predictions[0][pred_diff_steps_forward:] # in world coordinate space
line = []
for i, (p1, p2) in enumerate(zip(trajectory_range, prediction_range)):
diff = (p1[0]-p2[0], p1[1]-p2[1])
self.score += np.linalg.norm(diff)
offset_from_start = (pred_diff_steps_forward + i)
if offset_from_start % self.POINT_INTERVAL == 0:
self.line.extend([p1, p2])
self.points.extend([p1, p2])
self._last_diff_frame_idx = track.frame_index
# # run each render tick
# def update_drawn_positions(self, dt: DeltaT):
# if isinstance(self.line, AppendableLine):
# if self.finished and self.line.ready:
# # convert when fully drawn
# # print(self, "CONVERT LINE")
# self.line = ProceduralChain.from_appendable_line(self.line)
# if isinstance(self.line, ProceduralChain):
# self.line.target = self._target_track.projected_history[-1]
# # if not self.finished or not self.line.ready:
# self.line.update_drawn_positions(dt)
def as_renderable(self) -> RenderableLines:
color = SrgbaColor(0,0,1,1)
# if not self.finished or not self.line.ready:
return self.line.as_renderable(color)
# return self.line.as_renderable(color)
def calculate_loitering_scores(track: ProjectedTrack, min_duration_to_linger, linger_factor, velocity_threshold, window = None):
"""
Calculates a loitering score (0-1) for a single track.
Args:
track: A ProjectedTrack; its projected_history provides the (x, y) position per frame.
min_duration_to_linger: Minimum number of frames below the velocity threshold before a segment counts as lingering.
linger_factor: Divide the number of lingering frames by 'linger_factor' to get a score 0-1.
velocity_threshold: Maximum velocity (meters/frame) to consider as lingering.
window: If given, only consider the last 'window' positions of the history.
Returns:
A generator yielding loitering scores per frame.
"""
total_frames = len(track.projected_history)
if total_frames < 2:
return 0.0 # Not enough data
offset = window * -1 if window is not None else 0
x_coords = [t[0] for t in track.projected_history[offset:]]
y_coords = [t[1] for t in track.projected_history[offset:]]
# Calculate velocities
velocities = np.sqrt(np.diff(x_coords)**2 + np.diff(y_coords)**2)
# Calculate distances
# distances = np.diff(x_coords)
# distances_y = np.diff(y_coords)
# distances_total = np.sqrt(distances**2 + distances_y**2)
linger_duration = 0
linger_frames = 0
for i in range(len(velocities)):
if velocities[i] < velocity_threshold:
linger_duration += 1
if linger_duration >= min_duration_to_linger:
linger_frames +=1
else:
# decay if moving faster
linger_duration = max(linger_duration - 1.5, 0)
linger_frames = max(linger_frames - 1.5, 0)
# Calculate loitering score
if total_frames > 0:
loitering_score = min(1, max(0, linger_frames / linger_factor))
else:
loitering_score = 0.0
yield loitering_score
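To make the lingering heuristic concrete, here is an illustrative, self-contained run of the same accumulate-and-decay loop on a synthetic standing-still trajectory (the thresholds are example values, not the repo's defaults):

```python
# Illustrative sketch of the lingering heuristic in calculate_loitering_scores,
# on a synthetic trajectory that stands still for 120 frames.
import numpy as np

history = np.zeros((120, 2))                      # 120 frames without movement
velocities = np.linalg.norm(np.diff(history, axis=0), axis=1)

linger_duration = linger_frames = 0.0
for v in velocities:
    if v < 0.1:                                   # velocity_threshold (m/frame)
        linger_duration += 1
        if linger_duration >= 30:                 # min_duration_to_linger
            linger_frames += 1
    else:                                         # decay while moving faster
        linger_duration = max(linger_duration - 1.5, 0)
        linger_frames = max(linger_frames - 1.5, 0)

print(min(1.0, linger_frames / 60))               # linger_factor=60 -> 1.0 here
```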

View file

@ -3,7 +3,6 @@ from __future__ import annotations
from abc import ABC, abstractmethod
import argparse
from collections import defaultdict
from copy import deepcopy
from enum import IntFlag
from itertools import cycle
import json
@ -16,7 +15,6 @@ import cv2
from dataclasses import dataclass, field
import dataclasses
from nptyping import Float64, NDArray, Shape
import numpy as np
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
@ -25,7 +23,6 @@ from bytetracker.basetrack import TrackState as ByteTrackTrackState
import pandas as pd
from shapely import Point
from trap.utils import get_bins, inv_lerp, lerp
from trajectron.environment import Environment, Node, Scene
from urllib.parse import urlparse
@ -74,7 +71,6 @@ class DetectionState(IntFlag):
Confirmed = 2 # after tentative
Lost = 4 # lost when DeepsortTrack.time_since_update > 0 but not Deleted
Interpolated = 8 # A position estimated through interpolation of adjecent detections
# Interpolated = 8 # A position estimated through interpolation of adjecent detections
@classmethod
def from_deepsort_track(cls, track: DeepsortTrack):
@ -90,13 +86,11 @@ class DetectionState(IntFlag):
def from_bytetrack_track(cls, track: ByteTrackTrack):
if track.state == ByteTrackTrackState.New:
return cls.Tentative
if track.state == ByteTrackTrackState.Removed:
if track.state == ByteTrackTrackState.Lost:
return cls.Lost
# if track.time_since_update > 0:
if track.state == ByteTrackTrackState.Tracked:
return cls.Confirmed
if track.state == ByteTrackTrackState.Lost:
return cls.Tentative
raise RuntimeError("Should not run into Deleted entries here")
@ -162,23 +156,18 @@ class DistortedCamera(ABC):
def from_calibfile(cls, calibration_path, H, fps):
with calibration_path.open('r') as fp:
data = json.load(fp)
camera = cls.from_calibdata(data, H, fps)
return camera
return cls.from_calibdata(data, H, fps)
@classmethod
def from_paths(cls, calibration_path: Path, h_path: Path, fps: float):
def from_paths(cls, calibration_path, h_path, fps):
H = H_from_path(h_path)
with calibration_path.open('r') as fp:
calibdata = json.load(fp)
if 'type' in calibdata and calibdata['type'] == 'fisheye':
camera = FisheyeCamera.from_calibdata(calibdata, H, fps)
elif 'type' in calibdata and calibdata['type'] == 'undistorted':
camera = UndistortedCamera(calibdata['fps'])
else:
camera = Camera.from_calibdata(calibdata, H, fps)
return camera
# return cls.from_calibfile(calibration_path, H, fps)
@ -189,8 +178,6 @@ class DistortedCamera(ABC):
coords = self.project_points(coords, scale)
return coords
class FisheyeCamera(DistortedCamera):
def __init__(self, dim1, dim2, dim3, K, D, new_K, scaled_K, balance, H, fps):
@ -211,24 +198,8 @@ class FisheyeCamera(DistortedCamera):
self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(self.scaled_K, self.D, self._R, self.new_K, self.dim3, cv2.CV_16SC2)
# self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(self.scaled_K, self.D, self._R, self.new_K, self.dim3, cv2.CV_32FC1)
def undistort_img(self, img: MatLike):
# map1, map2 = adjust_remap_maps(self.map1, self.map2, 2, (0,0))
# this only works on the undistort, but screws up when doing subsequent homography,
# there needs to be a way to combine both this remap and warpPerspective into a
# single remap call...
# scale = 0.3
# cx = self.dim3[0] / 2
# cy = self.dim3[1] / 2
# map1 = (self.map1 - cx) / scale + cx
# map2 = (self.map2 - cy) / scale + cy
# map1 += 900 #translate x (>0 left, <0 right)
# map2 += 1500 #translate y (>0 up, <0 down)
return cv2.remap(img, self.map1, self.map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
def undistort_points(self, distorted_points: PointList):
@ -260,20 +231,6 @@ class FisheyeCamera(DistortedCamera):
class UndistortedCamera(DistortedCamera):
def __init__(self, fps = 10):
self.fps = fps
self.H = np.eye(3,3)
def undistort_img(self, img: MatLike):
return deepcopy(img)
def undistort_points(self, distorted_points: PointList):
return deepcopy(distorted_points)
class Camera(DistortedCamera):
def __init__(self, mtx: cv2.Mat, dist: cv2.Mat, w: float, h: float, H: cv2.Mat, fps: float):
self.mtx = mtx
@ -332,7 +289,7 @@ class Detection:
@classmethod
def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf or 0, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)
return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)
@classmethod
@ -394,15 +351,9 @@ class Track:
if not self.created_at:
self.created_at = time.time()
if not self.updated_at:
self.updated_at = time.time()
def track_age(self) -> float:
return time.time() - self.created_at
def track_update_dt(self) -> float:
return time.time() - self.updated_at
self.update_at = time.time()
def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[DistortedCamera]= None) -> NDArray[Shape["*, 2"], Float64]:
def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[DistortedCamera]= None) -> np.array:
foot_coordinates = [d.get_foot_coords() for d in self.history]
# TODO)) Undistort points before perspective transform
if len(foot_coordinates):
@ -415,7 +366,7 @@ class Track:
else:
coords = cv2.perspectiveTransform(np.array([foot_coordinates]),H)
return coords[0]
return np.empty(shape=(0,2)) #np.array([], shape)
return np.array([])
def get_projected_history_as_dict(self, H, camera: Optional[DistortedCamera]= None) -> dict:
coords = self.get_projected_history(H, camera)
@ -765,8 +716,6 @@ class CameraAction(argparse.Action):
data = json.load(fp)
if 'type' in data and data['type'] == 'fisheye':
camera = FisheyeCamera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
elif 'type' in data and data['type'] == 'undistorted':
camera = UndistortedCamera(namespace.camera_fps)
else:
camera = Camera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
# # print(data)
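As a usage sketch of the camera classes above: build a camera from the traptools calibration and homography files, then project an image point to world coordinates. The paths follow the README's dataset layout, and the fps value is an assumption:

```python
# Sketch: load a camera via DistortedCamera.from_paths and project a pixel.
from pathlib import Path
from trap.base import DistortedCamera

camera = DistortedCamera.from_paths(
    Path("../DATASETS/NAME/calibration.json"),
    Path("../DATASETS/NAME/homography.json"),
    fps=12,  # assumption; match the --camera-fps used elsewhere
)
print(camera.points_img_to_world([[960, 540]]))  # image pixel -> world meters
```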

View file

@ -37,15 +37,8 @@ class CounterFpsSender():
self.is_finished = threading.Event()
def tick(self):
"""
Returns dt since the previous tick (the time between the last two snapshots).
"""
self.iterations += 1
self.snapshot()
if len(self.tocs) > 1:
return float(self.tocs[-1][0] - self.tocs[-2][0])
else:
return 0.
def snapshot(self):
self.tocs.append((time.perf_counter(), self.iterations))

View file

@ -2,15 +2,12 @@
from __future__ import annotations
import datetime
import json
import logging
from pathlib import Path
import time
from argparse import ArgumentParser, Namespace
from multiprocessing.synchronize import Event as BaseEvent
from typing import Dict, List, Optional
from typing import Dict
from charset_normalizer import detect
import cv2
import ffmpeg
import numpy as np
@ -18,10 +15,8 @@ import pyglet
import zmq
from pyglet import shapes
from trap.base import Detection, UndistortedCamera
from trap.counter import CounterListerner
from trap.frame_emitter import Frame, Track
from trap.lines import load_lines_from_svg
from trap.node import Node
from trap.preview_renderer import FrameWriter
from trap.tools import draw_track_predictions, draw_track_projected, to_point
@ -33,7 +28,6 @@ class CvRenderer(Node):
def setup(self):
self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
self.tracker_sock = self.sub(self.config.zmq_trajectory_addr)
self.detector_sock = self.sub(self.config.zmq_detection_addr)
self.frame_sock = self.sub(self.config.zmq_frame_addr)
# self.H = self.config.H
@ -52,15 +46,10 @@ class CvRenderer(Node):
self.frame: Frame|None= None
self.tracker_frame: Frame|None = None
self.prediction_frame: Frame|None = None
self.detections: List[Detection]|None = None
self.tracks: Dict[str, Track] = {}
self.predictions: Dict[str, Track] = {}
self.scale = 100
self.debug_lines = debug_lines = load_lines_from_svg(self.config.debug_map, self.scale, '') if self.config.debug_map else []
def refresh_labels(self, dt: float):
"""Every frame"""
@ -117,7 +106,7 @@ class CvRenderer(Node):
# return process
def run(self):
self.frame = None
frame = None
prediction_frame = None
tracker_frame = None
@ -126,11 +115,9 @@ class CvRenderer(Node):
cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
# https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
cv2.moveWindow("frame", 0, -1)
cv2.moveWindow("frame", 1920, -1)
if self.config.full_screen:
cv2.setWindowProperty("frame",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.setMouseCallback('frame',self.click_print_position)
# bgsub = cv2.createBackgroundSubtractorMOG2(120, 50, detectShadows=True)
while self.run_loop():
@ -142,7 +129,7 @@ class CvRenderer(Node):
# continue
try:
self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
@ -151,12 +138,10 @@ class CvRenderer(Node):
# logger.debug(f'new video frame {frame.index}')
if self.frame is None and i < 100:
if frame is None:
# might need to wait a few iterations before first frame comes available
time.sleep(.1)
continue
else:
self.frame = Frame(i, np.zeros((1920,1080,3)), camera=UndistortedCamera(12))
try:
prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
@ -174,43 +159,29 @@ class CvRenderer(Node):
except zmq.ZMQError as e:
logger.debug(f'reuse tracks')
try:
self.detections = self.detector_sock.recv_pyobj(zmq.NOBLOCK)
# print('detections')
except zmq.ZMQError as e:
# print('no detections')
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
if first_time is None:
first_time = self.frame.time
first_time = frame.time
# img = frame.img
# save_file = Path("videos/snap.png")
# if not save_file.exists():
# img = frame.camera.img_to_world(frame.img, 100)
# cv2.imwrite(save_file, img)
img = decorate_frame(frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions, self.config.render_clusters)
img = decorate_frame(self.frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions, self.detections, self.config.render_clusters, self.debug_lines, self.scale)
logger.debug(f"write frame {self.frame.time - first_time:.3f}s")
logger.debug(f"write frame {frame.time - first_time:.3f}s")
if self.out_writer:
self.out_writer.write(img)
if self.streaming_process:
self.streaming_process.stdin.write(img.tobytes())
if not self.config.no_window:
if self.config.render_window:
cv2.imshow('frame',cv2.resize(img, (1920, 1080)))
# cv2.imshow('frame',img)
cv2.waitKey(10)
cv2.waitKey(1)
# clear out old tracks & predictions:
for track_id, track in list(self.tracks.items()):
if get_animation_position(track, self.frame) == 1:
if get_animation_position(track, frame) == 1:
self.tracks.pop(track_id)
for prediction_id, track in list(self.predictions.items()):
if get_animation_position(track, self.frame) == 1:
if get_animation_position(track, frame) == 1:
self.predictions.pop(prediction_id)
logger.info('Stopping')
@ -239,12 +210,6 @@ class CvRenderer(Node):
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
render_parser.add_argument('--zmq-detection-addr',
help='Manually specify communication addr for the detection messages',
type=str,
default="ipc:///tmp/feeds_dets")
render_parser.add_argument('--zmq-prediction-addr',
help='Manually specify communication addr for the prediction messages',
type=str,
@ -253,8 +218,8 @@ class CvRenderer(Node):
render_parser.add_argument("--render-file",
help="Render a video file previewing the prediction, and its delay compared to the current frame",
action='store_true')
render_parser.add_argument("--no-window",
help="Disable a previewing to a window",
render_parser.add_argument("--render-window",
help="Render a previewing to a window",
action='store_true')
render_parser.add_argument("--full-screen",
@ -273,23 +238,8 @@ class CvRenderer(Node):
""",
type=str,
default=None)
render_parser.add_argument('--debug-map',
help='specify a map (svg-file) from which to load lines which will be overlayed',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
return render_parser
def click_print_position(self, event,x,y,flags,param):
# if event == cv2.EVENT_LBUTTONDBLCLK:
if event == cv2.EVENT_LBUTTONUP:
if not self.frame:
return
scale = 100
print("click position:", x/scale, y/scale)
# self.frame.camera.points_img_to_world([[x, y]], 1)
# cv2.circle(img,(x,y),100,(255,0,0),-1)
mouseX,mouseY = x,y
# colorset = itertools.product([0,255], repeat=3) # but remove white
# colorset = [(0, 0, 0),
@ -320,8 +270,8 @@ def get_animation_position(track: Track, current_frame: Frame):
def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], detections: Optional[List[Detection]], as_clusters = True, debug_lines = [], scale: float = 100) -> np.array:
def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], as_clusters = True) -> np.array:
scale = 100
# TODO: replace opencv with QPainter to support alpha? https://doc.qt.io/qtforpython-5/PySide2/QtGui/QPainter.html#PySide2.QtGui.PySide2.QtGui.QPainter.drawImage
# or https://github.com/pygobject/pycairo?tab=readme-ov-file
# or https://pyglet.readthedocs.io/en/latest/programming_guide/shapes.html
@ -354,22 +304,6 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
# cv2.imwrite(str(self.config.output_dir / "orig.png"), warpedFrame)
cv2.rectangle(img, (0,0), (img.shape[1],25), (0,0,0), -1)
if detections:
for detection in detections:
points = [
detection.get_foot_coords(),
[detection.l, detection.t],
[detection.l + detection.w, detection.t + detection.h],
]
points = tracker_frame.camera.points_img_to_world(points, scale)
points = [to_point(p) for p in points] # to int
w = points[1][0]-points[2][0]
feet = [int(points[2][0] + .5 * w), points[2][1]]
cv2.rectangle(img, points[1], points[2], (255,255,0), 2)
cv2.circle(img, points[0], 5, (255,255,0), 2)
cv2.putText(img, f"{detection.conf:.02f}", (points[0][0], points[0][1]+20), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
def conversion(points):
return convert_world_points_to_img_points(points, scale)
@ -379,30 +313,7 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
else:
for track_id, track in tracks.items():
inv_H = np.linalg.pinv(tracker_frame.H)
draw_track_projected(img, track, int(track_id), tracker_frame.camera, conversion)
for line in debug_lines:
for rp1, rp2 in zip(line.points, line.points[1:]):
p1 = (
int(rp1.position[0]*scale),
int(rp1.position[1]*scale),
)
p2 = (
int(rp2.position[0]*scale),
int(rp2.position[1]*scale),
)
cv2.line(img, p1, p2, (255,0,0), 2)
# points = [(int(point[0]*scale), int(point[1]*scale)) for point in points]
# for num, points in enumerate(frame.camera.debug_lines):
# cv2.line(img, points[0], points[1], (255,0,0), 2)
# if hasattr(frame.camera, 'debug_points'):
# for num, point in enumerate(frame.camera.debug_points):
# cv2.circle(img, (int(point[0]*scale), int(point[1]*scale)), 5, (255,0,0), 2)
# cv2.putText(img, f"{num}", (int(point[0]*scale)+20, int(point[1]*scale)), cv2.FONT_HERSHEY_PLAIN, 1, (255,0,0), 1)
draw_track_projected(img, track, int(track_id), frame.camera, conversion)
if not prediction_frame:
cv2.putText(img, f"Waiting for prediction...", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
@ -412,9 +323,8 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
inv_H = np.linalg.pinv(prediction_frame.H)
# For debugging:
# draw_trackjectron_history(img, track, int(track.track_id), conversion)
# anim_position = get_animation_position(track, frame)
anim_position = 1
draw_track_predictions(img, track, int(track.track_id)+1, prediction_frame.camera, conversion, anim_position=anim_position, as_clusters=as_clusters)
anim_position = get_animation_position(track, frame)
draw_track_predictions(img, track, int(track.track_id)+1, frame.camera, conversion, anim_position=anim_position, as_clusters=as_clusters)
cv2.putText(img, f"{len(track.predictor_history) if track.predictor_history else 'none'}", to_point(track.history[0].get_foot_coords()), cv2.FONT_HERSHEY_COMPLEX, 1, (255,255,255), 1)
if prediction_frame.maps:
for i, m in enumerate(prediction_frame.maps):
@ -461,9 +371,9 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
for option, value in prediction_frame.log['predictor'].items():
options.append(f"{option}: {value}")
if len(options):
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
return img
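The renderer above assumes world coordinates in meters, drawn at `scale` pixels per meter (100 here). A sketch of that conversion, mirroring the inline `int(point * scale)` pattern used for the debug lines:

```python
# Sketch of the world-to-image convention used by the renderer above.
def world_to_px(point, scale: float = 100):
    return int(point[0] * scale), int(point[1] * scale)

print(world_to_px((1.5, 2.0)))  # -> (150, 200)
```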

View file

@ -1,16 +1,18 @@
from __future__ import annotations
import logging
import multiprocessing
import pickle
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
from pathlib import Path
import zmq
from trap import node
from trap.base import *
from trap.base import LambdaParser
from trap.gemma import ImgMovementFilter
from trap.preview_renderer import FrameWriter
from trap.timer import Timer
from trap.video_sources import get_video_source
logger = logging.getLogger('trap.frame_emitter')
@ -30,38 +32,27 @@ class FrameEmitter(node.Node):
self.video_srcs = self.config.video_src
def run(self):
offset = int(self.config.video_offset or 0)
source = get_video_source(self.video_srcs, self.config.camera, offset, self.config.video_end, self.config.video_loop)
video_gen = enumerate(source, start = offset)
while self.run_loop():
# writer = FrameWriter(self.config.record, None, None) if self.config.record else nullcontext
writer = FrameWriter(str(self.config.record), None, None) if self.config.record else None
try:
processor = ImgMovementFilter()
while self.run_loop():
try:
i, img = next(video_gen)
except StopIteration as e:
logger.info("Video source ended")
break
try:
i, img = next(video_gen)
except StopIteration as e:
logger.info("Video source ended")
break
frame = Frame(i, img=img, H=self.config.camera.H, camera=self.config.camera)
frame = Frame(i, img=img, H=self.config.camera.H, camera=self.config.camera)
# frame.img = processor.apply(frame.img)
# TODO: this is very dirty, need to find another way.
# perhaps multiprocessing Array?
self.frame_noimg_sock.send(pickle.dumps(frame.without_img()))
self.frame_sock.send(pickle.dumps(frame))
if writer:
writer.write(frame.img)
finally:
if writer:
writer.release()
# TODO: this is very dirty, need to find another way.
# perhaps multiprocessing Array?
self.frame_noimg_sock.send(pickle.dumps(frame.without_img()))
self.frame_sock.send(pickle.dumps(frame))
logger.info("Stopping")
@ -93,10 +84,6 @@ class FrameEmitter(node.Node):
help="End (or loop) playback at given frame.",
default=None,
type=int)
argparser.add_argument("--record",
help="Record source video to given filename",
default=None,
type=Path)
argparser.add_argument("--video-loop",
help="By default it emitter will run only once. This allows it to loop the video file to keep testing.",

View file

@ -1,97 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
import datetime
import logging
import time
from argparse import ArgumentParser
from pathlib import Path
import zmq
from trap.frame_emitter import Frame
from trap.node import Node
from trap.preview_renderer import FrameWriter as CvFrameWriter
logger = logging.getLogger("trap.simple_renderer")
class FrameWriter(Node):
def setup(self):
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.out_writer = self.start_writer()
def start_writer(self):
if not self.config.output_dir.exists():
raise FileNotFoundError("Path does not exist")
date_str = datetime.datetime.now().isoformat(timespec="minutes")
filename = self.config.output_dir / f"render-source-{date_str}.mp4"
logger.info(f"Write to {filename}")
return CvFrameWriter(str(filename), None, None)
# fourcc = cv2.VideoWriter_fourcc(*'vp09')
# return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)
def run(self):
i=0
try:
while self.run_loop():
i += 1
# zmq_ev = self.frame_sock.poll(timeout=2000)
# if not zmq_ev:
# # when no data comes in, loop so that is_running is checked
# continue
try:
frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
# else:
# logger.debug(f'new video frame {frame.index}')
if frame is None:
# might need to wait a few iterations before first frame comes available
time.sleep(.1)
continue
self.logger.debug(f"write frame {frame.time:.3f}")
self.out_writer.write(frame.img)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
except KeyboardInterrupt as e:
print('stopping on interrupt')
self.logger.info('Stopping')
# if i>2:
if self.out_writer:
self.out_writer.release()
self.logger.info(f'Wrote to {self.out_writer.filename}')
self.logger.info('stopped')
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument("--output-dir",
help="Directory to save the video in",
required=True,
type=Path)
return argparser

View file

@ -1,292 +0,0 @@
from argparse import ArgumentParser
import enum
import json
from pathlib import Path
import time
from typing import Optional
import cv2
import numpy as np
from trap.base import DataclassJSONEncoder, DistortedCamera, Frame
from trap.lines import CoordinateSpace, RenderableLine, RenderableLines, RenderablePoint, RenderablePosition, SrgbaColor, cross_points
from trap.node import Node
from trap.stage import Coordinate
class Modes(enum.Enum):
POINTS = 1
TEST_LINE = 2
class LaserCalibration(Node):
"""
A calibrated camera can be used to reverse-map the points of the laser to world coordinates.
Note, it publishes on the address of the stage node, so they cannot run at the same time.
1. Draw points with the laser (use 1-9 to create/select, then position them with arrow keys)
2. Use the cursor on the camera stream to create a matching image point.
- Locate a nearby point to select and drag it
3. Use the image coordinate of the point; undistorting and applying the homography gives the world coordinate.
4. Perform homography on world coordinates + laser coordinates
"""
def setup(self):
# self.scenarios: List[DrawnScenario] = []
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.laser_sock = self.pub(self.config.zmq_stage_addr)
self.camera: Optional[DistortedCamera] = None
self._selected_point = None
self._is_dragging = False
self.laser_points = {}
self.image_points = {}
self.mode = Modes.POINTS
self.H = None
self.img_size = (1920,1080)
self.frame_img_factor = (1,1)
if self.config.calibfile.exists():
with self.config.calibfile.open('r') as fp:
calibdata = json.load(fp)
self.laser_points = calibdata['laser_points']
self.image_points = calibdata['image_points']
self.H = calibdata['H']
def run(self):
cv2.namedWindow("laser_calib", cv2.WINDOW_NORMAL)
# https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
# cv2.moveWindow("laser_calib", 0, -1)
cv2.setMouseCallback('laser_calib',self.mouse_event)
cv2.setWindowProperty("laser_calib",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
# arrow up (82), down (84), arrow left(81)
frame = None
while self.run_loop_capped_fps(60):
if self.frame_sock.poll(0):
frame: Frame = self.frame_sock.recv_pyobj()
if not self.camera:
self.camera = frame.camera
if frame is None:
continue
self.frame_img_factor = frame.img.shape[1] / self.img_size[0], frame.img.shape[0] / self.img_size[1]
img = frame.img
img = cv2.resize(img, self.img_size)
cv2.putText(img, 'press 1-0 to create/edit points', (10,20), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,255))
if len(self.laser_points) < 4:
cv2.putText(img, 'add points to calculate homography', (10,40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,255))
else:
cv2.putText(img, 'press c to calculate homography', (10,40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,0))
cv2.putText(img, str(self.config.calibfile), (10,self.img_size[1]-30), cv2.FONT_HERSHEY_SIMPLEX, .5, (255,255,0))
if self._selected_point:
color = (0,255,255)
cv2.putText(img, f'selected {self._selected_point}', (10,60), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
cv2.putText(img, 'press d to delete', (10,80), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
cv2.putText(img, 'use arrows to position laser for this point', (10,100), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
target = self.camera.points_img_to_world([self.image_points[self._selected_point]])[0].tolist()
target = round(target[0], 2), round(target[1], 2)
cv2.putText(img, f'map {self.laser_points[self._selected_point]} to {target} ({self.image_points[self._selected_point]})', (10,120), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
for k, coord in self.image_points.items():
color = (0,0,255) if self._selected_point == k else (255,0,0)
coord = int(coord[0] / self.frame_img_factor[0]), int(coord[1] / self.frame_img_factor[1])
cv2.circle(img, coord, 4, color, thickness=2)
cv2.putText(img, str(k), (coord[0]+10, coord[1]), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
key = cv2.waitKey(5) # or for arrows: full_key_code = cv2.waitKeyEx(0)
self.key_event(key)
# nr_keys = [ord(i) for i in range(10)] # select/add point
# cv2.
cv2.imshow('laser_calib', img)
lines = []
if self.mode == Modes.TEST_LINE:
lines.append(RenderableLine([
RenderablePoint((i,time.time()%18), SrgbaColor(0,1,0,1)) for i in range(-15, 40)
]))
# render in laser space
rl = RenderableLines(lines, CoordinateSpace.WORLD)
self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)
else:
if self._selected_point:
point = self.laser_points[self._selected_point]
lines.extend(cross_points(point[0], point[1], .5, SrgbaColor(0,1,0,1)))
# render in laser space
rl = RenderableLines(lines, CoordinateSpace.LASER)
self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)
# print(json.dumps(rl, cls=DataclassJSONEncoder))
def key_event(self, key: int):
if key < 0:
return
if key == ord('q'):
exit()
if key == 27: #esc
self._selected_point = None
if key == ord('c'):
self.calculate_homography()
self.save()
if key == ord('d') and self._selected_point:
self.delete_point(self._selected_point)
if key == ord('t'):
self.mode = Modes.TEST_LINE if self.mode == Modes.POINTS else Modes.POINTS
print(self.mode)
# arrow up (82), down (84), arrow left(81)
if self._selected_point and key in [81, 84, 82, 83,
ord('h'), ord('j'), ord('k'), ord('l'),
ord('H'), ord('J'), ord('K'), ord('L'),
]:
diff = [0,0]
if key in [81, ord('h')]:
diff[0] -= 1
if key == ord('H'):
diff[0] -= 10
if key in [83, ord('l')]:
diff[0] += 1
if key == ord('L'):
diff[0] += 10
if key in [82, ord('k')]:
diff[1] += 1
if key == ord('K'):
diff[1] += 10
if key in [84, ord('j')]:
diff[1] -= 1
if key == ord('J'):
diff[1] -= 10
self.laser_points[self._selected_point] = (
self.laser_points[self._selected_point][0] + diff[0],
self.laser_points[self._selected_point][1] + diff[1],
)
nr_keys = [ord(str(i)) for i in range(10)]
if key in nr_keys:
select = str(nr_keys.index(key))
self.create_or_select(select)
def mouse_event(self, event,x,y,flags,param):
x *= self.frame_img_factor[0]
y *= self.frame_img_factor[1]
if event == cv2.EVENT_MOUSEMOVE:
if not self._is_dragging or not self._selected_point:
return
self.image_points[self._selected_point] = (x, y)
if event == cv2.EVENT_LBUTTONDOWN:
# select or create
self._selected_point = None
for i, p in self.image_points.items():
d = (p[0]-x)**2 + (p[1]-y)**2
if d < 30:
self._selected_point = i
break
if self._selected_point is None:
self._selected_point = self.new_point((x,y), None)
self._is_dragging = True
if event == cv2.EVENT_LBUTTONUP:
self._is_dragging = False
# ... point stays selected to tweak laser
def create_or_select(self, nr: str):
if nr not in self.image_points:
self.new_point(None, None, nr)
self._selected_point = nr
return nr
def new_point(self, img_coord: Optional[Coordinate], laser_coord: Optional[Coordinate], nr: Optional[str]=None):
if nr:
new_nr = nr
else:
new_nr = None
for i in range(100):
k = str(i)
if k not in self.image_points:
new_nr = k
break
if not new_nr:
new_nr = 0 # cover unlikely case
self.image_points[new_nr] = img_coord or (100,100)
self.laser_points[new_nr] = laser_coord or (100,100)
return new_nr
def delete_point(self, point: str):
del self.image_points[point]
del self.laser_points[point]
self._selected_point = None
def calculate_homography(self):
if len(self.image_points) < 4:
return
world_points = self.camera.points_img_to_world(list(self.image_points.values()))
laser_points = np.array(list(self.laser_points.values()))
print('from', world_points)
print('to', laser_points)
self.H, status = cv2.findHomography(world_points, laser_points)
print('Found')
print(self.H)
def save(self):
with self.config.calibfile.open('w') as fp:
json.dump({
'laser_points': self.laser_points,
'image_points': self.image_points,
'H': self.H.tolist()
}, fp)
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = ArgumentParser()
argparser.add_argument('--zmq-frame-addr',
help='Manually specify communication addr for the frame messages',
type=str,
default="ipc:///tmp/feeds_frame")
argparser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
argparser.add_argument('--calibfile',
help='specify file to save & load points with',
type=Path,
default=Path("./laser_calib.json"))
return argparser
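The core of `calculate_homography()` above is a standard OpenCV homography fit. An isolated sketch with made-up point pairs (at least 4 correspondences are required, as the on-screen hint says):

```python
# Sketch of the homography step: map world points to laser points,
# then project new world coordinates into laser space.
import cv2
import numpy as np

world = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=np.float32)
laser = np.array([[100, 100], [3900, 120], [3880, 2900], [90, 2880]],
                 dtype=np.float32)

H, _ = cv2.findHomography(world, laser)
pts = cv2.perspectiveTransform(world.reshape(-1, 1, 2), H)
print(pts.reshape(-1, 2))  # ~= laser
```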

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -1,65 +0,0 @@
from argparse import ArgumentParser
import time
from trap.counter import CounterListerner
from trap.node import Node
class Monitor(Node):
"""
Periodically snapshot the stats counters published by the running
nodes and log them as a single summary line.
"""
FPS = 1
def setup(self):
# self.scenarios: List[DrawnScenario] = []
self.counter_listener = CounterListerner()
def run(self):
prev_time = time.perf_counter()
while self.is_running.is_set():
# self.tick() # don't polute it with own data
self.counter_listener.snapshot()
stats = self.counter_listener.to_string()
if len(stats):
self.logger.info(stats)
# else:
# self.logger.info("no stats")
# for i, (k, v) in enumerate(self.counter_listener.get_latest().items()):
# print(k,v)
# cv2.putText(img, f"{k} {v.value()}", (20,img.shape[0]-(40*i)-40), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
# 3) calculate latency for desired FPS
now = time.perf_counter()
time_diff = (now - prev_time)
if time_diff < 1/self.FPS:
# print(f"sleep {1/self.FPS - time_diff}")
time.sleep(1/self.FPS - time_diff)
now += 1/self.FPS - time_diff
prev_time = now
@classmethod
def arg_parser(cls) -> ArgumentParser:
argparser = ArgumentParser()
# argparser.add_argument('--zmq-trajectory-addr',
# help='Manually specify communication addr for the trajectory messages',
# type=str,
# default="ipc:///tmp/feeds_traj")
# argparser.add_argument('--zmq-prediction-addr',
# help='Manually specify communication addr for the prediction messages',
# type=str,
# default="ipc:///tmp/feeds_preds")
# argparser.add_argument('--zmq-stage-addr',
# help='Manually specify communication addr for the stage messages (the rendered lines)',
# type=str,
# default="tcp://0.0.0.0:99174")
return argparser

View file

@ -1,11 +1,8 @@
from collections import defaultdict
import logging
from logging.handlers import QueueHandler, QueueListener, SocketHandler
import multiprocessing
from multiprocessing.synchronize import Event as BaseEvent
from argparse import ArgumentParser, Namespace
import time
from typing import Any, Optional
from typing import Optional
import zmq
@ -15,23 +12,12 @@ from trap.timer import Timer
class Node():
def __init__(self, config: Namespace, is_running: BaseEvent, fps_counter: CounterFpsSender):
self.node_id = self.__class__.__name__.lower()
self.config = config
self.is_running = is_running
self.fps_counter = fps_counter
self.zmq_context = zmq.Context()
self.logger = self._logger()
self._prev_loop_time = 0
self.dt_since_last_tick = 0
self.config_sock = self.sub(self.config.zmq_config_addr)
self.config_init_sock = self.push(self.config.zmq_config_init_addr) # a sending sub
self.settings = defaultdict(None)
self.refresh_settings()
self.setup()
@classmethod
@ -39,7 +25,7 @@ class Node():
return logging.getLogger(f"trap.{cls.__name__}")
def tick(self):
self.dt_since_last_tick = self.fps_counter.tick()
self.fps_counter.tick()
# with self.fps_counter.get_lock():
# self.fps_counter.value+=1
@ -49,99 +35,16 @@ class Node():
def run(self):
raise RuntimeError("Not implemented run()")
def stop(self):
"""
Called when runloop is stopped. Override to clean up what was initiated in start() and run() methods
"""
pass
def refresh_settings(self):
try:
self.config_init_sock.send_string(self.node_id, zmq.NOBLOCK)
except Exception as e:
self.logger.warning('No settings socket available')
self.logger.exception(e)
def run_loop(self):
"""Use in run(), to check if it should keep looping
Takes care of tick()'ing the iterations/second counter
"""
self.tick()
self.check_config()
return self.is_running.is_set()
def check_config(self):
while True:
try:
config = self.config_sock.recv_json(zmq.NOBLOCK)
for field, value in config.items():
self.settings[field] = value
except zmq.ZMQError as e:
# no msgs
break
def get_setting(self, name: str, default: Any):
if name in self.settings:
return self.settings[name]
return default
def run_loop_capped_fps(self, max_fps: float, warn_below_fps: float = 0.):
"""Use in run(), to check if it should keep looping
Takes care of tick()'ing the iterations/second counter
"""
now = time.perf_counter()
time_diff = (now - self._prev_loop_time)
if warn_below_fps > 0 and time_diff > 1/warn_below_fps:
self.logger.warning(f"Running below {warn_below_fps} FPS: measured {1/time_diff} FPS")
if time_diff < 1/max_fps:
# print(f"sleep {1/max_fps - time_diff}")
time.sleep(1/max_fps - time_diff)
now += 1/max_fps - time_diff
self._prev_loop_time = now
return self.run_loop()
@classmethod
def arg_parser(cls) -> ArgumentParser:
raise RuntimeError("Not implemented arg_parser()")
@classmethod
def _get_arg_parser(cls) -> ArgumentParser:
parser = cls.arg_parser()
# add some defaults
parser.add_argument(
'--verbose',
'-v',
help="Increase verbosity. Add multiple times to increase further.",
action='count', default=0
)
parser.add_argument(
'--remote-log-addr',
help="Connect to a remote logger like cutelog. Specify the ip",
type=str,
default="100.72.38.82"
)
parser.add_argument(
'--remote-log-port',
help="Connect to a remote logger like cutelog. Specify the port",
type=int,
default=19996
)
parser.add_argument('--zmq-config-addr',
help='Manually specify communication addr for the config messages',
type=str,
default="ipc:///tmp/feeds_config")
parser.add_argument('--zmq-config-init-addr',
help='Manually specify communication addr for req-rep config messages',
type=str,
default="ipc:///tmp/feeds_config_rr")
return parser
def sub(self, addr: str):
"Default zmq sub configuration"
@@ -158,60 +61,19 @@ class Node():
sock.bind(addr)
return sock
def push(self, addr: str):
"push-pull pair"
sock = self.zmq_context.socket(zmq.PUSH)
# sock.setsockopt(zmq.LINGER, 0)
sock.connect(addr)
return sock
def pull(self, addr: str):
"Push-pull pair"
sock = self.zmq_context.socket(zmq.PULL)
sock.bind(addr)
return sock
@classmethod
def start(cls, config: Namespace, is_running: BaseEvent, timer_counter: Optional[Timer]):
instance = cls(config, is_running, timer_counter)
try:
instance.run()
except Exception as e:
instance.logger.exception(f"{e}")
instance.stop()
instance.run()
instance.logger.info("Stopping")
@classmethod
def parse_and_start(cls):
"""To start the node from CLI/supervisor"""
config = cls._get_arg_parser().parse_args()
setup_logging(config) # running from cli, we need to setup logging
config = cls.arg_parser().parse_args()
is_running = multiprocessing.Event()
is_running.set()
statsender = CounterSender()
counter = CounterFpsSender(f"trap.{cls.__name__}", statsender)
# timer_counter = Timer(cls.__name__)
cls.start(config, is_running, counter)
def setup_logging(config: Namespace):
loglevel = logging.NOTSET if config.verbose > 1 else logging.DEBUG if config.verbose > 0 else logging.INFO
stream_handler = logging.StreamHandler()
log_handlers = [stream_handler]
if config.remote_log_addr:
logging.captureWarnings(True)
# root_logger.setLevel(logging.NOTSET) # to send all records to cutelog
socket_handler = SocketHandler(config.remote_log_addr, config.remote_log_port)
# print(socket_handler.host, socket_handler.port)
socket_handler.setLevel(logging.NOTSET)
log_handlers.append(socket_handler)
logging.basicConfig(
level=loglevel,
handlers=log_handlers, # [queue_handler]
format="%(asctime)s %(levelname)s:%(name)s:%(message)s",
datefmt="%H:%M:%S"
)
cls.start(config, is_running, counter)
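The Node base class above wires up ZMQ sockets, live settings and logging, so a subclass only supplies setup(), run() and arg_parser(). A minimal sketch of such a subclass, assuming the API shown in this diff; EchoNode and its --zmq-frame-addr are made up for illustration:

```python
# Minimal sketch of a Node subclass; EchoNode and --zmq-frame-addr are
# hypothetical, the rest follows the Node API shown above.
from argparse import ArgumentParser

import zmq

from trap.node import Node


class EchoNode(Node):
    def setup(self):
        # sub() applies the default subscriber configuration
        self.frame_sock = self.sub(self.config.zmq_frame_addr)

    def run(self):
        # run_loop_capped_fps() ticks the FPS counter, polls for settings
        # updates via check_config(), and sleeps to hold the requested rate
        while self.run_loop_capped_fps(max_fps=10):
            if not self.get_setting('echonode.enabled', True):
                continue
            try:
                msg = self.frame_sock.recv(zmq.NOBLOCK)
                self.logger.info(f"received {len(msg)} bytes")
            except zmq.ZMQError:
                pass  # no message waiting this iteration

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        argparser = ArgumentParser()
        argparser.add_argument('--zmq-frame-addr', type=str,
                               default="ipc:///tmp/feeds_frame")
        return argparser


if __name__ == "__main__":
    EchoNode.parse_and_start()  # sets up logging and the is_running event
```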

View file

@@ -6,26 +6,22 @@ import pathlib
import pickle
import random
import time
from typing import List
import warnings
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
import dill
import numpy as np
import shapely
import torch
import zmq
from trajectron.environment import Environment, Scene, GeometricMap
from trajectron.environment import Environment, Scene
from trajectron.model.model_registrar import ModelRegistrar
from trajectron.model.online.online_trajectron import OnlineTrajectron
from trajectron.utils import prediction_output_to_trajectories
from trap.frame_emitter import DataclassJSONEncoder, Frame
from trap.lines import load_lines_from_svg
from trap.node import Node
from trap.tracker import Smoother
from trap.utils import ImageMap
logger = logging.getLogger("trap.prediction")
@@ -54,21 +50,19 @@ def create_online_env(env, hyperparams, scene_idx, init_timestep):
init_timestep + 1),
state=hyperparams['state'])
online_scene.robot = test_scene.robot
radius = {k: 0 for k,v in env.attention_radius.items()}
online_scene.calculate_scene_graph(attention_radius=radius,
online_scene.calculate_scene_graph(attention_radius=env.attention_radius,
edge_addition_filter=hyperparams['edge_addition_filter'],
edge_removal_filter=hyperparams['edge_removal_filter'])
return Environment(node_type_list=env.node_type_list,
standardization=env.standardization,
scenes=[online_scene],
attention_radius=radius,
attention_radius=env.attention_radius,
robot_type=env.robot_type)
def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
scene_maps: List[ImageMap] = list()
def get_maps_for_input(input_dict, scene, hyperparams, device):
scene_maps = list()
scene_pts = list()
heading_angles = list()
patch_sizes = list()
@@ -90,11 +84,9 @@ def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
else:
heading_angle = None
scene_map: ImageMap = scene.map[node.type]
scene_map.set_bounds() # update old pickled maps
scene_map = scene.map[node.type]
# map_point = x[-1, :2]
map_point = x[:2]
# map_point = x[:2].clip(0) # prevent crash for out-of-map points.
patch_size = hyperparams['map_encoder'][node.type]['patch_size']
@@ -110,17 +102,11 @@ def get_maps_for_input(input_dict, scene: Scene, hyperparams, device):
heading_angles = torch.Tensor(heading_angles)
# print(scene_maps, patch_sizes, heading_angles)
# print(scene_pts)
try:
maps = scene_maps[0].get_cropped_maps_from_scene_map_batch(scene_maps,
scene_pts=torch.Tensor(scene_pts),
patch_size=patch_sizes[0],
rotation=heading_angles,
device='cpu')
except Exception as e:
# print(scene_maps)
logger.warning(f"Crash on getting maps for points: {scene_pts=} {heading_angles=} {patch_size=}")
raise e
maps = scene_maps[0].get_cropped_maps_from_scene_map_batch(scene_maps,
scene_pts=torch.Tensor(scene_pts),
patch_size=patch_sizes[0],
rotation=heading_angles,
device='cpu')
maps_dict = {node: maps[[i]].to(device) for i, node in enumerate(nodes_with_maps)}
return maps_dict
@@ -165,15 +151,6 @@ class PredictionServer(Node):
self.trajectory_socket = self.sub(self.config.zmq_trajectory_addr)
self.prediction_socket = self.pub(self.config.zmq_prediction_addr)
self.external_predictions = not self.config.zmq_prediction_addr.startswith("ipc://")
self.cutoff_shape = None
if self.config.cutoff_map:
self.cutoff_line = load_lines_from_svg(self.config.cutoff_map, 100, '')[0]
self.cutoff_shape = shapely.Polygon([p.position for p in self.cutoff_line.points])
logger.info(f"{self.cutoff_shape}")
def send_frame(self, frame: Frame):
@@ -197,8 +174,7 @@ class PredictionServer(Node):
# model_dir = 'models/models_04_Oct_2023_21_04_48_eth_vel_ar3'
# Load hyperparameters from json
# config_file = os.path.join(self.config.model_dir, self.config.conf)
config_file = self.config.conf
config_file = os.path.join(self.config.model_dir, self.config.conf)
if not os.path.exists(config_file):
raise ValueError('Config json not found!')
with open(config_file, 'r') as conf_json:
@@ -238,9 +214,6 @@ class PredictionServer(Node):
logger.info(f"Basing online env on {eval_scene=} -- loaded from {self.config.eval_data_dict}")
online_env = create_online_env(eval_env, hyperparams, scene_idx, init_timestep)
print("overriding attention radius")
online_env.attention_radius = {(online_env.NodeType.PEDESTRIAN, online_env.NodeType.PEDESTRIAN): 0.1}
# auto-find highest iteration
model_registrar = ModelRegistrar(self.config.model_dir, self.config.eval_device)
model_iterations = pathlib.Path(self.config.model_dir).glob('model_registrar-*.pt')
@@ -314,7 +287,6 @@ class PredictionServer(Node):
input_dict = {}
for identifier, track in frame.tracks.items():
# if len(trajectory['history']) < 7:
# # TODO: these trajectories should still be in the output, but without predictions
# continue
@@ -331,16 +303,7 @@ class PredictionServer(Node):
if len(track.history) < 2:
continue
node = track.to_trajectron_node(frame.camera, online_env)
if self.cutoff_shape:
position = shapely.Point(node.data.data[-1][:2])
if not shapely.contains(self.cutoff_shape, position):
# logger.debug(f"Skip position {position}")
continue
# print(node.data.data[-1])
input_dict[node] = np.array(object=node.data.data[-1])
# print("history", node.data.data[-10:])
@@ -379,7 +342,6 @@ class PredictionServer(Node):
# )
# input_dict[node] = np.array(object=[x[-1],y[-1],vx[-1],vy[-1],ax[-1],ay[-1]])
# break # only on
# print(input_dict)
@@ -396,11 +358,9 @@ class PredictionServer(Node):
continue
maps = None
start_maps = time.time()
if hyperparams['use_map_encoding']:
maps = get_maps_for_input(input_dict, eval_scene, hyperparams, device=self.config.eval_device)
# print(maps)
# robot_present_and_future = None
@@ -428,8 +388,7 @@ class PredictionServer(Node):
gmm_mode=self.config.gmm_mode, # "If True: The mode of the Gaussian Mixture Model (GMM) is sampled (see trajectron.model.mgcvae.py)"
z_mode=self.config.z_mode # "Predictions from the models most-likely high-level latent behavior mode" (see trajecton.models.components.discrete_latent:sample_p(most_likely_z=z_mode))
)
print(len(dists), len(preds))
intermediate = time.time()
# unsure what this bit from online_prediction.py does:
# detailed_preds_dict = dict()
# for node in eval_scene.nodes:
@@ -449,8 +408,8 @@ class PredictionServer(Node):
end = time.time()
logger.debug("took %.2f s (= %.2f Hz), maps: %.2f, forward: %.2f w/ %d nodes and %d edges -- init: %.2f s" % (end - start,
1. / (end - start), (start-start_maps)/(end - start), (intermediate-start)/(end - start), len(trajectron.nodes),
logger.debug("took %.2f s (= %.2f Hz) w/ %d nodes and %d edges -- init: %.2f s" % (end - start,
1. / (end - start), len(trajectron.nodes),
trajectron.scene_graph.get_num_edges(), start-t_init))
# if self.config.center_data:
@@ -472,7 +431,7 @@ class PredictionServer(Node):
futures_dict = futures_dict[ts_key]
response = {}
# logger.debug(f"{histories_dict=}")
logger.debug(f"{histories_dict=}")
for node in histories_dict:
history = histories_dict[node]
# future = futures_dict[node] # ground truth dict
@@ -480,9 +439,7 @@ class PredictionServer(Node):
# print('preds', len(predictions[0][0]))
if not len(history) or np.isnan(history[-1]).any():
logger.warning(f'skip for no history for {node} @ {ts_key} [{len(prediction_dict)=}, {len(histories_dict)=}, {len(futures_dict)=}]')
# logger.info(f"{preds=}")
logger.warning('skip for no history')
continue
# response[node.id] = {
@@ -540,9 +497,9 @@ class PredictionServer(Node):
# default='../Trajectron-plus-plus/experiments/pedestrians/models/models_04_Oct_2023_21_04_48_eth_vel_ar3')
inference_parser.add_argument("--conf",
help="path to json config file for hyperparameters",
type=pathlib.Path,
default='EXPERIMENTS/config.json')
help="path to json config file for hyperparameters, relative to model_dir",
type=str,
default='config.json')
# Model Parameters (hyperparameters)
inference_parser.add_argument("--offline_scene_graph",
@@ -597,12 +554,12 @@ class PredictionServer(Node):
inference_parser.add_argument('--batch_size',
help='training batch size',
type=int,
default=512)
default=256)
inference_parser.add_argument('--k_eval',
help='how many samples to take during evaluation',
type=int,
default=1)
default=25)
# Data Parameters
inference_parser.add_argument("--eval_data_dict",
@@ -624,7 +581,7 @@ class PredictionServer(Node):
inference_parser.add_argument("--eval_device",
help="what device to use during inference",
type=str,
default="cuda:0")
default="cpu")
inference_parser.add_argument('--seed',
@@ -665,11 +622,6 @@ class PredictionServer(Node):
help="Center data around cx and cy. Should also be used when processing data",
action='store_true')
inference_parser.add_argument('--cutoff-map',
help='specify a map (svg-file) that defines the projection boundaries; positions outside of it are skipped for prediction',
type=str,
default="../DATASETS/hof-lidar/map_hof.svg")
return inference_parser
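The --cutoff-map added on main boils down to a point-in-polygon test with shapely: the first line of the SVG becomes the boundary polygon, and any node whose latest position falls outside is skipped before prediction. A rough sketch of that check, assuming load_lines_from_svg returns lines whose points carry an (x, y) world position; the sample coordinates are made up:

```python
# Sketch of the --cutoff-map boundary check shown above.
import shapely

from trap.lines import load_lines_from_svg

cutoff_line = load_lines_from_svg("../DATASETS/hof-lidar/map_hof.svg", 100, '')[0]
cutoff_shape = shapely.Polygon([p.position for p in cutoff_line.points])

def in_bounds(xy) -> bool:
    # mirrors the skip in PredictionServer: outside the polygon -> no prediction
    return shapely.contains(cutoff_shape, shapely.Point(xy))

print(in_bounds((4.2, 7.1)))  # True when the point lies inside the mapped area
```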

View file

@@ -300,7 +300,7 @@ class FrameWriter:
"""
def __init__(self, filename: str, fps: float, frame_size: Optional[tuple] = None) -> None:
self.filename = filename
self._fps = fps
self.fps = fps
self.frame_size = frame_size
self.tmp_dir = tempfile.TemporaryDirectory(prefix="trap-output-")

View file

@@ -8,18 +8,16 @@ import time
from xml.dom.pulldom import default_bufsize
from attr import dataclass
import cv2
import noise
import numpy as np
import pandas as pd
import dill
import tqdm
import argparse
from typing import Dict, List, Optional
from typing import List, Optional
from trap.base import Track
from trap.config import CameraAction, HomographyAction
from trap.frame_emitter import Camera
from trap.tracker import FinalDisplacementFilter, Noiser, RandomOffset, Smoother, TrackReader
from trap.tracker import FinalDisplacementFilter, Smoother, TrackReader
#sys.path.append("../../")
from trajectron.environment import Environment, Scene, Node
@@ -74,29 +72,22 @@ class TrackIteration:
smooth: bool
step_size: int
step_offset: int
noisy: bool = False
offset: bool = False
@classmethod
def iteration_variations(cls, smooth = True, toggle_smooth=True, sample_step_size=1, noisy_variations=0, offset_variations=0):
def iteration_variations(cls, smooth = True, toggle_smooth=True, sample_step_size=1):
iterations: List[TrackIteration] = []
for i in range(sample_step_size):
for n in range(noisy_variations+1):
for f in range(offset_variations+1):
iterations.append(TrackIteration(smooth, sample_step_size, i, noisy=bool(n), offset=bool(f)))
if smooth and toggle_smooth:
iterations.append(TrackIteration(not smooth, sample_step_size, i, noisy=bool(n), offset=bool(f)))
iterations.append(TrackIteration(smooth, sample_step_size, i))
if toggle_smooth:
iterations.append(TrackIteration(not smooth, sample_step_size, i))
return iterations
# maybe_makedirs('trajectron-data')
# for desired_source in [ 'hof2', ]:# ,'hof-maskrcnn', 'hof-yolov8', 'VIRAT-0102-parsed', 'virat-resnet-keypoints-full']:
def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, noise_tracks: int, offset_tracks: int, center_data: bool, bin_positions: bool, camera: Camera, step_size: int, filter_displacement:float, map_img_path: Optional[Path]):
def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, cm_to_m: bool, center_data: bool, bin_positions: bool, camera: Camera, step_size: int, filter_displacement:float, map_img_path: Optional[Path]):
name += f"-nostep" if step_size == 1 else f"-step{step_size}"
# name += f"-conv{smooth_window}" if smooth_tracks else f"-nosmooth"
name += f"-kalsmooth" if smooth_tracks else f"-nosmooth"
name += f"-noise{noise_tracks}" if noise_tracks else f""
name += f"-offsets{offset_tracks}" if offset_tracks else f""
name += f"-conv{smooth_window}" if smooth_tracks else f"-nosmooth"
name += f"-f{filter_displacement}" if filter_displacement > 0 else ""
name += "-map" if map_img_path else "-nomap"
name += f"-{datetime.date.today()}"
@@ -106,21 +97,15 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
if map_img_path:
if not map_img_path.exists():
raise RuntimeError(f"Map image does not exists {map_img_path}")
print(f"Using map {map_img_path}")
type_map = {}
# TODO)) For now, assume the map is a 100x scale of the world coordinates (i.e. 100px per meter)
# thus when we do a homography of 5px per meter, scale down by 20
map_H_path = map_img_path.with_suffix('.json')
if map_H_path.exists():
homography_matrix = np.loadtxt(map_H_path)
else:
homography_matrix = np.array([
[5, 0,0],
[0, 5,0],
[0,0,1],
]) # 100 scale
homography_matrix = np.array([
[5, 0,0],
[0, 5,0],
[0,0,1],
]) # 100 scale
img = cv2.imread(map_img_path)
img = cv2.resize(img, (img.shape[1]//20, img.shape[0]//20))
type_map['PEDESTRIAN'] = ImageMap(
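The fallback homography above encodes the scale assumption from the TODO: the source drawing is ~100 px per meter, the resize by 1/20 brings it down to 5 px per meter, and the 3x3 matrix maps world meters to map pixels at that resolution. A quick numeric check (values illustrative):

```python
# Numeric check of the 5 px/m fallback homography above.
import numpy as np

H = np.array([[5, 0, 0],
              [0, 5, 0],
              [0, 0, 1]], dtype=float)  # world (m) -> map (px)

world_point = np.array([12.0, 8.0, 1.0])  # homogeneous position in meters
map_px = (H @ world_point)[:2]
print(map_px)  # [60. 40.]: 12 m becomes 60 px, 8 m becomes 40 px
```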
@@ -138,37 +123,21 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
skipped_for_error = 0
created = 0
# smoother = Smoother(window_len=smooth_window, convolution=True) if smooth_tracks else None
smoother = Smoother(convolution=False) if smooth_tracks else None
noiser = Noiser(amplitude=.1) if noise_tracks else None
smoother = Smoother(window_len=smooth_window, convolution=True) if smooth_tracks else None
reader = TrackReader(src_dir, camera.fps)
tracks = [t for t in reader]
print(f"Unfiltered total: {len(tracks)} tracks")
if filter_displacement > 0:
filter = FinalDisplacementFilter(filter_displacement)
tracks = filter.apply(tracks, camera)
print(f"Filtered: {len(tracks)} tracks")
skip_idxs = []
for idx, track in enumerate(tracks):
track_history = track.get_projected_history(camera=camera)
distances = np.sqrt(np.sum(np.diff(track_history, axis=0)**2, axis=1))
# print(trajectory_org)
# print(distances)
if any(distances > 3):
skip_idxs.append(idx)
for idx in skip_idxs:
tracks.pop(idx)
print(f"Filtered {len(skip_idxs)} tracks which contained leaps")
total = len(tracks)
bar = tqdm.tqdm(total=total)
destinations = {
'train': int(total * .91),
'val': int(total * .08),
'test': int(total * .01), # I don't really care about this
'train': int(total * .8),
'val': int(total * .12),
'test': int(total * .08),
}
max_track = reader.get(str(max([int(k) for k in reader._tracks.keys()])))
@@ -184,7 +153,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
dt3 = RollingAverage()
dt4 = RollingAverage()
sets: Dict[str, List[Track]] = {}
sets = {}
offset = 0
for data_class, nr in destinations.items():
# TODO)) think of a way to shuffle while keeping scenes
@@ -194,9 +163,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
print(f"Camera FPS: {camera.fps}, actual fps: {camera.fps/step_size} (or {(1/camera.fps)*step_size})")
names: Dict[str, Path] = {}
max_pos = 0
for data_class, nr_of_items in destinations.items():
env = Environment(node_type_list=['PEDESTRIAN'], standardization=standardization)
attention_radius = dict()
@@ -206,7 +172,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
scenes = []
split_id = f"{name}_{data_class}"
data_dict_path = dst_dir / (split_id + '.pkl')
names[data_class] = data_dict_path
# subpath = src_dir / data_class
@@ -214,9 +179,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# scene = None
scene_nodes = defaultdict(lambda: [])
variations = TrackIteration.iteration_variations(smooth_tracks, True, step_size, noise_tracks, offset_tracks)
print(f"Create {len(variations)} variations")
variations = TrackIteration.iteration_variations(smooth_tracks, False, step_size)
for i, track in enumerate(sets[data_class]):
bar.update()
@@ -244,20 +207,13 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
interpolated_track = track.get_with_interpolated_history()
b = time.time()
for variation_nr, iteration_settings in enumerate(variations):
track = interpolated_track
if iteration_settings.noisy:
track = noiser.apply_track(track)
if iteration_settings.offset:
offset = RandomOffset(amplitude=.1)
track = offset.apply_track(track)
if iteration_settings.smooth:
track = smoother.smooth_track(track)
track = smoother.smooth_track(interpolated_track)
# track = Smoother(smooth_window, False).smooth_track(track)
else:
track = interpolated_track # TODO)) Copy & move smooth outside iter loop
c = time.time()
if iteration_settings.step_size > 1:
@@ -268,7 +224,6 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# track.get_projected_history(H=None, camera=self.config.camera)
node = track.to_trajectron_node(camera, env)
max_pos = max(node.data.data[0][0], max_pos)
data_class = time.time()
@@ -330,8 +285,7 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
# print(scene.nodes[0].first_timestep)
print(f'Processed {len(scenes)} scene with {sum([len(s.nodes) for s in scenes])} nodes for data class {data_class}')
# print("MAXIMUM!!", max_pos)
print(f'Processed {len(scenes):.2f} scene for data class {data_class}')
env.scenes = scenes
@@ -341,30 +295,9 @@ def process_data(src_dir: Path, dst_dir: Path, name: str, smooth_tracks: bool, n
with open(data_dict_path, 'wb') as f:
dill.dump(env, f, protocol=dill.HIGHEST_PROTOCOL)
bar.close()
# print(f"Linear: {l}")
# print(f"Non-Linear: {nl}")
print(f"error: {skipped_for_error}, used: {created}")
print("Run with")
target_model_dir = (dst_dir / "../models/").resolve()
target_config = (dst_dir / "../trajectron.json").resolve()
# set eval_every very high, because we're not interested in theoretical evaluations, and we don't mind overfitting
print(f"""
uv run trajectron_train --eval_every 200 \\
--train_data_dict {names['train'].name} \\
--eval_data_dict {names['val'].name} \\
--offline_scene_graph no --preprocess_workers 8 \\
--log_dir {target_model_dir} \\
--log_tag _{name} \\
--train_epochs 100 \\
--conf {target_config} \\
--data_dir {dst_dir} \\
{"--map_encoding" if map_img_path else ""} \\
--no_edge_encoding
""")
return names
def main():
parser = argparse.ArgumentParser()
@@ -372,8 +305,6 @@ def main():
parser.add_argument("--dst-dir", "-d", type=Path, required=True, help="Destination directory to store parsed .pkl files (typically 'trajectron-data')")
parser.add_argument("--name", "-n", type=str, required=True, help="Identifier to prefix the output .pkl files with (result is NAME-train.pkl, NAME-test.pkl)")
parser.add_argument("--smooth-tracks", action='store_true', help=f"Enable smoother. Set to {smooth_window} frames")
parser.add_argument("--noise-tracks", type=int, default=0, help=f"Enable Noiser. provide number for how many noisy variations")
parser.add_argument("--offset-tracks", type=int, default=0, help=f"Enable Offset. provide number for how many random offset variations")
parser.add_argument("--cm-to-m", action='store_true', help=f"If homography is in cm, convert tracked points to meter for beter results")
parser.add_argument("--center-data", action='store_true', help=f"Normalise around center")
parser.add_argument("--bin-positions", action='store_true', help=f"Experiment to put round positions to a grid")
@@ -411,8 +342,7 @@ def main():
args.dst_dir,
args.name,
args.smooth_tracks,
args.noise_tracks,
args.offset_tracks,
args.cm_to_m,
args.center_data,
args.bin_positions,
args.camera,

View file

@@ -1,68 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# sources: renderable.proto
# plugin: python-betterproto
from dataclasses import dataclass
from typing import Dict, List
import betterproto
class CoordinateSpace(betterproto.Enum):
"""Enum for coordinate spaces"""
UNDEFINED = 0
CAMERA = 1
UNDISTORTED_CAMERA = 2
WORLD = 3
LASER = 4
RAW_LASER = 8
@dataclass
class RenderablePosition(betterproto.Message):
"""Message for RenderablePosition (Tuple[float, float])"""
x: float = betterproto.float_field(1)
y: float = betterproto.float_field(2)
@dataclass
class SrgbaColor(betterproto.Message):
"""Message for SrgbaColor"""
red: float = betterproto.float_field(1)
green: float = betterproto.float_field(2)
blue: float = betterproto.float_field(3)
alpha: float = betterproto.float_field(4)
@dataclass
class RenderablePoint(betterproto.Message):
"""Message for RenderablePoint"""
position: "RenderablePosition" = betterproto.message_field(1)
color: "SrgbaColor" = betterproto.message_field(2)
@dataclass
class RenderableLine(betterproto.Message):
"""Message for RenderableLine"""
points: List["RenderablePoint"] = betterproto.message_field(1)
@dataclass
class RenderableLines(betterproto.Message):
"""Message for RenderableLines"""
lines: List["RenderableLine"] = betterproto.message_field(1)
space: "CoordinateSpace" = betterproto.enum_field(2)
@dataclass
class RenderableLayers(betterproto.Message):
"""Message to represent RenderableLayers (Dict[int, RenderableLines])"""
layers: Dict[int, "RenderableLines"] = betterproto.map_field(
1, betterproto.TYPE_INT32, betterproto.TYPE_MESSAGE
)

View file

@@ -1,50 +0,0 @@
syntax = "proto3";
package renderable;
// Enum for coordinate spaces
enum CoordinateSpace {
UNDEFINED=0;
CAMERA = 1;
UNDISTORTED_CAMERA = 2;
WORLD = 3;
LASER = 4;
RAW_LASER = 8;
}
// Message for RenderablePosition (Tuple[float, float])
message RenderablePosition {
float x = 1;
float y = 2;
}
// Message for SrgbaColor
message SrgbaColor {
float red = 1;
float green = 2;
float blue = 3;
float alpha = 4;
}
// Message for RenderablePoint
message RenderablePoint {
RenderablePosition position = 1;
SrgbaColor color = 2;
}
// Message for RenderableLine
message RenderableLine {
repeated RenderablePoint points = 1;
}
// Message for RenderableLines
message RenderableLines {
repeated RenderableLine lines = 1;
CoordinateSpace space = 2;
}
// Message to represent RenderableLayers (Dict[int, RenderableLines])
message RenderableLayers {
map<int32, RenderableLines> layers = 1;
}
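For context, the generated betterproto dataclasses removed above round-trip these messages with bytes() and .parse(). A small sketch; the import path is an assumption, it depends on where the generated module lives in the package:

```python
# Sketch of round-tripping the renderable messages with betterproto.
# The `renderable` import path is an assumption.
from renderable import (CoordinateSpace, RenderableLine, RenderableLines,
                        RenderablePoint, RenderablePosition, SrgbaColor)

red = SrgbaColor(red=1.0, green=0.0, blue=0.0, alpha=1.0)
line = RenderableLine(points=[
    RenderablePoint(position=RenderablePosition(x=0.0, y=0.0), color=red),
    RenderablePoint(position=RenderablePosition(x=1.5, y=2.0), color=red),
])
msg = RenderableLines(lines=[line], space=CoordinateSpace.WORLD)

payload = bytes(msg)                          # serialize for the stage socket
restored = RenderableLines().parse(payload)   # decode on the receiving side
assert restored.space == CoordinateSpace.WORLD
```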

View file

@@ -1,41 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: renderable.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x10renderable.proto\x12\nrenderable\"*\n\x12RenderablePosition\x12\t\n\x01x\x18\x01 \x01(\x02\x12\t\n\x01y\x18\x02 \x01(\x02\"E\n\nSrgbaColor\x12\x0b\n\x03red\x18\x01 \x01(\x02\x12\r\n\x05green\x18\x02 \x01(\x02\x12\x0c\n\x04\x62lue\x18\x03 \x01(\x02\x12\r\n\x05\x61lpha\x18\x04 \x01(\x02\"j\n\x0fRenderablePoint\x12\x30\n\x08position\x18\x01 \x01(\x0b\x32\x1e.renderable.RenderablePosition\x12%\n\x05\x63olor\x18\x02 \x01(\x0b\x32\x16.renderable.SrgbaColor\"=\n\x0eRenderableLine\x12+\n\x06points\x18\x01 \x03(\x0b\x32\x1b.renderable.RenderablePoint\"h\n\x0fRenderableLines\x12)\n\x05lines\x18\x01 \x03(\x0b\x32\x1a.renderable.RenderableLine\x12*\n\x05space\x18\x02 \x01(\x0e\x32\x1b.renderable.CoordinateSpace\"\x98\x01\n\x10RenderableLayers\x12\x38\n\x06layers\x18\x01 \x03(\x0b\x32(.renderable.RenderableLayers.LayersEntry\x1aJ\n\x0bLayersEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12*\n\x05value\x18\x02 \x01(\x0b\x32\x1b.renderable.RenderableLines:\x02\x38\x01*Z\n\x0f\x43oordinateSpace\x12\r\n\tUNDEFINED\x10\x00\x12\n\n\x06\x43\x41MERA\x10\x01\x12\x16\n\x12UNDISTORTED_CAMERA\x10\x02\x12\t\n\x05WORLD\x10\x03\x12\t\n\x05LASER\x10\x04\x62\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'renderable_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_RENDERABLELAYERS_LAYERSENTRY._options = None
_RENDERABLELAYERS_LAYERSENTRY._serialized_options = b'8\001'
_COORDINATESPACE._serialized_start=579
_COORDINATESPACE._serialized_end=669
_RENDERABLEPOSITION._serialized_start=32
_RENDERABLEPOSITION._serialized_end=74
_SRGBACOLOR._serialized_start=76
_SRGBACOLOR._serialized_end=145
_RENDERABLEPOINT._serialized_start=147
_RENDERABLEPOINT._serialized_end=253
_RENDERABLELINE._serialized_start=255
_RENDERABLELINE._serialized_end=316
_RENDERABLELINES._serialized_start=318
_RENDERABLELINES._serialized_end=422
_RENDERABLELAYERS._serialized_start=425
_RENDERABLELAYERS._serialized_end=577
_RENDERABLELAYERS_LAYERSENTRY._serialized_start=503
_RENDERABLELAYERS_LAYERSENTRY._serialized_end=577
# @@protoc_insertion_point(module_scope)

View file

@@ -1,175 +0,0 @@
from argparse import ArgumentParser
import json
import math
from pathlib import Path
from typing import Any, Dict
import zmq
from trap.node import Node
import dearpygui.dearpygui as dpg
class Settings(Node):
"""
Quick-and-dirty GUI to change some settings ad hoc.
No storage of values, no defaults. No detection of lost nodes, nor sending of config when they start.
"""
def setup(self):
self.config_sock.close() # setup by default for all nodes, but we want to publish
self.config_sock = self.pub(self.config.zmq_config_addr)
self.config_init_sock.close() # setup by default for all nodes, but we want to publish
self.config_init_sock = self.pull(self.config.zmq_config_init_addr)
self.settings_fields = {}
self.settings: Dict[str, Any] = {}
self.load()
dpg.create_context()
dpg.create_viewport(title='Trap settings', width=600, height=1200)
dpg.setup_dearpygui()
with dpg.window(label="General", pos=(0, 0)):
dpg.add_text(f"Settings from {self.config.settings_file}")
dpg.add_button(label="Save", callback=self.save)
with dpg.window(label="Renderer", pos=(0, 600)):
for i in range(8) :
self.register_setting(f'stagerenderer.layer.{i}', dpg.add_checkbox(label=f"layer {i}", default_value=self.get_setting(f'stagerenderer.layer.{i}', True), callback=self.on_change))
self.register_setting(f'stagerenderer.scale', dpg.add_slider_float(label="scale", default_value=self.get_setting(f'stagerenderer.scale', 1), max_value=3, callback=self.on_change))
self.register_setting(f'stagerenderer.dx', dpg.add_slider_int(label="dx", default_value=self.get_setting(f'stagerenderer.dx', 0), min_value=-300, max_value=300, callback=self.on_change))
self.register_setting(f'stagerenderer.dy', dpg.add_slider_int(label="dy", default_value=self.get_setting(f'stagerenderer.dy', 0), min_value=-300, max_value=300, callback=self.on_change))
self.register_setting(f'stagerenderer.fade', dpg.add_slider_float(label="fade factor", default_value=self.get_setting(f'stagerenderer.fade', 0.27), max_value=1, callback=self.on_change))
with dpg.window(label="Stage", pos=(150, 0)):
self.register_setting(f'stage.fps', dpg.add_slider_int(label="FPS cap", default_value=self.get_setting(f'stage.fps', 30), callback=self.on_change))
self.register_setting(f'stage.prediction_interval', dpg.add_slider_int(label="prediction interval", default_value=self.get_setting('stage.prediction_interval', 18), callback=self.on_change))
self.register_setting(f'stage.loitering_animation', dpg.add_checkbox(label="loitering_animation", default_value=self.get_setting('stage.loitering_animation', True), callback=self.on_change))
with dpg.window(label="Lidar", pos=(0, 100), autosize=True):
self.register_setting(f'lidar.crop_map_boundaries', dpg.add_checkbox(label="crop_map_boundaries", default_value=self.get_setting(f'lidar.crop_map_boundaries', True), callback=self.on_change))
self.register_setting(f'lidar.viz_cropping', dpg.add_checkbox(label="viz_cropping", default_value=self.get_setting(f'lidar.viz_cropping', True), callback=self.on_change))
# self.register_setting(f'lidar.voxel_downsample', dpg.add_checkbox(label="voxel_downsample", default_value=self.get_setting(f'lidar.voxel_downsample', True), callback=self.on_change))
self.register_setting(f'lidar.tracking_enabled', dpg.add_checkbox(label="tracking_enabled", default_value=self.get_setting(f'lidar.tracking_enabled', True), callback=self.on_change))
self.register_setting(f'lidar.kalman_factor', dpg.add_slider_float(label="kalman_factor", default_value=self.get_setting(f'lidar.kalman_factor', 1.3), max_value=3, callback=self.on_change))
dpg.add_separator(label="Clustering")
cluster_methods = ("birch", "optics", "dbscan")
self.register_setting('lidar.cluster.method', dpg.add_combo(label="Method", items=cluster_methods, default_value=self.get_setting('lidar.cluster.method', default='dbscan'), callback=self.on_change))
self.register_setting(f'lidar.eps', dpg.add_slider_float(label="DBSCAN epsilon", default_value=self.get_setting(f'lidar.eps', 0.3), max_value=1, callback=self.on_change))
self.register_setting(f'lidar.min_samples', dpg.add_slider_int(label="DBSCAN min_samples", default_value=self.get_setting(f'lidar.min_samples', 8), max_value=30, callback=self.on_change))
dpg.add_text("When using BIRCH, the resulting subclusters can be postprocessed by DBSCAN:")
self.register_setting('lidar.birch_process_subclusters', dpg.add_checkbox(label="Process subclusters", default_value=self.get_setting('lidar.birch_process_subclusters', True), callback=self.on_change))
self.register_setting('lidar.birch_threshold', dpg.add_slider_float(label="Threshold", default_value=self.get_setting('lidar.birch_threshold', 1), max_value=2.5, callback=self.on_change))
self.register_setting('lidar.birch_branching_factor', dpg.add_slider_int(label="Branching factor", default_value=self.get_setting('lidar.birch_branching_factor', 50), max_value=100, callback=self.on_change))
dpg.add_separator(label="Cluster filter")
self.register_setting(f'lidar.min_box_area', dpg.add_slider_float(label="min_box_area", default_value=self.get_setting(f'lidar.min_box_area', .1), min_value=0, max_value=1, callback=self.on_change))
self.register_setting(f'lidar.max_box_area', dpg.add_slider_float(label="max_box_area", default_value=self.get_setting(f'lidar.max_box_area', 5), min_value=.5, max_value=10, callback=self.on_change))
for i, lidar in enumerate(["192.168.1.16", "192.168.0.10"]):
name = lidar.replace(".", "_")
with dpg.window(label=f"Lidar {lidar}", pos=(i * 300, 450),autosize=True):
# dpg.add_text("test")
# dpg.add_input_text(label="string", default_value="Quick brown fox")
self.register_setting(f'lidar.{name}.enabled', dpg.add_checkbox(label="enabled", default_value=self.get_setting(f'lidar.{name}.enabled', True), callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_x', dpg.add_slider_float(label="rot_x", default_value=self.get_setting(f'lidar.{name}.rot_x', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_y', dpg.add_slider_float(label="rot_y", default_value=self.get_setting(f'lidar.{name}.rot_y', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.rot_z', dpg.add_slider_float(label="rot_z", default_value=self.get_setting(f'lidar.{name}.rot_z', 0), max_value=math.pi * 2, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_x', dpg.add_slider_float(label="trans_x", default_value=self.get_setting(f'lidar.{name}.trans_x', 0), min_value=-15, max_value=15, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_y', dpg.add_slider_float(label="trans_y", default_value=self.get_setting(f'lidar.{name}.trans_y', 0), min_value=-15, max_value=15, callback=self.on_change))
self.register_setting(f'lidar.{name}.trans_z', dpg.add_slider_float(label="trans_z", default_value=self.get_setting(f'lidar.{name}.trans_z', 0), min_value=-15, max_value=15, callback=self.on_change))
self.send_for_prefix("") # spread the defaults
dpg.show_viewport()
def stop(self):
dpg.destroy_context()
def check_config(self):
# override node function to disable it
pass
def refresh_settings(self):
# override node function to disable it
pass
def get_setting(self, name: str, default: Any):
"""
Automatically configure the value with the default when requesting it
"""
r = super().get_setting(name, default)
self.settings[name] = r
return r
def register_setting(self, name: str, field: int):
self.settings_fields[field] = name
def on_change(self, sender, value, user_data = None):
# print(sender, app_data, user_data)
setting = self.settings_fields[sender]
print(setting, value)
self.settings[setting] = value
self.config_sock.send_json({setting: value})
def send_for_prefix(self, prefix: str):
self.config_sock.send_json(self.get_by_prefix(prefix))
def save(self):
with self.config.settings_file.open('w') as fp:
self.logger.info(f"Save to {self.config.settings_file}")
json.dump(self.settings, fp)
def get_by_prefix(self, prefix: str) -> Dict[str, Any]:
return {key: value for key, value in self.settings.items() if key.startswith(prefix)}
def load(self) -> Dict[str, Any]:
if not self.config.settings_file.exists():
self.logger.info(f"No config at {self.config.settings_file}")
return {}
self.logger.info(f"Loading from {self.config.settings_file}")
with self.config.settings_file.open('r') as fp:
self.settings = json.load(fp)
def run(self):
# below replaces, start_dearpygui()
while self.run_loop() and dpg.is_dearpygui_running():
# 1) receive init requests
try:
init_msg = self.config_init_sock.recv_string(zmq.NOBLOCK)
self.logger.info(f"Send init for {init_msg}")
print('init', init_msg)
self.send_for_prefix(init_msg)
except zmq.ZMQError as e:
# no msgs
pass
dpg.render_dearpygui_frame()
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--settings-file',
help='Where to store settings',
type=Path,
default=Path("./settings.json"))
return argparser
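The init handshake above is just prefix matching over flat, dot-separated keys: a node pushes its node_id on the init socket, and Settings publishes back every setting whose key starts with that id. A self-contained sketch of that lookup, with sample keys taken from the defaults above:

```python
# Sketch of the prefix lookup behind the config-init handshake above.
settings = {
    'stage.fps': 30,
    'stage.prediction_interval': 18,
    'lidar.eps': 0.3,
}

def get_by_prefix(prefix: str) -> dict:
    # a node that identifies itself as "stage" receives all stage.* keys
    return {key: value for key, value in settings.items()
            if key.startswith(prefix)}

print(get_by_prefix('stage'))
# {'stage.fps': 30, 'stage.prediction_interval': 18}
```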

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@@ -1,276 +0,0 @@
from argparse import ArgumentParser
from collections import deque
import math
import re
from typing import List
import numpy as np
import pyglet
from torch import mul
import zmq
from trap.lines import RenderableLayers, message_to_layers
from trap.node import Node
BG_COLOR = (0,0,255)
class StageRenderer(Node):
def setup(self):
# self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
# self.tracker_sock = self.sub(self.config.zmq_trajectory_addr)
# self.detector_sock = self.sub(self.config.zmq_detection_addr)
# self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.stage_sock = self.sub(self.config.zmq_stage_addr)
self.log_sock = self.pull(self.config.zmq_log_addr)
# setup pyglet:
display = pyglet.display.get_display()
screens = display.get_screens()
# use the configured monitor, fall back to whatever is available
self.screen = sorted(screens, reverse=True, key=lambda s: s.get_monitor_name() == self.config.monitor)[0]
if self.screen.get_monitor_name() != self.config.monitor:
self.logger.warning(f"Not displaying on configured monitor. {self.screen.get_monitor_name()} instead of {self.config.monitor}")
# print(self.screen.get_modes())
config = pyglet.gl.Config(sample_buffers=1, samples=4)
# when the screen is in portrait, window mode here still expects (larger x smaller) dimensions.
# self.window.get_size() will be reported properly
wh = sorted((self.screen.width, self.screen.height), reverse=self.config.fullscreen)
self.window = pyglet.window.Window(width=wh[0], height=wh[1], config=config, fullscreen=self.config.fullscreen, screen=self.screen)
self.window.set_exclusive_keyboard(True)
self.window.set_exclusive_keyboard(False)
self.window.set_exclusive_mouse(True)
self.window.set_exclusive_mouse(False)
# self.window.set_size(1080, 1920)
window_size = self.window.get_size()
padding = 40
print(window_size)
self.window.set_handler('on_draw', self.on_draw)
# self.window.set_handler('on_close', self.on_close)
# pyglet.gl.glClearColor(81./255, 20/255, 46./255, 0)
pyglet.gl.glClearColor(0/255, 0/255, 255/255, 0)
self.fps_display = pyglet.window.FPSDisplay(window=self.window, color=(255,255,255,255))
self.fps_display.label.x = self.window.width - 50
self.fps_display.label.y = self.window.height - 17
self.fps_display.label.bold = False
self.fps_display.label.font_size = 10
self.current_layers: RenderableLayers = {}
self.lines: List[pyglet.shapes.Line] = []
self.lines_batch = pyglet.graphics.Batch()
self.text = pyglet.text.document.FormattedDocument("")
self.text_batch = pyglet.graphics.Batch()
self.text_layout = pyglet.text.layout.TextLayout(
self.text, padding, (self.window.get_size()[0]-padding*2) // 2 - 100,
width=self.window.get_size()[1] - 2*padding,
height=(self.window.get_size()[0] - padding) // 2,
multiline=True, wrap_lines=False, batch=self.text_batch)
max_len = 31
self.log_msgs = deque([], maxlen=max_len)
self.log_msgs.extend(["-"] * max_len)
translate = (10,-400)
# scale = 5
smallest_dimension = min(self.window.get_size())
max_x = 16.3
max_y = 14.3
scale = min(smallest_dimension / max_x, smallest_dimension/max_y)
self.logger.info(f"Use {scale=}")
self.transform = np.array([
[scale, 0,translate[0]],
[0,-scale,window_size[1]],
[0,0,1]
])
self.bg_image = pyglet.image.load(self.config.floorplan)
scale = (window_size[0] - padding*2) / (self.bg_image.width)
print('image_scale', scale, self.bg_image.width, self.bg_image.height)
# self.bg_image.height = int(self.bg_image.height / 3)
# self.bg_image.width = int(self.bg_image.width / 3)
img_y = window_size[1]-int(self.bg_image.height*scale)-padding*2
self.bg_sprite = pyglet.sprite.Sprite(img=self.bg_image, x=padding, y=img_y)
self.bg_sprite.scale = scale
clear_area = img_y
self.clear_transparent = pyglet.shapes.Rectangle(0, window_size[1]-clear_area, window_size[0], clear_area, color=(*BG_COLOR,255//70))
self.clear_fully= pyglet.shapes.Rectangle(0, 0, window_size[0], window_size[1]-clear_area, color=(*BG_COLOR,255))
self.window.clear()
def check_running(self, dt):
if not self.run_loop():
self.window.close()
self.event_loop.exit()
def run(self):
self.event_loop = pyglet.app.EventLoop()
pyglet.clock.schedule_interval(self.check_running, 0.1)
# pyglet.clock.schedule(self.receive)
self.event_loop.run()
def receive(self, dt):
try:
msg = self.stage_sock.recv(zmq.NOBLOCK)
self.current_layers = message_to_layers(msg)
self.update_lines()
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
pass
while True:
try:
log_msg = self.log_sock.recv_string(zmq.NOBLOCK)
self.log_msgs.append(log_msg)
except zmq.ZMQError as e:
# idx = frame.index if frame else "NONE"
# logger.debug(f"reuse video frame {idx}")
break
self.update_msgs()
def update_lines(self):
"""
Render the renderable lines of selected layers
"""
additional_scale = self.get_setting('stagerenderer.scale', 1)
dx = self.get_setting('stagerenderer.dx', 0)
dy = self.get_setting('stagerenderer.dy', 0)
transform = self.transform.copy()
transform[0][0] *= additional_scale
transform[1][1] *= additional_scale
transform[0][2] += dx
transform[1][2] += dy
i = -1
for nr, lines in self.current_layers.items():
if not self.get_setting(f'stagerenderer.layer.{nr}', True):
continue
for line in lines.lines:
for p1, p2 in zip(line.points, line.points[1:]):
i += 1
pp1 = np.array([p1.position[0], p1.position[1], 1])
pp2 = np.array([p2.position[0], p2.position[1], 1])
pos1 = (transform@pp1)[:2].astype(int)
pos2 = (transform@pp2)[:2].astype(int)
color = (p2.color.as_array()*255).astype(int)
if i < len(self.lines):
shape = self.lines[i]
shape.x = pos1[0]
shape.y = pos1[1]
shape.x2 = pos2[0]
shape.y2 = pos2[1]
shape.color = color
else:
self.lines.append(pyglet.shapes.Line(pos1[0], pos1[1],
pos2[0],
pos2[1],
3,
color,
batch=self.lines_batch))
too_many = len(self.lines) - 1 - i
if too_many > 0:
for j in reversed(range(i, i+too_many)):
self.lines[i].delete()
del self.lines[i]
def update_msgs(self):
text = "\n".join(self.log_msgs)
self.text.text = text
self.text.set_style(0, len(self.text.text), dict(
font_name='Arial', # change to a font installed on your system
font_size=18,
color=(255, 255, 255, 255),
))
colorsmap = {
'ANOMALOUS': (255, 0, 0, 255),
'LOITERING': (255, 255, 0, 255),
'DETECTED': (255, 0, 255, 255),
'SUBSTANTIAL': (255, 0, 255, 255),
'LOST': (0, 0, 0, 255),
}
matchtext = "".join(self.log_msgs) # find no newlines
for state,color in colorsmap.items():
for match in re.finditer(state, matchtext):
self.text.set_style(match.start(), match.end(), dict(
color=color
))
def on_draw(self):
self.receive(.1)
# self.window.clear()
self.clear_transparent.color = (*BG_COLOR, int(3))
self.clear_transparent.draw()
self.clear_fully.draw()
self.fps_display.draw()
self.bg_sprite.draw()
self.lines_batch.draw()
self.text_batch.draw()
@classmethod
def arg_parser(cls):
render_parser = ArgumentParser()
render_parser.add_argument('--zmq-stage-addr',
help='Manually specify communication addr for the stage messages (the rendered lines)',
type=str,
default="tcp://0.0.0.0:99174")
render_parser.add_argument('--zmq-log-addr',
help='Manually specify communication addr for the log messages',
type=str,
default="tcp://0.0.0.0:99188")
render_parser.add_argument("--fullscreen",
help="Set Window full screen",
action='store_true')
render_parser.add_argument('--floorplan',
help='specify a map (png-file) over which the lines are overlaid',
type=str,
default="SETTINGS/2025-11-dortmund/space/floorplan.png")
render_parser.add_argument('--monitor',
help='Specify a screen on which to output (e.g. HDMI-0)',
type=str,
default="HDMI-0")
return render_parser
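The transform built in StageRenderer.setup() is a plain 2D affine in homogeneous coordinates: a uniform scale fitted to the smallest window dimension, a flipped y-axis (pyglet's origin is bottom-left), and a translation. A sketch of applying it to a world point; scale, translate and window height are illustrative values:

```python
# Sketch of the world->window transform from StageRenderer.setup().
import numpy as np

scale = 60.0                  # px per meter, fitted to the window
translate = (10, -400)
window_height = 1920

transform = np.array([
    [scale, 0, translate[0]],
    [0, -scale, window_height],  # negative scale flips y for pyglet
    [0, 0, 1],
])

world = np.array([4.0, 2.5, 1.0])          # homogeneous point in meters
px = (transform @ world)[:2].astype(int)   # pixel position for a Line shape
print(px)  # [ 250 1770]
```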

View file

@@ -16,7 +16,7 @@ from trap.preview_renderer import DrawnTrack
import trap.tracker
from trap.config import parser
from trap.frame_emitter import Camera, Detection, DetectionState, video_src_from_config, Frame
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _ultralytics_track, Track, TrainingDataWriter, Tracker, read_tracks_json
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _yolov8_track, Track, TrainingDataWriter, Tracker, read_tracks_json
from collections import defaultdict
import logging
@@ -330,7 +330,6 @@ def track_predictions_to_lines(track: Track, camera:Camera, anim_position=.8):
return
current_point = track.get_projected_history(camera=camera)[-1]
slide_t = min(1, max(0, inv_lerp(0, 0.8, anim_position))) # slide_position
lines = []
@@ -372,7 +371,6 @@ def draw_track_predictions(img: cv2.Mat, track: Track, color_index: int, camera:
"""
lines = track_predictions_to_lines(track, camera, anim_position)
if not lines:
return
@@ -463,12 +461,9 @@ def draw_track_projected(img: cv2.Mat, track: Track, color_index: int, camera: C
for j in range(len(history)-1):
# a = history[j]
b = history[j+1]
detection = track.history[j+1]
color = point_color if detection.state == DetectionState.Confirmed else (100,100,100)
# cv2.line(img, to_point(a), to_point(b), point_color, 1)
cv2.circle(img, to_point(b), 3, color, 2)
cv2.circle(img, to_point(b), 3, point_color, 2)
def draw_track(img: cv2.Mat, track: Track, color_index: int):

View file

@@ -1,186 +0,0 @@
from dataclasses import dataclass
import logging
from pathlib import Path
import pickle
from threading import Lock
import time
from typing import Dict, Iterable, List, Optional, Set
import numpy as np
from trap.base import Camera, Track
from trap.lines import Coordinate
from trap.tracker import FinalDisplacementFilter, Smoother, TrackReader
from scipy.spatial import KDTree
logger = logging.getLogger('history')
@dataclass
class TrackHistoryState():
"""
The lock of TrackHistory is not pickle-able, so the picklable parts are kept in a separate state object
"""
tracks: List[Track]
track_histories: Dict[str, np.ndarray]
indexed_track_ids: List[str]
tree: KDTree
class TrackHistory():
def __init__(self, path: Path, camera: Camera, cache_path: Optional[Path]):
self.path = path
self.camera = camera
self.cache_path = cache_path
self.lock = Lock()
self.load_from_cache() or self.reload()
def load_from_cache(self):
if self.cache_path is None:
return False
if self.cache_path.exists():
logger.debug("Load history state from cache")
with self.cache_path.open('rb') as fp:
try:
state = pickle.load(fp)
if not isinstance(state, TrackHistoryState):
raise RuntimeError("Pickled data is not a trackhistorystate")
self.state = state
return True
except Exception as e:
logger.warning(f"Cannot read cache {self.cache_path}: {e}")
return False
def build_tree(self):
reader = TrackReader(self.path, self.camera.fps)
logger.debug(f'loaded {len(reader)} tracks')
track_filter = FinalDisplacementFilter(2)
tracks = track_filter.apply(reader, self.camera)
logger.debug(f'after filtering left with {len(tracks)} tracks')
tracks: List[Track] = [t.get_with_interpolated_history() for t in tracks]
logger.debug(f'interpolated {len(tracks)} tracks')
# use convolution here, because precision does not matter and it is _way_ faster
smoother = Smoother(convolution=True)
tracks = [smoother.smooth_track(t) for t in tracks]
logger.debug(f'smoothed')
tracks = {track.track_id: track for track in tracks}
track_histories = {t.track_id: t.get_projected_history(camera=self.camera) for t in tracks.values()}
downsampled_histories = {t_id: self.downsample_history(h) for t_id, h in track_histories.items()}
logger.debug(f'projected to world space')
# Sample data (coordinates and metadata)
# coordinates = [(1, 2, 'Point A'), (3, 4, 'Point B'), (5, 6, 'Point C'), (7, 8, 'Point D')]
all_points = []
indexed_track_ids: List[str] = []
for track_id, history in downsampled_histories.items():
all_points.extend([
[point[0], point[1]] for point in history
])
indexed_track_ids.extend([track_id] * len(history))
# self.flat_idx = self.flat_histories[:,2]
# Create the KD-Tree
tree = KDTree(all_points)
logger.debug('built tree')
return TrackHistoryState(
tracks, track_histories, indexed_track_ids, tree
)
def reload(self):
state = self.build_tree()
# acquire the lock as briefly as possible
with self.lock:
self.state = state
if self.cache_path:
with self.cache_path.open('wb') as fp:
logger.debug("Writing history to cache")
pickle.dump(self.state, fp)
def get_nearest_tracks(self, point: Coordinate, k:int, max_r: Optional[float] = np.inf):
with self.lock:
distances, indexes = self.state.tree.query(point, k, distance_upper_bound=max_r)
# filter out indexes for which there was no match within max_r (distance is inf)
indexes = indexes[distances != np.inf]
track_ids: Set[str] = {self.state.indexed_track_ids[idx] for idx in indexes}
# nearby_indexes = self.tree.query_ball_point(point, r)
# track_ids = set([self.flat_idx[idx] for idx in nearby_indexes])
return track_ids
def ids_as_trajectory(self, track_ids: Iterable[str]):
for track_id in track_ids:
yield self.state.tracks[track_id].get_projected_history(camera=self.camera)
@classmethod
def downsample_history(cls, history, cell_size=.3):
if not len(history):
return []
positions = np.unique(np.round(history / cell_size), axis=0) * cell_size
return positions
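downsample_history snaps points to a cell_size grid and deduplicates, so a dense trajectory contributes at most one point per cell to the KD-tree. A small numeric sketch of the same expression:

```python
# Numeric sketch of the grid downsampling above: round to 0.3 m cells,
# then keep one point per cell.
import numpy as np

history = np.array([[0.00, 0.00],
                    [0.05, 0.10],   # rounds into the same cell as the first
                    [0.40, 0.35]])
cell_size = 0.3
positions = np.unique(np.round(history / cell_size), axis=0) * cell_size
print(positions)  # [[0.  0. ] [0.3 0.3]] -- two cells survive from three points
```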
if __name__ == "__main__":
path = Path("EXPERIMENTS/raw/hof3/")
logging.basicConfig(level=logging.DEBUG)
calibration_path = Path("../DATASETS/hof3/calibration.json")
homography_path = Path("../DATASETS/hof3/homography.json")
camera = Camera.from_paths(calibration_path, homography_path, 12)
# device = device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
s = time.time()
history = TrackHistory(path, camera, Path("/tmp/historystate_hof3.pcl"))
dt = time.time() - s
print(f'loaded {len(history.state.tracks)} tracks in {dt}s')
track = list(history.state.tracks.values())[25]
trajectory_crop = TrackHistory.downsample_history(history.state.track_histories[track.track_id])
trajectory_org = track.get_projected_history(camera=camera)
target_point = trajectory_org[len(trajectory_org)//2+90]
import matplotlib.pyplot as plt # Visualization
track_set = history.get_nearest_tracks(target_point, 10, max_r=np.inf)
plt.gca().set_aspect('equal')
plt.scatter(trajectory_crop[:,0], trajectory_crop[:,1], c='orange')
plt.plot(trajectory_org[:,0], trajectory_org[:,1], c='blue', alpha=1)
plt.scatter(target_point[0], target_point[1], c='red', alpha=1)
for track_id in track_set:
closeby = history.state.tracks[track_id].get_projected_history(camera=camera)
plt.plot(closeby[:,0], closeby[:,1], c='green', alpha=.1)
plt.show()

View file

@@ -1,74 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations
from argparse import ArgumentParser
from pathlib import Path
import zmq
from trap.base import Track
from trap.frame_emitter import Frame
from trap.node import Node
from trap.tracker import TrainingDataWriter, TrainingTrackWriter
class TrackWriter(Node):
def setup(self):
self.track_sock = self.sub(self.config.zmq_lost_addr)
self.log_sock = self.push(self.config.zmq_log_addr)
def run(self):
with TrainingTrackWriter(self.config.output_dir) as writer:
try:
while self.run_loop():
zmq_ev = self.track_sock.poll(timeout=1000)
if not zmq_ev:
# when no data comes in, loop so that is_running is checked
continue
try:
track: Track = self.track_sock.recv_pyobj()
if len(track.history) < 20:
self.logger.debug(f"ignore short track {len(track.history)}")
continue
writer.add(track)
self.logger.info(f"Added track {track.track_id}")
try:
self.log_sock.send_string(f"Added track {track.track_id} to dataset, {len(track.history)} datapoints", zmq.NOBLOCK)
except Exception as e:
self.logger.warning("Not sent the message, broken socket?")
except zmq.ZMQError as e:
pass
except KeyboardInterrupt as e:
print('stopping on interrupt')
self.logger.info('Stopping')
@classmethod
def arg_parser(cls):
argparser = ArgumentParser()
argparser.add_argument('--zmq-log-addr',
help='Manually specify communication addr for the log messages',
type=str,
default="tcp://0.0.0.0:99188")
argparser.add_argument('--zmq-lost-addr',
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_lost")
argparser.add_argument("--output-dir",
help="Directory to save the video in",
required=True,
default=Path("EXPERIMENTS/raw/hof-lidar"),
type=Path)
return argparser

View file

@@ -1,4 +1,3 @@
from abc import ABC, abstractmethod
import argparse
import csv
import json
@@ -29,14 +28,12 @@ from torchvision.models.detection import (FasterRCNN_ResNet50_FPN_V2_Weights,
keypointrcnn_resnet50_fpn,
maskrcnn_resnet50_fpn_v2)
from tsmoothie.smoother import ConvolutionSmoother, KalmanSmoother
from ultralytics import YOLO, RTDETR
from ultralytics.engine.model import Model as UltralyticsModel
from ultralytics.engine.results import Results as UltralyticsResult
from ultralytics import YOLO
from ultralytics.engine.results import Results as YOLOResult
from trap import timer
from trap.frame_emitter import (Camera, DataclassJSONEncoder, Detection,
DetectionState, Frame, Track)
from trap.gemma import ImgMovementFilter
from trap.node import Node
# Detection = [int, int, int, int, float, int]
@@ -54,32 +51,30 @@ DETECTOR_RETINANET = 'retinanet'
DETECTOR_MASKRCNN = 'maskrcnn'
DETECTOR_FASTERRCNN = 'fasterrcnn'
DETECTOR_YOLOv8 = 'ultralytics'
DETECTOR_RTDETR = 'rtdetr'
TRACKER_DEEPSORT = 'deepsort'
TRACKER_BYTETRACK = 'bytetrack'
DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8, DETECTOR_RTDETR]
DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8]
TRACKERS =[TRACKER_DEEPSORT, TRACKER_BYTETRACK]
TRACKER_CONFIDENCE_MINIMUM = .001
TRACKER_BYTETRACK_MINIMUM = .001 # bytetrack can track items with a lower threshold
TRACKER_CONFIDENCE_MINIMUM = .2
TRACKER_BYTETRACK_MINIMUM = .1 # bytetrack can track items with a lower threshold
NON_MAXIMUM_SUPRESSION = 1
RCNN_SCALE = .4 # seems to have no impact on detections in the corners
def _ultralytics_track(img: cv2.Mat, frame_idx: int, model: UltralyticsModel, **kwargs) -> List[Detection]:
def _yolov8_track(frame: Frame, model: YOLO, **kwargs) -> List[Detection]:
results: List[UltralyticsResult] = list(model.track(img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.001, **kwargs))
results: List[YOLOResult] = list(model.track(frame.img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.00001, **kwargs))
if results[0].boxes is None or results[0].boxes.id is None:
# work around https://github.com/ultralytics/ultralytics/issues/5968
return []
boxes = results[0].boxes.xywh.cpu()
confidence = results[0].boxes.conf.cpu().tolist()
track_ids = results[0].boxes.id.int().cpu().tolist()
classes = results[0].boxes.cls.int().cpu().tolist()
return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], conf, DetectionState.Confirmed, frame_idx, class_id) for bbox, track_id, class_id, conf in zip(boxes, track_ids, classes, confidence)]
return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame.index, class_id) for bbox, track_id, class_id in zip(boxes, track_ids, classes)]
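Both variants of the return above do the same coordinate conversion: ultralytics reports boxes as xywh with a center x,y, while Detection stores the top-left corner. In isolation:

```python
# The center-xywh -> top-left conversion used in the Detection constructor.
def center_to_topleft(cx: float, cy: float, w: float, h: float):
    return cx - 0.5 * w, cy - 0.5 * h, w, h

print(center_to_topleft(100, 80, 40, 60))  # (80.0, 50.0, 40, 60)
```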
class Multifile():
def __init__(self, srcs: List[Path]):
@@ -119,7 +114,6 @@ class FinalDisplacementFilter(TrackFilter):
def filter(self, track: Track, camera: Camera):
history = track.get_projected_history(H=None, camera=camera)
displacement = np.linalg.norm(history[0]-history[-1])
return displacement > self.min_displacement
@@ -127,37 +121,14 @@ class TrackReader:
def __init__(self, path: Path, fps: int, include_blacklisted = False, exclude_whitelisted = False):
self.blacklist_file = path / "blacklist.jsonl"
self.whitelist_file = path / "whitelist.jsonl" # for skipping
# self.tracks_file = path / "tracks.pkl"
self.tracks_files = path.glob('tracks*.pkl')
self.tracks_file = path / "tracks.pkl"
# with self.tracks_file.open('r') as fp:
# tracks_dict: dict = json.load(fp)
tracks: Dict[str, Track] = {}
for tracks_file in self.tracks_files:
logger.info(f"Read {tracks_file}")
with tracks_file.open('rb') as fp:
while True:
# multiple tracks can be pickled separately
try:
trackset: Dict[str, Track] = pickle.load(fp)
for track_id, track in trackset.items():
if len(tracks) < 1:
max_item = 0
else:
max_item = max([int(t) for t in tracks.keys()])
with self.tracks_file.open('rb') as fp:
tracks: dict = pickle.load(fp)
if int(track.track_id) < max_item:
track_id = str(max_item+1)
else:
track_id = track.track_id
track.track_id = track_id
tracks[track.track_id] = track
except EOFError:
break
if self.blacklist_file.exists():
with jsonlines.open(self.blacklist_file, 'r') as reader:
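The multi-file loading strategy above is easiest to see in isolation: each `tracks*.pkl` may contain several consecutively pickled tracksets, read until `EOFError`, with colliding track ids bumped past the maximum seen so far. A condensed sketch under those assumptions (plain dict values stand in for `Track`):

```python
import pickle
from pathlib import Path
from typing import Dict

def read_tracksets(path: Path) -> Dict[str, object]:
    tracks: Dict[str, object] = {}
    for tracks_file in sorted(path.glob('tracks*.pkl')):
        with tracks_file.open('rb') as fp:
            while True:  # a file can hold multiple pickled tracksets
                try:
                    trackset = pickle.load(fp)
                except EOFError:
                    break
                for track_id, track in trackset.items():
                    max_item = max((int(t) for t in tracks), default=0)
                    if int(track_id) < max_item:
                        track_id = str(max_item + 1)  # avoid id collisions
                    tracks[track_id] = track
    return tracks
```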
@ -183,7 +154,7 @@ class TrackReader:
def __len__(self):
return len(self._tracks)
def get(self, track_id) -> Track:
def get(self, track_id):
return self._tracks[track_id]
# detection_values = self._tracks[track_id]
# history = []
@ -278,50 +249,8 @@ class TrainingDataWriter:
self.training_fp.close()
rewrite_raw_track_files(self.path)
class TrainingTrackWriter:
"""
Supersedes TrainingDataWriter by writing full tracks."""
def __init__(self, training_path: Optional[Path]):
if training_path is None:
self.path = None
return
if not isinstance(training_path, Path):
raise ValueError("save-for-training should be a path")
if not training_path.exists():
logger.info(f"Making path for training data: {training_path}")
training_path.mkdir(parents=True, exist_ok=False)
else:
logger.warning(f"Path for training-data exists: {training_path}. Continuing assuming that's ok.")
self.path = training_path
def __enter__(self):
if self.path:
d = datetime.now().isoformat(timespec="minutes")
self.training_fp = open(self.path / f'tracks-{d}.pkl', 'wb')
logger.debug(f"Writing tracker data to {self.training_fp.name}")
# following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
# self.csv = csv.DictWriter(self.training_fp, fieldnames=FIELDNAMES, delimiter='\t', quoting=csv.QUOTE_NONE)
self.count = 0
return self
def add(self, track: Track):
self.count += 1
pickle.dump(track, self.training_fp)
def __exit__(self, exc_type, exc_value, exc_tb):
# ... ignore exception (type, value, traceback)
if not self.path:
return
self.training_fp.close()
# rewrite_raw_track_files(self.path)
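A hypothetical usage of the writer above, assuming an iterable of finished `Track` objects named `finished_tracks`:

```python
from pathlib import Path

with TrainingTrackWriter(Path("EXPERIMENTS/raw/NAME")) as writer:
    for track in finished_tracks:  # finished_tracks: hypothetical iterable of Track
        writer.add(track)
# __exit__ closes the timestamped pickle file; unlike TrainingDataWriter,
# no rewrite pass over the raw txt files is needed afterwards
```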
def rewrite_raw_track_files(path: Path):
source_files = list(sorted(path.glob("*.txt"))) # we loop twice, so need a list instead of generator
@ -362,7 +291,7 @@ def rewrite_raw_track_files(path: Path):
with file.open('w') as target_fp:
for i in range(line_nrs):
line = sources.readline().rstrip()
line = sources.readline()
current_file = sources.current_file
if prev_file != current_file:
offset: int = max_track_id
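The renumbering intent of `rewrite_raw_track_files` (shift each subsequent file's track ids past the maximum id seen so far, so ids stay unique after concatenation) can be sketched as a toy function; this is not the actual file-rewriting code:

```python
from typing import Iterable, List

def offset_track_ids(files: Iterable[List[int]]) -> List[int]:
    out: List[int] = []
    offset = 0
    for ids in files:
        out.extend(i + offset for i in ids)
        offset = max(out)  # next file starts above everything seen so far
    return out

# two recordings that both used ids 1..3 stay unique after merging
assert offset_track_ids([[1, 2, 3], [1, 2, 3]]) == [1, 2, 3, 4, 5, 6]
```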
@ -466,8 +395,6 @@ class Tracker(Node):
# # TODO: config device
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.frame_preprocess = ImgMovementFilter()
# TODO: support removal
self.tracks: DefaultDict[str, Track] = defaultdict(lambda: Track())
@ -509,17 +436,7 @@ class Tracker(Node):
self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
elif self.config.detector == DETECTOR_YOLOv8:
# self.model = YOLO('EXPERIMENTS/yolov8x.pt')
# best from arsen:
# self.model = YOLO('./tracker/all_yolo11-2-20-15-41/weights')
# self.model = YOLO('tracker/all_yolo11-2-20-15-41/weights/best.pt')
# self.model = YOLO('models/yolo11x-pose.pt')
# self.model = YOLO("models/yolo12l.pt")
# self.model = YOLO("models/yolo12x.pt", imgsz=self.config.imgsz) #see https://github.com/orgs/ultralytics/discussions/8812
self.model = YOLO("models/yolo12x.pt")
# NOTE: changing the model, also tweak imgsz in
elif self.config.detector == DETECTOR_RTDETR:
# self.model = RTDETR('models/rtdetr-x.pt') # drops frames
self.model = RTDETR('models/rtdetr-l.pt') # slightly worse in the corners, but drops fewer frames == better tracking
self.model = YOLO('yolo11x.pt')
else:
raise RuntimeError(f"{self.config.detector} is not implemented yet. See --help")
@ -538,22 +455,14 @@ class Tracker(Node):
self.frame_sock = self.sub(self.config.zmq_frame_addr)
self.trajectory_socket = self.pub(self.config.zmq_trajectory_addr)
self.detection_socket = self.pub(self.config.zmq_detection_addr)
logger.debug("Set up tracker")
def track_frame(self, frame: Frame):
det_img = frame.img
# det_img = self.frame_preprocess.apply(frame.img)
if self.config.detector in [DETECTOR_YOLOv8, DETECTOR_RTDETR]:
# both ultralytics
detections: List[Detection] = _ultralytics_track(det_img, frame.index, self.model, classes=[0, 15, 16], imgsz=self.config.imgsz)
if self.config.detector == DETECTOR_YOLOv8:
detections: List[Detection] = _yolov8_track(frame, self.model, classes=[0, 15, 16], imgsz=[1152, 640])
else:
detections: List[Detection] = self._resnet_track(det_img, frame.index, scale = RCNN_SCALE)
# emit raw detections
self.detection_socket.send_pyobj(detections)
detections: List[Detection] = self._resnet_track(frame, scale = RCNN_SCALE)
for detection in detections:
track = self.tracks[detection.track_id]
@ -566,7 +475,8 @@ class Tracker(Node):
track.history.append(detection) # add to history
return detections
def run(self):
"""
Live tracking of frames coming in over zmq
@ -701,12 +611,13 @@ class Tracker(Node):
logger.info('Stopping')
def _resnet_track(self, img: cv2.Mat, frame_idx: int, scale: float = 1) -> List[Detection]:
def _resnet_track(self, frame: Frame, scale: float = 1) -> List[Detection]:
img = frame.img
if scale != 1:
dsize = (int(img.shape[1] * scale), int(img.shape[0] * scale))
img = cv2.resize(img, dsize)
detections = self._resnet_detect_persons(img)
tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame_idx)
tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame.index)
# active_tracks = [t for t in tracks if t.is_confirmed()]
return [d.get_scaled(1/scale) for d in tracks]
@ -768,11 +679,6 @@ class Tracker(Node):
help='Manually specify communication addr for the trajectory messages',
type=str,
default="ipc:///tmp/feeds_traj")
argparser.add_argument('--zmq-detection-addr',
help='Manually specify communication addr for the detection messages',
type=str,
default="ipc:///tmp/feeds_dets")
argparser.add_argument("--save-for-training",
help="Specify the path in which to save",
@ -791,10 +697,6 @@ class Tracker(Node):
argparser.add_argument("--smooth-tracks",
help="Smooth the tracker tracks before sending them to the predictor",
action='store_true')
argparser.add_argument("--imgsz",
help="Detector imgsz parameter (applicable to ultralytics detectors)",
type=int,
default=640)
return argparser
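Putting the new options together, a hypothetical invocation of the tracker with the RT-DETR detector and a larger detector resolution might look like:

```
uv run trap_tracker --detector rtdetr --imgsz 1152 --smooth-tracks
```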
@ -842,33 +744,49 @@ def run():
is_running.clear()
class TrackPointFilter(ABC):
@abstractmethod
def apply(self, points: List[float]):
pass
class Smoother:
def __init__(self, window_len=6, convolution=False):
# for some reason this smoother messes up the predictions. Probably skews the points too much?
if convolution:
self.smoother = ConvolutionSmoother(window_len=window_len, window_type='hanning', copy=None)
else:
# "Unlike Kalman filtering, which focuses on predicting and updating the current state using historical measurements, Kalman smoothing enhances the accuracy of past state values"
# see https://medium.com/@shahalkp1/kalman-smoothing-using-tsmoothie-0175260464e5
self.smoother = KalmanSmoother(component='level_trend', component_noise={'level':0.02, 'season': .01, 'trend':0.02},n_seasons = 2, copy=None)
def apply_track(self, track: Track) -> Track:
def smooth(self, points: List[float]):
self.smoother.smooth(points)
return self.smoother.smooth_data[0]
def smooth_track(self, track: Track) -> Track:
ls = [d.l for d in track.history]
ts = [d.t for d in track.history]
ws = [d.w for d in track.history]
hs = [d.h for d in track.history]
ls = self.apply(ls)
ts = self.apply(ts)
ws = self.apply(ws)
hs = self.apply(hs)
self.smoother.smooth(ls)
ls = self.smoother.smooth_data[0]
self.smoother.smooth(ts)
ts = self.smoother.smooth_data[0]
self.smoother.smooth(ws)
ws = self.smoother.smooth_data[0]
self.smoother.smooth(hs)
hs = self.smoother.smooth_data[0]
new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
return track.get_with_new_history(new_history)
def apply_to_frame_tracks(self, frame: Frame) -> Frame:
# return Track(track.track_id, new_history, track.predictor_history, track.predictions, track.fps)
def smooth_frame_tracks(self, frame: Frame) -> Frame:
new_tracks = []
for track in frame.tracks.values():
new_track = self.apply_track(track)
new_track = self.smooth_track(track)
new_tracks.append(new_track)
frame.tracks = {t.track_id: t for t in new_tracks}
return frame
def apply_to_frame_predictions(self, frame: Frame) -> Frame:
def smooth_frame_predictions(self, frame: Frame) -> Frame:
for track in frame.tracks.values():
new_predictions = []
@ -879,69 +797,14 @@ class TrackPointFilter(ABC):
xs = [d[0] for d in prediction]
ys = [d[1] for d in prediction]
xs = self.apply(xs)
ys = self.apply(ys)
self.smoother.smooth(xs)
xs = self.smoother.smooth_data[0]
self.smoother.smooth(ys)
ys = self.smoother.smooth_data[0]
filtered_prediction = [[x,y] for x, y in zip(xs, ys)]
smooth_prediction = [[x,y] for x, y in zip(xs, ys)]
new_predictions.append(filtered_prediction)
new_predictions.append(smooth_prediction)
track.predictions = new_predictions
return frame
class Smoother(TrackPointFilter):
def __init__(self, window_len=6, convolution=False):
# for some reason this smoother messes up the predictions. Probably skews the points too much?
if convolution:
self.smoother = ConvolutionSmoother(window_len=window_len, window_type='hanning', copy=None)
else:
# "Unlike Kalman filtering, which focuses on predicting and updating the current state using historical measurements, Kalman smoothing enhances the accuracy of past state values"
# see https://medium.com/@shahalkp1/kalman-smoothing-using-tsmoothie-0175260464e5
# self.smoother = KalmanSmoother(component='level_trend', component_noise={'level':0.02, 'season': .01, 'trend':0.02},n_seasons = 2, copy=False)
self.smoother = KalmanSmoother(component='level', component_noise={'level':0.01},observation_noise=.3, n_seasons = 0, copy=False)
def apply(self, points: List[float]):
self.smoother.smooth(points)
return self.smoother.smooth_data[0]
# aliases, kept for historical reasons
def smooth(self, points: List[float]):
return self.apply(points)
def smooth_track(self, track: Track) -> Track:
return self.apply_track(track)
def smooth_frame_tracks(self, frame: Frame) -> Frame:
return self.apply_to_frame_tracks(frame)
def smooth_frame_predictions(self, frame: Frame) -> Frame:
return self.apply_to_frame_predictions(frame)
class Noiser(TrackPointFilter):
def __init__(self, amplitude=.1):
self.amplitude = amplitude
def apply(self, points: List[float]):
return np.random.normal(points, scale=self.amplitude).tolist()
class RandomOffset(TrackPointFilter):
"""
A somewhat hacky way to offset a whole track: applies the same offset to x, y, w, and h.
"""
def __init__(self, amplitude=.1):
self.amplitude = np.random.normal(scale=amplitude)
def apply(self, points: List[float]):
return [p + self.amplitude for p in points]
return frame
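Since `Smoother`, `Noiser`, and `RandomOffset` now share the `TrackPointFilter` interface, they can be applied interchangeably; a minimal sketch, assuming an existing `Track` instance named `track`:

```python
filters = [
    Smoother(window_len=6),      # Kalman-smooth the per-axis series
    Noiser(amplitude=.1),        # add Gaussian jitter (dataset augmentation)
    RandomOffset(amplitude=.1),  # shift the whole track by one random offset
]
for f in filters:
    track = f.apply_track(track)
```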

View file

@ -1,5 +1,4 @@
# lerp & inverse lerp from https://gist.github.com/laundmo/b224b1f4c8ef6ca5fe47e132c8deab56
from collections import namedtuple
import linecache
import math
import os
@ -8,7 +7,6 @@ import tracemalloc
from typing import Iterable
import cv2
import numpy as np
import torch
from trajectron.environment.map import GeometricMap
def lerp(a: float, b: float, t: float) -> float:
@ -30,12 +28,6 @@ def inv_lerp(a: float, b: float, v: float) -> float:
"""
return (v - a) / (b - a)
def easeInOutQuad(t: float) -> float:
"""Quadratic easing in/out - smoothing the transition."""
if t < 0.5:
return 2 * t * t
else:
return 1 - np.power(-2 * t + 2, 2) / 2
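A quick sanity check of the easing curve: it passes through the expected anchor points and is built from two parabolic halves meeting at t = 0.5.

```python
assert easeInOutQuad(0.0) == 0.0
assert easeInOutQuad(0.25) == 0.125  # 2 * 0.25**2
assert easeInOutQuad(0.5) == 0.5     # 1 - (-2*0.5 + 2)**2 / 2
assert easeInOutQuad(1.0) == 1.0
```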
@ -136,7 +128,6 @@ def display_top(snapshot: tracemalloc.Snapshot, key_type='lineno', limit=5):
print("Total allocated size: %.1f KiB" % (total / 1024))
ImageMapBounds = namedtuple('ImageMapBounds', ['min_x', 'max_x', 'min_y', 'max_y'])
class ImageMap(GeometricMap): # TODO Implement for image maps -> watch flipped coordinate system
def __init__(self, img: cv2.Mat, H_world_to_map: cv2.Mat, description=None):
# homography_matrix = np.loadtxt('H.txt')
@ -153,56 +144,11 @@ class ImageMap(GeometricMap): # TODO Implement for image maps -> watch flipped
layers = layers.copy() # copy to apply negative stride
# layers =
#scale 255
#alternatively: morph image to world space with a scale, as in trajectron/experiments/nuscenes/process_data.py
super().__init__(layers, homography_matrix, description)
self.set_bounds()
def set_bounds(self):
"""
Use homography and image to calculate the limits of positions in world coordinates
"""
# print(self.data.shape)
max_x = self.data.shape[1]
max_y = self.data.shape[2]
# this assumes a map that is only scaled and translated, not skewed
points_in_map = np.array([
[0, 0],
[max_x, max_y],
])
# calculate bounds:
H_map_to_world = np.linalg.inv(self.homography)
# Convert points to homogeneous coordinates and apply the transformation
homogeneous_points = np.hstack((points_in_map, np.ones((points_in_map.shape[0], 1))))
transformed_points = np.dot(homogeneous_points, H_map_to_world.T)
# Convert back to Cartesian coordinates
transformed_points = transformed_points[:, :2]
self.bounds = ImageMapBounds(
transformed_points[0][0],
transformed_points[1][0],
transformed_points[0][1],
transformed_points[1][1]
)
@classmethod
def get_cropped_maps_from_scene_map_batch(cls, maps, scene_pts, patch_size, rotation=None, device='cpu'):
min_bounds = [maps[0].bounds.min_x, maps[0].bounds.min_y]
max_bounds = [maps[0].bounds.max_x, maps[0].bounds.max_y]
if torch.is_tensor(scene_pts):
min_bounds = torch.Tensor(min_bounds)
max_bounds = torch.Tensor(max_bounds)
scene_pts = scene_pts.clip(min=min_bounds, max=max_bounds)
return super().get_cropped_maps_from_scene_map_batch(maps, scene_pts, patch_size, rotation, device)
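The bounds computation above assumes a homography that only scales and translates; a toy check of that inverse-homography corner mapping (the values are illustrative, not from any real calibration):

```python
import numpy as np

# world (x, y) -> map (10x + 5, 10y + 5): 10 px per unit, (5, 5) px offset
H_world_to_map = np.array([[10., 0., 5.],
                           [0., 10., 5.],
                           [0., 0., 1.]])
H_map_to_world = np.linalg.inv(H_world_to_map)

corners_map = np.array([[0., 0., 1.], [105., 105., 1.]])  # homogeneous px
corners_world = (corners_map @ H_map_to_world.T)[:, :2]
assert np.allclose(corners_world, [[-0.5, -0.5], [10., 10.]])
```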
def to_map_points(self, scene_pts):
org_shape = None

View file

@ -35,14 +35,6 @@ class GigEConfig:
binning_v: BinningValue = 1
pixel_format: int = neoapi.PixelFormat_BayerRG8
# when changing these values, make sure you also tweak the calibration
width: int = 2448
height: int = 2048
# changing these _automatically changes calibration cx and cy_!!
offset_x: int = 0
offset_y: int = 0
post_crop_tl: Optional[Coordinate] = None
post_crop_br: Optional[Coordinate] = None
@ -66,98 +58,47 @@ class GigE(VideoSource):
self.camera.SetImageBufferCycleCount(1)
self.setPixelFormat(self.config.pixel_format)
self.cam_is_configured = False
self.converter_settings = neoapi.ConverterSettings()
self.converter_settings.SetDebayerFormat('BGR8') # opencv
self.converter_settings.SetDemosaicingMethod(neoapi.ConverterSettings.Demosaicing_Baumer5x5)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Global)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Adaptive)
# self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_ActiveNoiseReduction)
self.converter_settings.SetSharpeningMode(neoapi.ConverterSettings.Sharpening_Off)
self.converter_settings.SetSharpeningFactor(1)
self.converter_settings.SetSharpeningSensitivityThreshold(2)
def configCam(self):
if self.camera.IsConnected():
self.setPixelFormat(self.config.pixel_format)
# self.camera.f.PixelFormat.Set(neoapi.PixelFormat_RGB8)
self.camera.f.BinningHorizontal.Set(self.config.binning_h)
self.camera.f.BinningVertical.Set(self.config.binning_v)
self.camera.f.Height.Set(self.config.height)
self.camera.f.Width.Set(self.config.width)
self.camera.f.OffsetX.Set(self.config.offset_x)
self.camera.f.OffsetY.Set(self.config.offset_y)
# print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(20000)) # shutter 1/50 (hence: 1000000/shutter)
print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(60000)) # otherwise motion becomes too blurry
# print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(20000)) # shutter 1/50
print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(25000))
print('brightness target', self.camera.f.BrightnessAutoNominalValue.Get())
print('brightness target', self.camera.f.BrightnessAutoNominalValue.Set(value=35))
# print('brightness target', self.camera.f.Auto.Set(neoapi.BrightnessCorrection_On))
# print('brightness target', self.camera.f.BrightnessCorrection.Set(neoapi.BrightnessCorrection_On))
# print('brightness target', self.camera.f.BrightnessCorrection.Set(neoapi.BrightnessCorrection_On))
print('brightness target', self.camera.f.BrightnessAutoNominalValue.Set(30))
print('exposure time', self.camera.f.ExposureTime.Get())
print('LUTEnable', self.camera.f.LUTEnable.Get())
print('LUTEnable', self.camera.f.LUTEnable.Set(True))
# print('LUTEnable', self.camera.f.LUTEnable.Set(False))
print('Gamma', self.camera.f.Gamma.Set(0.45))
# neoapi.region
# self.camera.f.regeo
print('Gamma', self.camera.f.Gamma.Set(0.39))
# print('LUT', self.camera.f.LUTIndex.Get())
# print('LUT', self.camera.f.LUTEnable.Get())
# print('exposure time max', self.camera.f.ExposureTimeGapMax.Get())
# print('exposure time min', self.camera.f.ExposureTimeGapMin.Get())
# self.pixfmt = self.camera.f.PixelFormat.Get()
self.cam_is_configured = True
def setPixelFormat(self, pixfmt):
self.pixfmt = pixfmt
self.camera.f.PixelFormat.Set(pixfmt)
# self.pixfmt = self.camera.f.PixelFormat.Get()
def recv(self):
while True:
# print('receive')
if not self.camera.IsConnected():
self.cam_is_configured = False
return
if not self.cam_is_configured:
self.configCam()
i = self.camera.GetImage(0)
if i.IsEmpty():
time.sleep(.01)
continue
# print(i.GetAvailablePixelFormats())
i = i.Convert(self.converter_settings)
imgarray = i.GetNPArray()
if self.pixfmt == neoapi.PixelFormat_BayerRG12:
img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
elif self.pixfmt == neoapi.PixelFormat_BayerRG8:
img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
else:
img = cv2.cvtColor(imgarray, cv2.COLOR_BGR2RGB)
if i.IsEmpty():
time.sleep(.01)
continue
img = i.GetNPArray()
# imgarray = i.GetNPArray()
# if self.pixfmt == neoapi.PixelFormat_BayerRG12:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
# elif self.pixfmt == neoapi.PixelFormat_BayerRG8:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
# else:
# img = cv2.cvtColor(imgarray, cv2.COLOR_BGR2RGB)
# if img.dtype == np.uint16:
# img = cv2.convertScaleAbs(img, alpha=(255.0/65535.0))
if img.dtype == np.uint16:
img = cv2.convertScaleAbs(img, alpha=(255.0/65535.0))
img = self._crop(img)
yield img
@ -166,6 +107,8 @@ class GigE(VideoSource):
br = self.config.post_crop_br or (img.shape[1], img.shape[0])
return img[tl[1]:br[1],tl[0]:br[0],:]
class SingleCvVideoSource(VideoSource):
def recv(self):
@ -183,10 +126,7 @@ class SingleCvVideoSource(VideoSource):
class RtspSource(SingleCvVideoSource):
def __init__(self, video_url: str | Path, camera: Camera = None):
# keep max 1 frame in app-buffer (0 = unlimited)
# As of GStreamer 1.28, drop=true is deprecated; use leaky-type=2 to choose which frames to drop: https://gstreamer.freedesktop.org/documentation/applib/gstappsrc.html?gi-language=c
gst = f"rtspsrc location={video_url} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true"
gst = f"rtspsrc location={video_url} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=0 drop=true"
logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
self.video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
self.frame_idx = 0
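Untested sketch of the newer-GStreamer variant the comment above alludes to, keeping one buffered frame and letting leaky-type drop the rest (per the linked appsrc docs, for GStreamer ≥ 1.28):

```python
# hypothetical replacement for drop=true on newer GStreamer
gst = (f"rtspsrc location={video_url} latency=0 buffer-mode=auto "
       f"! decodebin ! videoconvert ! appsink max-buffers=1 leaky-type=2")
```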
@ -271,7 +211,7 @@ class CameraSource(SingleCvVideoSource):
self.video.set(cv2.CAP_PROP_FPS, self.camera.fps)
self.frame_idx = 0
def get_video_source(video_sources: List[UrlOrPath], camera: Optional[Camera] = None, frame_offset=0, frame_end:Optional[int]=None, loop=False):
def get_video_source(video_sources: List[UrlOrPath], camera: Camera, frame_offset=0, frame_end:Optional[int]=None, loop=False):
if str(video_sources[0]).isdigit():
# numeric input is a CV camera
@ -292,7 +232,3 @@ def get_video_source(video_sources: List[UrlOrPath], camera: Optional[Camera] =
return FilelistSource(video_sources, offset = frame_offset, end=frame_end, loop=loop)
# os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"
def get_video_source_from_str(video_sources: List[str]):
paths = [UrlOrPath(s) for s in video_sources]
return get_video_source(paths)

uv.lock
View file

@ -18,15 +18,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2f/7a/874c46ad2d14998bc2eedac1133c5299e12fe728d2ce91b4d64f2fcc5089/absl_py-2.2.0-py3-none-any.whl", hash = "sha256:5c432cdf7b045f89c4ddc3bba196cabb389c0c321322f8dec68eecdfa732fdad", size = 276986 },
]
[[package]]
name = "addict"
version = "2.4.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/85/ef/fd7649da8af11d93979831e8f1f8097e85e82d5bfeabc8c68b39175d8e75/addict-2.4.0.tar.gz", hash = "sha256:b3b2210e0e067a281f5646c8c5db92e99b7231ea8b0eb5f74dbdf9e259d4e494", size = 9186 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl", hash = "sha256:249bb56bbfd3cdc2a004ea0ff4c2b6ddc84d53bc2194761636eb314d5cfa5dfc", size = 3832 },
]
[[package]]
name = "aiofiles"
version = "24.1.0"
@ -91,15 +82,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/6a/bc7e17a3e87a2985d3e8f4da4cd0f481060eb78fb08596c42be62c90a4d9/aiosignal-1.3.2-py2.py3-none-any.whl", hash = "sha256:45cde58e409a301715980c2b01d0c28bdde3770d8290b5eb2173759d9acb31a5", size = 7597 },
]
[[package]]
name = "alchemy-logging"
version = "1.5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/2a/950fc0f382a65023e64301d9a3a24458b0f7d9be45df7ca7bd242bee4f8a/alchemy-logging-1.5.0.tar.gz", hash = "sha256:ef87de99898f1e62c7cae99e4d2ed4c02c32ccf4c21d6c7a08d93af2bc9945cc", size = 20433 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/93/0c3073f2a18173d5486abf74dd28d9157e36e159092535e12379333c2f89/alchemy_logging-1.5.0-py3-none-any.whl", hash = "sha256:459ee641c00553175a38f47587ff654ca544985626938e097cb6f85325bd241e", size = 18765 },
]
[[package]]
name = "annotated-types"
version = "0.7.0"
@ -274,15 +256,6 @@ css = [
{ name = "tinycss2" },
]
[[package]]
name = "blinker"
version = "1.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458 },
]
[[package]]
name = "bytetracker"
version = "0.3.2"
@ -379,15 +352,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e6/75/49e5bfe642f71f272236b5b2d2691cf915a7283cc0ceda56357b61daa538/comm-0.2.2-py3-none-any.whl", hash = "sha256:e6fb86cb70ff661ee8c9c14e7d36d6de3b4066f1441be4063df9c5009f0a64d3", size = 7180 },
]
[[package]]
name = "configargparse"
version = "1.7.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/85/4d/6c9ef746dfcc2a32e26f3860bb4a011c008c392b83eabdfb598d1a8bbe5d/configargparse-1.7.1.tar.gz", hash = "sha256:79c2ddae836a1e5914b71d58e4b9adbd9f7779d4e6351a637b7d2d9b6c46d3d9", size = 43958 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/31/28/d28211d29bcc3620b1fece85a65ce5bb22f18670a03cd28ea4b75ede270c/configargparse-1.7.1-py3-none-any.whl", hash = "sha256:8b586a31f9d873abd1ca527ffbe58863c99f36d896e2829779803125e83be4b6", size = 25607 },
]
[[package]]
name = "cycler"
version = "0.12.1"
@ -397,37 +361,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321 },
]
[[package]]
name = "dash"
version = "3.2.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "flask" },
{ name = "importlib-metadata" },
{ name = "nest-asyncio" },
{ name = "plotly" },
{ name = "requests" },
{ name = "retrying" },
{ name = "setuptools" },
{ name = "typing-extensions" },
{ name = "werkzeug" },
]
sdist = { url = "https://files.pythonhosted.org/packages/80/37/8b5621e0a0b3c6e81a8b6cd3f033aa4b750f53e288dd1a494a887a8a06e9/dash-3.2.0.tar.gz", hash = "sha256:93300b9b99498f8b8ed267e61c455b4ee1282c7e4d4b518600eec87ce6ddea55", size = 7558708 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d3/36/e0010483ca49b9bf6f389631ccea07b3ff6b678d14d8c7a0a4357860c36a/dash-3.2.0-py3-none-any.whl", hash = "sha256:4c1819588d83bed2cbcf5807daa5c2380c8c85789a6935a733f018f04ad8a6a2", size = 7900661 },
]
[[package]]
name = "dearpygui"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/26/93db234a69d01ae84ed61abb991a3da5555410abdf38d5932fb7ed594e12/dearpygui-2.1.0-cp310-cp310-macosx_10_6_x86_64.whl", hash = "sha256:374d4c605affdf8a49eef4cf434f76e17df374590e51745b62c67025d1d41ec3", size = 2101011 },
{ url = "https://files.pythonhosted.org/packages/ef/13/0301fd7fd3ac01ed23003873681c835f18d14267953c54ff6889cb1d0212/dearpygui-2.1.0-cp310-cp310-macosx_13_0_arm64.whl", hash = "sha256:074985fa9d1622edda89c7f113d7a25ae5543f2e3f684bba3601e688c873936f", size = 1874384 },
{ url = "https://files.pythonhosted.org/packages/5d/54/5e53d99a1d352f387bd18250d8bcfe9e60594eefc062f246f61810c1bd80/dearpygui-2.1.0-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:49808389f1d1acfb4f5cbd9f1b1ec188fbcd2d828414094864f7035e27158db2", size = 2636636 },
{ url = "https://files.pythonhosted.org/packages/34/44/2508c4ba08b28cc9e827a04ae00fc73dbe6820531241eb43ba28f370372b/dearpygui-2.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:792c1cd34dd0d03bf15838cc1e66ad01282c672ef0d2b9981368b6dcd37e67d3", size = 1810184 },
]
[[package]]
name = "debugpy"
version = "1.8.13"
@ -491,15 +424,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8f/d7/9322c609343d929e75e7e5e6255e614fcc67572cfd083959cdef3b7aad79/docutils-0.21.2-py3-none-any.whl", hash = "sha256:dafca5b9e384f0e419294eb4d2ff9fa826435bf15f15b7bd45723e8ad76811b2", size = 587408 },
]
[[package]]
name = "dpkt"
version = "1.9.8"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c9/7d/52f17a794db52a66e46ebb0c7549bf2f035ed61d5a920ba4aaa127dd038e/dpkt-1.9.8.tar.gz", hash = "sha256:43f8686e455da5052835fd1eda2689d51de3670aac9799b1b00cfd203927ee45", size = 180073 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/11/79/479e2194c9096b92aecdf33634ae948d2be306c6011673e98ee1917f32c2/dpkt-1.9.8-py3-none-any.whl", hash = "sha256:4da4d111d7bf67575b571f5c678c71bddd2d8a01a3d57d489faf0a92c748fbfd", size = 194973 },
]
[[package]]
name = "exceptiongroup"
version = "1.2.2"
@ -578,23 +502,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4d/36/2a115987e2d8c300a974597416d9de88f2444426de9571f4b59b2cca3acc/filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de", size = 16215 },
]
[[package]]
name = "flask"
version = "3.1.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "blinker" },
{ name = "click" },
{ name = "itsdangerous" },
{ name = "jinja2" },
{ name = "markupsafe" },
{ name = "werkzeug" },
]
sdist = { url = "https://files.pythonhosted.org/packages/dc/6d/cfe3c0fcc5e477df242b98bfe186a4c34357b4847e87ecaef04507332dab/flask-3.1.2.tar.gz", hash = "sha256:bf656c15c80190ed628ad08cdfd3aaa35beb087855e2f494910aa3774cc4fd87", size = 720160 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/f9/7f9263c5695f4bd0023734af91bedb2ff8209e8de6ead162f35d8dc762fd/flask-3.1.2-py3-none-any.whl", hash = "sha256:ca1d8112ec8a6158cc29ea4858963350011b5c846a414cdb7a954aa9e967d03c", size = 103308 },
]
[[package]]
name = "fonttools"
version = "4.56.0"
@ -682,46 +589,20 @@ wheels = [
[[package]]
name = "grpcio"
version = "1.76.0"
version = "1.71.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b6/e0/318c1ce3ae5a17894d5791e87aea147587c9e702f24122cc7a5c8bbaeeb1/grpcio-1.76.0.tar.gz", hash = "sha256:7be78388d6da1a25c0d5ec506523db58b18be22d9c37d8d3a32c08be4987bd73", size = 12785182 }
sdist = { url = "https://files.pythonhosted.org/packages/1c/95/aa11fc09a85d91fbc7dd405dcb2a1e0256989d67bf89fa65ae24b3ba105a/grpcio-1.71.0.tar.gz", hash = "sha256:2b85f7820475ad3edec209d3d89a7909ada16caab05d3f2e08a7e8ae3200a55c", size = 12549828 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/88/17/ff4795dc9a34b6aee6ec379f1b66438a3789cd1315aac0cbab60d92f74b3/grpcio-1.76.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:65a20de41e85648e00305c1bb09a3598f840422e522277641145a32d42dcefcc", size = 5840037 },
{ url = "https://files.pythonhosted.org/packages/4e/ff/35f9b96e3fa2f12e1dcd58a4513a2e2294a001d64dec81677361b7040c9a/grpcio-1.76.0-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:40ad3afe81676fd9ec6d9d406eda00933f218038433980aa19d401490e46ecde", size = 11836482 },
{ url = "https://files.pythonhosted.org/packages/3e/1c/8374990f9545e99462caacea5413ed783014b3b66ace49e35c533f07507b/grpcio-1.76.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:035d90bc79eaa4bed83f524331d55e35820725c9fbb00ffa1904d5550ed7ede3", size = 6407178 },
{ url = "https://files.pythonhosted.org/packages/1e/77/36fd7d7c75a6c12542c90a6d647a27935a1ecaad03e0ffdb7c42db6b04d2/grpcio-1.76.0-cp310-cp310-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:4215d3a102bd95e2e11b5395c78562967959824156af11fa93d18fdd18050990", size = 7075684 },
{ url = "https://files.pythonhosted.org/packages/38/f7/e3cdb252492278e004722306c5a8935eae91e64ea11f0af3437a7de2e2b7/grpcio-1.76.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:49ce47231818806067aea3324d4bf13825b658ad662d3b25fada0bdad9b8a6af", size = 6611133 },
{ url = "https://files.pythonhosted.org/packages/7e/20/340db7af162ccd20a0893b5f3c4a5d676af7b71105517e62279b5b61d95a/grpcio-1.76.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:8cc3309d8e08fd79089e13ed4819d0af72aa935dd8f435a195fd152796752ff2", size = 7195507 },
{ url = "https://files.pythonhosted.org/packages/10/f0/b2160addc1487bd8fa4810857a27132fb4ce35c1b330c2f3ac45d697b106/grpcio-1.76.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:971fd5a1d6e62e00d945423a567e42eb1fa678ba89072832185ca836a94daaa6", size = 8160651 },
{ url = "https://files.pythonhosted.org/packages/2c/2c/ac6f98aa113c6ef111b3f347854e99ebb7fb9d8f7bb3af1491d438f62af4/grpcio-1.76.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:9d9adda641db7207e800a7f089068f6f645959f2df27e870ee81d44701dd9db3", size = 7620568 },
{ url = "https://files.pythonhosted.org/packages/90/84/7852f7e087285e3ac17a2703bc4129fafee52d77c6c82af97d905566857e/grpcio-1.76.0-cp310-cp310-win32.whl", hash = "sha256:063065249d9e7e0782d03d2bca50787f53bd0fb89a67de9a7b521c4a01f1989b", size = 3998879 },
{ url = "https://files.pythonhosted.org/packages/10/30/d3d2adcbb6dd3ff59d6ac3df6ef830e02b437fb5c90990429fd180e52f30/grpcio-1.76.0-cp310-cp310-win_amd64.whl", hash = "sha256:a6ae758eb08088d36812dd5d9af7a9859c05b1e0f714470ea243694b49278e7b", size = 4706892 },
]
[[package]]
name = "grpcio-tools"
version = "1.76.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "grpcio" },
{ name = "protobuf" },
{ name = "setuptools" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a0/77/17d60d636ccd86a0db0eccc24d02967bbc3eea86b9db7324b04507ebaa40/grpcio_tools-1.76.0.tar.gz", hash = "sha256:ce80169b5e6adf3e8302f3ebb6cb0c3a9f08089133abca4b76ad67f751f5ad88", size = 5390807 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/57/4b/6fceb806f6d5055793f5db0d7a1e3449ea16482c2aec3ad93b05678c325a/grpcio_tools-1.76.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:9b99086080ca394f1da9894ee20dedf7292dd614e985dcba58209a86a42de602", size = 2545596 },
{ url = "https://files.pythonhosted.org/packages/3b/11/57af2f3f32016e6e2aae063a533aae2c0e6c577bc834bef97277a7fa9733/grpcio_tools-1.76.0-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:8d95b5c2394bbbe911cbfc88d15e24c9e174958cb44dad6aa8c46fe367f6cc2a", size = 5843462 },
{ url = "https://files.pythonhosted.org/packages/3f/8b/470bedaf7fb75fb19500b4c160856659746dcf53e3d9241fcc17e3af7155/grpcio_tools-1.76.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d54e9ce2ffc5d01341f0c8898c1471d887ae93d77451884797776e0a505bd503", size = 2591938 },
{ url = "https://files.pythonhosted.org/packages/77/3e/530e848e00d6fe2db152984b2c9432bb8497a3699719fd7898d05cb7d95e/grpcio_tools-1.76.0-cp310-cp310-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:c83f39f64c2531336bd8d5c846a2159c9ea6635508b0f8ed3ad0d433e25b53c9", size = 2905296 },
{ url = "https://files.pythonhosted.org/packages/75/b5/632229d17364eb7db5d3d793131172b2380323c4e6500f528743e477267c/grpcio_tools-1.76.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:be480142fae0d986d127d6cb5cbc0357e4124ba22e96bb8b9ece32c48bc2c8ea", size = 2656266 },
{ url = "https://files.pythonhosted.org/packages/ff/71/5756aa9a14d16738b04677b89af8612112d69fb098ffdbc5666020933f23/grpcio_tools-1.76.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:7fefd41fc4ca11fab36f42bdf0f3812252988f8798fca8bec8eae049418deacd", size = 3105798 },
{ url = "https://files.pythonhosted.org/packages/ab/de/9058021da11be399abe6c5d2a9a2abad1b00d367111018637195d107539b/grpcio_tools-1.76.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:63551f371082173e259e7f6ec24b5f1fe7d66040fadd975c966647bca605a2d3", size = 3654923 },
{ url = "https://files.pythonhosted.org/packages/8e/93/29f04cc18f1023b2a4342374a45b1cd87a0e1458fc44aea74baad5431dcd/grpcio_tools-1.76.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:75a2c34584c99ff47e5bb267866e7dec68d30cd3b2158e1ee495bfd6db5ad4f0", size = 3322558 },
{ url = "https://files.pythonhosted.org/packages/d9/ab/8936708d30b9a2484f6b093dfc57843c1d0380de0eba78a8ad8693535f26/grpcio_tools-1.76.0-cp310-cp310-win32.whl", hash = "sha256:908758789b0a612102c88e8055b7191eb2c4290d5d6fc50fb9cac737f8011ef1", size = 993621 },
{ url = "https://files.pythonhosted.org/packages/3d/d2/c5211feb81a532eca2c4dddd00d4971b91c10837cd083781f6ab3a6fdb5b/grpcio_tools-1.76.0-cp310-cp310-win_amd64.whl", hash = "sha256:ec6e49e7c4b2a222eb26d1e1726a07a572b6e629b2cf37e6bb784c9687904a52", size = 1158401 },
{ url = "https://files.pythonhosted.org/packages/7c/c5/ef610b3f988cc0cc67b765f72b8e2db06a1db14e65acb5ae7810a6b7042e/grpcio-1.71.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:c200cb6f2393468142eb50ab19613229dcc7829b5ccee8b658a36005f6669fdd", size = 5210643 },
{ url = "https://files.pythonhosted.org/packages/bf/de/c84293c961622df302c0d5d07ec6e2d4cd3874ea42f602be2df09c4ad44f/grpcio-1.71.0-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:b2266862c5ad664a380fbbcdbdb8289d71464c42a8c29053820ee78ba0119e5d", size = 11308962 },
{ url = "https://files.pythonhosted.org/packages/7c/38/04c9e0dc8c904570c80faa1f1349b190b63e45d6b2782ec8567b050efa9d/grpcio-1.71.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:0ab8b2864396663a5b0b0d6d79495657ae85fa37dcb6498a2669d067c65c11ea", size = 5699236 },
{ url = "https://files.pythonhosted.org/packages/95/96/e7be331d1298fa605ea7c9ceafc931490edd3d5b33c4f695f1a0667f3491/grpcio-1.71.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c30f393f9d5ff00a71bb56de4aa75b8fe91b161aeb61d39528db6b768d7eac69", size = 6339767 },
{ url = "https://files.pythonhosted.org/packages/5d/b7/7e7b7bb6bb18baf156fd4f2f5b254150dcdd6cbf0def1ee427a2fb2bfc4d/grpcio-1.71.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f250ff44843d9a0615e350c77f890082102a0318d66a99540f54769c8766ab73", size = 5943028 },
{ url = "https://files.pythonhosted.org/packages/13/aa/5fb756175995aeb47238d706530772d9a7ac8e73bcca1b47dc145d02c95f/grpcio-1.71.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e6d8de076528f7c43a2f576bc311799f89d795aa6c9b637377cc2b1616473804", size = 6031841 },
{ url = "https://files.pythonhosted.org/packages/54/93/172783e01eed61f7f180617b7fa4470f504e383e32af2587f664576a7101/grpcio-1.71.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9b91879d6da1605811ebc60d21ab6a7e4bae6c35f6b63a061d61eb818c8168f6", size = 6651039 },
{ url = "https://files.pythonhosted.org/packages/6f/99/62654b220a27ed46d3313252214f4bc66261143dc9b58004085cd0646753/grpcio-1.71.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f71574afdf944e6652203cd1badcda195b2a27d9c83e6d88dc1ce3cfb73b31a5", size = 6198465 },
{ url = "https://files.pythonhosted.org/packages/68/35/96116de833b330abe4412cc94edc68f99ed2fa3e39d8713ff307b3799e81/grpcio-1.71.0-cp310-cp310-win32.whl", hash = "sha256:8997d6785e93308f277884ee6899ba63baafa0dfb4729748200fcc537858a509", size = 3620382 },
{ url = "https://files.pythonhosted.org/packages/b7/09/f32ef637e386f3f2c02effac49699229fa560ce9007682d24e9e212d2eb4/grpcio-1.71.0-cp310-cp310-win_amd64.whl", hash = "sha256:7d6ac9481d9d0d129224f6d5934d5832c4b1cddb96b59e7eba8416868909786a", size = 4280302 },
]
[[package]]
@ -794,27 +675,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9c/1f/19ebc343cc71a7ffa78f17018535adc5cbdd87afb31d7c34874680148b32/ifaddr-0.2.0-py3-none-any.whl", hash = "sha256:085e0305cfe6f16ab12d72e2024030f5d52674afad6911bb1eee207177b8a748", size = 12314 },
]
[[package]]
name = "importlib-metadata"
version = "8.7.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "zipp" },
]
sdist = { url = "https://files.pythonhosted.org/packages/76/66/650a33bd90f786193e4de4b3ad86ea60b53c89b669a5c7be931fac31cdb0/importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000", size = 56641 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/b0/36bd937216ec521246249be3bf9855081de4c5e06a0c9b4219dbeda50373/importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd", size = 27656 },
]
[[package]]
name = "importlib-resources"
version = "6.5.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cf/8c/f834fbf984f691b4f7ff60f50b514cc3de5cc08abfc3295564dd89c5e2e7/importlib_resources-6.5.2.tar.gz", hash = "sha256:185f87adef5bcc288449d98fb4fba07cea78bc036455dd44c5fc4a2fe78fed2c", size = 44693 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a4/ed/1f1afb2e9e7f38a545d628f864d562a5ae64fe6f7a10e28ffb9b185b4e89/importlib_resources-6.5.2-py3-none-any.whl", hash = "sha256:789cfdc3ed28c78b67a06acb8126751ced69a3d5f79c095a98298cd8a760ccec", size = 37461 },
]
[[package]]
name = "ipykernel"
version = "6.29.5"
@ -1326,15 +1186,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9c/fd/b247aec6add5601956d440488b7f23151d8343747e82c038af37b28d6098/multidict-6.2.0-py3-none-any.whl", hash = "sha256:5d26547423e5e71dcc562c4acdc134b900640a39abd9066d7326a7cc2324c530", size = 10266 },
]
[[package]]
name = "narwhals"
version = "2.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b7/95/aa46616f5e567ff5d262f4c207d5ca79cb2766010c786c351b8e7f930ef4/narwhals-2.9.0.tar.gz", hash = "sha256:d8cde40a6a8a7049d8e66608b7115ab19464acc6f305d136a8dc8ba396c4acfe", size = 584098 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/13/34/00c7ae8194074ed82b64e0bb7c24220eac5f77ac90c16e23cf0d2cfd2a03/narwhals-2.9.0-py3-none-any.whl", hash = "sha256:c59f7de4763004ae81691ce16df71b4e55aead0ead7ccde8c8f2ef8c9559c765", size = 422255 },
]
[[package]]
name = "nbclient"
version = "0.10.2"
@ -1449,12 +1300,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/5e/cb3dbdf3ae18e281b8b1b4691bb5d3465b383e04bde2c2a782c893f1ee21/nicegui-2.13.0-py3-none-any.whl", hash = "sha256:2343d37885df2c2e388a4f4c3f0ce9b308be02e16b0303108471a1a38fe3508f", size = 16482500 },
]
[[package]]
name = "noise"
version = "1.2.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/18/29/bb830ee6d934311e17a7a4fa1368faf3e73fbb09c0d80fc44e41828df177/noise-1.2.2.tar.gz", hash = "sha256:57a2797436574391ff63a111e852e53a4164ecd81ad23639641743cd1a209b65", size = 125615 }
[[package]]
name = "notebook"
version = "7.3.3"
@ -1483,18 +1328,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f9/33/bd5b9137445ea4b680023eb0469b2bb969d61303dedb2aac6560ff3d14a1/notebook_shim-0.2.4-py3-none-any.whl", hash = "sha256:411a5be4e9dc882a074ccbcae671eda64cceb068767e9a3419096986560e1cef", size = 13307 },
]
[[package]]
name = "nptyping"
version = "2.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/b7/ffe533358c32506b1708feec0fb04ba0a35a959a94163fff5333671909da/nptyping-2.5.0.tar.gz", hash = "sha256:e3d35b53af967e6fb407c3016ff9abae954d3a0568f7cc13a461084224e8e20a", size = 71623 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b1/28/92edc05378175de13a3d4986cee7531853634a22b7e5e21a988fa84fde3f/nptyping-2.5.0-py3-none-any.whl", hash = "sha256:764e51836faae33a7ae2e928af574cfb701355647accadcc89f2ad793630b7c8", size = 37602 },
]
[[package]]
name = "numpy"
version = "1.26.4"
@ -1511,32 +1344,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/19/77/538f202862b9183f54108557bfda67e17603fc560c384559e769321c9d92/numpy-1.26.4-cp310-cp310-win_amd64.whl", hash = "sha256:b97fe8060236edf3662adfc2c633f56a08ae30560c56310562cb4f95500022d5", size = 15808905 },
]
[[package]]
name = "open3d"
version = "0.19.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "addict" },
{ name = "configargparse" },
{ name = "dash" },
{ name = "flask" },
{ name = "matplotlib" },
{ name = "nbformat" },
{ name = "numpy" },
{ name = "pandas" },
{ name = "pillow" },
{ name = "pyquaternion" },
{ name = "pyyaml" },
{ name = "scikit-learn" },
{ name = "tqdm" },
{ name = "werkzeug" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/5c/4b/91e8a4100adf0ccd2f7ad21dd24c2e3d8f12925396528d0462cfb1735e5a/open3d-0.19.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:f7128ded206e07987cc29d0917195fb64033dea31e0d60dead3629b33d3c175f", size = 103086005 },
{ url = "https://files.pythonhosted.org/packages/c7/45/13bc9414ee9db611cba90b9efa69f66f246560e8ade575f1ee5b7f7b5d31/open3d-0.19.0-cp310-cp310-manylinux_2_31_x86_64.whl", hash = "sha256:5b60234fa6a56a20caf1560cad4e914133c8c198d74d7b839631c90e8592762e", size = 447678387 },
{ url = "https://files.pythonhosted.org/packages/bc/1c/0219416429f88ebc94fcb269fb186b153affe5b91dffe8f9062330d7776d/open3d-0.19.0-cp310-cp310-win_amd64.whl", hash = "sha256:18bb8b86e5fa9e582ed11b9651ff6e4a782e6778c9b8bfc344fc866dc8b5f49c", size = 69150378 },
]
[[package]]
name = "opencv-python"
version = "4.10.0.84"
@ -1692,19 +1499,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6d/45/59578566b3275b8fd9157885918fcd0c4d74162928a5310926887b856a51/platformdirs-4.3.7-py3-none-any.whl", hash = "sha256:a03875334331946f13c549dbd8f4bac7a13a50a895a0eb1e8c6a8ace80d40a94", size = 18499 },
]
[[package]]
name = "plotly"
version = "6.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "narwhals" },
{ name = "packaging" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0c/63/961d47c9ffd592a575495891cdcf7875dc0903ebb33ac238935714213789/plotly-6.3.1.tar.gz", hash = "sha256:dd896e3d940e653a7ce0470087e82c2bd903969a55e30d1b01bb389319461bb0", size = 6956460 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3f/93/023955c26b0ce614342d11cc0652f1e45e32393b6ab9d11a664a60e9b7b7/plotly-6.3.1-py3-none-any.whl", hash = "sha256:8b4420d1dcf2b040f5983eed433f95732ed24930e496d36eb70d211923532e64", size = 9833698 },
]
[[package]]
name = "prometheus-client"
version = "0.21.1"
@ -1753,17 +1547,16 @@ wheels = [
[[package]]
name = "protobuf"
version = "6.33.0"
version = "6.30.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/19/ff/64a6c8f420818bb873713988ca5492cba3a7946be57e027ac63495157d97/protobuf-6.33.0.tar.gz", hash = "sha256:140303d5c8d2037730c548f8c7b93b20bb1dc301be280c378b82b8894589c954", size = 443463 }
sdist = { url = "https://files.pythonhosted.org/packages/55/de/8216061897a67b2ffe302fd51aaa76bbf613001f01cd96e2416a4955dd2b/protobuf-6.30.1.tar.gz", hash = "sha256:535fb4e44d0236893d5cf1263a0f706f1160b689a7ab962e9da8a9ce4050b780", size = 429304 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/ee/52b3fa8feb6db4a833dfea4943e175ce645144532e8a90f72571ad85df4e/protobuf-6.33.0-cp310-abi3-win32.whl", hash = "sha256:d6101ded078042a8f17959eccd9236fb7a9ca20d3b0098bbcb91533a5680d035", size = 425593 },
{ url = "https://files.pythonhosted.org/packages/7b/c6/7a465f1825872c55e0341ff4a80198743f73b69ce5d43ab18043699d1d81/protobuf-6.33.0-cp310-abi3-win_amd64.whl", hash = "sha256:9a031d10f703f03768f2743a1c403af050b6ae1f3480e9c140f39c45f81b13ee", size = 436882 },
{ url = "https://files.pythonhosted.org/packages/e1/a9/b6eee662a6951b9c3640e8e452ab3e09f117d99fc10baa32d1581a0d4099/protobuf-6.33.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:905b07a65f1a4b72412314082c7dbfae91a9e8b68a0cc1577515f8df58ecf455", size = 427521 },
{ url = "https://files.pythonhosted.org/packages/10/35/16d31e0f92c6d2f0e77c2a3ba93185130ea13053dd16200a57434c882f2b/protobuf-6.33.0-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:e0697ece353e6239b90ee43a9231318302ad8353c70e6e45499fa52396debf90", size = 324445 },
{ url = "https://files.pythonhosted.org/packages/e6/eb/2a981a13e35cda8b75b5585aaffae2eb904f8f351bdd3870769692acbd8a/protobuf-6.33.0-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:e0a1715e4f27355afd9570f3ea369735afc853a6c3951a6afe1f80d8569ad298", size = 339159 },
{ url = "https://files.pythonhosted.org/packages/21/51/0b1cbad62074439b867b4e04cc09b93f6699d78fd191bed2bbb44562e077/protobuf-6.33.0-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:35be49fd3f4fefa4e6e2aacc35e8b837d6703c37a2168a55ac21e9b1bc7559ef", size = 323172 },
{ url = "https://files.pythonhosted.org/packages/07/d1/0a28c21707807c6aacd5dc9c3704b2aa1effbf37adebd8caeaf68b17a636/protobuf-6.33.0-py3-none-any.whl", hash = "sha256:25c9e1963c6734448ea2d308cfa610e692b801304ba0908d7bfa564ac5132995", size = 170477 },
{ url = "https://files.pythonhosted.org/packages/83/f6/28460c49a8a93229e2264cd35fd147153fb524cbd944789db6b6f3cc9b13/protobuf-6.30.1-cp310-abi3-win32.whl", hash = "sha256:ba0706f948d0195f5cac504da156d88174e03218d9364ab40d903788c1903d7e", size = 419150 },
{ url = "https://files.pythonhosted.org/packages/96/82/7045f5b3f3e338a8ab5852d22ce9c31e0a40d8b0f150a3735dc494be769a/protobuf-6.30.1-cp310-abi3-win_amd64.whl", hash = "sha256:ed484f9ddd47f0f1bf0648806cccdb4fe2fb6b19820f9b79a5adf5dcfd1b8c5f", size = 431007 },
{ url = "https://files.pythonhosted.org/packages/b0/b6/732d04d0cdf457d05b7cba83ae73735d91ceced2439735b4500e311c44a5/protobuf-6.30.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:aa4f7dfaed0d840b03d08d14bfdb41348feaee06a828a8c455698234135b4075", size = 417579 },
{ url = "https://files.pythonhosted.org/packages/fc/22/29dd085f6e828ab0424e73f1bae9dbb9e8bb4087cba5a9e6f21dc614694e/protobuf-6.30.1-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:47cd320b7db63e8c9ac35f5596ea1c1e61491d8a8eb6d8b45edc44760b53a4f6", size = 317319 },
{ url = "https://files.pythonhosted.org/packages/26/10/8863ba4baa4660e3f50ad9ae974c47fb63fa6d4089b15f7db82164b1c879/protobuf-6.30.1-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:e3083660225fa94748ac2e407f09a899e6a28bf9c0e70c75def8d15706bf85fc", size = 316213 },
{ url = "https://files.pythonhosted.org/packages/a1/d6/683a3d470398e45b4ad9b6c95b7cbabc32f9a8daf454754f0e3df1edffa6/protobuf-6.30.1-py3-none-any.whl", hash = "sha256:3c25e51e1359f1f5fa3b298faa6016e650d148f214db2e47671131b9063c53be", size = 167064 },
]
[[package]]
@ -1817,20 +1610,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/a9/023730ba63db1e494a271cb018dcd361bd2c917ba7004c3e49d5daf795a2/py_cpuinfo-9.0.0-py3-none-any.whl", hash = "sha256:859625bc251f64e21f077d099d4162689c762b5d6a4c3c97553d56241c9674d5", size = 22335 },
]
[[package]]
name = "py-to-proto"
version = "0.6.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alchemy-logging" },
{ name = "protobuf" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/2d/66/521c7f0e630dd0cd32cd25e2e25d2360cfcf179dbe5de68d3d6142b83336/py_to_proto-0.6.0-py310-none-any.whl", hash = "sha256:48a2a109b81574c63a757e79c09b80a4d18a828928db8232646ff2e4c13e4263", size = 33383 },
{ url = "https://files.pythonhosted.org/packages/85/d4/077c7f02971e243630dc44fe75544d7694ec49fda7a35bf56935af045d1c/py_to_proto-0.6.0-py38-none-any.whl", hash = "sha256:01c5f2f64b31f0bd9dcb073309264c1f6fa3d8fb410edf329ee537d6d69184b3", size = 33397 },
{ url = "https://files.pythonhosted.org/packages/e8/13/58fcb27c1981ad1ab2b2c7921f611c597d741e73e6e1a1340aa19baf96e5/py_to_proto-0.6.0-py39-none-any.whl", hash = "sha256:1d8a4800b4d829819d598f79feb78acdd274f5f50446e3e10e9482edd383104d", size = 33397 },
]
[[package]]
name = "pycparser"
version = "2.22"
@ -1889,11 +1668,11 @@ wheels = [
[[package]]
name = "pyglet"
version = "2.1.11"
version = "2.1.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e3/6b/84c397a74cd33eb377168c682e9e3d6b90c1c10c661e11ea5b397ac8497c/pyglet-2.1.11.tar.gz", hash = "sha256:8285d0af7d0ab443232a81df4d941e0d5c48c18a23ec770b3e5c59a222f5d56e", size = 6594448 }
sdist = { url = "https://files.pythonhosted.org/packages/08/90/7f8a8d939dbf8f6875b957540cc3091e936e41c4ac8f190a9517589678f8/pyglet-2.1.3.tar.gz", hash = "sha256:9a2c3c84228402ea7103443ac8a52060cc1c91419951ec1105845ce30fed2ce8", size = 6515859 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/77/a2/2b09fbff0eedbe44fbf164b321439a38f7c5568d8b754aa197ee45886431/pyglet-2.1.11-py3-none-any.whl", hash = "sha256:fa0f4fdf366cfc5040aeb462416910b0db2fa374b7d620b7a432178ca3fa8af1", size = 1032213 },
{ url = "https://files.pythonhosted.org/packages/83/d6/41208b6741e732a7faf160e89346a17e81b14899bd7ae90058da858083d6/pyglet-2.1.3-py3-none-any.whl", hash = "sha256:5a7eaf35869ecf7451fb49cc064c4c0e9a118eaa5e771667c607125b13f85e33", size = 962091 },
]
[[package]]
@ -2045,11 +1824,12 @@ wheels = [
[[package]]
name = "pywin32"
version = "306"
version = "310"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/08/dc/28c668097edfaf4eac4617ef7adf081b9cf50d254672fcf399a70f5efc41/pywin32-306-cp310-cp310-win32.whl", hash = "sha256:06d3420a5155ba65f0b72f2699b5bacf3109f36acbe8923765c22938a69dfc8d", size = 8506422 },
{ url = "https://files.pythonhosted.org/packages/d3/d6/891894edec688e72c2e308b3243fad98b4066e1839fd2fe78f04129a9d31/pywin32-306-cp310-cp310-win_amd64.whl", hash = "sha256:84f4471dbca1887ea3803d8848a1616429ac94a4a8d05f4bc9c5dcfd42ca99c8", size = 9226392 },
{ url = "https://files.pythonhosted.org/packages/95/da/a5f38fffbba2fb99aa4aa905480ac4b8e83ca486659ac8c95bce47fb5276/pywin32-310-cp310-cp310-win32.whl", hash = "sha256:6dd97011efc8bf51d6793a82292419eba2c71cf8e7250cfac03bba284454abc1", size = 8848240 },
{ url = "https://files.pythonhosted.org/packages/aa/fe/d873a773324fa565619ba555a82c9dabd677301720f3660a731a5d07e49a/pywin32-310-cp310-cp310-win_amd64.whl", hash = "sha256:c3e78706e4229b915a0821941a84e7ef420bf2b77e08c9dae3c76fd03fd2ae3d", size = 9601854 },
{ url = "https://files.pythonhosted.org/packages/3c/84/1a8e3d7a15490d28a5d816efa229ecb4999cdc51a7c30dd8914f669093b8/pywin32-310-cp310-cp310-win_arm64.whl", hash = "sha256:33babed0cf0c92a6f94cc6cc13546ab24ee13e3e800e61ed87609ab91e4c8213", size = 8522963 },
]
[[package]]
@ -2151,15 +1931,6 @@ socks = [
{ name = "pysocks" },
]
[[package]]
name = "retrying"
version = "1.4.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c8/5a/b17e1e257d3e6f2e7758930e1256832c9ddd576f8631781e6a072914befa/retrying-1.4.2.tar.gz", hash = "sha256:d102e75d53d8d30b88562d45361d6c6c934da06fab31bd81c0420acb97a8ba39", size = 11411 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/67/f3/6cd296376653270ac1b423bb30bd70942d9916b6978c6f40472d6ac038e7/retrying-1.4.2-py3-none-any.whl", hash = "sha256:bbc004aeb542a74f3569aeddf42a2516efefcdaff90df0eb38fbfbf19f179f59", size = 10859 },
]
[[package]]
name = "rfc3339-validator"
version = "0.1.4"
@ -2420,20 +2191,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/4b/528ccf7a982216885a1ff4908e886b8fb5f19862d1962f56a3fce2435a70/starlette-0.46.1-py3-none-any.whl", hash = "sha256:77c74ed9d2720138b25875133f3a2dae6d854af2ec37dceb56aef370c1d8a227", size = 71995 },
]
[[package]]
name = "superfsmon"
version = "1.2.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "supervisor", marker = "sys_platform != 'win32'" },
{ name = "supervisor-win", marker = "sys_platform == 'win32'" },
{ name = "watchdog" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/c2/269264babce3c29f5721cdb7c79ab4930562b67786bb6e5cc838e36e3530/superfsmon-1.2.3.tar.gz", hash = "sha256:fe5918872dc258eacff98cd054b28b73531f9897f72f8583fb2bbd448fc33928", size = 5186 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bd/c5/d8fbf5c3901db69f7b1e25708fc865570712264026d06f75c5d535ec4ab1/superfsmon-1.2.3-py3-none-any.whl", hash = "sha256:da798e2a2c260fa633213df9f2f26d504fe234f78886e5f62ae4d81f0130bdf7", size = 4738 },
]
[[package]]
name = "supervisor"
version = "4.2.5"
@ -2446,41 +2203,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/7a/0ad3973941590c040475046fef37a2b08a76691e61aa59540828ee235a6e/supervisor-4.2.5-py2.py3-none-any.whl", hash = "sha256:2ecaede32fc25af814696374b79e42644ecaba5c09494c51016ffda9602d0f08", size = 319561 },
]
[[package]]
name = "supervisor-win"
version = "4.7.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pywin32", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/98/48/5d6cd1b7492bf2c11452fd638de45519d2c103caed70c5bdb4ecebbac568/supervisor-win-4.7.0.tar.gz", hash = "sha256:c474d92edc7050b55adae2f7c7789d5d69f180dee7868a27673b1d38f8bea484", size = 397342 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/69/4d/3a493f15f5b80608857ef157f382ace494f51d9031e6bee6082437dd1403/supervisor_win-4.7.0-py2.py3-none-any.whl", hash = "sha256:bd98554c2a0878704c3f3fd95e38965d9986eae6a2ad29f34d73d0aee138a481", size = 303996 },
]
[[package]]
name = "svgpathtools"
version = "1.7.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "scipy" },
{ name = "svgwrite" },
]
sdist = { url = "https://files.pythonhosted.org/packages/df/5c/27c896f25e794d8eb1e75a1ab04fad3fcc272b5251d20f634a669e858da0/svgpathtools-1.7.1.tar.gz", hash = "sha256:beaef20fd78164aa5f0a7d4fd164ef20cb0d3d015cdec50c8c168e9d6547f041", size = 2135227 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/00/c23f53a9e91092239ff6f1fcc39463626e293f6b24898739996fe2a6eebd/svgpathtools-1.7.1-py2.py3-none-any.whl", hash = "sha256:3cbb8ba0e8d200f9639034608d9c55b68efbc1bef99ea99559a3e7cb024fb738", size = 68280 },
]
[[package]]
name = "svgwrite"
version = "1.4.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/16/c1/263d4e93b543390d86d8eb4fc23d9ce8a8d6efd146f9427364109004fa9b/svgwrite-1.4.3.zip", hash = "sha256:a8fbdfd4443302a6619a7f76bc937fc683daf2628d9b737c891ec08b8ce524c3", size = 189516 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/84/15/640e399579024a6875918839454025bb1d5f850bb70d96a11eabb644d11c/svgwrite-1.4.3-py3-none-any.whl", hash = "sha256:bb6b2b5450f1edbfa597d924f9ac2dd099e625562e492021d7dd614f65f8a22d", size = 67122 },
]
[[package]]
name = "tensorboard"
version = "2.19.0"
@ -2768,21 +2490,15 @@ source = { editable = "." }
dependencies = [
{ name = "baumer-neoapi" },
{ name = "bytetracker" },
{ name = "dearpygui" },
{ name = "deep-sort-realtime" },
{ name = "facenet-pytorch" },
{ name = "ffmpeg-python" },
{ name = "foucault" },
{ name = "gdown" },
{ name = "grpcio-tools" },
{ name = "ipywidgets" },
{ name = "jsonlines" },
{ name = "noise" },
{ name = "nptyping" },
{ name = "open3d" },
{ name = "opencv-python" },
{ name = "pandas-helper-calc" },
{ name = "py-to-proto" },
{ name = "pyglet" },
{ name = "pyglet-cornerpin" },
{ name = "python-statemachine" },
@@ -2791,9 +2507,7 @@ dependencies = [
{ name = "setproctitle" },
{ name = "shapely" },
{ name = "simplification" },
{ name = "superfsmon" },
{ name = "supervisor" },
{ name = "svgpathtools" },
{ name = "tensorboardx" },
{ name = "torch", version = "1.12.1", source = { registry = "https://pypi.org/simple" }, marker = "sys_platform != 'linux'" },
{ name = "torch", version = "1.12.1+cu113", source = { url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl" }, marker = "sys_platform == 'linux'" },
@@ -2803,29 +2517,22 @@ dependencies = [
{ name = "trajectron-plus-plus" },
{ name = "tsmoothie" },
{ name = "ultralytics" },
{ name = "velodyne-decoder" },
]
[package.metadata]
requires-dist = [
{ name = "baumer-neoapi", path = "../../Downloads/Baumer_neoAPI_1.5.0_lin_x86_64_python/wheel/baumer_neoapi-1.5.0-cp34.cp35.cp36.cp37.cp38.cp39.cp310.cp311.cp312-none-linux_x86_64.whl" },
{ name = "bytetracker", git = "https://github.com/rubenvandeven/bytetrack-pip" },
{ name = "dearpygui", specifier = ">=2.1.0" },
{ name = "deep-sort-realtime", specifier = ">=1.3.2,<2" },
{ name = "facenet-pytorch", specifier = ">=2.5.3" },
{ name = "ffmpeg-python", specifier = ">=0.2.0,<0.3" },
{ name = "foucault", git = "https://git.rubenvandeven.com/r/conductofconduct" },
{ name = "gdown", specifier = ">=4.7.1,<5" },
{ name = "grpcio-tools", specifier = ">=1.76.0" },
{ name = "ipywidgets", specifier = ">=8.1.5,<9" },
{ name = "jsonlines", specifier = ">=4.0.0,<5" },
{ name = "noise", specifier = ">=1.2.2" },
{ name = "nptyping", specifier = ">=2.5.0" },
{ name = "open3d", specifier = ">=0.19.0" },
{ name = "opencv-python", path = "opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl" },
{ name = "pandas-helper-calc", git = "https://github.com/scls19fr/pandas-helper-calc" },
{ name = "py-to-proto", specifier = ">=0.6.0" },
{ name = "pyglet", specifier = ">=2.1.8,<3" },
{ name = "pyglet", specifier = ">=2.0.15,<3" },
{ name = "pyglet-cornerpin", specifier = ">=0.3.0,<0.4" },
{ name = "python-statemachine", specifier = ">=2.5.0" },
{ name = "pyusb", specifier = ">=1.3.1,<2" },
@@ -2833,9 +2540,7 @@ requires-dist = [
{ name = "setproctitle", specifier = ">=1.3.3,<2" },
{ name = "shapely", specifier = ">=2.1" },
{ name = "simplification", specifier = ">=0.7.12" },
{ name = "superfsmon", specifier = ">=1.2.3" },
{ name = "supervisor", specifier = ">=4.2.5" },
{ name = "svgpathtools", specifier = ">=1.7.1" },
{ name = "tensorboardx", specifier = ">=2.6.2.2,<3" },
{ name = "torch", marker = "python_full_version < '3.10' or python_full_version >= '4' or sys_platform != 'linux'", specifier = "==1.12.1" },
{ name = "torch", marker = "python_full_version >= '3.10' and python_full_version < '4' and sys_platform == 'linux'", url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl" },
@@ -2845,7 +2550,6 @@ requires-dist = [
{ name = "trajectron-plus-plus", editable = "../Trajectron-plus-plus" },
{ name = "tsmoothie", specifier = ">=1.0.5,<2" },
{ name = "ultralytics", specifier = "~=8.3" },
{ name = "velodyne-decoder", specifier = ">=3.1.0" },
]
[[package]]
@@ -2873,11 +2577,11 @@ wheels = [
[[package]]
name = "typing-extensions"
version = "4.15.0"
version = "4.13.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391 }
sdist = { url = "https://files.pythonhosted.org/packages/0e/3e/b00a62db91a83fff600de219b6ea9908e6918664899a2d85db222f4fbf19/typing_extensions-4.13.0.tar.gz", hash = "sha256:0a4ac55a5820789d87e297727d229866c9650f6521b64206413c4fbada24d95b", size = 106520 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614 },
{ url = "https://files.pythonhosted.org/packages/e0/86/39b65d676ec5732de17b7e3c476e45bb80ec64eb50737a8dce1a4178aba1/typing_extensions-4.13.0-py3-none-any.whl", hash = "sha256:c8dd92cc0d6425a97c18fbb9d1954e5ff92c1ca881a309c45f06ebc0b79058e5", size = 45683 },
]
[[package]]
@@ -3000,57 +2704,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/3d/7b22abbdb059d551507275a2815bc2b1974e3b9f6a13781c1eac9e858965/vbuild-0.8.2-py2.py3-none-any.whl", hash = "sha256:d76bcc976a1c53b6a5776ac947606f9e7786c25df33a587ebe33ed09dd8a1076", size = 9371 },
]
[[package]]
name = "velodyne-decoder"
version = "3.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "dpkt" },
{ name = "importlib-resources" },
{ name = "numpy" },
{ name = "pyyaml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/21/92/9f329e46b1b2e2f1010a4ca99b159633fd92404889066d3694b90c2589ca/velodyne_decoder-3.1.0.tar.gz", hash = "sha256:d87da7e1ccda58eddf573edb7c6a2b299fffd9b687e1ea5f24f2fc1b94a1abcd", size = 75189 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/66/c3/7c0cdf266c9007821614004f24367593e35ea1b794090d91a36e57b22df3/velodyne_decoder-3.1.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:a9a021d6a7e382bd8604d640434ccea777ca716f74e81eae68d0d825525ed863", size = 267943 },
{ url = "https://files.pythonhosted.org/packages/6c/11/9e19533911eb9bd20fb2598ccf9579c3ec1fc908111a69e2f10ca2682397/velodyne_decoder-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5c6a263fe2e5d5f6fcbda54aa5a4359ff74069e9c044414712be43b277e02607", size = 249930 },
{ url = "https://files.pythonhosted.org/packages/64/31/2a509389f9496e62b71c6278c7fe27666d9d2e424e085e0b166abf8d918a/velodyne_decoder-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc2b5015a9d3286bbd0dac2168227892d74b69df276d8409288802163e0b29d2", size = 375722 },
{ url = "https://files.pythonhosted.org/packages/10/bd/3214b3da85a892c57b179159b5dddcc3fc6a749ed8b8760c8f88a165660e/velodyne_decoder-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdef9a6da33e77bbfbb504279590e087b40cb79e7a0d1d773d6850913970cac6", size = 396289 },
{ url = "https://files.pythonhosted.org/packages/16/a8/32abf4ab8421a1874ecf6f2e47e0ea7ed9c95e24a3d88b6eff84177f6aaa/velodyne_decoder-3.1.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cd1b8af06ed862cfd65a430febf523c3a84ca43eb2123783d2c5e7bf1b4e3744", size = 804012 },
{ url = "https://files.pythonhosted.org/packages/3a/33/531c3cd86571b22b44a7d579fb6df550aaa554910576970d9311dee3e0cc/velodyne_decoder-3.1.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0523003f672fee4de4e0b2444ee92891601ac173dd96d7687d26da46a47790df", size = 844248 },
{ url = "https://files.pythonhosted.org/packages/f8/23/de92d024ed525517c308c5ca347b24fea110b45df363734628670589f832/velodyne_decoder-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:8830cb92e031314ad809141b6eff7effeed94fd01f743a7e430b064d613401f2", size = 243310 },
{ url = "https://files.pythonhosted.org/packages/32/db/f746aedcdf35a694ddf431c08aada87db2a6a03a3ed6cc64212f064a466f/velodyne_decoder-3.1.0-cp312-abi3-macosx_10_15_x86_64.whl", hash = "sha256:60dc598ab6bdb461351ad5923b66bc219bfda1801a355b15e4d76742344dda28", size = 265217 },
{ url = "https://files.pythonhosted.org/packages/28/21/ba36a151b34f783005cdc281f3218ada97c63788a26d0be23689752995ec/velodyne_decoder-3.1.0-cp312-abi3-macosx_11_0_arm64.whl", hash = "sha256:47af0c685f1a80b915b179f417eef2499496fb1f15e5449aa34769f32cef67a2", size = 246741 },
{ url = "https://files.pythonhosted.org/packages/fe/f3/6450f5d35ac345027b742b5f0225277c6de25158635a719d87b1998cfa59/velodyne_decoder-3.1.0-cp312-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c88fa75eefa60860802247c11cc58c8a9879dc308b5045d021009d18086a411", size = 372638 },
{ url = "https://files.pythonhosted.org/packages/48/6e/4d47f1dfba4d100d861ca937dba645d3d7ca3f6dfd9879ef5c7098afa342/velodyne_decoder-3.1.0-cp312-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dcaddddcd410220166a5fce461a67b0cb6f098e64a79cfbb24559ca239ac87e", size = 393789 },
{ url = "https://files.pythonhosted.org/packages/f4/2a/98fe5f64e3d826919f4e9399efd9460b742c679b891a1799759d9e2bf906/velodyne_decoder-3.1.0-cp312-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:854f0c93522589c3ec0cf884a92566906dac2c606e36d59e42eff52e088a3c25", size = 801652 },
{ url = "https://files.pythonhosted.org/packages/76/d8/6865ab284f2d924607c2f1a2a91041ac9511a2d3722f007fba5ad8b2a4db/velodyne_decoder-3.1.0-cp312-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:8b82d3e55934bc2b078bbb211012a66c74ec98d2dc5965a64c4865be8bf4c5a7", size = 841421 },
{ url = "https://files.pythonhosted.org/packages/82/d9/051784cb0eff7177f44f2d080fab545e612ddc8b47ed3b5ca5364b0abfe8/velodyne_decoder-3.1.0-cp312-abi3-win_amd64.whl", hash = "sha256:27e192dcb364460251e7f6ba3552541dcbb97e99a24b96a41e2f61f0e2036074", size = 242691 },
]
[[package]]
name = "watchdog"
version = "6.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/db/7d/7f3d619e951c88ed75c6037b246ddcf2d322812ee8ea189be89511721d54/watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282", size = 131220 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/56/90994d789c61df619bfc5ce2ecdabd5eeff564e1eb47512bd01b5e019569/watchdog-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d1cdb490583ebd691c012b3d6dae011000fe42edb7a82ece80965b42abd61f26", size = 96390 },
{ url = "https://files.pythonhosted.org/packages/55/46/9a67ee697342ddf3c6daa97e3a587a56d6c4052f881ed926a849fcf7371c/watchdog-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bc64ab3bdb6a04d69d4023b29422170b74681784ffb9463ed4870cf2f3e66112", size = 88389 },
{ url = "https://files.pythonhosted.org/packages/44/65/91b0985747c52064d8701e1075eb96f8c40a79df889e59a399453adfb882/watchdog-6.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c897ac1b55c5a1461e16dae288d22bb2e412ba9807df8397a635d88f671d36c3", size = 89020 },
{ url = "https://files.pythonhosted.org/packages/30/ad/d17b5d42e28a8b91f8ed01cb949da092827afb9995d4559fd448d0472763/watchdog-6.0.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c7ac31a19f4545dd92fc25d200694098f42c9a8e391bc00bdd362c5736dbf881", size = 87902 },
{ url = "https://files.pythonhosted.org/packages/5c/ca/c3649991d140ff6ab67bfc85ab42b165ead119c9e12211e08089d763ece5/watchdog-6.0.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9513f27a1a582d9808cf21a07dae516f0fab1cf2d7683a742c498b93eedabb11", size = 88380 },
{ url = "https://files.pythonhosted.org/packages/a9/c7/ca4bf3e518cb57a686b2feb4f55a1892fd9a3dd13f470fca14e00f80ea36/watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13", size = 79079 },
{ url = "https://files.pythonhosted.org/packages/5c/51/d46dc9332f9a647593c947b4b88e2381c8dfc0942d15b8edc0310fa4abb1/watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379", size = 79078 },
{ url = "https://files.pythonhosted.org/packages/d4/57/04edbf5e169cd318d5f07b4766fee38e825d64b6913ca157ca32d1a42267/watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e", size = 79076 },
{ url = "https://files.pythonhosted.org/packages/ab/cc/da8422b300e13cb187d2203f20b9253e91058aaf7db65b74142013478e66/watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f", size = 79077 },
{ url = "https://files.pythonhosted.org/packages/2c/3b/b8964e04ae1a025c44ba8e4291f86e97fac443bca31de8bd98d3263d2fcf/watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26", size = 79078 },
{ url = "https://files.pythonhosted.org/packages/62/ae/a696eb424bedff7407801c257d4b1afda455fe40821a2be430e173660e81/watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c", size = 79077 },
{ url = "https://files.pythonhosted.org/packages/b5/e8/dbf020b4d98251a9860752a094d09a65e1b436ad181faf929983f697048f/watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2", size = 79078 },
{ url = "https://files.pythonhosted.org/packages/07/f6/d0e5b343768e8bcb4cda79f0f2f55051bf26177ecd5651f84c07567461cf/watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a", size = 79065 },
{ url = "https://files.pythonhosted.org/packages/db/d9/c495884c6e548fce18a8f40568ff120bc3a4b7b99813081c8ac0c936fa64/watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680", size = 79070 },
{ url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067 },
]
[[package]]
name = "watchfiles"
version = "1.0.4"
@@ -3202,12 +2855,3 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fd/b7/2e9a5b18eb0fe24c3a0e8bae994e812ed9852ab4fd067c0107fadde0d5f0/yarl-1.18.3-cp310-cp310-win_amd64.whl", hash = "sha256:93b2e109287f93db79210f86deb6b9bbb81ac32fc97236b16f7433db7fc437d8", size = 90484 },
{ url = "https://files.pythonhosted.org/packages/f5/4b/a06e0ec3d155924f77835ed2d167ebd3b211a7b0853da1cf8d8414d784ef/yarl-1.18.3-py3-none-any.whl", hash = "sha256:b57f4f58099328dfb26c6a771d09fb20dbbae81d20cfb66141251ea063bd101b", size = 45109 },
]
[[package]]
name = "zipp"
version = "3.23.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e3/02/0f2892c661036d50ede074e376733dca2ae7c6eb617489437771209d4180/zipp-3.23.0.tar.gz", hash = "sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166", size = 25547 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/54/647ade08bf0db230bfea292f893923872fd20be6ac6f53b2b936ba839d75/zipp-3.23.0-py3-none-any.whl", hash = "sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e", size = 10276 },
]