Compare commits
No commits in common. "main" and "animation_window" have entirely different histories.
36 changed files with 5098 additions and 12539 deletions
@@ -96,7 +96,7 @@
        },
        "pred_state": {
            "PEDESTRIAN": {
                "position": [
                "velocity": [
                    "x",
                    "y"
                ]
45 README.md
@@ -3,51 +3,24 @@
## Install

* Run `bash build_opencv_with_gstreamer.sh` to build opencv with gstreamer support
* Use `uv` to install
* Use pyenv + poetry to install

## How to

> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo. Also, [laserspace](https://git.rubenvandeven.com/security_vision/laserspace) is used to map the shapes (which are generated by `stage.py`) to lasers, so as to use specific optimization techniques for the paths before sending them to the DAC.
> See also the sibling repo [traptools](https://git.rubenvandeven.com/security_vision/traptools) for camera calibration and homography tools that are needed for this repo.

These are roughly the steps to go from data gathering to training:

1. Make sure to have some recordings with a fixed camera. [UPDATE: not needed anymore, except for calibration & homography footage]
   * Recording can be done with `ffmpeg -rtsp_transport udp -i rtsp://USER:PASS@IP:554/Streaming/Channels/1.mp4 hof2-cam-$(date "+%Y%m%d-%H%M").mp4`
2. Follow the steps in the auxiliary [traptools](https://git.rubenvandeven.com/security_vision/traptools) repository to obtain (1) camera matrix, lens distortion, image dimensions, and (2+3) homography
3. Run the tracker, e.g. `uv run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
   * Note: You can run this right off the camera stream: `uv run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`, each recording adding a new file to the `raw` folder.
4. Parse tracker data to Trajectron format: `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`. Optionally, smooth tracks: `--smooth-tracks`
3. Run the tracker, e.g. `poetry run tracker --detector ultralytics --homography ../DATASETS/NAME/homography.json --video-src ../DATASETS/NAME/*.mp4 --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`
   * Note: You can run this right off the camera stream: `poetry run tracker --eval_device cuda:0 --detector ultralytics --video-src rtsp://USER:PW@ADDRESS/STREAM --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --save-for-training EXPERIMENTS/raw/NAME/`, each recording adding a new file to the `raw` folder.
4. Parse tracker data to Trajectron format: `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME`. Optionally, smooth tracks: `--smooth-tracks`
   * Optionally, add a map: ideally an RGB png: 3 layers of 0-255
   * `uv run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
5. Train Trajectron model: `uv run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
   * `poetry run process_data --src-dir EXPERIMENTS/raw/NAME --dst-dir EXPERIMENTS/trajectron-data/ --name NAME --smooth-tracks --camera-fps 12 --homography ../DATASETS/NAME/homography.json --calibration ../DATASETS/NAME/calibration.json --filter-displacement 2 --map-img-path ../DATASETS/NAME/map.png`
5. Train Trajectron model: `poetry run trajectron_train --eval_every 10 --vis_every 1 --train_data_dict NAME_train.pkl --eval_data_dict NAME_val.pkl --offline_scene_graph no --preprocess_workers 8 --log_dir EXPERIMENTS/models --log_tag _NAME --train_epochs 100 --conf EXPERIMENTS/config.json --batch_size 256 --data_dir EXPERIMENTS/trajectron-data`
6. Then run!
   * `uv run supervisord`
<!-- * On a video file (you can use a wildcard) `DISPLAY=:1 uv run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here to run over an SSH connection and display on the local monitor)
* On a video file (you can use a wildcard) `DISPLAY=:1 poetry run trapserv --remote-log-addr 100.69.123.91 --eval_device cuda:0 --detector ultralytics --homography ../DATASETS/NAME/homography.json --eval_data_dict EXPERIMENTS/trajectron-data/hof2s-m_test.pkl --video-src ../DATASETS/NAME/*.mp4 --model_dir EXPERIMENTS/models/models_DATE_NAME/ --smooth-predictions --smooth-tracks --num-samples 3 --render-window --calibration ../DATASETS/NAME/calibration.json` (the DISPLAY environment variable is used here to run over an SSH connection and display on the local monitor)
* or on the RTSP stream, which uses gstreamer to substantially reduce latency compared to the default ffmpeg bindings in OpenCV.
* To just have a single trajectory pulled from the distribution use `--full-dist`. Also try `--z_mode`. -->

## Testnight 2025-06-13

Step-by-step plan:

* Hang lasers. Connect all cables etc.
* `DISPLAY=:0 cargo run --example laser_frame_stream_gui`
* Use numbers to pick a nice shape. Use this to make sure both lasers cover the right area. (If it doesn't work, flip some switches in the GUI; the laser output should then start.)
* In trap folder: `uv run supervisorctl start video`
* In laserspace folder: `DISPLAY=:0 cargo run --bin render_lines_gui` and use the GUI to draw and tweak the projection area
* Use the save button to store the configuration
/*
* In trap folder: `DISPLAY=:0 uv run trap_laser_calibration`
* Follow the instructions:
  * camera points: 1-9 or cursor to create/select/move points
  * move laser: vim movement keys: hjkl, use shift to move faster
  * `c` to calibrate. Matrix is output to cli.
  * `q` to quit
* Saved to `laser_calib.json`; copy the H field to `trap_rust/src/trap/laser.rs` (to e.g. TMP_STUDIO_CM_8)
* Restart `render_lines_gui` with the new homographies
  * `DISPLAY=:0 cargo run --bin render_lines_gui`
*/
* Change the video source in `supervisord.conf` and run `uv run supervisorctl update` to switch
* **If tracking is slow and there's no prediction:**
  * `uv run python -c "import torch;print(torch.cuda.is_available())"`
* To just have a single trajectory pulled from the distribution use `--full-dist`. Also try `--z_mode`.
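For reference, the commands of steps 3-5 can be chained in a small driver script. This is an illustrative sketch, not part of the repo: the dataset name and video filename are placeholders, and it simply shells out to the same `uv run` entry points listed above.

```python
# pipeline.py -- hypothetical helper chaining the documented commands
import subprocess

DATASET = "NAME"  # placeholder, as in the examples above

steps = [
    f"uv run tracker --detector ultralytics --homography ../DATASETS/{DATASET}/homography.json "
    f"--video-src ../DATASETS/{DATASET}/demo.mp4 --calibration ../DATASETS/{DATASET}/calibration.json "
    f"--save-for-training EXPERIMENTS/raw/{DATASET}/",
    f"uv run process_data --src-dir EXPERIMENTS/raw/{DATASET} --dst-dir EXPERIMENTS/trajectron-data/ "
    f"--name {DATASET} --smooth-tracks",
    f"uv run trajectron_train --train_data_dict {DATASET}_train.pkl --eval_data_dict {DATASET}_val.pkl "
    f"--log_dir EXPERIMENTS/models --log_tag _{DATASET} --train_epochs 100 "
    f"--conf EXPERIMENTS/config.json --data_dir EXPERIMENTS/trajectron-data",
]

for cmd in steps:
    subprocess.run(cmd, shell=True, check=True)  # abort the pipeline on the first failure
```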
@@ -2,10 +2,10 @@
# Default YOLO tracker settings for ByteTrack tracker https://github.com/ifzhang/ByteTrack

tracker_type: bytetrack # tracker type, ['botsort', 'bytetrack']
track_high_thresh: 0.000001 # threshold for the first association
track_low_thresh: 0.000001 # threshold for the second association
new_track_thresh: 0.000001 # threshold for init new track if the detection does not match any tracks
track_buffer: 10 # buffer to calculate the time when to remove tracks
match_thresh: 0.99 # threshold for matching tracks
track_high_thresh: 0.0001 # threshold for the first association
track_low_thresh: 0.0001 # threshold for the second association
new_track_thresh: 0.0001 # threshold for init new track if the detection does not match any tracks
track_buffer: 50 # buffer to calculate the time when to remove tracks
match_thresh: 0.95 # threshold for matching tracks
fuse_score: True # Whether to fuse confidence scores with the iou distances before matching
# min_box_area: 10 # threshold for min box areas (for tracker evaluation, not used for now)
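These thresholds are consumed by Ultralytics' built-in ByteTrack integration. A minimal sketch of how a custom tracker YAML like the one above can be passed in (the weights file, video path and YAML filename are placeholders, not from the repo):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights
# tracker= accepts a path to a custom tracker config such as the YAML above
for result in model.track(source="demo.mp4", tracker="custom_bytetrack.yaml", stream=True):
    boxes = result.boxes  # detections, with a persistent .id per track
```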
3930 poetry.lock (generated, new file)
File diff suppressed because it is too large
@@ -1,45 +1,11 @@
[project]
[tool.poetry]
name = "trap"
version = "0.1.0"
description = "Art installation with trajectory prediction"
authors = [{ name = "Ruben van de Ven", email = "git@rubenvandeven.com" }]
requires-python = "~=3.10.4"
authors = ["Ruben van de Ven <git@rubenvandeven.com>"]
readme = "README.md"
dependencies = [
    "trajectron-plus-plus",
    "torch==1.12.1",
    "torchvision==0.13.1",
    "deep-sort-realtime>=1.3.2,<2",
    "ultralytics~=8.3",
    "ffmpeg-python>=0.2.0,<0.3",
    "torchreid>=0.2.5,<0.3",
    "gdown>=4.7.1,<5",
    "pandas-helper-calc",
    "tsmoothie>=1.0.5,<2",
    "pyglet>=2.0.15,<3",
    "pyglet-cornerpin>=0.3.0,<0.4",
    "opencv-python",
    "setproctitle>=1.3.3,<2",
    "bytetracker",
    "jsonlines>=4.0.0,<5",
    "tensorboardx>=2.6.2.2,<3",
    "shapely>=2.1",
    #"shapely>=1,<2",
    "baumer-neoapi",
    "qrcode~=8.0",
    "pyusb>=1.3.1,<2",
    "ipywidgets>=8.1.5,<9",
    "foucault",
    "python-statemachine>=2.5.0",
    "facenet-pytorch>=2.5.3",
    "simplification>=0.7.12",
    "supervisor>=4.2.5",
    "superfsmon>=1.2.3",
    "noise>=1.2.2",
]

[project.scripts]
start = "trap.conductofconduct:run"
[tool.poetry.scripts]
trapserv = "trap.plumber:start"
tracker = "trap.tools:tracker_preprocess"
compare = "trap.tools:tracker_compare"
@@ -47,26 +13,37 @@ process_data = "trap.process_data:main"
blacklist = "trap.tools:blacklist_tracks"
rewrite_tracks = "trap.tools:rewrite_raw_track_files"

trap_video_source = "trap.frame_emitter:FrameEmitter.parse_and_start"
trap_tracker = "trap.tracker:Tracker.parse_and_start"
trap_stage = "trap.stage:Stage.parse_and_start"
trap_prediction = "trap.prediction_server:PredictionServer.parse_and_start"
trap_render_cv = "trap.cv_renderer:CvRenderer.parse_and_start"
trap_monitor = "trap.monitor:Monitor.parse_and_start" # migrate timer
trap_laser_calibration = "trap.laser_calibration:LaserCalibration.parse_and_start" # migrate timer
[tool.poetry.dependencies]
python = "^3.10,<3.12"

[tool.uv]
trajectron-plus-plus = { path = "../Trajectron-plus-plus/", develop = true }
#trajectron-plus-plus = { git = "https://git.rubenvandeven.com/security_vision/Trajectron-plus-plus/" }
torch = [
    { version = "1.12.1" },
    # { url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp38-cp38-linux_x86_64.whl", markers = "python_version ~= '3.8' and sys_platform == 'linux'" },
    { url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl", markers = "python_version ~= '3.10' and sys_platform == 'linux'" },
]

[tool.uv.sources]
trajectron-plus-plus = { path = "../Trajectron-plus-plus/", editable = true }
torch = [{ url = "https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl", marker = "python_version ~= '3.10' and sys_platform == 'linux'" }]
torchvision = [{ url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp310-cp310-linux_x86_64.whl", marker = "python_version ~= '3.10' and sys_platform == 'linux'" }]
pandas-helper-calc = { git = "https://github.com/scls19fr/pandas-helper-calc" }
bytetracker = { git = "https://github.com/rubenvandeven/bytetrack-pip" }
baumer-neoapi = { path = "../../Downloads/Baumer_neoAPI_1.5.0_lin_x86_64_python/wheel/baumer_neoapi-1.5.0-cp34.cp35.cp36.cp37.cp38.cp39.cp310.cp311.cp312-none-linux_x86_64.whl" }
foucault = { git = "https://git.rubenvandeven.com/r/conductofconduct" }
opencv-python = { path = "./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl" }
torchvision = [
    { version = "0.13.1" },
    # { url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp38-cp38-linux_x86_64.whl", markers = "python_version ~= '3.8' and sys_platform == 'linux'" },
    { url = "https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp310-cp310-linux_x86_64.whl", markers = "python_version ~= '3.10' and sys_platform == 'linux'" },
]
deep-sort-realtime = "^1.3.2"
ultralytics = "^8.3"
ffmpeg-python = "^0.2.0"
torchreid = "^0.2.5"
gdown = "^4.7.1"
pandas-helper-calc = { git = "https://github.com/scls19fr/pandas-helper-calc" }
tsmoothie = "^1.0.5"
pyglet = "^2.0.15"
pyglet-cornerpin = "^0.3.0"
opencv-python = { file = "./opencv_python-4.10.0.84-cp310-cp310-linux_x86_64.whl" }
setproctitle = "^1.3.3"
bytetracker = { git = "https://github.com/rubenvandeven/bytetrack-pip" }
jsonlines = "^4.0.0"
tensorboardx = "^2.6.2.2"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
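Both script tables map the same console commands to the same callables: `uv run tracker` (or `poetry run tracker`) resolves to `trap.tools:tracker_preprocess`. A hedged sketch of what such an entry point amounts to:

```python
# equivalent of `uv run tracker` / `poetry run tracker`, per the scripts tables above
from trap.tools import tracker_preprocess

if __name__ == "__main__":
    tracker_preprocess()
```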
@@ -1,55 +0,0 @@
[inet_http_server]
port = *:8293
# username = user
# password = 123

[supervisord]
nodaemon = false

; The rpcinterface:supervisor section must remain in the config file for
; RPC (supervisorctl/web interface) to work. Additional interfaces may be
; added by defining them in separate [rpcinterface:x] sections.
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl = http://localhost:8293

[program:monitor]
command=uv run trap_monitor
numprocs=1
directory=%(here)s
autostart=false

[program:video]
command=uv run trap_video_source --homography ../DATASETS/hof3/homography.json --video-src ../DATASETS/hof3/hof3-cam-demo-twoperson.mp4 --calibration ../DATASETS/hof3/calibration.json --video-loop
# command=uv run trap_video_source --homography ../DATASETS/hof3-cam-baumer/homography.json --video-src gige://../DATASETS/hof3-cam-baumer/gige_config.json --calibration ../DATASETS/hof3-cam-baumer/calibration.json
directory=%(here)s

[program:tracker]
command=uv run trap_tracker --smooth-tracks
directory=%(here)s

[program:stage]
command=uv run trap_stage
directory=%(here)s

[program:predictor]
command=uv run trap_prediction --eval_device cuda:0 --model_dir EXPERIMENTS/models/models_20241229_21_35_13_hof3-m2-ud-split-conv12-f2.0-map-2024-12-29/ --num-samples 1 --map_encoding --eval_data_dict EXPERIMENTS/trajectron-data/hof3-m2-ud-split-nostep-conv12-f2.0-map-2024-12-29_val.pkl --prediction-horizon 120 --gmm-mode True --z-mode
directory=%(here)s

[program:render_cv]
command=uv run trap_render_cv
directory=%(here)s
environment=DISPLAY=":0"
autostart=false
; can be long to quit if rendering to video file
stopwaitsecs=60

# during development auto restart some services when the code changes
[program:superfsmon]
command=superfsmon trap/stage.py stage
directory=%(here)s
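Since this config enables `inet_http_server` and the standard RPC interface, the processes above can also be driven programmatically over supervisor's XML-RPC API. A minimal sketch, assuming supervisord is running on port 8293 as configured above:

```python
from xmlrpc.client import ServerProxy

# supervisord exposes its API at /RPC2 on the inet_http_server port
server = ServerProxy("http://localhost:8293/RPC2")
for proc in server.supervisor.getAllProcessInfo():
    print(proc["name"], proc["statename"])
server.supervisor.startProcess("video")  # same effect as `supervisorctl start video`
```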
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
1031 test_training.ipynb
File diff suppressed because one or more lines are too long
@@ -84,7 +84,7 @@ class AnimationRenderer:
        config = pyglet.gl.Config(sample_buffers=1, samples=4)
        # , fullscreen=self.config.render_window

        display = pyglet.display.get_display()
        display = pyglet.canvas.get_display()
        idx = -1 if self.config.render_window else 0
        screen = display.get_screens()[idx]
        print(display.get_screens())
@@ -170,6 +170,9 @@ class AnimationRenderer:

        self.init_shapes()

        self.init_labels()

@@ -197,6 +200,52 @@ class AnimationRenderer:
        )
        # return process

    def init_shapes(self):
        '''
        Due to an error when running headless, we need to configure options before extending the shapes class
        '''
        class GradientLine(shapes.Line):
            def __init__(self, x, y, x2, y2, width=1, color1=[255,255,255], color2=[255,255,255], batch=None, group=None):
                # print('colors!', colors)
                # assert len(colors) == 6

                r, g, b, *a = color1
                self._rgba1 = (r, g, b, a[0] if a else 255)
                r, g, b, *a = color2
                self._rgba2 = (r, g, b, a[0] if a else 255)

                # print('rgba', self._rgba)

                super().__init__(x, y, x2, y2, width, color1, batch=None, group=None)
                # <pyglet.graphics.vertexdomain.VertexList
                # pyglet.graphics.vertexdomain
                # print(self._vertex_list)

            def _create_vertex_list(self):
                '''
                copy of super()._create_vertex_list but with additional colors'''
                self._vertex_list = self._group.program.vertex_list(
                    6, self._draw_mode, self._batch, self._group,
                    position=('f', self._get_vertices()),
                    colors=('Bn', self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1),
                    translation=('f', (self._x, self._y) * self._num_verts))

            def _update_colors(self):
                self._vertex_list.colors[:] = self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1

            def color1(self, color):
                r, g, b, *a = color
                self._rgba1 = (r, g, b, a[0] if a else 255)
                self._update_colors()

            def color2(self, color):
                r, g, b, *a = color
                self._rgba2 = (r, g, b, a[0] if a else 255)
                self._update_colors()

        self.gradientLine = GradientLine

    def init_labels(self):
        base_color = COLOR_PRIMARY
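For context, a hedged usage sketch (not from the repo): once `init_shapes()` has run, the extended class is available on the renderer instance and interpolates between two RGB(A) endpoint colors.

```python
# assuming `renderer` is an AnimationRenderer after init_shapes()
line = renderer.gradientLine(0, 0, 640, 480, width=3,
                             color1=[255, 0, 0], color2=[0, 0, 255, 128])
line.color2([0, 255, 0])  # update one endpoint's RGBA and re-upload the colors
line.draw()               # unbatched: the class forwards batch=None to pyglet
```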
743 trap/base.py
@@ -1,743 +0,0 @@
from __future__ import annotations

from abc import ABC, abstractmethod
import argparse
from collections import defaultdict
from enum import IntFlag
from itertools import cycle
import json
import logging
from pathlib import Path
import time
import types
from typing import Iterable, Optional, Tuple, Union, List
import cv2
from dataclasses import dataclass, field
import dataclasses

import numpy as np
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
from bytetracker.byte_tracker import STrack as ByteTrackTrack
from bytetracker.basetrack import TrackState as ByteTrackTrackState
import pandas as pd
from shapely import Point

from trap.utils import get_bins, inv_lerp, lerp
from trajectron.environment import Environment, Node, Scene
from urllib.parse import urlparse
from cv2.typing import MatLike

logger = logging.getLogger('trap.base')

class UrlOrPath():
    """
    Some video sources are on a path (files), others a url (some cameras).
    Provide some utilities to easily deal with either.
    """
    def __init__(self, string):
        self.url = urlparse(str(string))

    def __str__(self) -> str:
        return self.url.geturl()

    def is_url(self) -> bool:
        return len(self.url.netloc) > 0

    def path(self) -> Path:
        if self.is_url():
            return Path(self.url.path)
        return Path(self.url.geturl())  # can include scheme, such as C:/

class Space(IntFlag):
    Image = 1        # As detected in the image
    Undistorted = 2  # After applying lens undistortion
    World = 4        # After lens undistort and homography
    Render = 8       # View space of renderer

@dataclass
class Position:
    x: float
    y: float
    conf: float
    state: DetectionState
    frame_nr: int
    det_class: str

class DetectionState(IntFlag):
    Tentative = 1     # state before n_init (see DeepsortTrack)
    Confirmed = 2     # after tentative
    Lost = 4          # lost when DeepsortTrack.time_since_update > 0 but not Deleted
    Interpolated = 8  # A position estimated through interpolation of adjacent detections

    @classmethod
    def from_deepsort_track(cls, track: DeepsortTrack):
        if track.state == DeepsortTrackState.Tentative:
            return cls.Tentative
        if track.state == DeepsortTrackState.Confirmed:
            if track.time_since_update > 0:
                return cls.Lost
            return cls.Confirmed
        raise RuntimeError("Should not run into Deleted entries here")

    @classmethod
    def from_bytetrack_track(cls, track: ByteTrackTrack):
        if track.state == ByteTrackTrackState.New:
            return cls.Tentative
        if track.state == ByteTrackTrackState.Lost:
            return cls.Lost
        # if track.time_since_update > 0:
        if track.state == ByteTrackTrackState.Tracked:
            return cls.Confirmed
        raise RuntimeError("Should not run into Deleted entries here")

def H_from_path(path: Path):
    if path.suffix == '.json':
        with path.open('r') as fp:
            H = np.array(json.load(fp))
    else:
        H = np.loadtxt(path, delimiter=',')
    return H

PointList = List[Tuple[float, float]] | np.ndarray | cv2.typing.MatLike

def scale_homography(H: cv2.Mat, scale: float):
    """Transform the given matrix so that it immediately converts
    the points to img space"""
    new_H = H.copy()
    new_H[:2] = H[:2] * scale
    return new_H
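# Worked example (an aside, not part of the original file): scaling the first
# two rows of H rescales the projected x/y outputs, so a homography that maps
# into world meters can be made to map straight into pixels:
#
#   import numpy as np
#   H = np.eye(3)                     # identity: world == image
#   H_px = scale_homography(H, 100)   # now 1 m -> 100 px
#   # cv2.perspectiveTransform(np.array([[[1., 2.]]]), H_px) -> [[[100., 200.]]]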
class DistortedCamera(ABC):
    @abstractmethod
    def undistort_img(self, img: MatLike):
        return cv2.remap(img, self.map1, self.map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

    def project_img(self, undistorted_img: MatLike, scale: float = 1.0):
        w, h = undistorted_img.shape[1], undistorted_img.shape[0]
        if scale != 1:
            H = scale_homography(self.H, scale)
        else:
            H = self.H
        return cv2.warpPerspective(undistorted_img, H, (w, h))

    def img_to_world(self, img: MatLike, scale=1.):
        img = self.undistort_img(img)
        return self.project_img(img, scale)

    @abstractmethod
    def undistort_points(self, distorted_points: PointList):
        pass

    def project_point(self, point):
        return self.project_points([point])[0]

    def project_points(self, points: PointList, scale: float = 1.0):
        if scale != 1:
            H = scale_homography(self.H, scale)
        else:
            H = self.H

        coords = cv2.perspectiveTransform(np.array([points]), H)
        # if coords.shape[1:] == (1,2):
        coords = np.reshape(coords, (len(points), 2))

        return coords

    @classmethod
    def from_calibfile(cls, calibration_path, H, fps):
        with calibration_path.open('r') as fp:
            data = json.load(fp)
        return cls.from_calibdata(data, H, fps)

    @classmethod
    def from_paths(cls, calibration_path, h_path, fps):
        H = H_from_path(h_path)
        with calibration_path.open('r') as fp:
            calibdata = json.load(fp)
        if 'type' in calibdata and calibdata['type'] == 'fisheye':
            camera = FisheyeCamera.from_calibdata(calibdata, H, fps)
        else:
            camera = Camera.from_calibdata(calibdata, H, fps)
        return camera
        # return cls.from_calibfile(calibration_path, H, fps)

    def points_img_to_world(self, points: PointList, scale=1.):
        # undistort & project
        coords = self.undistort_points(points)
        coords = self.project_points(coords, scale)
        return coords
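# Usage sketch (illustrative; the file paths are placeholders): the factory on
# the base class picks the fisheye or pinhole implementation from the
# calibration JSON, after which image-space points can be lifted to world
# space in one call.
#
#   from pathlib import Path
#   camera = DistortedCamera.from_paths(Path('calibration.json'), Path('homography.json'), fps=12)
#   world_xy = camera.points_img_to_world([(960.0, 540.0)])  # foot point -> world coords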
class FisheyeCamera(DistortedCamera):
    def __init__(self, dim1, dim2, dim3, K, D, new_K, scaled_K, balance, H, fps):
        # dimensions as per: https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-part-2-13990f1b157f
        self.dim1 = dim1  # original image
        self.dim2 = dim2  # dimension of the box to keep after un-distorting the image; influenced by balance
        self.dim3 = dim3  # dimension of the final box where OpenCV will put the undistorted image
        self.K = K
        self.D = D
        self.new_K = new_K
        self.scaled_K = scaled_K
        self.balance = balance

        self.H = H  # homography

        self._R = np.eye(3)
        self.fps = fps

        self.map1, self.map2 = cv2.fisheye.initUndistortRectifyMap(self.scaled_K, self.D, self._R, self.new_K, self.dim3, cv2.CV_16SC2)

    def undistort_img(self, img: MatLike):
        return cv2.remap(img, self.map1, self.map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

    def undistort_points(self, distorted_points: PointList):
        points = cv2.fisheye.undistortPoints(np.array([distorted_points]).astype(np.float32), K=self.scaled_K, D=self.D, R=self._R, P=self.new_K)
        return points[0]

    @property
    def projected_w(self):
        return self.dim3[0]

    @property
    def projected_h(self):
        return self.dim3[1]

    @classmethod
    def from_calibdata(cls, data, H, fps):
        return cls(
            data['dim1'],
            data['dim2'],
            data['dim3'],
            np.array(data['K']),
            np.array(data['D']),
            np.array(data['new_K']),
            np.array(data['scaled_K']),
            data['balance'],
            H, fps)

class Camera(DistortedCamera):
    def __init__(self, mtx: cv2.Mat, dist: cv2.Mat, w: float, h: float, H: cv2.Mat, fps: float):
        self.mtx = mtx
        self.dist = dist
        self.w = w
        self.h = h
        self.H = H
        self.fps = fps

        self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (self.w, self.h), 1, (self.w, self.h))

    @classmethod
    def from_calibdata(cls, data, H, fps):
        return cls(
            np.array(data['camera_matrix']),
            np.array(data['dist_coeff']),
            data['dim']['width'],
            data['dim']['height'],
            H, fps)

    @property
    def projected_w(self):
        return self.w

    @property
    def projected_h(self):
        return self.h

    def undistort_img(self, img: MatLike):
        return cv2.undistort(img, self.mtx, self.dist, None, self.newcameramtx)

    def undistort_points(self, distorted_points: PointList):
        points = cv2.undistortPoints(np.array([distorted_points]).astype('float32'), self.mtx, self.dist, None, self.newcameramtx)
        # print(points.reshape())
        return points.reshape(points.shape[0], 2)

@dataclass
class Detection:
    track_id: str  # deepsort track id association
    l: int   # left   - image space
    t: int   # top    - image space
    w: int   # width  - image space
    h: int   # height - image space
    conf: float  # object detector probability
    state: DetectionState
    frame_nr: int
    det_class: str

    def get_foot_coords(self) -> list[float, float]:
        return [self.l + 0.5 * self.w, self.t + self.h]

    @classmethod
    def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
        return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)

    @classmethod
    def from_bytetrack(cls, bstrack: ByteTrackTrack, frame_nr: int):
        return cls(bstrack.track_id, *bstrack.tlwh, bstrack.score, DetectionState.from_bytetrack_track(bstrack), frame_nr, bstrack.cls)

    def get_scaled(self, scale: float = 1):
        if scale == 1:
            return self

        return Detection(
            self.track_id,
            self.l * scale,
            self.t * scale,
            self.w * scale,
            self.h * scale,
            self.conf,
            self.state,
            self.frame_nr,
            self.det_class)

    def to_ltwh(self):
        return (int(self.l), int(self.t), int(self.w), int(self.h))

    def to_ltrb(self):
        return (int(self.l), int(self.t), int(self.l + self.w), int(self.t + self.h))

# Proxy'd Track, which caches projected history
class ProjectedTrack(object):
    def __init__(self, track: Track, camera: Camera):
        self._track = track
        self.camera = camera  # keep to wrap other calls
        self.projected_history = track.get_projected_history(camera=camera)

    # TODO wrap functions of Track()
    def __getattr__(self, attr):
        return getattr(self._track, attr)

@dataclass
class Track:
    """A bit of a haphazard wrapper around the 'real' tracker to provide
    a history, with which the predictor can work, as we then can deduce velocity
    and acceleration.
    """
    track_id: str = None
    history: List[Detection] = field(default_factory=list)
    predictor_history: Optional[list] = None  # in image space
    predictions: Optional[list] = None
    fps: int = 12  # TODO)) convert this to camera? That way, it incorporates H and dist; alternatively, each track is as a whole attached to a space
    source: Optional[int] = None  # to keep track of processed tracks
    lost: bool = False
    created_at: Optional[float] = None
    frame_index: int = 0
    updated_at: Optional[float] = None

    def __post_init__(self):
        if not self.created_at:
            self.created_at = time.time()
        if not self.updated_at:
            self.updated_at = time.time()

    def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[DistortedCamera] = None) -> np.array:
        foot_coordinates = [d.get_foot_coords() for d in self.history]
        # TODO)) Undistort points before perspective transform
        if len(foot_coordinates):
            if camera:
                coords = camera.points_img_to_world(foot_coordinates)
                return coords
                # coords = cv2.undistortPoints(np.array([foot_coordinates]).astype('float32'), camera.mtx, camera.dist, None, camera.newcameramtx)
                # coords = cv2.perspectiveTransform(np.array(coords), camera.H)
                # return coords.reshape((coords.shape[0], 2))
            else:
                coords = cv2.perspectiveTransform(np.array([foot_coordinates]), H)
                return coords[0]
        return np.array([])

    def get_projected_history_as_dict(self, H, camera: Optional[DistortedCamera] = None) -> dict:
        coords = self.get_projected_history(H, camera)
        return [{"x": c[0], "y": c[1]} for c in coords]

    def get_with_interpolated_history(self) -> Track:
        # new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls, ts, ws, hs, track.history)]
        # new_track = Track(track.track_id, new_history, track.predictor_history, track.predictions)
        new_history = []
        for j in range(len(self.history)):
            a = self.history[j]
            new_history.append(Detection(a.track_id, a.l, a.t, a.w, a.h, a.conf, a.state, a.frame_nr, a.det_class))

            if j+1 >= len(self.history):
                break

            b = self.history[j+1]
            gap = b.frame_nr - a.frame_nr
            if gap < 1:
                logger.error(f"WARNING, gap between frames {a.frame_nr} -> {b.frame_nr} is negative?")
            if gap > 1:
                for g in range(1, gap):
                    l = lerp(a.l, b.l, g/gap)
                    t = lerp(a.t, b.t, g/gap)
                    w = lerp(a.w, b.w, g/gap)
                    h = lerp(a.h, b.h, g/gap)
                    conf = 0
                    state = DetectionState.Lost
                    frame_nr = a.frame_nr + g
                    new_history.append(Detection(a.track_id, l, t, w, h, conf, state, frame_nr, a.det_class))

        return self.get_with_new_history(new_history)

    def get_with_new_history(self, new_history: List[Detection]):
        return Track(
            self.track_id,
            new_history,
            self.predictor_history,
            self.predictions,
            self.fps,
            self.source,
            self.lost,
            self.created_at,
            self.frame_index,
            self.updated_at)

    def is_complete(self):
        diffs = [(b.frame_nr - a.frame_nr) for a, b in zip(self.history[:-1], self.history[1:])]
        return all([d == 1 for d in diffs])

    def get_sampled(self, step_size=1, offset=0):
        """Get copy of track, with every n-th frame"""
        if not self.is_complete():
            t = self.get_with_interpolated_history()
        else:
            t = self

        return Track(
            t.track_id,
            t.history[offset::step_size],
            t.predictor_history,
            t.predictions,
            t.fps/step_size,
            self.source,
            self.lost,
            self.created_at,
            self.frame_index,
            self.updated_at)

    def get_simplified_history(self, distance: float, camera: Camera) -> list[tuple[float, float]]:
        # TODO)) Simplify to get a point every n-th meter
        # useful for both predicting and rendering with laser
        # raise RuntimeError("Not Implemented Yet")
        if len(self.history) < 1:
            return []

        path = self.get_projected_history(H=None, camera=camera)
        new_path: List[dict] = [path[0]]
        lengths = np.sqrt(np.sum(np.diff(path, axis=0)**2, axis=1))
        cum_lengths = np.cumsum(lengths)
        pos = distance
        for a, b, l_a, l_b in zip(path[:-1], path[1:], cum_lengths[:-1], cum_lengths[1:]):
            # check if segment has our next point (pos)
            # because running sequentially, this is if point b
            # is lower than our target position
            if l_b <= pos:
                continue

            relative_t = inv_lerp(l_a, l_b, pos)
            x = lerp(a[0], b[0], relative_t)
            y = lerp(a[1], b[1], relative_t)
            new_path.append([x, y])
            pos += distance

        return new_path

    def get_simplified_history_with_absolute_distance(self, distance: float, camera: Camera) -> list[tuple[float, float]]:
        # Similar to get_simplified_history, but with absolute world-space distance,
        # not the distance along the track
        if len(self.history) < 1:
            return []

        path = self.get_projected_history(H=None, camera=camera)
        new_path: List[dict] = [path[0]]

        distance_sq = distance**2

        for a, b in zip(path[:-1], path[1:]):
            # check if segment has our next point (pos)
            # because running sequentially, this is if point b
            # is lower than our target position
            b_distance_sq = ((b[0]-new_path[-1][0])**2 + (b[1]-new_path[-1][1])**2)

            if b_distance_sq <= distance_sq:
                continue

            a_distance_sq = ((a[0]-new_path[-1][0])**2 + (a[1]-new_path[-1][1])**2)

            relative_t = inv_lerp(a_distance_sq, b_distance_sq, distance_sq)
            x = lerp(a[0], b[0], relative_t)
            y = lerp(a[1], b[1], relative_t)
            new_path.append([x, y])

        return new_path

    def get_binned(self, bin_size, camera: Camera, bin_start=True):
        """
        For an experiment: what if we predict using only concrete positions, by mapping
        dx,dy to a grid. Thus prediction can be for 8 moves, or rather headings
        see ~/notes/attachments example svg
        """

        history = self.get_projected_history_as_dict(H=None, camera=camera)

        def round_to_grid_precision(x):
            factor = 1/bin_size
            return round(x * factor) / factor

        new_history: List[dict] = []
        for i, (det0, det1) in enumerate(zip(history[:-1], history[1:])):
            if i == 0:
                new_history.append({
                    'x': round_to_grid_precision(det0['x']),
                    'y': round_to_grid_precision(det0['y'])
                } if bin_start else det0)
                continue
            if abs(det1['x'] - new_history[-1]['x']) < bin_size and abs(det1['y'] - new_history[-1]['y']) < bin_size:
                continue

            # det1 falls outside of the box [-bin_size:+bin_size] around the last detection

            # 1. Interpolate the exact point between det0 and det1 where this happens
            if abs(det1['x'] - new_history[-1]['x']) >= bin_size:
                if det1['x'] - new_history[-1]['x'] >= bin_size:
                    # det1 right of last
                    x = new_history[-1]['x'] + bin_size
                    f = inv_lerp(det0['x'], det1['x'], x)
                elif new_history[-1]['x'] - det1['x'] >= bin_size:
                    # det1 left of last
                    x = new_history[-1]['x'] - bin_size
                    f = inv_lerp(det0['x'], det1['x'], x)
                y = lerp(det0['y'], det1['y'], f)
            if abs(det1['y'] - new_history[-1]['y']) >= bin_size:
                if det1['y'] - new_history[-1]['y'] >= bin_size:
                    # det1 above last
                    y = new_history[-1]['y'] + bin_size
                    f = inv_lerp(det0['y'], det1['y'], y)
                elif new_history[-1]['y'] - det1['y'] >= bin_size:
                    # det1 below last
                    y = new_history[-1]['y'] - bin_size
                    f = inv_lerp(det0['y'], det1['y'], y)
                x = lerp(det0['x'], det1['x'], f)

            # 2. Find the closest point on the rectangle (the rectangle's four corners, or 4 midpoints)
            points = get_bins(bin_size)
            points = [[new_history[-1]['x']+p[0], new_history[-1]['y'] + p[1]] for p in points]

            distances = [np.linalg.norm([p[0] - x, p[1]-y]) for p in points]
            closest = np.argmin(distances)

            point = points[closest]

            new_history.append({'x': point[0], 'y': point[1]})
            # todo: offsets to points: [history for in points]
        return new_history

    def to_dataframe(self, camera: Camera) -> pd.DataFrame:
        positions = self.get_projected_history(None, camera)
        velocity = np.gradient(positions, 1/self.fps, axis=0)
        acceleration = np.gradient(velocity, 1/self.fps, axis=0)

        # # we can calculate heading based on the velocity components
        # heading = (np.arctan2(velocity[:,1], velocity[:,0]) * 180 / np.pi) % 360

        # # and derive it to get the rate of change of the heading
        # d_heading = np.gradient(heading, 1/self.fps, axis=0)

        data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
        # data_columns = data_columns.append(pd.MultiIndex.from_tuples([('heading', '°'), ('heading', 'd°')]))

        # vx = derivative_of(x, scene.dt)
        # vy = derivative_of(y, scene.dt)
        # ax = derivative_of(vx, scene.dt)
        # ay = derivative_of(vy, scene.dt)

        data_dict = {
            ('position', 'x'): positions[:,0],
            ('position', 'y'): positions[:,1],
            ('velocity', 'x'): velocity[:,0],
            ('velocity', 'y'): velocity[:,1],
            ('acceleration', 'x'): acceleration[:,0],
            ('acceleration', 'y'): acceleration[:,1],
            # ('heading', '°'): heading,
            # ('heading', 'd°'): d_heading,
        }

        return pd.DataFrame(data_dict, columns=data_columns)

    def to_flat_dataframe(self, camera: Camera) -> pd.DataFrame:
        positions = self.get_projected_history(None, camera)
        data = pd.DataFrame(positions, columns=['x', 'y'])

        data['dx'] = data['x'].diff()
        data['dy'] = data['y'].diff()

        return data.bfill()

    def to_trajectron_node(self, camera: Camera, env: Environment) -> Node:
        node_data = self.to_dataframe(camera)
        new_first_idx = self.history[0].frame_nr

        return Node(node_type=env.NodeType.PEDESTRIAN, node_id=self.track_id, data=node_data, first_timestep=new_first_idx)
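# Usage sketch for Track (illustrative, not part of the original file):
# a track accumulated from detections can be gap-filled, resampled to a
# fixed spatial step, and handed to Trajectron as a node.
#
#   track = track.get_with_interpolated_history()       # fill dropped frames by lerp
#   points = track.get_simplified_history(0.5, camera)  # ~one point every 0.5 m of path
#   node = track.to_trajectron_node(camera, env)        # env: a trajectron Environment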
@dataclass
class Frame:
    index: int
    img: np.array
    time: float = field(default_factory=lambda: time.time())
    tracks: Optional[dict[str, Track]] = None
    H: Optional[np.array] = None
    camera: Optional[Camera] = None
    maps: Optional[List[cv2.Mat]] = None
    log: dict = field(default_factory=lambda: {})  # settings used during processing. All intermediate nodes can store their config here

    def aslist(self) -> List[dict]:
        return { t.track_id: {
                'id': t.track_id,
                'history': t.get_projected_history(self.H).tolist(),
                'det_conf': t.history[-1].conf,
                # 'det_conf': trajectory_data[node.id]['det_conf'],
                # 'bbox': trajectory_data[node.id]['bbox'],
                # 'history': history.tolist(),
                'predictions': t.predictions
            } for t in self.tracks.values()
        }

    def without_img(self):
        return Frame(self.index, None, self.time, self.tracks, self.H, self.camera, self.maps)


class DataclassJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, np.ndarray):
            return o.tolist()
        # if isinstance(o, np.float32):
        #     return "float32!{o}"
        if dataclasses.is_dataclass(o):
            if isinstance(o, Frame):
                tracks = {}
                for track_id, track in o.tracks.items():
                    track_obj = dataclasses.asdict(track)
                    track_obj['history'] = track.get_projected_history(None, o.camera)
                    tracks[track_id] = track_obj
                d = {
                    'index': o.index,
                    'time': o.time,
                    'tracks': tracks,
                    'camera': dataclasses.asdict(o.camera),
                }
            else:
                d = dataclasses.asdict(o)
            # if isinstance(o, Frame):
            #     # Don't send images over JSON
            #     del d['img']
            return d
        return super().default(o)
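# Usage sketch (illustrative): the encoder lets a whole Frame -- including
# numpy arrays and nested dataclasses -- be serialised in one call, with
# track histories pre-projected through the frame's camera:
#
#   payload = json.dumps(frame, cls=DataclassJSONEncoder)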
def video_src_from_config(config) -> Iterable[UrlOrPath]:
    """deprecated, now in video_source"""
    if config.video_loop:
        video_srcs: Iterable[UrlOrPath] = cycle(config.video_src)
    else:
        video_srcs: Iterable[UrlOrPath] = config.video_src
    return video_srcs


@dataclass
class Trajectory:
    # TODO)) Replace history and predictions in Track with Trajectory
    space: Space
    fps: int = 12
    points: List[Detection] = field(default_factory=list)

    def __iter__(self):
        for d in self.points:
            yield d


class HomographyAction(argparse.Action):
    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        if nargs is not None:
            raise ValueError("nargs not allowed")
        super().__init__(option_strings, dest, **kwargs)
    def __call__(self, parser, namespace, values: Path, option_string=None):
        if values.suffix == '.json':
            with values.open('r') as fp:
                H = np.array(json.load(fp))
        else:
            H = np.loadtxt(values, delimiter=',')

        setattr(namespace, self.dest, values)
        setattr(namespace, 'H', H)

class CameraAction(argparse.Action):
    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        if nargs is not None:
            raise ValueError("nargs not allowed")
        super().__init__(option_strings, dest, **kwargs)
    def __call__(self, parser, namespace, values, option_string=None):
        if values is None:
            setattr(namespace, self.dest, None)
        else:
            values = Path(values)
            with values.open('r') as fp:
                data = json.load(fp)
            if 'type' in data and data['type'] == 'fisheye':
                camera = FisheyeCamera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
            else:
                camera = Camera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
            # # print(data)
            # # print(data['camera_matrix'])
            # # camera = {
            # #     'camera_matrix': np.array(data['camera_matrix']),
            # #     'dist_coeff': np.array(data['dist_coeff']),
            # # }
            # camera = Camera(np.array(data['camera_matrix']), np.array(data['dist_coeff']), data['dim']['width'], data['dim']['height'], namespace.H, namespace.camera_fps)

        setattr(namespace, 'camera', camera)

class LambdaParser(argparse.ArgumentParser):
    """Execute lambda functions
    """
    def parse_args(self, args=None, namespace=None):
        args = super().parse_args(args, namespace)

        for key in vars(args):
            f = args.__dict__[key]
            if type(f) == types.LambdaType:
                print(f'Getting default value for {key}')
                args.__dict__[key] = f()

        return args
@@ -6,10 +6,23 @@ import json
from trap.tracker import DETECTORS, TRACKER_BYTETRACK, TRACKERS
from trap.frame_emitter import Camera
from trap.base import CameraAction, HomographyAction, LambdaParser

from pyparsing import Optional
from trap.frame_emitter import UrlOrPath

class LambdaParser(argparse.ArgumentParser):
    """Execute lambda functions
    """
    def parse_args(self, args=None, namespace=None):
        args = super().parse_args(args, namespace)

        for key in vars(args):
            f = args.__dict__[key]
            if type(f) == types.LambdaType:
                print(f'Getting default value for {key}')
                args.__dict__[key] = f()

        return args

parser = LambdaParser()
# parser.parse_args()

@@ -40,6 +53,44 @@ frame_emitter_parser = parser.add_argument_group('Frame emitter')
tracker_parser = parser.add_argument_group('Tracker')
render_parser = parser.add_argument_group('Renderer')

class HomographyAction(argparse.Action):
    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        if nargs is not None:
            raise ValueError("nargs not allowed")
        super().__init__(option_strings, dest, **kwargs)
    def __call__(self, parser, namespace, values: Path, option_string=None):
        if values.suffix == '.json':
            with values.open('r') as fp:
                H = np.array(json.load(fp))
        else:
            H = np.loadtxt(values, delimiter=',')

        setattr(namespace, self.dest, values)
        setattr(namespace, 'H', H)

class CameraAction(argparse.Action):
    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        if nargs is not None:
            raise ValueError("nargs not allowed")
        super().__init__(option_strings, dest, **kwargs)
    def __call__(self, parser, namespace, values, option_string=None):
        if values is None:
            setattr(namespace, self.dest, None)
        else:
            camera = Camera.from_calibfile(Path(values), namespace.H, namespace.camera_fps)
            # values = Path(values)
            # with values.open('r') as fp:
            #     data = json.load(fp)
            # # print(data)
            # # print(data['camera_matrix'])
            # # camera = {
            # #     'camera_matrix': np.array(data['camera_matrix']),
            # #     'dist_coeff': np.array(data['dist_coeff']),
            # # }
            # camera = Camera(np.array(data['camera_matrix']), np.array(data['dist_coeff']), data['dim']['width'], data['dim']['height'], namespace.H, namespace.camera_fps)

        setattr(namespace, 'camera', camera)


inference_parser.add_argument("--step-size",
    # TODO)) Make dataset/model metadata
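# Usage sketch (illustrative): wired up this way, parsing `--homography` also
# populates args.H, and `--calibration` then yields a ready Camera on args.camera
# (this assumes a --camera-fps argument with a default, as defined elsewhere in this file):
#
#   parser.add_argument("--homography", type=Path, action=HomographyAction)
#   parser.add_argument("--calibration", default=None, action=CameraAction)
#   args = parser.parse_args(["--homography", "homography.json", "--calibration", "calibration.json"])
#   args.H       # numpy array loaded from the json
#   args.camera  # Camera built from the calibration file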
@@ -186,14 +237,6 @@ connection_parser.add_argument('--zmq-trajectory-addr',
    help='Manually specify communication addr for the trajectory messages',
    type=str,
    default="ipc:///tmp/feeds_traj")
connection_parser.add_argument('--zmq-face-addr',
    help='Manually specify communication addr for the face detector messages',
    type=str,
    default="ipc:///tmp/feeds_faces")
connection_parser.add_argument('--zmq-stage-addr',
    help='Manually specify communication addr for the stage messages (the rendered lines)',
    type=str,
    default="tcp://0.0.0.0:99174")

connection_parser.add_argument('--zmq-camera-stream-addr',
    help='Manually specify communication addr for the camera stream messages',

@@ -236,10 +279,6 @@ frame_emitter_parser.add_argument("--video-offset",
    help="Start playback from given frame. Note that when src is an array, this applies to all videos individually.",
    default=None,
    type=int)
frame_emitter_parser.add_argument("--video-end",
    help="End (or loop) playback at given frame.",
    default=None,
    type=int)
#TODO: camera as source

frame_emitter_parser.add_argument("--video-loop",

@@ -248,6 +287,7 @@ frame_emitter_parser.add_argument("--video-loop",
#TODO: camera as source


# Tracker

tracker_parser.add_argument("--camera-fps",
    help="Camera FPS",

@@ -263,8 +303,6 @@ tracker_parser.add_argument("--calibration",
    # type=Path,
    default=None,
    action=CameraAction)

# Tracker
tracker_parser.add_argument("--save-for-training",
    help="Specify the path in which to save",
    type=Path,

@@ -306,9 +344,6 @@ render_parser.add_argument("--render-window",
render_parser.add_argument("--render-animation",
    help="Render animation (pyglet)",
    action='store_true')
render_parser.add_argument("--render-laser",
    help="Render laser (Helios DAC)",
    action='store_true')
render_parser.add_argument("--render-debug-shapes",
    help="Lines and points for debugging/mapping",
    action='store_true')

@@ -318,9 +353,6 @@ render_parser.add_argument("--render-hide-stats",
render_parser.add_argument("--full-screen",
    help="Set Window full screen",
    action='store_true')
render_parser.add_argument("--render-clusters",
    help="renders arrowed clusters instead of individual predictions",
    action='store_true')

render_parser.add_argument("--render-url",
    help="""Stream renderer on given URL. Two easy approaches:
117 trap/counter.py
@@ -1,117 +0,0 @@
import collections
from gc import is_finalized
import logging
import statistics
import threading
import time
from typing import MutableSequence
import zmq

logger = logging.getLogger('counter')

class CounterSender:
    def __init__(self, address="ipc:///tmp/trap-counters2"):
        # self.name = name
        self.context = zmq.Context()
        self.sock = self.context.socket(zmq.PUB)
        self.sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame
        # self.sock.sndhwm = 1
        self.sock.connect(address)

    def set(self, name: str, value: float):
        try:
            # we cannot use send_multipart in combination with conflate
            self.sock.send_pyobj([name, value], flags=zmq.NOBLOCK)
        except zmq.ZMQError as e:
            logger.warning(f"No space in queue to count {name} as {value}")

class CounterFpsSender():
    def __init__(self, name: str, sender: CounterSender):
        self.name = name
        self.sender = sender
        self.tocs: MutableSequence[(float, int)] = collections.deque(maxlen=5)
        self.iterations: int = 0
        # threading.Event.wait()
        # TODO: move thread to a daemonic loop so it automatically stops
        self.thread = threading.Thread(target=self.interval, daemon=True)
        self.is_finished = threading.Event()

    def tick(self):
        self.iterations += 1
        self.snapshot()

    def snapshot(self):
        self.tocs.append((time.perf_counter(), self.iterations))
        self.sender.set(self.name, self.fps)

    @property
    def fps(self):
        if len(self.tocs) < 2:
            return 0
        dt = self.tocs[-1][0] - self.tocs[0][0]
        di = self.tocs[-1][1] - self.tocs[0][1]
        return di/dt

    def interval(self):
        while True:
            self.is_finished.wait(.5)
            if self.is_finished.is_set():
                break

            self.snapshot()
            # timer = threading.Timer(.5, self.interval)
            # timer.start()


class CounterLog():
    def __init__(self, history=20):
        self.history: MutableSequence[(float, float)] = collections.deque(maxlen=history)

    def add(self, value):
        self.history.append((time.perf_counter(), value))

    def value(self):
        return self.history[-1][1]

    def has_value(self):
        if not len(self.history):
            return False
        if (time.perf_counter() - self.history[-1][0]) > 4:
            # no update in 4s: very slow. Dead thread?
            return False
        return True

    def avg(self):
        if not len(self.history):
            return 0.
        return statistics.fmean([h[1] for h in self.history])

class CounterListerner():
    def __init__(self, address="ipc:///tmp/trap-counters2"):
        self.context = zmq.Context()
        self.sock = self.context.socket(zmq.SUB)
        self.sock.bind(address)
        self.sock.subscribe(b'')
        self.values: collections.defaultdict[str, CounterLog] = collections.defaultdict(lambda: CounterLog())

    def snapshot(self):
        messages = []
        while self.sock.poll(0) == zmq.POLLIN:
            msg = self.sock.recv_pyobj()
            # print(msg)
            name, value = msg
            # name, value = name.decode('utf8'), float(value.decode('utf8'))
            self.values[name].add(float(value))

    def get_latest(self):
        self.snapshot()
        return self.values

    def to_string(self):
        strs = [(f"{k}: {v.value():.2f} ({v.avg():.2f})" if v.has_value() else f"{k}: --") for (k, v) in self.values.items()]
        return " ".join(strs)
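For context, a hedged sketch of how these counters pair up (class names as defined above; the counter name and loop are placeholders): producers publish rates with `CounterSender`/`CounterFpsSender`, and a monitor pulls them with `CounterListerner`.

```python
import time
from trap.counter import CounterSender, CounterFpsSender, CounterListerner

listener = CounterListerner()           # binds ipc:///tmp/trap-counters2
sender = CounterSender()                # connects to the same address
fps = CounterFpsSender("tracker.fps", sender)

for _ in range(100):                    # stand-in for a processing loop
    fps.tick()                          # one tick per processed frame
    time.sleep(1 / 60)

listener.get_latest()                   # drain pending messages
print(listener.to_string())             # per-counter latest value and rolling average
```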
@@ -1,47 +1,71 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations

import time
import ffmpeg
from argparse import Namespace
import datetime
import logging
import time
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
from multiprocessing.synchronize import Event as BaseEvent
from typing import Dict, List, Optional

from charset_normalizer import detect
import cv2
import ffmpeg
import numpy as np
import json
import pyglet
import pyglet.event
import zmq
from pyglet import shapes
import tempfile
from pathlib import Path
import shutil
import math
from typing import Dict, Iterable, Optional

from trap.base import Detection
from trap.counter import CounterListerner
from trap.frame_emitter import Frame, Track
from trap.node import Node

from pyglet import shapes
from PIL import Image

from trap.frame_emitter import DetectionState, Frame, Track, Camera
from trap.preview_renderer import FrameWriter
from trap.tools import draw_track_predictions, draw_track_projected, to_point
from trap.utils import convert_world_points_to_img_points
from trap.tools import draw_track, draw_track_predictions, draw_track_projected, draw_trackjectron_history, to_point
from trap.utils import convert_world_points_to_img_points, convert_world_space_to_img_space

logger = logging.getLogger("trap.simple_renderer")

class CvRenderer(Node):
    def setup(self):
        self.prediction_sock = self.sub(self.config.zmq_prediction_addr)
        self.tracker_sock = self.sub(self.config.zmq_trajectory_addr)
        self.detector_sock = self.sub(self.config.zmq_detection_addr)
        self.frame_sock = self.sub(self.config.zmq_frame_addr)
class CvRenderer:
    def __init__(self, config: Namespace, is_running: BaseEvent):
        self.config = config
        self.is_running = is_running

        # self.H = self.config.H
        # self.inv_H = np.linalg.pinv(self.H)
        context = zmq.Context()
        self.prediction_sock = context.socket(zmq.SUB)
        self.prediction_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
        # self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
        self.prediction_sock.connect(config.zmq_prediction_addr)

        self.tracker_sock = context.socket(zmq.SUB)
        self.tracker_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.tracker_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.tracker_sock.connect(config.zmq_trajectory_addr)

        self.frame_sock = context.socket(zmq.SUB)
        self.frame_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.frame_sock.connect(config.zmq_frame_addr)
|
||||
self.H = self.config.H
|
||||
|
||||
|
||||
self.inv_H = np.linalg.pinv(self.H)
|
||||
|
||||
# TODO: get FPS from frame_emitter
|
||||
# self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
|
||||
self.fps = 60
|
||||
self.frame_size = None # configure on first frame recv
|
||||
# self.frame_size = (self.config.camera.projected_w,self.config.camera.projected_h)
|
||||
|
||||
self.frame_size = (self.config.camera.w,self.config.camera.h)
|
||||
self.hide_stats = False
|
||||
self.out_writer = self.start_writer() if self.config.render_file else None
|
||||
self.streaming_process = self.start_streaming() if self.config.render_url else None
|
||||
|
||||
|
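The three SUB sockets above repeat one pattern. A standalone sketch of that pattern follows; the Node.sub() helper in the new code presumably wraps something similar, but that is an assumption.

import zmq

def conflating_sub(context: zmq.Context, addr: str) -> zmq.Socket:
    """SUB socket that keeps only the most recent message (sketch)."""
    sock = context.socket(zmq.SUB)
    # CONFLATE must be set before connect(), otherwise it is silently ignored.
    sock.setsockopt(zmq.CONFLATE, 1)
    sock.setsockopt(zmq.SUBSCRIBE, b'')
    sock.connect(addr)
    return sock
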
@@ -49,11 +73,85 @@ class CvRenderer(Node):
        self.frame: Frame|None = None
        self.tracker_frame: Frame|None = None
        self.prediction_frame: Frame|None = None
        self.detections: List[Detection]|None = None

        self.tracks: Dict[str, Track] = {}
        self.predictions: Dict[str, Track] = {}

        # self.init_shapes()

        # self.init_labels()

    def init_shapes(self):
        '''
        Due to an error when running headless, we need to configure options before extending the shapes class
        '''
        class GradientLine(shapes.Line):
            def __init__(self, x, y, x2, y2, width=1, color1=[255,255,255], color2=[255,255,255], batch=None, group=None):
                # print('colors!', colors)
                # assert len(colors) == 6

                r, g, b, *a = color1
                self._rgba1 = (r, g, b, a[0] if a else 255)
                r, g, b, *a = color2
                self._rgba2 = (r, g, b, a[0] if a else 255)

                # print('rgba', self._rgba)

                super().__init__(x, y, x2, y2, width, color1, batch=None, group=None)
                # <pyglet.graphics.vertexdomain.VertexList
                # pyglet.graphics.vertexdomain
                # print(self._vertex_list)

            def _create_vertex_list(self):
                '''
                Copy of super()._create_vertex_list but with additional colors.
                '''
                self._vertex_list = self._group.program.vertex_list(
                    6, self._draw_mode, self._batch, self._group,
                    position=('f', self._get_vertices()),
                    colors=('Bn', self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1),
                    translation=('f', (self._x, self._y) * self._num_verts))

            def _update_colors(self):
                self._vertex_list.colors[:] = self._rgba1 + self._rgba2 + self._rgba2 + self._rgba1 + self._rgba2 + self._rgba1

            def color1(self, color):
                r, g, b, *a = color
                self._rgba1 = (r, g, b, a[0] if a else 255)
                self._update_colors()

            def color2(self, color):
                r, g, b, *a = color
                self._rgba2 = (r, g, b, a[0] if a else 255)
                self._update_colors()

        self.gradientLine = GradientLine

    def init_labels(self):
        base_color = (255,)*4
        color_predictor = (255, 255, 0, 255)
        color_info = (255, 0, 255, 255)
        color_tracker = (0, 255, 255, 255)

        options = []
        for option in ['prediction_horizon', 'num_samples', 'full_dist', 'gmm_mode', 'z_mode', 'model_dir']:
            options.append(f"{option}: {self.config.__dict__[option]}")

        self.labels = {
            'waiting': pyglet.text.Label("Waiting for prediction"),
            'frame_idx': pyglet.text.Label("", x=20, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
            'tracker_idx': pyglet.text.Label("", x=90, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'pred_idx': pyglet.text.Label("", x=110, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
            'frame_time': pyglet.text.Label("t", x=140, y=self.window.height - 17, color=base_color, batch=self.batch_overlay),
            'frame_latency': pyglet.text.Label("", x=235, y=self.window.height - 17, color=color_info, batch=self.batch_overlay),
            'tracker_time': pyglet.text.Label("", x=300, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'pred_time': pyglet.text.Label("", x=360, y=self.window.height - 17, color=color_predictor, batch=self.batch_overlay),
            'track_len': pyglet.text.Label("", x=800, y=self.window.height - 17, color=color_tracker, batch=self.batch_overlay),
            'options1': pyglet.text.Label(options.pop(-1), x=20, y=30, color=base_color, batch=self.batch_overlay),
            'options2': pyglet.text.Label(" | ".join(options), x=20, y=10, color=base_color, batch=self.batch_overlay),
        }

    def refresh_labels(self, dt: float):
        """Every frame"""

@@ -72,6 +170,130 @@ class CvRenderer(Node):
        self.labels['pred_time'].text = f"{self.prediction_frame.time - time.time():.3f}s"
        # self.labels['track_len'].text = f"{len(self.prediction_frame.tracks)} tracks"

        # cv2.putText(img, f"{frame.index:06d}", (20,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        # cv2.putText(img, f"{frame.time - first_time:.3f}s", (120,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)

        # if prediction_frame:
        #     # render Δt and Δ frames
        #     cv2.putText(img, f"{prediction_frame.index - frame.index}", (90,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"{prediction_frame.time - time.time():.2f}s", (200,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"{len(prediction_frame.tracks)} tracks", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        #     cv2.putText(img, f"h: {np.average([len(t.history or []) for t in prediction_frame.tracks.values()]):.2f}", (580,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"ph: {np.average([len(t.predictor_history or []) for t in prediction_frame.tracks.values()]):.2f}", (660,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)
        #     cv2.putText(img, f"p: {np.average([len(t.predictions or []) for t in prediction_frame.tracks.values()]):.2f}", (740,17), cv2.FONT_HERSHEY_PLAIN, 1, info_color, 1)

        # options = []
        # for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
        #     options.append(f"{option}: {config.__dict__[option]}")

        # cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
        # cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)

    def check_frames(self, dt):
        new_tracks = False
        try:
            self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
            if not self.first_time:
                self.first_time = self.frame.time
            img = cv2.GaussianBlur(self.frame.img, (15, 15), 0)
            img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
            img = pyglet.image.ImageData(self.frame_size[0], self.frame_size[1], 'RGB', img.tobytes())
            # don't draw in batch, so that it is the background
            self.video_sprite = pyglet.sprite.Sprite(img=img, batch=self.batch_bg)
            self.video_sprite.opacity = 100
        except zmq.ZMQError as e:
            # idx = frame.index if frame else "NONE"
            # logger.debug(f"reuse video frame {idx}")
            pass
        try:
            self.prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass
        try:
            self.tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass

    def on_key_press(self, symbol, modifiers):
        print('A key was pressed, use f to hide')
        if symbol == ord('f'):
            self.window.set_fullscreen(not self.window.fullscreen)
        if symbol == ord('h'):
            self.hide_stats = not self.hide_stats

    def check_running(self, dt):
        if not self.is_running.is_set():
            self.window.close()
            self.event_loop.exit()

    def on_close(self):
        self.is_running.clear()

    def on_refresh(self, dt: float):
        # update shapes
        # self.bg =
        for track_id, track in self.drawn_tracks.items():
            track.update_drawn_positions(dt)

        self.refresh_labels(dt)

        # self.shape1 = shapes.Circle(700, 150, 100, color=(50, 0, 30), batch=self.batch_anim)
        # self.shape3 = shapes.Circle(800, 150, 100, color=(100, 225, 30), batch=self.batch_anim)
        pass

    def on_draw(self):
        self.window.clear()

        self.batch_bg.draw()

        for track in self.drawn_tracks.values():
            for shape in track.shapes:
                shape.draw()  # for some reason the batches don't work
        for track in self.drawn_tracks.values():
            for shapes in track.pred_shapes:
                for shape in shapes:
                    shape.draw()
        # self.batch_anim.draw()
        self.batch_overlay.draw()

        # pyglet.graphics.draw(3, pyglet.gl.GL_LINE, ("v2i", (100,200, 600,800)), ('c3B', (255,255,255, 255,255,255)))

        if not self.hide_stats:
            self.fps_display.draw()

        # if streaming, capture buffer and send
        try:
            if self.streaming_process or self.out_writer:
                buf = pyglet.image.get_buffer_manager().get_color_buffer()
                img_data = buf.get_image_data()
                data = img_data.get_data()  # alternative: .get_data("RGBA", image_data.pitch)
                img = np.asanyarray(data).reshape((img_data.height, img_data.width, 4))
                img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGB)
                img = np.flip(img, 0)
                # img = cv2.flip(img, cv2.0)

                # cv2.imshow('frame', img)
                # cv2.waitKey(1)
                if self.streaming_process:
                    self.streaming_process.stdin.write(img.tobytes())
                if self.out_writer:
                    self.out_writer.write(img)
        except Exception as e:
            logger.exception(e)

    def start_writer(self):
        if not self.config.output_dir.exists():
            raise FileNotFoundError("Path does not exist")

@@ -80,16 +302,16 @@ class CvRenderer(Node):
        filename = self.config.output_dir / f"render_predictions-{date_str}-{self.config.detector}.mp4"
        logger.info(f"Write to {filename}")

        return FrameWriter(str(filename), self.fps, None)
        return FrameWriter(str(filename), self.fps, self.frame_size)

        # fourcc = cv2.VideoWriter_fourcc(*'vp09')
        fourcc = cv2.VideoWriter_fourcc(*'vp09')

        # return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)
        return cv2.VideoWriter(str(filename), fourcc, self.fps, self.frame_size)

    def start_streaming(self, frame_size=(1920,1080)):
    def start_streaming(self):
        return (
            ffmpeg
            .input('pipe:', format='rawvideo', codec="rawvideo", pix_fmt='bgr24', s='{}x{}'.format(*frame_size))
            .input('pipe:', format='rawvideo', codec="rawvideo", pix_fmt='bgr24', s='{}x{}'.format(*self.frame_size))
            .output(
                self.config.render_url,
                # codec = "copy", # use same codecs of the original video

@@ -109,7 +331,10 @@ class CvRenderer(Node):
        )
        # return process

    def run(self):


    def run(self, timer_counter):
        frame = None
        prediction_frame = None
        tracker_frame = None

@@ -119,13 +344,14 @@ class CvRenderer(Node):

        cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
        # https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
        cv2.moveWindow("frame", 0, -1)
        if self.config.full_screen:
            cv2.setWindowProperty("frame", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        # bgsub = cv2.createBackgroundSubtractorMOG2(120, 50, detectShadows=True)
        cv2.moveWindow("frame", 1920, -1)
        cv2.setWindowProperty("frame", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

        while self.is_running.is_set():
            i += 1
            with timer_counter.get_lock():
                timer_counter.value += 1

        while self.run_loop():
            i += 1

            # zmq_ev = self.frame_sock.poll(timeout=2000)
            # if not zmq_ev:

@@ -163,30 +389,20 @@ class CvRenderer(Node):
            except zmq.ZMQError as e:
                logger.debug(f'reuse tracks')

            try:
                self.detections = self.detector_sock.recv_pyobj(zmq.NOBLOCK)
                # print('detections')
            except zmq.ZMQError as e:
                # print('no detections')
                # idx = frame.index if frame else "NONE"
                # logger.debug(f"reuse video frame {idx}")
                pass

            if first_time is None:
                first_time = frame.time

            # img = frame.img
            img = decorate_frame(frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions, self.detections, self.config.render_clusters)
            img = decorate_frame(frame, tracker_frame, prediction_frame, first_time, self.config, self.tracks, self.predictions)

            logger.debug(f"write frame {frame.time - first_time:.3f}s")
            if self.out_writer:
                self.out_writer.write(img)
            if self.streaming_process:
                self.streaming_process.stdin.write(img.tobytes())
            if not self.config.no_window:
            if self.config.render_window:
                cv2.imshow('frame', cv2.resize(img, (1920, 1080)))
                # cv2.imshow('frame', img)
                cv2.waitKey(10)
                cv2.waitKey(1)

            # clear out old tracks & predictions:

@@ -210,56 +426,6 @@ class CvRenderer(Node):
            self.streaming_process.wait()

        logger.info('stopped')


    @classmethod
    def arg_parser(cls):
        render_parser = ArgumentParser()
        render_parser.add_argument('--zmq-frame-addr',
                                   help='Manually specify communication addr for the frame messages',
                                   type=str,
                                   default="ipc:///tmp/feeds_frame")
        render_parser.add_argument('--zmq-trajectory-addr',
                                   help='Manually specify communication addr for the trajectory messages',
                                   type=str,
                                   default="ipc:///tmp/feeds_traj")

        render_parser.add_argument('--zmq-detection-addr',
                                   help='Manually specify communication addr for the detection messages',
                                   type=str,
                                   default="ipc:///tmp/feeds_dets")

        render_parser.add_argument('--zmq-prediction-addr',
                                   help='Manually specify communication addr for the prediction messages',
                                   type=str,
                                   default="ipc:///tmp/feeds_preds")

        render_parser.add_argument("--render-file",
                                   help="Render a video file previewing the prediction, and its delay compared to the current frame",
                                   action='store_true')
        render_parser.add_argument("--no-window",
                                   help="Disable previewing to a window",
                                   action='store_true')

        render_parser.add_argument("--full-screen",
                                   help="Set window full screen",
                                   action='store_true')
        render_parser.add_argument("--render-clusters",
                                   help="Render arrowed clusters instead of individual predictions",
                                   action='store_true')

        render_parser.add_argument("--render-url",
                                   help="""Stream renderer on given URL. Two easy approaches:
                                   - using the zmq wrapper one can specify the LISTENING ip. To listen to any incoming connection: zmq:tcp://0.0.0.0:5556
                                   - alternatively, using e.g. UDP one needs to specify the IP of the client. E.g. udp://100.69.123.91:5556/stream
                                   Note that with ZMQ you can have multiple clients connecting simultaneously. E.g. using `ffplay zmq:tcp://100.109.175.82:5556`
                                   When using UDP, connecting can be done using `ffplay udp://100.109.175.82:5556/stream`
                                   """,
                                   type=str,
                                   default=None)

        return render_parser

# colorset = itertools.product([0,255], repeat=3) # but remove white
# colorset = [(0, 0, 0),
#             (0, 0, 255),

@@ -289,20 +455,16 @@ def get_animation_position(track: Track, current_frame: Frame):



def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], detections: Optional[List[Detection]], as_clusters = True) -> np.array:
    scale = 100
# Deprecated
def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame, first_time: float, config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track]) -> np.array:
    # TODO: replace opencv with QPainter to support alpha? https://doc.qt.io/qtforpython-5/PySide2/QtGui/QPainter.html#PySide2.QtGui.PySide2.QtGui.QPainter.drawImage
    # or https://github.com/pygobject/pycairo?tab=readme-ov-file
    # or https://pyglet.readthedocs.io/en/latest/programming_guide/shapes.html
    # and use http://code.astraw.com/projects/motmot/pygarrayimage.html or https://gist.github.com/nkymut/1cb40ea6ae4de0cf9ded7332f1ca0d55
    # or https://api.arcade.academy/en/stable/index.html (supports gradient color in line -- "Arcade is built on top of Pyglet and OpenGL.")
    dst_img = frame.camera.img_to_world(frame.img, scale)
    # mask = bg_subtractor.apply(dst_img)
    # mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2RGB).astype(float) / 255
    # dst_img = dst_img * mask

    # undistorted_img = cv2.undistort(frame.img, config.camera.mtx, config.camera.dist, None, config.camera.newcameramtx)
    # dst_img = cv2.warpPerspective(undistorted_img, convert_world_space_to_img_space(config.camera.H), (config.camera.w, config.camera.h))

    undistorted_img = cv2.undistort(frame.img, config.camera.mtx, config.camera.dist, None, config.camera.newcameramtx)
    dst_img = cv2.warpPerspective(undistorted_img, convert_world_space_to_img_space(config.camera.H), (config.camera.w, config.camera.h))
    # dst_img2 = cv2.warpPerspective(undistorted_img, convert_world_space_to_img_space(config.camera.H), None)
    # cv2.imwrite('/home/ruben/suspicion/DATASETS/hof3/camera2.png', dst_img2)

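The active branch above projects the camera image to world-aligned pixels in two steps. The same pipeline in isolation, as a sketch: the names mtx, dist and newcameramtx follow the Camera fields used here, and convert_world_space_to_img_space is the helper imported from trap.utils above.

import cv2
import numpy as np

def project_to_world_pixels(img: np.ndarray, camera, H_img_space) -> np.ndarray:
    """Sketch: lens undistortion followed by a perspective warp."""
    # 1. undo lens distortion using the intrinsics from calibration
    undistorted = cv2.undistort(img, camera.mtx, camera.dist, None, camera.newcameramtx)
    # 2. warp with the (rescaled) homography so world coordinates align with pixels
    return cv2.warpPerspective(undistorted, H_img_space, (int(camera.w), int(camera.h)))
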
@@ -323,29 +485,12 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
    # cv2.imwrite(str(self.config.output_dir / "orig.png"), warpedFrame)
    cv2.rectangle(img, (0,0), (img.shape[1], 25), (0,0,0), -1)

    if detections:
        for detection in detections:
            points = [
                detection.get_foot_coords(),
                [detection.l, detection.t],
                [detection.l + detection.w, detection.t + detection.h],
            ]
            points = frame.camera.points_img_to_world(points, scale)
            points = [to_point(p) for p in points]  # to int

            cv2.rectangle(img, points[1], points[2], (255,255,0), 2)
            cv2.circle(img, points[0], 5, (255,255,0), 2)

    def conversion(points):
        return convert_world_points_to_img_points(points, scale)

    if not tracker_frame:
        cv2.putText(img, f"and track", (650,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)
    else:
        for track_id, track in tracks.items():
            inv_H = np.linalg.pinv(tracker_frame.H)
            draw_track_projected(img, track, int(track_id), frame.camera, conversion)
            draw_track_projected(img, track, int(track_id), config.camera, convert_world_points_to_img_points)

    if not prediction_frame:
        cv2.putText(img, f"Waiting for prediction...", (500,17), cv2.FONT_HERSHEY_PLAIN, 1, (255,255,0), 1)

@@ -353,10 +498,10 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
    else:
        for track_id, track in predictions.items():
            inv_H = np.linalg.pinv(prediction_frame.H)
            # For debugging:
            # draw_trackjectron_history(img, track, int(track.track_id), conversion)
            # draw_track(img, track, int(track_id))
            draw_trackjectron_history(img, track, int(track.track_id), convert_world_points_to_img_points)
            anim_position = get_animation_position(track, frame)
            draw_track_predictions(img, track, int(track.track_id)+1, frame.camera, conversion, anim_position=anim_position, as_clusters=as_clusters)
            draw_track_predictions(img, track, int(track.track_id)+1, config.camera, convert_world_points_to_img_points, anim_position=anim_position)
            cv2.putText(img, f"{len(track.predictor_history) if track.predictor_history else 'none'}", to_point(track.history[0].get_foot_coords()), cv2.FONT_HERSHEY_COMPLEX, 1, (255,255,255), 1)
        if prediction_frame.maps:
            for i, m in enumerate(prediction_frame.maps):

@@ -386,8 +531,6 @@ def decorate_frame(frame: Frame, tracker_frame: Frame, prediction_frame: Frame,
    cv2.putText(img, f"{frame.time - first_time: >10.2f}s", (150,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
    cv2.putText(img, f"{frame.time - time.time():.2f}s", (250,17), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)

    options = []

    if prediction_frame:
        # render Δt and Δ frames
        cv2.putText(img, f"{tracker_frame.index - frame.index}", (90,17), cv2.FONT_HERSHEY_PLAIN, 1, tracker_color, 1)

|
|||
cv2.putText(img, f"h: {np.average([len(t.history or []) for t in prediction_frame.tracks.values()]):.2f}", (700,17), cv2.FONT_HERSHEY_PLAIN, 1, tracker_color, 1)
|
||||
cv2.putText(img, f"ph: {np.average([len(t.predictor_history or []) for t in prediction_frame.tracks.values()]):.2f}", (780,17), cv2.FONT_HERSHEY_PLAIN, 1, predictor_color, 1)
|
||||
cv2.putText(img, f"p: {np.average([len(t.predictions or []) for t in prediction_frame.tracks.values()]):.2f}", (860,17), cv2.FONT_HERSHEY_PLAIN, 1, predictor_color, 1)
|
||||
|
||||
options = []
|
||||
for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
|
||||
options.append(f"{option}: {config.__dict__[option]}")
|
||||
|
||||
|
||||
|
||||
for option, value in prediction_frame.log['predictor'].items():
|
||||
options.append(f"{option}: {value}")
|
||||
|
||||
if len(options):
|
||||
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
|
||||
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
|
||||
cv2.putText(img, options.pop(-1), (20,img.shape[0]-30), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
|
||||
cv2.putText(img, " | ".join(options), (20,img.shape[0]-10), cv2.FONT_HERSHEY_PLAIN, 1, base_color, 1)
|
||||
|
||||
return img
|
||||
|
||||
|
|
|
@@ -1,170 +0,0 @@
from argparse import Namespace
from collections import defaultdict
import csv
from dataclasses import dataclass, field
import json
import logging
from math import nan
from multiprocessing import Event
import multiprocessing
from pathlib import Path
import pickle
import time
from typing import DefaultDict, Dict, Optional, List
import jsonlines
import numpy as np
import torch
import torchvision
import ultralytics
import zmq
import cv2

from facenet_pytorch import InceptionResnetV1, MTCNN

from trap.base import Frame

logger = logging.getLogger('trap.face_detector')

class FaceDetector:
    def __init__(self, config: Namespace):
        self.config = config

        self.context = zmq.Context()
        self.frame_sock = self.context.socket(zmq.SUB)
        self.frame_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.frame_sock.connect(self.config.zmq_frame_addr)

        self.face_socket = self.context.socket(zmq.PUB)
        self.face_socket.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame
        self.face_socket.bind(self.config.zmq_face_addr)

        # # TODO: config device
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    def track(self, is_running: Event, timer_counter: int = 0):
        """
        Live tracking of frames coming in over zmq
        """
        self.is_running = is_running

        prev_frame_i = -1

        # For a model pretrained on CASIA-Webface
        # model = InceptionResnetV1(pretrained='casia-webface').eval().to(self.device)
        # mtcnn = MTCNN(
        #     image_size=160, margin=0, min_face_size=10,
        #     thresholds=[0.3, 0.3, 0.3], factor=0.709, post_process=True,
        #     device=self.device, keep_all=True
        # )
        # modelpath = Path("face_detection_yunet_2023mar_int8bq.onnx")
        modelpath = Path("face_detection_yunet_2023mar_int8.onnx")
        # model = YuNet(modelPath=args.model,
        #               inputSize=[320, 320],
        #               confThreshold=args.conf_threshold,
        #               nmsThreshold=args.nms_threshold,
        #               topK=args.top_k,
        #               backendId=backend_id,
        #               targetId=target_id)
        detector = cv2.FaceDetectorYN.create(
            str(modelpath),
            "",
            (320, 320),
            .3,    # score threshold
            .3,    # NMS threshold
            5000,  # top K
            cv2.dnn.DNN_BACKEND_CUDA,
            target_id=cv2.dnn.DNN_TARGET_CUDA
        )

        while self.is_running.is_set():
            with timer_counter.get_lock():
                timer_counter.value += 1

            poll_time = time.time()
            zmq_ev = self.frame_sock.poll(timeout=2000)
            if not zmq_ev:
                logger.warning('skip poll after 2000ms')
                # when there's no data after timeout, loop so that is_running is checked
                continue

            start_time = time.time()
            frame: Frame = self.frame_sock.recv_pyobj()  # frame delivery in current setup: 0.012-0.03s

            # print(time.time() - frame.time)

            if frame.index > (prev_frame_i+1):
                logger.warning(f"Dropped {frame.index - prev_frame_i - 1} frames ({frame.index=}, {prev_frame_i=}) -- poll time {start_time-poll_time:.5f}")

            height, width, channels = frame.img.shape

            detector.setInputSize((width//2, height//2))

            img = cv2.resize(frame.img, (width//2, height//2))

            faces = detector.detect(img)

            prev_frame_i = frame.index

            # print(f"send to {self.trajectory_socket}, {self.config.zmq_trajectory_addr}")
            self.face_socket.send_pyobj(faces)  # ditch image for faster passthrough

        logger.info('Stopping')


def run_detector(config: Namespace, is_running: Event, timer_counter):
    router = FaceDetector(config)
    router.track(is_running, timer_counter)

def run():
    # Frame emitter
    import argparse
    argparser = argparse.ArgumentParser()
    argparser.add_argument('--zmq-frame-addr',
                           help='Manually specify communication addr for the frame messages',
                           type=str,
                           default="ipc:///tmp/feeds_frame")
    argparser.add_argument('--zmq-trajectory-addr',
                           help='Manually specify communication addr for the trajectory messages',
                           type=str,
                           default="ipc:///tmp/feeds_traj")

    argparser.add_argument("--save-for-training",
                           help="Specify the path in which to save",
                           type=Path,
                           default=None)
    argparser.add_argument("--detector",
                           help="Specify the detector to use",
                           type=str,
                           default=DETECTOR_YOLOv8,
                           choices=DETECTORS)
    argparser.add_argument("--tracker",
                           help="Specify the tracker to use",
                           type=str,
                           default=TRACKER_BYTETRACK,
                           choices=TRACKERS)
    argparser.add_argument("--smooth-tracks",
                           help="Smooth the tracker tracks before sending them to the predictor",
                           action='store_true')
    config = argparser.parse_args()
    is_running = multiprocessing.Event()
    is_running.set()
    timer_counter = timer.Timer('frame_emitter')

    router = Tracker(config)
    router.track(is_running, timer_counter.iterations)
    is_running.clear()

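For reference, cv2.FaceDetectorYN.detect() returns a tuple of a status flag and an N x 15 array (box, five landmarks, score). Since the deleted file above fed the detector a half-resolution image, a consumer would have to scale coordinates back up. A hedged sketch of such a consumer; the socket address is an assumed value of zmq_face_addr, which is not shown in this diff:

import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.setsockopt(zmq.CONFLATE, 1)
sock.setsockopt(zmq.SUBSCRIBE, b'')
sock.connect("ipc:///tmp/feeds_faces")   # assumption: the --zmq-face-addr used above

ret, faces = sock.recv_pyobj()           # the tuple produced by detector.detect(img)
if faces is not None:
    for face in faces:
        x, y, w, h = face[:4] * 2        # undo the width//2, height//2 resize
        print(f"face at ({x:.0f},{y:.0f}) size {w:.0f}x{h:.0f} score {face[-1]:.2f}")
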
@@ -1,131 +1,560 @@
from __future__ import annotations

from argparse import Namespace
from dataclasses import dataclass, field
import dataclasses
from enum import IntFlag
from itertools import cycle
import json
import logging
import pickle
from argparse import ArgumentParser, Namespace
from multiprocessing import Event
from pathlib import Path
import pickle
import sys
import time
from typing import Iterable, List, Optional
import numpy as np
import cv2
import pandas as pd
import zmq
import os
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deep_sort.track import TrackState as DeepsortTrackState
from bytetracker.byte_tracker import STrack as ByteTrackTrack
from bytetracker.basetrack import TrackState as ByteTrackTrackState
from trajectron.environment import Environment, Node, Scene
from urllib.parse import urlparse

from trap import node
from trap.base import *
from trap.base import LambdaParser
from trap.gemma import ImgMovementFilter
from trap.preview_renderer import FrameWriter
from trap.video_sources import get_video_source
from trap.utils import get_bins
from trap.utils import inv_lerp, lerp

logger = logging.getLogger('trap.frame_emitter')

class DataclassJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, np.ndarray):
            return o.tolist()
        if dataclasses.is_dataclass(o):
            if isinstance(o, Frame):
                tracks = {}
                for track_id, track in o.tracks.items():
                    track_obj = dataclasses.asdict(track)
                    track_obj['history'] = track.get_projected_history(None, o.camera)
                    tracks[track_id] = track_obj
                d = {
                    'index': o.index,
                    'time': o.time,
                    'tracks': tracks,
                    'camera': dataclasses.asdict(o.camera),
                }
            else:
                d = dataclasses.asdict(o)
            # if isinstance(o, Frame):
            #     # Don't send images over JSON
            #     del d['img']
            return d
        return super().default(o)

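A one-line usage sketch of the encoder above, serializing a Frame without its image payload (assumes some frame: Frame is at hand):

import json

payload = json.dumps(frame.without_img(), cls=DataclassJSONEncoder)
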
class UrlOrPath():
    def __init__(self, string):
        self.url = urlparse(str(string))

    def __str__(self) -> str:
        return self.url.geturl()

    def is_url(self) -> bool:
        return len(self.url.netloc) > 0

    def path(self) -> Path:
        if self.is_url():
            return Path(self.url.path)
        return Path(self.url.geturl())  # can include scheme, such as C:/

class Space(IntFlag):
    Image = 1        # As detected in the image
    Undistorted = 2  # After applying lens undistortion
    World = 4        # After lens undistort and homography
    Render = 8       # View space of renderer

class DetectionState(IntFlag):
    Tentative = 1     # state before n_init (see DeepsortTrack)
    Confirmed = 2     # after tentative
    Lost = 4          # lost when DeepsortTrack.time_since_update > 0 but not Deleted
    Interpolated = 8  # A position estimated through interpolation of adjacent detections

    @classmethod
    def from_deepsort_track(cls, track: DeepsortTrack):
        if track.state == DeepsortTrackState.Tentative:
            return cls.Tentative
        if track.state == DeepsortTrackState.Confirmed:
            if track.time_since_update > 0:
                return cls.Lost
            return cls.Confirmed
        raise RuntimeError("Should not run into Deleted entries here")

    @classmethod
    def from_bytetrack_track(cls, track: ByteTrackTrack):
        if track.state == ByteTrackTrackState.New:
            return cls.Tentative
        if track.state == ByteTrackTrackState.Lost:
            return cls.Lost
        # if track.time_since_update > 0:
        if track.state == ByteTrackTrackState.Tracked:
            return cls.Confirmed
        raise RuntimeError("Should not run into Deleted entries here")

def H_from_path(path: Path):
    if path.suffix == '.json':
        with path.open('r') as fp:
            H = np.array(json.load(fp))
    else:
        H = np.loadtxt(path, delimiter=',')
    return H

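H_from_path accepts either serialization of the homography; a usage sketch, with placeholder file names:

from pathlib import Path

H_json = H_from_path(Path("homography.json"))  # JSON: nested lists -> np.array
H_txt = H_from_path(Path("homography.txt"))    # plain text: comma-separated rows
assert H_json.shape == (3, 3)                  # a planar homography is 3x3
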
@dataclass
class Camera:
    mtx: cv2.Mat
    dist: cv2.Mat
    w: float
    h: float
    H: cv2.Mat  # homography

    newcameramtx: cv2.Mat = field(init=False)
    roi: cv2.typing.Rect = field(init=False)
    fps: float

    def __post_init__(self):
        self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (self.w, self.h), 1, (self.w, self.h))

    @classmethod
    def from_calibfile(cls, calibration_path, H, fps):
        with calibration_path.open('r') as fp:
            data = json.load(fp)
        # print(data)
        # print(data['camera_matrix'])
        # camera = {
        #     'camera_matrix': np.array(data['camera_matrix']),
        #     'dist_coeff': np.array(data['dist_coeff']),
        # }
        return cls(
            np.array(data['camera_matrix']),
            np.array(data['dist_coeff']),
            data['dim']['width'],
            data['dim']['height'],
            H, fps)

    @classmethod
    def from_paths(cls, calibration_path, h_path, fps):
        H = H_from_path(h_path)
        return cls.from_calibfile(calibration_path, H, fps)

    # def __init__(self, mtx, dist, w, h, H):
    #     self.mtx = mtx
    #     self.dist = dist
    #     self.w = w
    #     self.h = h
    #     self.newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
    #     self.H = H # homography

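from_calibfile expects exactly the keys read above; a minimal sketch of a matching calibration.json (the numbers are placeholders, not a real calibration):

# calibration.json, as consumed by Camera.from_calibfile (placeholder values):
# {
#   "camera_matrix": [[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]],
#   "dist_coeff": [[-0.1, 0.01, 0.0, 0.0, 0.0]],
#   "dim": {"width": 1920, "height": 1080}
# }
from pathlib import Path

camera = Camera.from_paths(Path("calibration.json"), Path("homography.json"), fps=12)
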
@dataclass
class Position:
    x: float
    y: float
    conf: float
    state: DetectionState
    frame_nr: int
    det_class: str

@dataclass
class Detection:
    track_id: str  # deepsort track id association
    l: int  # left - image space
    t: int  # top - image space
    w: int  # width - image space
    h: int  # height - image space
    conf: float  # object detector probability
    state: DetectionState
    frame_nr: int
    det_class: str

    def get_foot_coords(self) -> list[float, float]:
        return [self.l + 0.5 * self.w, self.t + self.h]

    @classmethod
    def from_deepsort(cls, dstrack: DeepsortTrack, frame_nr: int):
        return cls(dstrack.track_id, *dstrack.to_ltwh(), dstrack.det_conf, DetectionState.from_deepsort_track(dstrack), frame_nr, dstrack.det_class)


class FrameEmitter(node.Node):
    @classmethod
    def from_bytetrack(cls, bstrack: ByteTrackTrack, frame_nr: int):
        return cls(bstrack.track_id, *bstrack.tlwh, bstrack.score, DetectionState.from_bytetrack_track(bstrack), frame_nr, bstrack.cls)

    def get_scaled(self, scale: float = 1):
        if scale == 1:
            return self

        return Detection(
            self.track_id,
            self.l*scale,
            self.t*scale,
            self.w*scale,
            self.h*scale,
            self.conf,
            self.state,
            self.frame_nr,
            self.det_class)

    def to_ltwh(self):
        return (int(self.l), int(self.t), int(self.w), int(self.h))

    def to_ltrb(self):
        return (int(self.l), int(self.t), int(self.l+self.w), int(self.t+self.h))

@dataclass
class Trajectory:
    # TODO)) Replace history and predictions in Track with Trajectory
    space: Space
    fps: int = 12
    points: List[Detection] = field(default_factory=list)

    def __iter__(self):
        for d in self.points:
            yield d

@dataclass
class Track:
    """A bit of a haphazard wrapper around the 'real' tracker to provide
    a history, with which the predictor can work, as we then can deduce velocity
    and acceleration.
    """
    track_id: str = None
    history: List[Detection] = field(default_factory=list)
    predictor_history: Optional[list] = None  # in image space
    predictions: Optional[list] = None
    fps: int = 12  # TODO)) convert this to camera? That way it incorporates H and dist; alternatively, each track as a whole is attached to a space
    source: Optional[int] = None  # to keep track of processed tracks

    def get_projected_history(self, H: Optional[cv2.Mat] = None, camera: Optional[Camera] = None) -> np.array:
        foot_coordinates = [d.get_foot_coords() for d in self.history]
        # TODO)) Undistort points before perspective transform
        if len(foot_coordinates):
            if camera:
                coords = cv2.undistortPoints(np.array([foot_coordinates]).astype('float32'), camera.mtx, camera.dist, None, camera.newcameramtx)
                coords = cv2.perspectiveTransform(np.array(coords), camera.H)
                return coords.reshape((coords.shape[0], 2))
            else:
                coords = cv2.perspectiveTransform(np.array([foot_coordinates]), H)
                return coords[0]
        return np.array([])

    def get_projected_history_as_dict(self, H, camera: Optional[Camera] = None) -> dict:
        coords = self.get_projected_history(H, camera)
        return [{"x": c[0], "y": c[1]} for c in coords]

    def get_with_interpolated_history(self) -> Track:
        # new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
        # new_track = Track(track.track_id, new_history, track.predictor_history, track.predictions)
        new_history = []
        for j in range(len(self.history)):
            a = self.history[j]
            new_history.append(Detection(a.track_id, a.l, a.t, a.w, a.h, a.conf, a.state, a.frame_nr, a.det_class))

            if j+1 >= len(self.history):
                break

            b = self.history[j+1]
            gap = b.frame_nr - a.frame_nr
            if gap < 1:
                logger.error(f"WARNING, gap between frames {a.frame_nr} -> {b.frame_nr} is negative?")
            if gap > 1:
                for g in range(1, gap):
                    l = lerp(a.l, b.l, g/gap)
                    t = lerp(a.t, b.t, g/gap)
                    w = lerp(a.w, b.w, g/gap)
                    h = lerp(a.h, b.h, g/gap)
                    conf = 0
                    state = DetectionState.Lost
                    frame_nr = a.frame_nr + g
                    new_history.append(Detection(a.track_id, l, t, w, h, conf, state, frame_nr, a.det_class))

        return Track(
            self.track_id,
            new_history,
            self.predictor_history,
            self.predictions,
            self.fps)

    def is_complete(self):
        # complete when every step in the history advances exactly one frame
        diffs = [(b.frame_nr - a.frame_nr) for a, b in zip(self.history[:-1], self.history[1:])]
        return all([d == 1 for d in diffs])

    def get_sampled(self, step_size=1, offset=0):
        if not self.is_complete():
            t = self.get_with_interpolated_history()
        else:
            t = self

        return Track(
            t.track_id,
            t.history[offset::step_size],
            t.predictor_history,
            t.predictions,
            t.fps/step_size)

    def get_binned(self, bin_size, camera: Camera, bin_start=True):
        """
        For an experiment: what if we predict using only concrete positions, by mapping
        dx,dy to a grid. Thus prediction can be for 8 moves, or rather headings
        see ~/notes/attachments example svg
        """

        history = self.get_projected_history_as_dict(H=None, camera=camera)

        def round_to_grid_precision(x):
            factor = 1/bin_size
            return round(x * factor) / factor

        new_history: List[dict] = []
        for i, (det0, det1) in enumerate(zip(history[:-1], history[1:])):
            if i == 0:
                new_history.append({
                    'x': round_to_grid_precision(det0['x']),
                    'y': round_to_grid_precision(det0['y'])
                } if bin_start else det0)
                continue
            if abs(det1['x'] - new_history[-1]['x']) < bin_size and abs(det1['y'] - new_history[-1]['y']) < bin_size:
                continue

            # det1 falls outside of the box [-bin_size:+bin_size] around the last detection

            # 1. Interpolate the exact point between det0 and det1 at which this happens
            if abs(det1['x'] - new_history[-1]['x']) >= bin_size:
                if det1['x'] - new_history[-1]['x'] >= bin_size:
                    # det1 right of last (larger x)
                    x = new_history[-1]['x'] + bin_size
                    f = inv_lerp(det0['x'], det1['x'], x)
                elif new_history[-1]['x'] - det1['x'] >= bin_size:
                    # det1 left of last (smaller x)
                    x = new_history[-1]['x'] - bin_size
                    f = inv_lerp(det0['x'], det1['x'], x)
                y = lerp(det0['y'], det1['y'], f)
            if abs(det1['y'] - new_history[-1]['y']) >= bin_size:
                if det1['y'] - new_history[-1]['y'] >= bin_size:
                    # det1 beyond last (larger y)
                    y = new_history[-1]['y'] + bin_size
                    f = inv_lerp(det0['y'], det1['y'], y)
                elif new_history[-1]['y'] - det1['y'] >= bin_size:
                    # det1 beyond last (smaller y)
                    y = new_history[-1]['y'] - bin_size
                    f = inv_lerp(det0['y'], det1['y'], y)
                x = lerp(det0['x'], det1['x'], f)

            # 2. Find the closest point on the rectangle (the rectangle's four corners, or 4 midpoints)
            points = get_bins(bin_size)
            points = [[new_history[-1]['x'] + p[0], new_history[-1]['y'] + p[1]] for p in points]

            distances = [np.linalg.norm([p[0] - x, p[1] - y]) for p in points]
            closest = np.argmin(distances)

            point = points[closest]

            new_history.append({'x': point[0], 'y': point[1]})
            # todo Offsets to points: [history for in points]
        return new_history

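The binning above leans on lerp/inv_lerp from trap.utils. Assuming the conventional definitions (an assumption, since that module is not shown in this diff), the crossing-point computation reduces to a tiny worked example:

# Assumed conventional definitions (trap.utils is not part of this diff):
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def inv_lerp(a: float, b: float, v: float) -> float:
    return (v - a) / (b - a)

# A step from det0=(0.0, 0.0) to det1=(1.0, 0.5) leaves a bin_size=0.8 box on x:
x = 0.0 + 0.8                 # crossing of the box edge
f = inv_lerp(0.0, 1.0, x)     # fraction of the step where it crosses: 0.8
y = lerp(0.0, 0.5, f)         # y at the crossing: 0.4
assert (x, f, y) == (0.8, 0.8, 0.4)
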
    def to_trajectron_node(self, camera: Camera, env: Environment) -> Node:
        positions = self.get_projected_history(None, camera)
        velocity = np.gradient(positions, 1/self.fps, axis=0)
        acceleration = np.gradient(velocity, 1/self.fps, axis=0)

        new_first_idx = self.history[0].frame_nr

        data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])

        # vx = derivative_of(x, scene.dt)
        # vy = derivative_of(y, scene.dt)
        # ax = derivative_of(vx, scene.dt)
        # ay = derivative_of(vy, scene.dt)

        data_dict = {
            ('position', 'x'): positions[:,0],
            ('position', 'y'): positions[:,1],
            ('velocity', 'x'): velocity[:,0],
            ('velocity', 'y'): velocity[:,1],
            ('acceleration', 'x'): acceleration[:,0],
            ('acceleration', 'y'): acceleration[:,1]
        }

        node_data = pd.DataFrame(data_dict, columns=data_columns)

        return Node(node_type=env.NodeType.PEDESTRIAN, node_id=self.track_id, data=node_data, first_timestep=new_first_idx)

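to_trajectron_node derives velocity and acceleration with np.gradient over a fixed time step of 1/fps; a tiny check of that step, with values following directly from the finite-difference definition:

import numpy as np

fps = 12
positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])  # 0.1 m per frame on x
velocity = np.gradient(positions, 1/fps, axis=0)
# constant speed: 0.1 m / (1/12 s) = 1.2 m/s on x, 0 on y
assert np.allclose(velocity[:, 0], 1.2) and np.allclose(velocity[:, 1], 0.0)
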
@dataclass
class Frame:
    index: int
    img: np.array
    time: float = field(default_factory=lambda: time.time())
    tracks: Optional[dict[str, Track]] = None
    H: Optional[np.array] = None
    camera: Optional[Camera] = None
    maps: Optional[List[cv2.Mat]] = None

    def aslist(self) -> [dict]:
        return { t.track_id:
            {
                'id': t.track_id,
                'history': t.get_projected_history(self.H).tolist(),
                'det_conf': t.history[-1].conf,
                # 'det_conf': trajectory_data[node.id]['det_conf'],
                # 'bbox': trajectory_data[node.id]['bbox'],
                # 'history': history.tolist(),
                'predictions': t.predictions
            } for t in self.tracks.values()
        }

    def without_img(self):
        return Frame(self.index, None, self.time, self.tracks, self.H, self.camera, self.maps)

def video_src_from_config(config) -> UrlOrPath:
    if config.video_loop:
        video_srcs: Iterable[UrlOrPath] = cycle(config.video_src)
    else:
        video_srcs: Iterable[UrlOrPath] = config.video_src
    return video_srcs

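With --video-loop set, the sources become an endless iterator; a behavioural sketch with placeholder file names:

from itertools import cycle

srcs = ["a.mp4", "b.mp4"]
looped = cycle(srcs)  # what video_src_from_config returns when video_loop is set
assert [next(looped) for _ in range(5)] == ["a.mp4", "b.mp4", "a.mp4", "b.mp4", "a.mp4"]
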
class FrameEmitter:
    '''
    Emit frames in a separate thread so they can be throttled,
    or thrown away when the rest of the system cannot keep up
    '''
    def setup(self) -> None:
        self.frame_sock = self.pub(self.config.zmq_frame_addr)
        self.frame_noimg_sock = self.pub(self.config.zmq_frame_noimg_addr)

    def __init__(self, config: Namespace, is_running: Event) -> None:
        self.config = config
        self.is_running = is_running

        context = zmq.Context()
        # TODO: to make things faster, a multiprocessing.Array might be a tad faster: https://stackoverflow.com/a/65201859
        self.frame_sock = context.socket(zmq.PUB)
        self.frame_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. make sure to set BEFORE connect/bind
        self.frame_sock.bind(config.zmq_frame_addr)

        self.frame_noimg_sock = context.socket(zmq.PUB)
        self.frame_noimg_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. make sure to set BEFORE connect/bind
        self.frame_noimg_sock.bind(config.zmq_frame_noimg_addr)

        logger.info(f"Connection socket {self.config.zmq_frame_addr}")
        logger.info(f"Connection socket {self.config.zmq_frame_noimg_addr}")
        logger.info(f"Connection socket {config.zmq_frame_addr}")
        logger.info(f"Connection socket {config.zmq_frame_noimg_addr}")

        self.video_srcs = self.config.video_src

        self.video_srcs = video_src_from_config(self.config)

    def run(self):
        offset = int(self.config.video_offset or 0)
        source = get_video_source(self.video_srcs, self.config.camera, offset, self.config.video_end, self.config.video_loop)
        video_gen = enumerate(source, start=offset)

    def emit_video(self, timer_counter):
        i = 0
        delay_generation = False
        for video_path in self.video_srcs:
            logger.info(f"Play from '{str(video_path)}'")
            if str(video_path).isdigit():
                # numeric input is a CV camera
                video = cv2.VideoCapture(int(str(video_path)))
                # TODO: make config variables
                video.set(cv2.CAP_PROP_FRAME_WIDTH, int(self.config.camera.w))
                video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(self.config.camera.h))
                print("exposure!", video.get(cv2.CAP_PROP_AUTO_EXPOSURE))
                video.set(cv2.CAP_PROP_FPS, 5)
                fps = 5
            elif video_path.url.scheme == 'rtsp':
                gst = f"rtspsrc location={video_path} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=0 drop=true"
                logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
                video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
                fps = 12
            else:
                # os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"
                video = cv2.VideoCapture(str(video_path))
                delay_generation = True
                fps = video.get(cv2.CAP_PROP_FPS)
                target_frame_duration = 1./fps
                logger.info(f"Emit frames at {fps} fps")

        # writer = FrameWriter(self.config.record, None, None) if self.config.record else nullcontext
        print(self.config.record)
        writer = FrameWriter(str(self.config.record), None, None) if self.config.record else None
        try:
            processor = ImgMovementFilter()
            while self.run_loop():
                if self.config.video_offset:
                    logger.info(f"Start at frame {self.config.video_offset}")
                    video.set(cv2.CAP_PROP_POS_FRAMES, self.config.video_offset)
                    i = self.config.video_offset

                try:
                    i, img = next(video_gen)
                except StopIteration as e:
                    logger.info("Video source ended")

            # if '-' in video_path.path().stem:
            #     path_stem = video_path.stem[:video_path.stem.rfind('-')]
            # else:
            #     path_stem = video_path.stem
            # path_stem += "-homography"
            # homography_path = video_path.with_stem(path_stem).with_suffix('.txt')
            # logger.info(f'check homography file {homography_path}')
            # if homography_path.exists():
            #     logger.info(f'Found custom homography file! Using {homography_path}')
            #     video_H = np.loadtxt(homography_path, delimiter=',')
            # else:
            #     video_H = None
            video_H = self.config.camera.H

            prev_time = time.time()

            while self.is_running.is_set():
                with timer_counter.get_lock():
                    timer_counter.value += 1

                ret, img = video.read()

                # seek to 0 if video has finished. Infinite loop
                if not ret:
                    # now loading multiple files
                    break
                    # video.set(cv2.CAP_PROP_POS_FRAMES, 0)
                    # ret, img = video.read()
                    # assert ret is not False # not really error proof...

                frame = Frame(i, img=img, H=self.config.camera.H, camera=self.config.camera)

                # frame.img = processor.apply(frame.img)

                if "DATASETS/hof/" in str(video_path):
                    # hack to mask out area
                    cv2.rectangle(img, (0,0), (800,200), (0,0,0), -1)

                frame = Frame(index=i, img=img, H=self.config.H, camera=self.config.camera)
                # TODO: this is very dirty, need to find another way.
                # perhaps multiprocessing Array?
                self.frame_noimg_sock.send(pickle.dumps(frame.without_img()))
                self.frame_sock.send(pickle.dumps(frame))

                if writer:
                    writer.write(frame.img)
        finally:
            if writer:
                writer.release()

            # only delay consuming the next frame when using a file.
            # Otherwise, go ASAP
            if delay_generation:
                # defer next loop
                now = time.time()
                time_diff = (now - prev_time)
                if time_diff < target_frame_duration:
                    time.sleep(target_frame_duration - time_diff)
                    now += target_frame_duration - time_diff

                prev_time = now

            i += 1

            if not self.is_running.is_set():
                # if not running, also break out of infinite generator loop
                break

        logger.info("Stopping")

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        argparser = LambdaParser()
        argparser.add_argument('--zmq-frame-addr',
                               help='Manually specify communication addr for the frame messages',
                               type=str,
                               default="ipc:///tmp/feeds_frame")

        argparser.add_argument('--zmq-frame-noimg-addr',
                               help='Manually specify communication addr for the frame messages',
                               type=str,
                               default="ipc:///tmp/feeds_frame2")

        argparser.add_argument("--video-src",
                               help="Source video to track from; can be either a relative or absolute path, or a URL, like an RTSP resource, or use gige://RELATIVE_PATH_TO_GIGE_CONFIG_JSON",
                               type=UrlOrPath,
                               nargs='+',
                               default=lambda: [UrlOrPath(p) for p in Path('../DATASETS/VIRAT_subset_0102x/').glob('*.mp4')])
        argparser.add_argument("--video-offset",
                               help="Start playback from given frame. Note that when src is an array, this applies to all videos individually.",
                               default=0,
                               type=int)
        argparser.add_argument("--video-end",
                               help="End (or loop) playback at given frame.",
                               default=None,
                               type=int)
        argparser.add_argument("--record",
                               help="Record source video to given filename",
                               default=None,
                               type=Path)

        argparser.add_argument("--video-loop",
                               help="By default the emitter will run only once. This allows it to loop the video file to keep testing.",
                               action='store_true')

        argparser.add_argument("--camera-fps",
                               help="Camera FPS",
                               type=int,
                               default=12)
        argparser.add_argument("--homography",
                               help="File with homography params [Deprecated]",
                               type=Path,
                               default='../DATASETS/VIRAT_subset_0102x/VIRAT_0102_homography_img2world.txt',
                               action=HomographyAction)
        argparser.add_argument("--calibration",
                               help="File with camera intrinsics and lens distortion params (calibration.json)",
                               # type=Path,
                               required=True,
                               # default=None,
                               action=CameraAction)
        return argparser

def run_frame_emitter(config: Namespace, is_running: Event, timer_counter: int):
    router = FrameEmitter(config, is_running)
    router.run(timer_counter)
    is_running.clear()
    router.emit_video(timer_counter)
    is_running.clear()

631 trap/helios.py

@@ -1,631 +0,0 @@
# code by phar: https://github.com/phar/heliospy

import usb.core
import usb.util
import struct
import time
import queue
from trap.hersey import *
from threading import Thread
import matplotlib.pyplot as plt
import numpy as np


HELIOS_VID = 0x1209
HELIOS_PID = 0xE500
EP_BULK_OUT = 0x02
EP_BULK_IN = 0x81
EP_INT_OUT = 0x06
EP_INT_IN = 0x83

INTERFACE_INT = 0
INTERFACE_BULK = 1
INTERFACE_ISO = 2

HELIOS_MAX_POINTS = 0x1000
HELIOS_MAX_RATE = 0xFFFF
HELIOS_MIN_RATE = 7

HELIOS_SUCCESS = 1

# Functions return negative values if something went wrong
# Attempted to perform an action before calling OpenDevices()
HELIOS_ERROR_NOT_INITIALIZED = -1
# Attempted to perform an action with an invalid device number
HELIOS_ERROR_INVALID_DEVNUM = -2
# WriteFrame() called with null pointer to points
HELIOS_ERROR_NULL_POINTS = -3
# WriteFrame() called with a frame containing too many points
HELIOS_ERROR_TOO_MANY_POINTS = -4
# WriteFrame() called with pps higher than maximum allowed
HELIOS_ERROR_PPS_TOO_HIGH = -5
# WriteFrame() called with pps lower than minimum allowed
HELIOS_ERROR_PPS_TOO_LOW = -6

# Errors from the HeliosDacDevice class begin at -1000
# Attempted to perform an operation on a closed DAC device
HELIOS_ERROR_DEVICE_CLOSED = -1000
# Attempted to send a new frame with HELIOS_FLAGS_DONT_BLOCK before previous DoFrame() completed
HELIOS_ERROR_DEVICE_FRAME_READY = -1001
# Operation failed because SendControl() failed (if operation failed because of libusb_interrupt_transfer failure, the error code will be a libusb error instead)
HELIOS_ERROR_DEVICE_SEND_CONTROL = -1002
# Received an unexpected result from a call to SendControl()
HELIOS_ERROR_DEVICE_RESULT = -1003
# Attempted to call SendControl() with a null buffer pointer
HELIOS_ERROR_DEVICE_NULL_BUFFER = -1004
# Attempted to call SendControl() with a control signal that is too long
HELIOS_ERROR_DEVICE_SIGNAL_TOO_LONG = -1005

HELIOS_ERROR_LIBUSB_BASE = -5000

HELIOS_FLAGS_DEFAULT = 0
HELIOS_FLAGS_START_IMMEDIATELY = (1 << 0)
HELIOS_FLAGS_SINGLE_MODE = (1 << 1)
HELIOS_FLAGS_DONT_BLOCK = (1 << 2)


HELIOS_CMD_STOP = 0x0001
HELIOS_CMD_SHUTTER = 0x0002
HELIOS_CMD_GET_STATUS = 0x0003
HELIOS_GET_FWVERSION = 0x0004
HELIOS_CMD_GET_NAME = 0x0005
HELIOS_CMD_SET_NAME = 0x0006
HELIOS_SET_SDK_VERSION = 0x0007
HELIOS_CMD_ERASE_FIRMWARE = 0x00de

HELIOS_SDK_VERSION = 6

class HeliosPoint():
    def __init__(self, x, y, c=0xff0000, i=255, blank=False):
        self.x = x
        self.y = y
        # accept either a packed 0xRRGGBB int or an (r, g, b) tuple
        if isinstance(c, tuple):
            c = (c[0] << 16) | (c[1] << 8) | c[2]
        self.c = c
        self.i = i
        self.blank = blank

    def __str__(self):
        return "HeliosPoint(%d, %d, 0x%0x, %d, %d)" % (self.x, self.y, self.c, self.i, self.blank)

class HeliosDAC():
    def __init__(self, queuethread=True, debug=0):
        self.debug = debug
        self.closed = 1
        self.frameReady = 0
        self.framebuffer = ""
        self.threadqueue = queue.Queue(maxsize=20)
        self.nextframebuffer = ""
        self.adcbits = 12
        self.dev = usb.core.find(idVendor=HELIOS_VID, idProduct=HELIOS_PID)
        if self.dev is None:
            raise ValueError('Device not found')
        self.cfg = self.dev.get_active_configuration()
        self.intf = self.cfg[(0,1,2)]
        self.dev.reset()
        self.palette = [
            ( 0, 0, 0 ),       # Black/blanked (fixed)
            ( 255, 255, 255 ), # White (fixed)
            ( 255, 0, 0 ),     # Red (fixed)
            ( 255, 255, 0 ),   # Yellow (fixed)
            ( 0, 255, 0 ),     # Green (fixed)
            ( 0, 255, 255 ),   # Cyan (fixed)
            ( 0, 0, 255 ),     # Blue (fixed)
            ( 255, 0, 255 ),   # Magenta (fixed)
            ( 255, 128, 128 ), # Light red
            ( 255, 140, 128 ), ( 255, 151, 128 ), ( 255, 163, 128 ), ( 255, 174, 128 ), ( 255, 186, 128 ),
            ( 255, 197, 128 ), ( 255, 209, 128 ), ( 255, 220, 128 ), ( 255, 232, 128 ), ( 255, 243, 128 ),
            ( 255, 255, 128 ), # Light yellow
            ( 243, 255, 128 ), ( 232, 255, 128 ), ( 220, 255, 128 ), ( 209, 255, 128 ), ( 197, 255, 128 ),
            ( 186, 255, 128 ), ( 174, 255, 128 ), ( 163, 255, 128 ), ( 151, 255, 128 ), ( 140, 255, 128 ),
            ( 128, 255, 128 ), # Light green
            ( 128, 255, 140 ), ( 128, 255, 151 ), ( 128, 255, 163 ), ( 128, 255, 174 ), ( 128, 255, 186 ),
            ( 128, 255, 197 ), ( 128, 255, 209 ), ( 128, 255, 220 ), ( 128, 255, 232 ), ( 128, 255, 243 ),
            ( 128, 255, 255 ), # Light cyan
            ( 128, 243, 255 ), ( 128, 232, 255 ), ( 128, 220, 255 ), ( 128, 209, 255 ), ( 128, 197, 255 ),
            ( 128, 186, 255 ), ( 128, 174, 255 ), ( 128, 163, 255 ), ( 128, 151, 255 ), ( 128, 140, 255 ),
            ( 128, 128, 255 ), # Light blue
            ( 140, 128, 255 ), ( 151, 128, 255 ), ( 163, 128, 255 ), ( 174, 128, 255 ), ( 186, 128, 255 ),
            ( 197, 128, 255 ), ( 209, 128, 255 ), ( 220, 128, 255 ), ( 232, 128, 255 ), ( 243, 128, 255 ),
            ( 255, 128, 255 ), # Light magenta
            ( 255, 128, 243 ), ( 255, 128, 232 ), ( 255, 128, 220 ), ( 255, 128, 209 ), ( 255, 128, 197 ),
            ( 255, 128, 186 ), ( 255, 128, 174 ), ( 255, 128, 163 ), ( 255, 128, 151 ), ( 255, 128, 140 ),
            ( 255, 0, 0 ),     # Red (cycleable)
            ( 255, 23, 0 ), ( 255, 46, 0 ), ( 255, 70, 0 ), ( 255, 93, 0 ), ( 255, 116, 0 ),
            ( 255, 139, 0 ), ( 255, 162, 0 ), ( 255, 185, 0 ), ( 255, 209, 0 ), ( 255, 232, 0 ),
            ( 255, 255, 0 ),   # Yellow (cycleable)
            ( 232, 255, 0 ), ( 209, 255, 0 ), ( 185, 255, 0 ), ( 162, 255, 0 ), ( 139, 255, 0 ),
            ( 116, 255, 0 ), ( 93, 255, 0 ), ( 70, 255, 0 ), ( 46, 255, 0 ), ( 23, 255, 0 ),
            ( 0, 255, 0 ),     # Green (cycleable)
            ( 0, 255, 23 ), ( 0, 255, 46 ), ( 0, 255, 70 ), ( 0, 255, 93 ), ( 0, 255, 116 ),
            ( 0, 255, 139 ), ( 0, 255, 162 ), ( 0, 255, 185 ), ( 0, 255, 209 ), ( 0, 255, 232 ),
            ( 0, 255, 255 ),   # Cyan (cycleable)
            ( 0, 232, 255 ), ( 0, 209, 255 ), ( 0, 185, 255 ), ( 0, 162, 255 ), ( 0, 139, 255 ),
            ( 0, 116, 255 ), ( 0, 93, 255 ), ( 0, 70, 255 ), ( 0, 46, 255 ), ( 0, 23, 255 ),
            ( 0, 0, 255 ),     # Blue (cycleable)
            ( 23, 0, 255 ), ( 46, 0, 255 ), ( 70, 0, 255 ), ( 93, 0, 255 ), ( 116, 0, 255 ),
            ( 139, 0, 255 ), ( 162, 0, 255 ), ( 185, 0, 255 ), ( 209, 0, 255 ), ( 232, 0, 255 ),
            ( 255, 0, 255 ),   # Magenta (cycleable)
            ( 255, 0, 232 ), ( 255, 0, 209 ), ( 255, 0, 185 ), ( 255, 0, 162 ), ( 255, 0, 139 ),
            ( 255, 0, 116 ), ( 255, 0, 93 ), ( 255, 0, 70 ), ( 255, 0, 46 ), ( 255, 0, 23 ),
            ( 128, 0, 0 ),     # Dark red
            ( 128, 12, 0 ), ( 128, 23, 0 ), ( 128, 35, 0 ), ( 128, 47, 0 ), ( 128, 58, 0 ),
            ( 128, 70, 0 ), ( 128, 81, 0 ), ( 128, 93, 0 ), ( 128, 105, 0 ), ( 128, 116, 0 ),
            ( 128, 128, 0 ),   # Dark yellow
            ( 116, 128, 0 ), ( 105, 128, 0 ), ( 93, 128, 0 ), ( 81, 128, 0 ), ( 70, 128, 0 ),
            ( 58, 128, 0 ), ( 47, 128, 0 ), ( 35, 128, 0 ), ( 23, 128, 0 ), ( 12, 128, 0 ),
            ( 0, 128, 0 ),     # Dark green
            ( 0, 128, 12 ), ( 0, 128, 23 ), ( 0, 128, 35 ), ( 0, 128, 47 ), ( 0, 128, 58 ),
            ( 0, 128, 70 ), ( 0, 128, 81 ), ( 0, 128, 93 ), ( 0, 128, 105 ), ( 0, 128, 116 ),
            ( 0, 128, 128 ),   # Dark cyan
            ( 0, 116, 128 ), ( 0, 105, 128 ), ( 0, 93, 128 ), ( 0, 81, 128 ), ( 0, 70, 128 ),
            ( 0, 58, 128 ), ( 0, 47, 128 ), ( 0, 35, 128 ), ( 0, 23, 128 ), ( 0, 12, 128 ),
            ( 0, 0, 128 ),     # Dark blue
            ( 12, 0, 128 ), ( 23, 0, 128 ), ( 35, 0, 128 ), ( 47, 0, 128 ), ( 58, 0, 128 ),
            ( 70, 0, 128 ), ( 81, 0, 128 ), ( 93, 0, 128 ), ( 105, 0, 128 ), ( 116, 0, 128 ),
            ( 128, 0, 128 ),   # Dark magenta
            ( 128, 0, 116 ), ( 128, 0, 105 ), ( 128, 0, 93 ), ( 128, 0, 81 ), ( 128, 0, 70 ),
            ( 128, 0, 58 ), ( 128, 0, 47 ), ( 128, 0, 35 ), ( 128, 0, 23 ), ( 128, 0, 12 ),
            ( 255, 192, 192 ), # Very light red
            ( 255, 64, 64 ),   # Light-medium red
            ( 192, 0, 0 ),     # Medium-dark red
            ( 64, 0, 0 ),      # Very dark red
            ( 255, 255, 192 ), # Very light yellow
            ( 255, 255, 64 ),  # Light-medium yellow
            ( 192, 192, 0 ),   # Medium-dark yellow
            ( 64, 64, 0 ),     # Very dark yellow
            ( 192, 255, 192 ), # Very light green
            ( 64, 255, 64 ),   # Light-medium green
            ( 0, 192, 0 ),     # Medium-dark green
            ( 0, 64, 0 ),      # Very dark green
            ( 192, 255, 255 ), # Very light cyan
            ( 64, 255, 255 ),  # Light-medium cyan
            ( 0, 192, 192 ),   # Medium-dark cyan
            ( 0, 64, 64 ),     # Very dark cyan
            ( 192, 192, 255 ), # Very light blue
            ( 64, 64, 255 ),   # Light-medium blue
            ( 0, 0, 192 ),     # Medium-dark blue
            ( 0, 0, 64 ),      # Very dark blue
            ( 255, 192, 255 ), # Very light magenta
            ( 255, 64, 255 ),  # Light-medium magenta
            ( 192, 0, 192 ),   # Medium-dark magenta
            ( 64, 0, 64 ),     # Very dark magenta
            ( 255, 96, 96 ),   # Medium skin tone
            ( 255, 255, 255 ), # White (cycleable)
            ( 245, 245, 245 ), ( 235, 235, 235 ),
            ( 224, 224, 224 ), # Very light gray (7/8 intensity)
            ( 213, 213, 213 ), ( 203, 203, 203 ),
            ( 192, 192, 192 ), # Light gray (3/4 intensity)
            ( 181, 181, 181 ), ( 171, 171, 171 ),
            ( 160, 160, 160 ), # Medium-light gray (5/8 int.)
            ( 149, 149, 149 ), ( 139, 139, 139 ),
            ( 128, 128, 128 ), # Medium gray (1/2 intensity)
            ( 117, 117, 117 ), ( 107, 107, 107 ),
            ( 96, 96, 96 ),    # Medium-dark gray (3/8 int.)
            ( 85, 85, 85 ), ( 75, 75, 75 ),
            ( 64, 64, 64 ),    # Dark gray (1/4 intensity)
            ( 53, 53, 53 ), ( 43, 43, 43 ),
            ( 32, 32, 32 ),    # Very dark gray (1/8 intensity)
            ( 21, 21, 21 ), ( 11, 11, 11 )]  # Black

        self.dev.set_interface_altsetting(interface=0, alternate_setting=1)

        if self.dev.is_kernel_driver_active(0) is True:
            self.dev.detach_kernel_driver(0)
        # claim the device
        usb.util.claim_interface(self.dev, 0)

        if self.debug:
            print(self.dev)

        try:
            transferResult = self.intf[0].read(32, 1)
        except usb.core.USBError:
            if self.debug:
                print("no lingering data")

        if self.debug:
            print(self.GetName())
            print(self.getHWVersion())
        self.setSDKVersion()
        self.closed = False
        if queuethread:
            self.runQueueThread()

    def runQueueThread(self):
        worker = Thread(target=self.doframe_thread_loop)
        worker.daemon = True
        worker.start()

    def doframe_thread_loop(self):
        while not self.closed:
            self.DoFrame()

    def getHWVersion(self):
        self.intf[1].write(struct.pack("<H", HELIOS_GET_FWVERSION))
        transferResult = self.intf[0].read(32)
        if transferResult[0] == 0x84:
            return struct.unpack("<L", transferResult[1:])[0]
        else:
            return None

    def setSDKVersion(self, version=HELIOS_SDK_VERSION):
        self.intf[1].write(struct.pack("<H", (version << 8) | HELIOS_SET_SDK_VERSION))
        return

    def setShutter(self, shutter=False):
        self.SendControl(struct.pack("<H", (shutter << 8) | HELIOS_CMD_SHUTTER))
        return

    def setName(self, name):
        self.SendControl(struct.pack("<H", HELIOS_CMD_SET_NAME) + name[:30] + b"\x00")
        return
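    # --- illustrative sketch, not part of the original file ---
    # Control words are little-endian uint16 values with the command in
    # the low byte and its argument in the high byte; e.g. opening the
    # shutter packs to:
    #   struct.pack("<H", (1 << 8) | HELIOS_CMD_SHUTTER)  # -> b'\x02\x01'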

    def newFrame(self, pps, pntobjlist, flags=HELIOS_FLAGS_DEFAULT):
        if self.closed:
            return HELIOS_ERROR_DEVICE_CLOSED

        if len(pntobjlist) > HELIOS_MAX_POINTS:
            return HELIOS_ERROR_TOO_MANY_POINTS

        if pps > HELIOS_MAX_RATE:
            return HELIOS_ERROR_PPS_TOO_HIGH

        if pps < HELIOS_MIN_RATE:
            return HELIOS_ERROR_PPS_TOO_LOW

        # this is a bug workaround, the mcu won't correctly receive transfers with these sizes
        ppsActual = pps
        numOfPointsActual = len(pntobjlist)
        if ((len(pntobjlist) - 45) % 64) == 0:
            numOfPointsActual -= 1
            ppsActual = int(pps * numOfPointsActual / len(pntobjlist) + 0.5)

        pntobjlist = pntobjlist[:numOfPointsActual]
        nextframebuffer = b""
        for pnt in pntobjlist:
            # pack the 12-bit x and y coordinates into three bytes
            x_high = (pnt.x >> 4) & 0xff
            xy_mid = ((pnt.x & 0x0F) << 4) | (pnt.y >> 8)
            y_low = pnt.y & 0xFF
            if pnt.blank == False:
                r = (pnt.c & 0xff0000) >> 16
                g = (pnt.c & 0xff00) >> 8
                b = (pnt.c & 0xff)
                i = pnt.i
            else:
                r = 0
                g = 0
                b = 0
                i = 0
            nextframebuffer += struct.pack("BBBBBBB", x_high, xy_mid, y_low, r, g, b, i)
        nextframebuffer += struct.pack("BBBBB", (ppsActual & 0xFF), (ppsActual >> 8), (len(pntobjlist) & 0xFF), (len(pntobjlist) >> 8), flags)
        self.threadqueue.put(nextframebuffer)
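    # --- illustrative sketch, not part of the original file ---
    # Each point above is serialised as 7 bytes: 12-bit x and y packed
    # into three bytes, then r, g, b, intensity. A quick round-trip
    # check of that packing:
    #   x, y = 0x123, 0xABC                  # 12-bit coordinates
    #   b0 = (x >> 4) & 0xFF                 # high 8 bits of x
    #   b1 = ((x & 0x0F) << 4) | (y >> 8)    # x low nibble + y high nibble
    #   b2 = y & 0xFF                        # low 8 bits of y
    #   assert ((b0 << 4) | (b1 >> 4)) == x
    #   assert (((b1 & 0x0F) << 8) | b2) == y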

    def DoFrame(self):
        if self.closed:
            return HELIOS_ERROR_DEVICE_CLOSED
        self.nextframebuffer = self.threadqueue.get(block=True)
        self.intf[3].write(self.nextframebuffer)
        t = time.time()
        while self.getStatus()[1] == 0:  # wait for the laser
            pass
        return self.getStatus()

    def GetName(self):
        self.SendControl(struct.pack("<H", HELIOS_CMD_GET_NAME))
        x = self.intf[0].read(32)[:16]
        if x[0] == 0x85:
            return "".join([chr(t) for t in x[1:]])
        else:
            return None

    def SendControl(self, buffer):
        if buffer is None:
            return HELIOS_ERROR_DEVICE_NULL_BUFFER
        if len(buffer) > 32:
            return HELIOS_ERROR_DEVICE_SIGNAL_TOO_LONG
        self.intf[1].write(buffer)

    def stop(self):
        self.SendControl(struct.pack("<H", HELIOS_CMD_STOP))
        time.sleep(.1)
        return

    def getStatus(self):
        self.SendControl(struct.pack("<H", HELIOS_CMD_GET_STATUS))
        ret = self.intf[0].read(32)
        if self.debug:
            print(ret)
        return ret

    def generateText(self, text, xpos, ypos, cindex=0, scale=1.0):
        pointstream = []
        ctr = 0
        for c in text:
            lastx = xpos
            lasty = ypos
            blank = True
            for x, y in HERSHEY_FONT[ord(c) - 32]:
                if (x == -1) and (y == -1):
                    # pointstream.append(HeliosPoint(lastx,lasty,blank=blank))
                    blank = True
                else:
                    lastx = int((x + (ctr * HERSHEY_WIDTH)) * scale)
                    lasty = int(y * scale)
                    blank = False
                    pointstream.append(HeliosPoint(lastx, lasty, self.palette[cindex], blank=blank))
            ctr += 1

        return pointstream

    def loadILDfile(self, filename, xscale=1.0, yscale=1.0):
        f = open(filename, "rb")
        headerstruct = ">4s3xB8s8sHHHBx"
        moreframes = True
        frames = []
        while moreframes:
            (magic, format, fname, cname, rcnt, num, total_frames, projectorid) = struct.unpack(headerstruct, f.read(struct.calcsize(headerstruct)))
            if magic == b"ILDA":
                pointlist = []
                palette = []
                x = y = z = red = green = blue = 0
                cindex = 0
                blank = 1
                lastpoint = 0
                if rcnt > 0:
                    for i in range(rcnt):
                        if format in [0, 1, 4, 5]:
                            if format == 0:
                                fmt = ">hhhBB"
                                (x, y, z, status, cindex) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))

                            elif format == 1:
                                fmt = ">hhBB"
                                (x, y, status, cindex) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))

                            elif format == 4:
                                fmt = ">hhhBBBB"
                                (x, y, z, status, red, green, blue) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))

                            elif format == 5:
                                fmt = ">hhBBBB"
                                (x, y, status, red, green, blue) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))

                            blank = (status & 0x40) > 0
                            lastpoint = (status & 0x80) > 0
                            lessadcbits = (16 - self.adcbits)
                            x = int((x >> lessadcbits) * xscale)
                            y = int((y >> lessadcbits) * yscale)
                            pointlist.append(HeliosPoint(x, y, self.palette[cindex], blank=blank))

                        elif format == 2:
                            fmt = ">BBB"
                            (r, g, b) = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
                            palette.append((r << 16) | (g << 8) | b)

                    if format == 2:
                        frames.append((("palette", fname, cname, num), palette))
                    else:
                        frames.append((("frame", fname, cname, num), pointlist))

                else:
                    moreframes = 0
            else:
                moreframes = 0

        return frames
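    # --- illustrative sketch, not part of the original file ---
    # The 32-byte ILDA section header unpacked above follows the
    # ">4s3xB8s8sHHHBx" layout: magic b"ILDA", 3 pad bytes, format code,
    # 8-byte frame name, 8-byte company name, record count, frame
    # number, total frames, projector id, 1 pad byte:
    #   struct.calcsize(">4s3xB8s8sHHHBx") == 32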

    def plot(self, pntlist):
        fig, ax = plt.subplots()  # Create a figure containing a single axes.
        xlst = []
        ylst = []
        for p in pntlist:
            if p.blank == False:
                xlst.append(p.x)
                ylst.append(p.y)
        ax.plot(xlst, ylst)
        plt.show()


if __name__ == "__main__":
    a = HeliosDAC()

    # a.runQueueThread()

    # cal = a.generateText("hello World", 20,20,scale=10)
    ## print(cal)
    # a.plot(cal)
    #
    # while(1):
    #     a.newFrame(2000,cal)
    #     a.DoFrame()

    # cal = a.generateText("hello World", 0, 0,scale=10)
    # pps = 20000
    # while(1):
    #     a.newFrame(pps,cal)
    #     a.DoFrame()

    # cal = a.loadILDfile("ildatest.ild")
    # while(1):
    #     for (t,n1,n2,c),f in cal:
    #         print("playing %s,%s, %d" % (n1,n2,c))
    #         a.newFrame(5000,f)
    #         a.DoFrame()

    # a.plot(f)
    pps = 200
    while(1):
        a.newFrame(pps, [HeliosPoint(0, 200, c=(255,255,255)),  # draw a square
                         HeliosPoint(200, 200, c=(255,255,255)),
                         HeliosPoint(200, 0, c=(255,255,255)),
                         HeliosPoint(0, 0, c=(255,255,255))])
        a.DoFrame()

    # while(1):
    #     ## a.newFrame(1000,[HeliosPoint(16000,16000)])
    #     a.newFrame(100,[HeliosPoint(16000-2500,16000),HeliosPoint(16000,16000),HeliosPoint(16000+2500,16000),HeliosPoint(16000,16000),HeliosPoint(16000,16000+2500),HeliosPoint(16000,16000),HeliosPoint(16000,16000-2500),HeliosPoint(16000,16000)])
    #     a.DoFrame()

    # while(1):
    #     a.newFrame(1000,[HeliosPoint(0,200),
    #                      HeliosPoint(200,200),
    #                      HeliosPoint(200,0),
    #                      HeliosPoint(0,0),
    #                      ])
    #     a.DoFrame()
@@ -1,253 +0,0 @@
# -*- coding: utf-8 -*-
"""
Example for using the Helios DAC libraries in python (using the C library with ctypes)

NB: If you haven't set up udev rules you need to use sudo to run the program for it to detect the DAC.
"""
from __future__ import annotations

import ctypes
import json
import math
from typing import Optional

import cv2
import numpy as np


def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolate on the scale given by a to b, using t as the point on that scale.

    Examples
    --------
    50 == lerp(0, 100, 0.5)
    4.2 == lerp(1, 5, 0.8)
    """
    return (1 - t) * a + t * b


class LaserFrame():
    def __init__(self, paths: list[LaserPath]):
        self.paths = paths

    # def closest_path(cls, point, paths):
    #     distances = [min(p.last()-)]

    # def optimise_paths_lazy(self, last_point = None):
    #     """Quick way to optimise order of paths
    #     last_point can be the ending point of previous frame.
    #     """
    #     ordered_paths = []
    #     if not last_point:
    #         ordered_paths.append(self.paths.pop(0))

    #     last_point = endpoint
    #     pass

    def get_points_interpolated_by_distance(self, point_interval, last_point: Optional[LaserPoint] = None) -> list[LaserPoint]:
        """
        Interpolate the gaps between paths (NOT THE PATHS THEMSELVES).
        point_interval is the maximum interval at which a new point should be added.
        """
        points: list[LaserPoint] = []
        for path in self.paths:
            if last_point:
                a = last_point
                b = path.first()
                dx = b.x - a.x
                dy = b.y - a.y
                distance = np.linalg.norm([dx, dy])
                steps = int(distance // point_interval)
                for step in range(steps + 1):  # have both 0 and 1 in the lerp for empty points
                    t = step / (steps + 1)
                    x = int(lerp(a.x, b.x, t))
                    y = int(lerp(a.y, b.y, t))
                    points.append(LaserPoint(x, y, (0, 0, 0), 0, True))
                # print('append', steps)

            points.extend(path.points)

            last_point = path.last()

        return points


class LaserPath():
    def __init__(self, points: Optional[list[LaserPoint]] = None):
        # if len(points) < 1:
        #     raise RuntimeError("LaserPath should have some points")

        # avoid a shared mutable default argument
        self.points = points if points is not None else []

    def last(self):
        return self.points[-1]

    def first(self):
        return self.points[0]


class LaserPoint():
    def __init__(self, x, y, c: Color = (255, 0, 0), i=255, blank=False):
        self.x = x
        self.y = y
        self.c = c
        self._i = i
        self.blank = blank

    @property
    def color(self):
        if self.blank: return (0, 0, 0)
        return self.c

    @property
    def i(self):
        return 0 if self.blank else self._i

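# --- illustrative sketch, not part of the original file ---
# Two short paths with a gap between them; with point_interval=3 the
# gap is bridged by blanked travel points so the galvos sweep to the
# next shape without drawing:
#   demo = LaserFrame([
#       LaserPath([LaserPoint(0, 0), LaserPoint(5, 0)]),
#       LaserPath([LaserPoint(15, 0), LaserPoint(20, 0)]),
#   ])
#   pts = demo.get_points_interpolated_by_distance(3)
#   travel = [p for p in pts if p.blank]  # the inserted blanked points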
def circle_points(cx, cy, r, c: Color):
    # r = 100
    steps = r
    pointlist: list[LaserPoint] = []
    for i in range(steps):
        x = int(cx + math.cos(i * (2 * math.pi) / steps) * r)
        y = int(cy + math.sin(i * (2 * math.pi) / steps) * r)
        pointlist.append(LaserPoint(x, y, c, blank=(i == (steps - 1) or i == 0)))

    return pointlist


def cross_points(cx, cy, r, c: Color):
    # r = 100
    steps = r
    pointlist: list[LaserPoint] = []
    for i in range(steps):
        x = int(cx)
        y = int(cy + r - i * 2 * r / steps)
        pointlist.append(LaserPoint(x, y, c, blank=(i == (steps - 1) or i == 0)))
    path = LaserPath(pointlist)
    pointlist = []
    for i in range(steps):
        y = int(cy)
        x = int(cx + r - i * 2 * r / steps)
        pointlist.append(LaserPoint(x, y, c, blank=(i == (steps - 1) or i == 0)))
    path2 = LaserPath(pointlist)

    return [path, path2]


Color = tuple[int, int, int]

# Define point structure
class HeliosPoint(ctypes.Structure):
    # _pack_ = 1
    _fields_ = [('x', ctypes.c_uint16),
                ('y', ctypes.c_uint16),
                ('r', ctypes.c_uint8),
                ('g', ctypes.c_uint8),
                ('b', ctypes.c_uint8),
                ('i', ctypes.c_uint8)]


# Load and initialize library
HeliosLib = ctypes.cdll.LoadLibrary("./libHeliosDacAPI.so")
numDevices = HeliosLib.OpenDevices()
print("Found", numDevices, "Helios DACs")

# # Create sample frames
# frames = [0 for x in range(100)]
# frameType = HeliosPoint * 1000
# x = 0
# y = 0
# for i in range(100):
#     y = round(i * 0xFFF / 100)
#     # y = round(50*0xFFF/100)
#     frames[i] = frameType()
#     for j in range(1000):
#         if (j < 500):
#             x = round(j * 0xFFF / 500)
#             offset = 0
#         else:
#             offset = 0
#             x = round(0xFFF - ((j - 500) * 0xFFF / 500))

#         # frames[i][j] = HeliosPoint(int(x),int(y+offset),0,(x%155),0,255)
#         frames[i][j] = HeliosPoint(int(x),int(y+offset),0,100,0,255)

pct = 0xfff / 100
r = 50

# TODO)) small script with sliders

paths = [
    # LaserPath(circle_points(10*pct, 45*pct, r, (100,0,100))),
    # *cross_points(10*pct, 45*pct, r, (100,0,100)),  # magenta
    *cross_points(13.7*pct, 38.9*pct, r, (100, 0, 100)),    # magenta # point 10
    *cross_points(44.3*pct, 47.0*pct, r, (0, 100, 0)),      # green # point 0
    *cross_points(82.5*pct, 12.7*pct, r, (100, 100, 100)),  # white # point 4
    *cross_points(89*pct, 49*pct, r, (0, 100, 100)),        # cyan # point 2
    *cross_points(36*pct, 81.7*pct, r, (100, 100, 0)),      # yellow # point 7
]

calibration_points = [
    (13.7*pct, 38.9*pct, 10),
    (44.3*pct, 47.0*pct, 0),
    (82.5*pct, 12.7*pct, 4),
    (89*pct, 49*pct, 2),
    (36*pct, 81.7*pct, 7),
]

with open('/home/ruben/suspicion/DATASETS/hof3/irl_points.json') as fp:
    irl_points = json.load(fp)

src_points = []
dst_points = []
for x, y, index in calibration_points:
    src_points.append(irl_points[index])
    dst_points.append([x, y])

print(src_points)
H, status = cv2.findHomography(np.array(src_points), np.array(dst_points))
print("LASER HOMOGRAPHY MATRIX")
print(H)
dst_img_points = cv2.perspectiveTransform(np.array([[irl_points[1]]]), H)
print(dst_img_points)


paths.extend([
    *cross_points(dst_img_points[0][0][0], dst_img_points[0][0][1], r, (100, 100, 0)),  # yellow # point 7
])

frame = LaserFrame(paths)

pointlist = frame.get_points_interpolated_by_distance(3)

print(len(pointlist))

# Play frames on DAC
i = 0
while True:

    frameType = HeliosPoint * len(pointlist)
    frame = frameType()

    # print(len(pointlist), last_laser_point.x, last_laser_point.y)

    for j, point in enumerate(pointlist):
        frame[j] = HeliosPoint(point.x, point.y, point.color[0], point.color[1], point.color[2], point.i)

    # Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
    statusAttempts = 0
    while (statusAttempts < 512 and HeliosLib.GetStatus(0) != 1):
        statusAttempts += 1

    HeliosLib.WriteFrame(0, 50000, 0, ctypes.pointer(frame), len(pointlist))

    # for i in range(250):
    #     i += 1
    #     for j in range(numDevices):
    #         statusAttempts = 0
    #         # Make 512 attempts for DAC status to be ready. After that, just give up and try to write the frame anyway
    #         while (statusAttempts < 512 and HeliosLib.GetStatus(j) != 1):
    #             statusAttempts += 1
    #         HeliosLib.WriteFrame(j, 50000, 0, ctypes.pointer(frames[i % 100]), 1000)  # Send the frame


HeliosLib.CloseDevices()
196  trap/hersey.py
@@ -1,196 +0,0 @@
# part of heliospy, see helios.py

HERSHEY_HEIGHT = 28
HERSHEY_WIDTH = 28
HERSHEY_FONT = [
    #Ascii 32
    [(0,16),(-1, -1)],
    #Ascii 33
    [(8,10),(5, 21),(5, 7),(-1, -1),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
    #Ascii 34
    [(5,16),(4, 21),(4, 14),(-1, -1),(12, 21),(12, 14),(-1, -1)],
    #Ascii 35
    [(11,21),(11, 25),(4, -7),(-1, -1),(17, 25),(10, -7),(-1, -1),(4, 12),(18, 12),(-1, -1),(3, 6),(17, 6),(-1, -1)],
    #Ascii 36
    [(26,20),(8, 25),(8, -4),(-1, -1),(12, 25),(12, -4),(-1, -1),(17, 18),(15, 20),(12, 21),(8, 21),(5, 20),(3, 18),(3, 16),(4, 14),(5, 13),(7, 12),(13, 10),(15, 9),(16, 8),(17, 6),(17, 3),(15, 1),(12, 0),(8, 0),(5, 1),(3, 3),(-1, -1)],
    #Ascii 37
    [(31,24),(21, 21),(3, 0),(-1, -1),(8, 21),(10, 19),(10, 17),(9, 15),(7, 14),(5, 14),(3, 16),(3, 18),(4, 20),(6, 21),(8, 21),(10, 20),(13, 19),(16, 19),(19, 20),(21, 21),(-1, -1),(17, 7),(15, 6),(14, 4),(14, 2),(16, 0),(18, 0),(20, 1),(21, 3),(21, 5),(19, 7),(17, 7),(-1, -1)],
    #Ascii 38
    [(34,26),(23, 12),(23, 13),(22, 14),(21, 14),(20, 13),(19, 11),(17, 6),(15, 3),(13, 1),(11, 0),(7, 0),(5, 1),(4, 2),(3, 4),(3, 6),(4, 8),(5, 9),(12, 13),(13, 14),(14, 16),(14, 18),(13, 20),(11, 21),(9, 20),(8, 18),(8, 16),(9, 13),(11, 10),(16, 3),(18, 1),(20, 0),(22, 0),(23, 1),(23, 2),(-1, -1)],
    #Ascii 39
    [(7,10),(5, 19),(4, 20),(5, 21),(6, 20),(6, 18),(5, 16),(4, 15),(-1, -1)],
    #Ascii 40
    [(10,14),(11, 25),(9, 23),(7, 20),(5, 16),(4, 11),(4, 7),(5, 2),(7, -2),(9, -5),(11, -7),(-1, -1)],
    #Ascii 41
    [(10,14),(3, 25),(5, 23),(7, 20),(9, 16),(10, 11),(10, 7),(9, 2),(7, -2),(5, -5),(3, -7),(-1, -1)],
    #Ascii 42
    [(8,16),(8, 21),(8, 9),(-1, -1),(3, 18),(13, 12),(-1, -1),(13, 18),(3, 12),(-1, -1)],
    #Ascii 43
    [(5,26),(13, 18),(13, 0),(-1, -1),(4, 9),(22, 9),(-1, -1)],
    #Ascii 44
    [(8,10),(6, 1),(5, 0),(4, 1),(5, 2),(6, 1),(6, -1),(5, -3),(4, -4),(-1, -1)],
    #Ascii 45
    [(2,26),(4, 9),(22, 9),(-1, -1)],
    #Ascii 46
    [(5,10),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
    #Ascii 47
    [(2,22),(20, 25),(2, -7),(-1, -1)],
    #Ascii 48
    [(17,20),(9, 21),(6, 20),(4, 17),(3, 12),(3, 9),(4, 4),(6, 1),(9, 0),(11, 0),(14, 1),(16, 4),(17, 9),(17, 12),(16, 17),(14, 20),(11, 21),(9, 21),(-1, -1)],
    #Ascii 49
    [(4,20),(6, 17),(8, 18),(11, 21),(11, 0),(-1, -1)],
    #Ascii 50
    [(14,20),(4, 16),(4, 17),(5, 19),(6, 20),(8, 21),(12, 21),(14, 20),(15, 19),(16, 17),(16, 15),(15, 13),(13, 10),(3, 0),(17, 0),(-1, -1)],
    #Ascii 51
    [(15,20),(5, 21),(16, 21),(10, 13),(13, 13),(15, 12),(16, 11),(17, 8),(17, 6),(16, 3),(14, 1),(11, 0),(8, 0),(5, 1),(4, 2),(3, 4),(-1, -1)],
    #Ascii 52
    [(6,20),(13, 21),(3, 7),(18, 7),(-1, -1),(13, 21),(13, 0),(-1, -1)],
    #Ascii 53
    [(17,20),(15, 21),(5, 21),(4, 12),(5, 13),(8, 14),(11, 14),(14, 13),(16, 11),(17, 8),(17, 6),(16, 3),(14, 1),(11, 0),(8, 0),(5, 1),(4, 2),(3, 4),(-1, -1)],
    #Ascii 54
    [(23,20),(16, 18),(15, 20),(12, 21),(10, 21),(7, 20),(5, 17),(4, 12),(4, 7),(5, 3),(7, 1),(10, 0),(11, 0),(14, 1),(16, 3),(17, 6),(17, 7),(16, 10),(14, 12),(11, 13),(10, 13),(7, 12),(5, 10),(4, 7),(-1, -1)],
    #Ascii 55
    [(5,20),(17, 21),(7, 0),(-1, -1),(3, 21),(17, 21),(-1, -1)],
    #Ascii 56
    [(29,20),(8, 21),(5, 20),(4, 18),(4, 16),(5, 14),(7, 13),(11, 12),(14, 11),(16, 9),(17, 7),(17, 4),(16, 2),(15, 1),(12, 0),(8, 0),(5, 1),(4, 2),(3, 4),(3, 7),(4, 9),(6, 11),(9, 12),(13, 13),(15, 14),(16, 16),(16, 18),(15, 20),(12, 21),(8, 21),(-1, -1)],
    #Ascii 57
    [(23,20),(16, 14),(15, 11),(13, 9),(10, 8),(9, 8),(6, 9),(4, 11),(3, 14),(3, 15),(4, 18),(6, 20),(9, 21),(10, 21),(13, 20),(15, 18),(16, 14),(16, 9),(15, 4),(13, 1),(10, 0),(8, 0),(5, 1),(4, 3),(-1, -1)],
    #Ascii 58
    [(11,10),(5, 14),(4, 13),(5, 12),(6, 13),(5, 14),(-1, -1),(5, 2),(4, 1),(5, 0),(6, 1),(5, 2),(-1, -1)],
    #Ascii 59
    [(14,10),(5, 14),(4, 13),(5, 12),(6, 13),(5, 14),(-1, -1),(6, 1),(5, 0),(4, 1),(5, 2),(6, 1),(6, -1),(5, -3),(4, -4),(-1, -1)],
    #Ascii 60
    [(3,24),(20, 18),(4, 9),(20, 0),(-1, -1)],
    #Ascii 61
    [(5,26),(4, 12),(22, 12),(-1, -1),(4, 6),(22, 6),(-1, -1)],
    #Ascii 62
    [(3,24),(4, 18),(20, 9),(4, 0),(-1, -1)],
    #Ascii 63
    [(20,18),(3, 16),(3, 17),(4, 19),(5, 20),(7, 21),(11, 21),(13, 20),(14, 19),(15, 17),(15, 15),(14, 13),(13, 12),(9, 10),(9, 7),(-1, -1),(9, 2),(8, 1),(9, 0),(10, 1),(9, 2),(-1, -1)],
    #Ascii 64
    [(55,27),(18, 13),(17, 15),(15, 16),(12, 16),(10, 15),(9, 14),(8, 11),(8, 8),(9, 6),(11, 5),(14, 5),(16, 6),(17, 8),(-1, -1),(12, 16),(10, 14),(9, 11),(9, 8),(10, 6),(11, 5),(-1, -1),(18, 16),(17, 8),(17, 6),(19, 5),(21, 5),(23, 7),(24, 10),(24, 12),(23, 15),(22, 17),(20, 19),(18, 20),(15, 21),(12, 21),(9, 20),(7, 19),(5, 17),(4, 15),(3, 12),(3, 9),(4, 6),(5, 4),(7, 2),(9, 1),(12, 0),(15, 0),(18, 1),(20, 2),(21, 3),(-1, -1),(19, 16),(18, 8),(18, 6),(19, 5),(8, 18),(-1,-1)],
    #Ascii 65
    [(8,18), (9,21), (1, 0),(-1,-1), (9,21),(17, 0),(-1,-1),( 4, 7),(14, 7),(-1,-1)],
    #Ascii 66
    [(23,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 15),(17, 13),(16, 12),(13, 11),(-1, -1),(4, 11),(13, 11),(16, 10),(17, 9),(18, 7),(18, 4),(17, 2),(16, 1),(13, 0),(4, 0),(-1, -1)],
    #Ascii 67
    [(18,21),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(-1, -1)],
    #Ascii 68
    [(15,21),(4, 21),(4, 0),(-1, -1),(4, 21),(11, 21),(14, 20),(16, 18),(17, 16),(18, 13),(18, 8),(17, 5),(16, 3),(14, 1),(11, 0),(4, 0),(-1, -1)],
    #Ascii 69
    [(11,19),(4, 21),(4, 0),(-1, -1),(4, 21),(17, 21),(-1, -1),(4, 11),(12, 11),(-1, -1),(4, 0),(17, 0),(-1, -1)],
    #Ascii 70
    [(8,18),(4, 21),(4, 0),(-1, -1),(4, 21),(17, 21),(-1, -1),(4, 11),(12, 11),(-1, -1)],
    #Ascii 71
    [(22,21),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(18, 8),(-1, -1),(13, 8),(18, 8),(-1, -1)],
    #Ascii 72
    [(8,22),(4, 21),(4, 0),(-1, -1),(18, 21),(18, 0),(-1, -1),(4, 11),(18, 11),(-1, -1)],
    #Ascii 73
    [(2,8),(4, 21),(4, 0),(-1, -1)],
    #Ascii 74
    [(10,16),(12, 21),(12, 5),(11, 2),(10, 1),(8, 0),(6, 0),(4, 1),(3, 2),(2, 5),(2, 7),(-1, -1)],
    #Ascii 75
    [(8,21),(4, 21),(4, 0),(-1, -1),(18, 21),(4, 7),(-1, -1),(9, 12),(18, 0),(-1, -1)],
    #Ascii 76
    [(5,17),(4, 21),(4, 0),(-1, -1),(4, 0),(16, 0),(-1, -1)],
    #Ascii 77
    [(11,24),(4, 21),(4, 0),(-1, -1),(4, 21),(12, 0),(-1, -1),(20, 21),(12, 0),(-1, -1),(20, 21),(20, 0),(-1, -1)],
    #Ascii 78
    [(8,22),(4, 21),(4, 0),(-1, -1),(4, 21),(18, 0),(-1, -1),(18, 21),(18, 0),(-1, -1)],
    #Ascii 79
    [(21,22),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(19, 8),(19, 13),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(-1, -1)],
    #Ascii 80
    [(13,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 14),(17, 12),(16, 11),(13, 10),(4, 10),(-1, -1)],
    #Ascii 81
    [(24,22),(9, 21),(7, 20),(5, 18),(4, 16),(3, 13),(3, 8),(4, 5),(5, 3),(7, 1),(9, 0),(13, 0),(15, 1),(17, 3),(18, 5),(19, 8),(19, 13),(18, 16),(17, 18),(15, 20),(13, 21),(9, 21),(-1, -1),(12, 4),(18, -2),(-1, -1)],
    #Ascii 82
    [(16,21),(4, 21),(4, 0),(-1, -1),(4, 21),(13, 21),(16, 20),(17, 19),(18, 17),(18, 15),(17, 13),(16, 12),(13, 11),(4, 11),(-1, -1),(11, 11),(18, 0),(-1, -1)],
    #Ascii 83
    [(20,20),(17, 18),(15, 20),(12, 21),(8, 21),(5, 20),(3, 18),(3, 16),(4, 14),(5, 13),(7, 12),(13, 10),(15, 9),(16, 8),(17, 6),(17, 3),(15, 1),(12, 0),(8, 0),(5, 1),(3, 3),(-1, -1)],
    #Ascii 84
    [(5,16),(8, 21),(8, 0),(-1, -1),(1, 21),(15, 21),(-1, -1)],
    #Ascii 85
    [(10,22),(4, 21),(4, 6),(5, 3),(7, 1),(10, 0),(12, 0),(15, 1),(17, 3),(18, 6),(18, 21),(-1, -1)],
    #Ascii 86
    [(5,18),(1, 21),(9, 0),(-1, -1),(17, 21),(9, 0),(-1, -1)],
    #Ascii 87
    [(11,24),(2, 21),(7, 0),(-1, -1),(12, 21),(7, 0),(-1, -1),(12, 21),(17, 0),(-1, -1),(22, 21),(17, 0),(-1, -1)],
    #Ascii 88
    [(5,20),(3, 21),(17, 0),(-1, -1),(17, 21),(3, 0),(-1, -1)],
    #Ascii 89
    [(6,18),(1, 21),(9, 11),(9, 0),(-1, -1),(17, 21),(9, 11),(-1, -1)],
    #Ascii 90
    [(8,20),(17, 21),(3, 0),(-1, -1),(3, 21),(17, 21),(-1, -1),(3, 0),(17, 0),(-1, -1)],
    #Ascii 91
    [(11,14),(4, 25),(4, -7),(-1, -1),(5, 25),(5, -7),(-1, -1),(4, 25),(11, 25),(-1, -1),(4, -7),(11, -7),(-1, -1)],
    #Ascii 92
    [(2,14),(0, 21),(14, -3),(-1, -1)],
    #Ascii 93
    [(11,14),(9, 25),(9, -7),(-1, -1),(10, 25),(10, -7),(-1, -1),(3, 25),(10, 25),(-1, -1),(3, -7),(10, -7),(-1, -1)],
    #Ascii 94
    [(10,16),(6, 15),(8, 18),(10, 15),(-1, -1),(3, 12),(8, 17),(13, 12),(-1, -1),(8, 17),(8, 0),(-1, -1)],
    #Ascii 95
    [(2,16),(0, -2),(16, -2),(-1, -1)],
    #Ascii 96
    [(7,10),(6, 21),(5, 20),(4, 18),(4, 16),(5, 15),(6, 16),(5, 17),(-1, -1)],
    #Ascii 97
    [(17,19),(15, 14),(15, 0),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
    #Ascii 98
    [(17,19),(4, 21),(4, 0),(-1, -1),(4, 11),(6, 13),(8, 14),(11, 14),(13, 13),(15, 11),(16, 8),(16, 6),(15, 3),(13, 1),(11, 0),(8, 0),(6, 1),(4, 3),(-1, -1)],
    #Ascii 99
    [(14,18),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
    #Ascii 100
    [(17,19),(15, 21),(15, 0),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
    #Ascii 101
    [(17,18),(3, 8),(15, 8),(15, 10),(14, 12),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
    #Ascii 102
    [(8,12),(10, 21),(8, 21),(6, 20),(5, 17),(5, 0),(-1, -1),(2, 14),(9, 14),(-1, -1)],
    #Ascii 103
    [(22,19),(15, 14),(15, -2),(14, -5),(13, -6),(11, -7),(8, -7),(6, -6),(-1, -1),(15, 11),(13, 13),(11, 14),(8, 14),(6, 13),(4, 11),(3, 8),(3, 6),(4, 3),(6, 1),(8, 0),(11, 0),(13, 1),(15, 3),(-1, -1)],
    #Ascii 104
    [(10,19),(4, 21),(4, 0),(-1, -1),(4, 10),(7, 13),(9, 14),(12, 14),(14, 13),(15, 10),(15, 0),(-1, -1)],
    #Ascii 105
    [(8,8),(3, 21),(4, 20),(5, 21),(4, 22),(3, 21),(-1, -1),(4, 14),(4, 0),(-1, -1)],
    #Ascii 106
    [(11,10),(5, 21),(6, 20),(7, 21),(6, 22),(5, 21),(-1, -1),(6, 14),(6, -3),(5, -6),(3, -7),(1, -7),(-1, -1)],
    #Ascii 107
    [(8,17),(4, 21),(4, 0),(-1, -1),(14, 14),(4, 4),(-1, -1),(8, 8),(15, 0),(-1, -1)],
    #Ascii 108
    [(2,8),(4, 21),(4, 0),(-1, -1),(18, 30),(-1,-1)],
    #Ascii 109
    [(18,30), (4,14),(4, 0),(-1,-1),(4,10),(7,13),(9,14),(12,14),(14,13),(15,10),(15, 0),(-1,-1),(15,10),(18,13),(20,14),(23,14),(25,13),(26,10),(26, 0),(-1,-1)],
    #Ascii 110
    [(10,19),(4, 14),(4, 0),(-1, -1),(4, 10),(7, 13),(9, 14),(12, 14),(14, 13),(15, 10),(15, 0),(-1, -1),(17, 19),(-1,-1)],
    #Ascii 111
    [(17,19),(8,14), (6,13), (4,11), (3, 8), (3, 6), (4, 3), (6, 1), (8, 0),(11, 0),(13, 1),(15, 3),(16,6),(16, 8),(15,11),(13,13),(11,14), (8,14), (-1,-1),(-1,-1)],
    #Ascii 112
    [(17,19),(4, 14),(4, -7),(-1, -1),(4, 11),(6, 13),(8, 14),(11, 14),(13, 13),(15, 11),(16, 8),(16, 6),(15, 3),(13, 1),(11, 0),(8, 0),(6, 1),(4, 3),(-1, -1),(17, 19),(-1,-1)],
    #Ascii 113
    [(17,19), (15,14),(15,-7),(-1,-1),(15,11),(13,13),(11,14), (8,14), (6,13), (4,11), (3, 8), (3, 6), (4,3), (6, 1), (8, 0),(11, 0),(13, 1),(15, 3), (-1,-1), (-1,-1)],
    #Ascii 114
    [(8,13),(4, 14),(4, 0),(-1, -1),(4, 8),(5, 11),(7, 13),(9, 14),(12, 14),(-1, -1)],
    #Ascii 115
    [(17,17),(14, 11),(13, 13),(10, 14),(7, 14),(4, 13),(3, 11),(4, 9),(6, 8),(11, 7),(13, 6),(14, 4),(14, 3),(13, 1),(10, 0),(7, 0),(4, 1),(3, 3),(-1, -1)],
    #Ascii 116
    [(8,12),(5, 21),(5, 4),(6, 1),(8, 0),(10, 0),(-1, -1),(2, 14),(9, 14),(-1, -1)],
    #Ascii 117
    [(10,19),(4, 14),(4, 4),(5, 1),(7, 0),(10, 0),(12, 1),(15, 4),(-1, -1),(15, 14),(15, 0),(-1, -1)],
    #Ascii 118
    [(5,16),(2, 14),(8, 0),(-1, -1),(14, 14),(8, 0),(-1, -1)],
    #Ascii 119
    [(11,22),(3, 14),(7, 0),(-1, -1),(11, 14),(7, 0),(-1, -1),(11, 14),(15, 0),(-1, -1),(19, 14),(15, 0),(-1, -1)],
    #Ascii 120
    [(5,17),(3, 14),(14, 0),(-1, -1),(14, 14),(3, 0),(-1, -1)],
    #Ascii 121
    [(9,16),(2, 14),(8, 0),(-1, -1),(14, 14),(8, 0),(6, -4),(4, -6),(2, -7),(1, -7),(-1, -1)],
    #Ascii 122
    [(8,17),(14, 14),(3, 0),(-1, -1),(3, 14),(14, 14),(-1, -1),(3, 0),(14, 0),(-1, -1)],
    #Ascii 123
    [(39,14),(9, 25),(7, 24),(6, 23),(5, 21),(5, 19),(6, 17),(7, 16),(8, 14),(8, 12),(6, 10),(-1, -1),(7, 24),(6, 22),(6, 20),(7, 18),(8, 17),(9, 15),(9, 13),(8, 11),(4, 9),(8, 7),(9, 5),(9, 3),(8, 1),(7, 0),(6, -2),(6, -4),(7, -6),(-1, -1),(6, 8),(8, 6),(8, 4),(7, 2),(6, 1),(5, -1),(5, -3),(6, -5),(7, -6),(9, -7),(-1, -1)],
    #Ascii 124
    [(2,8),(4, 25),(4, -7),(-1, -1)],
    #Ascii 125
    [(39,14),(5, 25),(7, 24),(8, 23),(9, 21),(9, 19),(8, 17),(7, 16),(6, 14),(6, 12),(8, 10),(-1, -1),(7, 24),(8, 22),(8, 20),(7, 18),(6, 17),(5, 15),(5, 13),(6, 11),(10, 9),(6, 7),(5, 5),(5, 3),(6, 1),(7, 0),(8, -2),(8, -4),(7, -6),(-1, -1),(8, 8),(6, 6),(6, 4),(7, 2),(8, 1),(9, -1),(9, -3),(8, -5),(7, -6),(5, -7),(-1, -1)],
    #Ascii 126
    [(23,24),(3, 6),(3, 8),(4, 11),(6, 12),(8, 12),(10, 11),(14, 8),(16, 7),(18, 7),(20, 8),(21, 10),(-1, -1),(3, 8),(4, 10),(6, 11),(8, 11),(10, 10),(14, 7),(16, 6),(18, 6),(20, 7),(21, 10),(21, 12),(-1, -1)]]
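# --- illustrative sketch, not part of the original file ---
# Each glyph above is a list of (x, y) pen positions; (-1, -1) lifts
# the pen so the next pair starts a new stroke (the leading pair
# appears to carry glyph metrics from the upstream Hershey data).
# Splitting 'I' (Ascii 73) into strokes:
#   strokes, stroke = [], []
#   for x, y in HERSHEY_FONT[ord('I') - 32]:
#       if (x, y) == (-1, -1):
#           strokes.append(stroke)
#           stroke = []
#       else:
#           stroke.append((x, y))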

@@ -1,292 +0,0 @@
from argparse import ArgumentParser
import enum
import json
from pathlib import Path
import time
from typing import Optional

import cv2
import numpy as np

from trap.base import DataclassJSONEncoder, DistortedCamera, Frame
from trap.lines import CoordinateSpace, RenderableLine, RenderableLines, RenderablePoint, RenderablePosition, SrgbaColor, cross_points
from trap.node import Node
from trap.stage import Coordinate


class Modes(enum.Enum):
    POINTS = 1
    TEST_LINE = 2


class LaserCalibration(Node):
    """
    A calibrated camera can be used to reverse-map the points of the laser to world coordinates.
    Note, it publishes on the address of the stage node, so they cannot run at the same time.

    1. Draw points with the laser (use 1-9 to create/select, then position them with arrow keys)
    2. Use the cursor on the camera stream to create an image point for each laser point.
       - Click near an existing point to select and drag it.
    3. Take the image coordinate of the point, undistort it, apply the homography; this gives a world coordinate.
    4. Perform a homography on the world coordinates + laser coordinates.
    """

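    # --- illustrative sketch, not part of the original file ---
    # Steps 3-4 in code form: an image point is first mapped to world
    # coordinates through the camera calibration, then the world-to-laser
    # homography H maps it into laser (galvo) space:
    #   world = self.camera.points_img_to_world([img_point])
    #   laser = cv2.perspectiveTransform(
    #       np.array([world], dtype=np.float64), np.array(self.H))[0][0]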
    def setup(self):
        # self.scenarios: List[DrawnScenario] = []

        self.frame_sock = self.sub(self.config.zmq_frame_addr)
        self.laser_sock = self.pub(self.config.zmq_stage_addr)

        self.camera: Optional[DistortedCamera] = None

        self._selected_point = None
        self._is_dragging = False
        self.laser_points = {}
        self.image_points = {}
        self.mode = Modes.POINTS
        self.H = None

        self.img_size = (1920, 1080)
        self.frame_img_factor = (1, 1)

        if self.config.calibfile.exists():
            with self.config.calibfile.open('r') as fp:
                calibdata = json.load(fp)
                self.laser_points = calibdata['laser_points']
                self.image_points = calibdata['image_points']
                self.H = calibdata['H']

    def run(self):
        cv2.namedWindow("laser_calib", cv2.WINDOW_NORMAL)
        # https://gist.github.com/ronekko/dc3747211543165108b11073f929b85e
        # cv2.moveWindow("laser_calib", 0, -1)
        cv2.setMouseCallback('laser_calib', self.mouse_event)
        cv2.setWindowProperty("laser_calib", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

        # arrow up (82), down (84), arrow left (81)

        frame = None
        while self.run_loop_capped_fps(60):
            if self.frame_sock.poll(0):
                frame: Frame = self.frame_sock.recv_pyobj()
                if not self.camera:
                    self.camera = frame.camera

            if frame is None:
                continue

            self.frame_img_factor = frame.img.shape[1] / self.img_size[0], frame.img.shape[0] / self.img_size[1]

            img = frame.img
            img = cv2.resize(img, self.img_size)

            cv2.putText(img, 'press 1-0 to create/edit points', (10, 20), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 255))
            if len(self.laser_points) < 4:
                cv2.putText(img, 'add points to calculate homography', (10, 40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 255))
            else:
                cv2.putText(img, 'press c to calculate homography', (10, 40), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 0))

            cv2.putText(img, str(self.config.calibfile), (10, self.img_size[1] - 30), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 0))

            if self._selected_point:
                color = (0, 255, 255)
                cv2.putText(img, f'selected {self._selected_point}', (10, 60), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
                cv2.putText(img, 'press d to delete', (10, 80), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
                cv2.putText(img, 'use arrows to position laser for this point', (10, 100), cv2.FONT_HERSHEY_SIMPLEX, .5, color)
                target = self.camera.points_img_to_world([self.image_points[self._selected_point]])[0].tolist()
                target = round(target[0], 2), round(target[1], 2)
                cv2.putText(img, f'map {self.laser_points[self._selected_point]} to {target} ({self.image_points[self._selected_point]})', (10, 120), cv2.FONT_HERSHEY_SIMPLEX, .5, color)

            for k, coord in self.image_points.items():
                color = (0, 0, 255) if self._selected_point == k else (255, 0, 0)
                coord = int(coord[0] / self.frame_img_factor[0]), int(coord[1] / self.frame_img_factor[1])
                cv2.circle(img, coord, 4, color, thickness=2)
                cv2.putText(img, str(k), (coord[0] + 10, coord[1]), cv2.FONT_HERSHEY_SIMPLEX, .5, color)

            key = cv2.waitKey(5)  # or for arrows: full_key_code = cv2.waitKeyEx(0)
            self.key_event(key)
            # nr_keys = [ord(i) for i in range(10)] # select/add point
            cv2.imshow('laser_calib', img)

            lines = []
            if self.mode == Modes.TEST_LINE:
                lines.append(RenderableLine([
                    RenderablePoint((i, time.time() % 18), SrgbaColor(0, 1, 0, 1)) for i in range(-15, 40)
                ]))
                # render in world space
                rl = RenderableLines(lines, CoordinateSpace.WORLD)
                self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)
            else:
                if self._selected_point:
                    point = self.laser_points[self._selected_point]
                    lines.extend(cross_points(point[0], point[1], 100, SrgbaColor(0, 1, 0, 1)))

                # render in laser space
                rl = RenderableLines(lines, CoordinateSpace.LASER)
                self.laser_sock.send_json(rl, cls=DataclassJSONEncoder)

            # print(json.dumps(rl, cls=DataclassJSONEncoder))

    def key_event(self, key: int):
        if key < 0:
            return

        if key == ord('q'):
            exit()

        if key == 27:  # esc
            self._selected_point = None

        if key == ord('c'):
            self.calculate_homography()
            self.save()

        if key == ord('d') and self._selected_point:
            self.delete_point(self._selected_point)

        if key == ord('t'):
            self.mode = Modes.TEST_LINE if self.mode == Modes.POINTS else Modes.POINTS
            print(self.mode)

        # arrow left (81), up (82), right (83), down (84)
        if self._selected_point and key in [81, 84, 82, 83,
                                            ord('h'), ord('j'), ord('k'), ord('l'),
                                            ord('H'), ord('J'), ord('K'), ord('L'),
                                            ]:
            diff = [0, 0]
            if key in [81, ord('h')]:
                diff[0] -= 1
            if key == ord('H'):
                diff[0] -= 10
            if key in [83, ord('l')]:
                diff[0] += 1
            if key == ord('L'):
                diff[0] += 10

            if key in [82, ord('k')]:
                diff[1] += 1
            if key == ord('K'):
                diff[1] += 10
            if key in [84, ord('j')]:
                diff[1] -= 1
            if key == ord('J'):
                diff[1] -= 10

            self.laser_points[self._selected_point] = (
                self.laser_points[self._selected_point][0] + diff[0],
                self.laser_points[self._selected_point][1] + diff[1],
            )

        nr_keys = [ord(str(i)) for i in range(10)]
        if key in nr_keys:
            select = str(nr_keys.index(key))
            self.create_or_select(select)

    def mouse_event(self, event, x, y, flags, param):
        x *= self.frame_img_factor[0]
        y *= self.frame_img_factor[1]
        if event == cv2.EVENT_MOUSEMOVE:
            if not self._is_dragging or not self._selected_point:
                return

            self.image_points[self._selected_point] = (x, y)

        if event == cv2.EVENT_LBUTTONDOWN:
            # select or create
            self._selected_point = None
            for i, p in self.image_points.items():
                d = (p[0] - x)**2 + (p[1] - y)**2  # squared distance, selects within ~5.5 px
                if d < 30:
                    self._selected_point = i
                    break
            if self._selected_point is None:
                self._selected_point = self.new_point((x, y), None)

            self._is_dragging = True

        if event == cv2.EVENT_LBUTTONUP:
            self._is_dragging = False
            # ... point stays selected to tweak laser

    def create_or_select(self, nr: str):
        if nr not in self.image_points:
            self.new_point(None, None, nr)
        self._selected_point = nr
        return nr

    def new_point(self, img_coord: Optional[Coordinate], laser_coord: Optional[Coordinate], nr: Optional[str] = None):
        if nr:
            new_nr = nr
        else:
            new_nr = None
            for i in range(100):
                k = str(i)
                if k not in self.image_points:
                    new_nr = k
                    break
            if not new_nr:
                new_nr = '0'  # cover the unlikely case that all keys are taken

        self.image_points[new_nr] = img_coord or (100, 100)
        self.laser_points[new_nr] = laser_coord or (100, 100)
        return new_nr

    def delete_point(self, point: str):
        del self.image_points[point]
        del self.laser_points[point]
        self._selected_point = None

    def calculate_homography(self):
        if len(self.image_points) < 4:
            return

        world_points = self.camera.points_img_to_world(list(self.image_points.values()))
        laser_points = np.array(list(self.laser_points.values()))
        print('from', world_points)
        print('to', laser_points)
        self.H, status = cv2.findHomography(world_points, laser_points)

        print('Found')
        print(self.H)

    def save(self):
        with self.config.calibfile.open('w') as fp:
            json.dump({
                'laser_points': self.laser_points,
                'image_points': self.image_points,
                'H': self.H.tolist()
            }, fp)

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        argparser = ArgumentParser()
        argparser.add_argument('--zmq-frame-addr',
                               help='Manually specify the communication addr for the frame messages',
                               type=str,
                               default="ipc:///tmp/feeds_frame")
        argparser.add_argument('--zmq-stage-addr',
                               help='Manually specify the communication addr for the stage messages (the rendered lines)',
                               type=str,
                               default="tcp://0.0.0.0:99174")
        argparser.add_argument('--calibfile',
                               help='specify file to save & load points with',
                               type=Path,
                               default=Path("./laser_calib.json"))

        return argparser

@@ -1,693 +0,0 @@
# used for "Forward Referencing of type annotations"
from __future__ import annotations

import time
import ffmpeg
from argparse import Namespace
import datetime
import logging
from multiprocessing import Event
from multiprocessing.synchronize import Event as BaseEvent
import cv2
import numpy as np
import json
import pyglet
import pyglet.event
import zmq
import tempfile
from pathlib import Path
import shutil
import math
from typing import Dict, Iterable, Optional

from pyglet import shapes
from PIL import Image

# from trap.scenarios import TrackScenario
from trap.counter import CounterSender
from trap.frame_emitter import DetectionState, Frame, Track, Camera
# from trap.helios import HeliosDAC, HeliosPoint
from trap.preview_renderer import PROJECTION_MAP, DrawnTrack, FrameWriter
from trap.tools import draw_track, draw_track_predictions, draw_track_projected, draw_trackjectron_history, drawntrack_predictions_to_lines, to_point, track_predictions_to_lines
from trap.utils import convert_world_points_to_img_points, convert_world_space_to_img_space, lerp


logger = logging.getLogger("trap.laser_renderer")

import ctypes

class LaserFrame():
    def __init__(self, paths: list[LaserPath]):
        self.paths = paths

    def point_count(self):
        return sum([len(p.points) for p in self.paths])

    # def closest_path(cls, point, paths):
    #     distances = [min(p.last()-)]

    # def optimise_paths_lazy(self, last_point = None):
    #     """Quick way to optimise order of paths
    #     last_point can be the ending point of previous frame.
    #     """
    #     ordered_paths = []
    #     if not last_point:
    #         ordered_paths.append(self.paths.pop(0))

    #     last_point = endpoint
    #     pass

    def as_cropped_to_projector(self):
        paths = []
        for path in self.paths:
            p = path.as_cropped_to_projector()
            if len(p.points):
                paths.append(p)
        return LaserFrame(paths)

    def get_points_interpolated_by_distance(self, point_interval, last_point: Optional[LaserPoint] = None) -> list[LaserPoint]:
        """
        Interpolate the gaps between paths (NOT THE PATHS THEMSELVES).
        point_interval is the maximum interval at which a new point should be added.
        """
        points: list[LaserPoint] = []
        for path in self.paths:
            if last_point:
                a = last_point
                b = path.first()
                dx = b.x - a.x
                dy = b.y - a.y
                distance = np.linalg.norm([dx, dy])
                steps = int(distance // point_interval)
                for step in range(steps + 1):  # have both 0 and 1 in the lerp for empty points
                    t = step / (steps + 1)
                    t = 1  # just jump asap to the starting point of the next shape
                    x = int(lerp(a.x, b.x, t))
                    y = int(lerp(a.y, b.y, t))
                    points.append(LaserPoint(x, y, (0, 0, 0), 0, True))
                # print('append', steps)

            points.extend(path.points)

            last_point = path.last()

        return points


class LaserPath():
    def __init__(self, points: Optional[list[LaserPoint]] = None):
        # if len(points) < 1:
        #     raise RuntimeError("LaserPath should have some points")

        # avoid a shared mutable default argument
        self.points = points if points is not None else []

    def last(self):
        return self.points[-1]

    def first(self):
        return self.points[0]

    def as_array(self):
        return np.array([[p.x, p.y] for p in self.points])

    def as_cropped_to_projector(self):
        """Make sure all points fall within range of the laser"""
        points = [p for p in self.points if p.x >= 0 and p.y >= 0 and p.x < 0xFFF and p.y < 0xFFF]
        return LaserPath(points)

    def simplyfied_path(self, start_v=10., max_v=20., accel=2):
        """Walk over the path with a specific velocity,
        continuously accelerating (accel) until max_v is reached,
        placing a point at each step. Unfinished.

        (see also tools.transition_path_points() )
        """
        if len(self.points) < 1:
            return self.points

        path = self.as_array()

        # new_path = np.array([])
        lengths = np.sqrt(np.sum(np.diff(path, axis=0)**2, axis=1))
        cum_lengths = np.cumsum(lengths)
        # distance = cum_lengths[-1] * t
        # ts = np.concatenate((np.array([0.]), cum_lengths / cum_lengths[-1]))
        # print(cum_lengths[-1])
        # DRAW_SPEED = 35 # fixed speed (independent of length) TODO)) make variable
        # ts = np.concatenate((np.array([0.]), cum_lengths / DRAW_SPEED))
        new_path = [path[0]]

        v = start_v
        position = 0
        next_pos = position + v

        for p_a, p_b, pos in zip(path[:-1], path[1:], cum_lengths):
            # TODO))
            if pos < next_pos:
                continue

            v = min(v + accel, max_v)
            next_pos = position + v

            # relative_t = inv_lerp(t_a, t_b, t)  # TODO)) unfinished

            pass

        # for a, b, t_a, t_b in zip(path[:-1], path[1:], ts[:-1], ts[1:]):
        #     if t_b < t:
        #         new_path.append(b)
        #         continue
        #     # interpolate
        #     relative_t = inv_lerp(t_a, t_b, t)
        #     x = lerp(a[0], b[0], relative_t)
        #     y = lerp(a[1], b[1], relative_t)
        #     new_path.append([x,y])
        #     break
        # return np.array(new_path)

class LaserPoint():
    def __init__(self, x, y, c: Color = (255, 0, 0), i=255, blank=False):
        self.x = x
        self.y = y
        self.c = c
        self._i = i
        self.blank = blank

    @property
    def color(self):
        if self.blank: return (0, 0, 0)
        return self.c

    @property
    def i(self):
        return 0 if self.blank else self._i


# Define point structure
class CHeliosPoint(ctypes.Structure):
    # _pack_ = 1
    _fields_ = [('x', ctypes.c_uint16),
                ('y', ctypes.c_uint16),
                ('r', ctypes.c_uint8),
                ('g', ctypes.c_uint8),
                ('b', ctypes.c_uint8),
                ('i', ctypes.c_uint8)]

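# --- illustrative sketch, not part of the original file ---
# A frame for the DAC's WriteFrame call is a contiguous ctypes array of
# CHeliosPoint, built from a list of LaserPoints like so:
#   FrameArray = CHeliosPoint * len(points)
#   cframe = FrameArray(*[
#       CHeliosPoint(p.x, p.y, p.color[0], p.color[1], p.color[2], p.i)
#       for p in points
#   ])
#   self.helios.WriteFrame(0, 30000, 0, ctypes.pointer(cframe), len(points))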

class LaserRenderer:
    def __init__(self, config: Namespace, is_running: BaseEvent):
        self.config = config
        self.is_running = is_running

        context = zmq.Context()
        self.prediction_sock = context.socket(zmq.SUB)
        self.prediction_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.prediction_sock.setsockopt(zmq.SUBSCRIBE, b'')
        # self.prediction_sock.connect(config.zmq_prediction_addr if not self.config.bypass_prediction else config.zmq_trajectory_addr)
        self.prediction_sock.connect(config.zmq_prediction_addr)

        self.tracker_sock = context.socket(zmq.SUB)
        self.tracker_sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.tracker_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.tracker_sock.connect(config.zmq_trajectory_addr)

        self.H = self.config.H

        self.inv_H = np.linalg.pinv(self.H)

        # TODO: get FPS from frame_emitter
        # self.out = cv2.VideoWriter(str(filename), fourcc, 23.97, (1280,720))
        self.fps = 60
        self.frame_size = (self.config.camera.w, self.config.camera.h)

        self.first_time: float | None = None
        self.frame: Frame | None = None
        self.tracker_frame: Frame | None = None
        self.prediction_frame: Frame | None = None

        self.tracks: Dict[str, Track] = {}
        # self.scenarios: Dict[str, TrackScenario] = {}
        self.predictions: Dict[str, Track] = {}
        self.drawn_tracks: Dict[str, DrawnTrack] = {}

        self.helios = ctypes.cdll.LoadLibrary("./trap/helios_dac/libHeliosDacAPI.so")
        numDevices = self.helios.OpenDevices()
        logger.info(f"Found {numDevices} Helios DACs")

        # self.dac = HeliosDAC(debug=False)
        # logger.info(f"{self.dac.dev}")
        # logger.info(f"{self.dac.GetName()}")
        # logger.info(f"{self.dac.getHWVersion()}")

        # logger.info(f"Helios version: {self.dac.getHWVersion()}")

        # self.init_shapes()

        # self.init_labels()

    def check_frames(self, dt):
        # NB: leftover from the pyglet preview renderer; self.frame_sock and
        # self.batch_bg are not set up in this class.
        new_tracks = False
        try:
            self.frame: Frame = self.frame_sock.recv_pyobj(zmq.NOBLOCK)
            if not self.first_time:
                self.first_time = self.frame.time
            img = cv2.GaussianBlur(self.frame.img, (15, 15), 0)
            img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
            img = pyglet.image.ImageData(self.frame_size[0], self.frame_size[1], 'RGB', img.tobytes())
            # don't draw in batch, so that it is the background
            self.video_sprite = pyglet.sprite.Sprite(img=img, batch=self.batch_bg)
            self.video_sprite.opacity = 100
        except zmq.ZMQError as e:
            pass
        try:
            self.prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass
        try:
            self.tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
            new_tracks = True
        except zmq.ZMQError as e:
            pass
    def run(self, timer_counter):
        prediction_frame = None
        tracker_frame = None

        i = 0
        first_time = None

        kpps = 50000

        # a small vertical test pattern, useful while no tracker/prediction data arrives
        pointlist_test = []
        for i in range(30):
            if i < 15:
                y = int(i * 10 + 0xfff / 2)
            else:
                y = int((15 - i) * 10 + 0xfff / 2)
            pointlist_test.append(LaserPoint(int(0), 0xfff - y, blank=False))

        counter = CounterSender()

        print(f"RENDER DAC\n\n\n")

        last_laser_point = None
        while self.is_running.is_set():
            i += 1
            with timer_counter.get_lock():
                timer_counter.value += 1

            try:
                prediction_frame: Frame = self.prediction_sock.recv_pyobj(zmq.NOBLOCK)
                for track_id, track in prediction_frame.tracks.items():
                    prediction_id = f"{track_id}-{track.history[-1].frame_nr}"
                    self.predictions[prediction_id] = track

                    # TODO)) also for tracks:
                    if track_id not in self.drawn_tracks:
                        self.drawn_tracks[track_id] = DrawnTrack(track_id, track, self, prediction_frame.camera.H, PROJECTION_MAP, prediction_frame.camera)
                    elif self.drawn_tracks[track_id].update_predictions_at < (time.time() - .5):  # TODO)) make the prediction-update interval configurable
                        self.drawn_tracks[track_id].set_predictions(track)
            except zmq.ZMQError as e:
                logger.debug(f'reuse prediction')

            try:
                tracker_frame: Frame = self.tracker_sock.recv_pyobj(zmq.NOBLOCK)
                for track_id, track in tracker_frame.tracks.items():
                    self.tracks[track_id] = track
            except zmq.ZMQError as e:
                logger.debug(f'reuse tracks')

            if first_time is None and tracker_frame is not None:
                first_time = tracker_frame.time

            paths = render_frame_to_pathlist(tracker_frame, prediction_frame, self.drawn_tracks, first_time, self.config, self.tracks, self.predictions, self.config.render_clusters)
            counter.set('paths', len(paths))
            counter.set('points', sum([len(p.points) for p in paths]))

            if prediction_frame:  # was self.prediction_frame, which run() never sets
                counter.set('pred_render_latency', time.time() - prediction_frame.time)
            if tracker_frame:  # was self.tracker_frame, which run() never sets
                counter.set('track_render_latency', time.time() - tracker_frame.time)

            laserframe = LaserFrame(paths)
            laserframe_cropped = laserframe.as_cropped_to_projector()
            # was inverted: cropping can only remove points
            counter.set('laser.removed', laserframe.point_count() - laserframe_cropped.point_count())
            if laserframe.point_count() > laserframe_cropped.point_count():
                laserframe = laserframe_cropped

            pointlist = laserframe.get_points_interpolated_by_distance(30, last_laser_point)

            if len(pointlist):
                last_laser_point = pointlist[-1]

            frameType = CHeliosPoint * len(pointlist)
            frame = frameType()

            for j, point in enumerate(pointlist):
                frame[j] = CHeliosPoint(int(point.x), int(point.y), point.color[0], point.color[1], point.color[2], point.i)

            # Make 512 attempts for DAC status to be ready. After that, just give up and write the frame anyway
            statusAttempts = 0
            while (statusAttempts < 512 and self.helios.GetStatus(0) != 1):
                statusAttempts += 1

            self.helios.WriteFrame(0, kpps, 0, ctypes.pointer(frame), len(pointlist))

            # clear out old tracks & predictions:
            for track_id, track in list(self.tracks.items()):
                # TODO)) migrate to using time() instead of frame numbers, to detach the two
                if get_animation_position(track, tracker_frame) == 1:
                    self.tracks.pop(track_id)
            for prediction_id, track in list(self.predictions.items()):
                if get_animation_position(track, tracker_frame) == 1:
                    self.predictions.pop(prediction_id)

            for track_id in list(self.drawn_tracks.keys()):
                # TODO make delay configurable
                if self.drawn_tracks[track_id].update_at < time.time() - 5:
                    # TODO fade out
                    del self.drawn_tracks[track_id]

        logger.info('Stopping')
        self.helios.CloseDevices()

        logger.info('stopped')
# colorset = itertools.product([0,255], repeat=3)  # but remove white
# colorset = [(0, 0, 0), (0, 0, 255), (0, 255, 0), (0, 255, 255), (255, 0, 0), (255, 0, 255), (255, 255, 0)]
colorset = [
    (255, 255, 100),
    (255, 100, 255),
    (100, 255, 255),
]
# colorset = [
#     (0, 0, 0),
# ]
def get_animation_position(track: Track, current_frame: Frame) -> float:
    fade_duration = current_frame.camera.fps * 2
    diff = current_frame.index - track.history[-1].frame_nr
    return max(0, min(1, diff / fade_duration))
    # track.history[-1].frame_nr < (current_frame.index - current_frame.camera.fps * 3)
def circle_points(cx, cy, r, c: Color):
    steps = 30
    pointlist: list[LaserPoint] = []
    for i in range(steps):
        x = int(cx + math.cos(i * (2 * math.pi) / steps) * r)
        y = int(cy + math.sin(i * (2 * math.pi) / steps) * r)
        # blank the first and last point, so the laser can jump here invisibly
        pointlist.append(LaserPoint(x, y, c, blank=(i == (steps - 1) or i == 0)))

    return pointlist


# Homography from world space to laser space.
# Derived with trap/helios_dac/calibration_points.py,
# with the points in that script set to those from hof3/irl_points.json.
laser_H = np.array([[ 2.47442963e+02, -7.01714050e+01, -9.71749119e+01],
                    [ 1.02328119e+01,  1.47185254e+02,  1.96295638e+02],
                    [-1.20921986e-03, -3.32735973e-02,  1.00000000e+00]])

def world_points_to_laser_points(points):
    return cv2.perspectiveTransform(np.array([points]), laser_H)
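A quick sketch of what this mapping does (the input values are invented for illustration): world positions in meters go in, laser-space coordinates come out, and points that land outside the projector's 12-bit range get cropped later by `as_cropped_to_projector()`:

```python
world_pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.5, 1.2]], dtype=np.float32)
laser_pts = world_points_to_laser_points(world_pts)[0]
# each row is an (x, y) in laser space, nominally within 0..0xFFF
print(laser_pts)
```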
# Deprecated
def render_frame_to_pathlist(tracker_frame: Optional[Frame], prediction_frame: Optional[Frame], drawn_tracks: Optional[Dict[str, DrawnTrack]], first_time: Optional[float], config: Namespace, tracks: Dict[str, Track], predictions: Dict[str, Track], as_clusters = True):
    # TODO: replace opencv with QPainter to support alpha? https://doc.qt.io/qtforpython-5/PySide2/QtGui/QPainter.html#PySide2.QtGui.PySide2.QtGui.QPainter.drawImage
    # or https://github.com/pygobject/pycairo?tab=readme-ov-file
    # or https://pyglet.readthedocs.io/en/latest/programming_guide/shapes.html
    # and use http://code.astraw.com/projects/motmot/pygarrayimage.html or https://gist.github.com/nkymut/1cb40ea6ae4de0cf9ded7332f1ca0d55
    # or https://api.arcade.academy/en/stable/index.html (supports gradient color in line -- "Arcade is built on top of Pyglet and OpenGL.")
    paths: list[LaserPath] = []

    intensity = 39  # range 0-255
    test_r = 100
    base_c = (0, 0, intensity)
    track_c = (intensity, 0, 0)
    pred_c = (0, intensity, 0)

    # test circles indicate which of the data feeds are still missing
    if not tracker_frame and not prediction_frame:
        paths.append(
            LaserPath(circle_points(0xFFF / 2, 0xFFF / 2, test_r, base_c))
        )

    if not tracker_frame:
        paths.append(
            LaserPath(circle_points(0xFFF / 2 + 2 * test_r, 0xFFF / 2, test_r, track_c))
        )
    else:
        for track_id, track in tracks.items():
            inv_H = np.linalg.pinv(tracker_frame.H)
            projected_history = track.get_projected_history(camera=config.camera)
            history_for_laser = world_points_to_laser_points(projected_history)[0]

            points = np.rint(history_for_laser.reshape((-1, 1, 2))).astype(np.int32)
            laserpoints = []
            for i, point in enumerate(points):
                laserpoints.append(LaserPoint(point[0][0], point[0][1], track_c, blank=False))
            path = LaserPath(laserpoints)
            paths.append(path)

            # mark the current position with a small circle
            paths.append(
                LaserPath(circle_points(history_for_laser[-1][0], history_for_laser[-1][1], 20, track_c))
            )

    if not prediction_frame:
        paths.append(
            LaserPath(circle_points(0xFFF / 2 + 4 * test_r, 0xFFF / 2, test_r, pred_c))
        )
    elif drawn_tracks:
        inv_H = np.linalg.pinv(prediction_frame.H)
        for track_id, drawn_track in drawn_tracks.items():
            drawn_track.update_drawn_positions(dt=None, no_shapes=True)

            anim_position = 1  # TODO)) calculate without video frame: get_animation_position(track, tracker_frame)
            lines = drawntrack_predictions_to_lines(drawn_track, config.camera, anim_position)
            # if lines:
            #     lines.extend(get_prediction_text(drawn_track))

            if not lines:
                continue

            for line in lines:
                line = world_points_to_laser_points(line)[0]
                line = np.rint(line).astype(np.int32)
                laserpoints = []
                for i, point in enumerate(line):
                    laserpoints.append(LaserPoint(point[0], point[1], pred_c, blank=False))
                path = LaserPath(laserpoints)
                paths.append(path)

    return paths
def get_prediction_text(drawn_track: DrawnTrack) -> list[np.ndarray]:
    position_index = 20
    if not drawn_track.drawn_predictions:
        return []

    if len(drawn_track.drawn_predictions[0]) < position_index:
        logger.warning("prediction too short!")
        return []

    # draw only for the first prediction
    draw_pos = drawn_track.drawn_predictions[0][position_index - 1]
    current_pos = drawn_track.drawn_positions[-1]

    angle = np.arctan2(draw_pos[0] - current_pos[0], draw_pos[1] - current_pos[1]) + np.pi

    text_paths = []

    with open("your_future_points_test.json", 'r') as fp:
        lines = json.load(fp)

    for i, line in enumerate(lines):
        if i != 0:
            continue
        points = np.array(line)

        avg_x = np.average(points[:, 0])
        avg_y = np.average(points[:, 1])

        minx, maxx = np.min(points[:, 0]), np.max(points[:, 0])
        miny, maxy = np.min(points[:, 1]), np.max(points[:, 1])

        sx = maxx - minx
        sy = maxy - miny

        # center the glyph, scale it to unit width, rotate it towards the
        # prediction, then place it at the drawing position
        points[:, 0] -= avg_x
        points[:, 1] -= avg_y - i / 2
        points /= sx

        points @= rotateMatrix(angle)
        points += draw_pos

        text_paths.append(points)

    return text_paths
def rotateMatrix(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
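A note on the convention used above: `rotateMatrix(a)` is the standard counter-clockwise rotation matrix, but since `get_prediction_text` right-multiplies row vectors (`points @ R`), the applied rotation is effectively the transpose, i.e. clockwise by `a`:

```python
pts = np.array([[1.0, 0.0]])
print(pts @ rotateMatrix(np.pi / 2))  # -> approximately [[0., -1.]]
```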
def run_laser_renderer(config: Namespace, is_running: BaseEvent, timer_counter):
    renderer = LaserRenderer(config, is_running)
    renderer.run(timer_counter)

135 trap/lines.py
@@ -1,135 +0,0 @@
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum, IntEnum
import math
from typing import List, Tuple
import numpy as np

from simplification.cutil import simplify_coords_idx, simplify_coords_vw_idx

"""
See [notebook](../test_path_transforms.ipynb) for examples
"""

RenderablePosition = Tuple[float, float]

class CoordinateSpace(IntEnum):
    CAMERA = 1
    UNDISTORTED_CAMERA = 2
    WORLD = 3
    LASER = 4

@dataclass
class SrgbaColor():
    red: float
    green: float
    blue: float
    alpha: float

    def with_alpha(self, alpha: float) -> SrgbaColor:
        return SrgbaColor(self.red, self.green, self.blue, alpha)

    def as_faded(self, alpha: float) -> SrgbaColor:
        return SrgbaColor(self.red, self.green, self.blue, self.alpha * alpha)

@dataclass
class RenderablePoint():
    position: RenderablePosition
    color: SrgbaColor

    def __post_init__(self):
        if type(self.position) is np.ndarray:
            # convert if wrong type, so it can be serialised
            self.position = tuple(self.position.tolist())

    @classmethod
    def from_list(cls, l: List[float], color: SrgbaColor) -> RenderablePoint:
        return cls((float(l[0]), float(l[1])), color)

SIMPLIFY_FACTOR_RDP = .001  # smaller is more detailed
SIMPLIFY_FACTOR_VW = 10
class SimplifyMethod(Enum):
    RDP = 1  # Ramer-Douglas-Peucker
    VW = 2   # Visvalingam-Whyatt

@dataclass
class RenderableLine():
    points: List[RenderablePoint]

    def as_simplified(self, method: SimplifyMethod = SimplifyMethod.RDP, factor = SIMPLIFY_FACTOR_RDP):
        linestring = [p.position for p in self.points]
        if method == SimplifyMethod.RDP:
            indexes = simplify_coords_idx(linestring, factor)
        elif method == SimplifyMethod.VW:
            indexes = simplify_coords_vw_idx(linestring, factor)
        points = [self.points[i] for i in indexes]
        return RenderableLine(points)


@dataclass
class RenderableLines():
    lines: List[RenderableLine]
    space: CoordinateSpace = CoordinateSpace.WORLD

    def as_simplified(self, method: SimplifyMethod = SimplifyMethod.RDP, factor = SIMPLIFY_FACTOR_RDP):
        """Wraps RenderableLine simplification, a smaller factor is more detailed"""
        return RenderableLines(
            [line.as_simplified(method, factor) for line in self.lines],
            self.space  # keep the coordinate space of the original (was dropped before)
        )

    def append(self, rl: RenderableLine):
        self.lines.append(rl)

    def append_lines(self, rls: RenderableLines):
        self.lines.extend(rls.lines)

    def point_count(self):
        return sum([len(l.points) for l in self.lines])

    # def merge(self, rl: RenderableLines):
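A rough usage sketch (values invented for illustration) of how these pieces compose, and how much RDP simplification trims a dense line:

```python
c = SrgbaColor(1.0, 0.0, 0.0, 1.0)
dense = RenderableLine([RenderablePoint((x / 100, (x / 100) ** 2), c) for x in range(200)])
lines = RenderableLines([dense], CoordinateSpace.WORLD)
slim = lines.as_simplified(SimplifyMethod.RDP, factor=0.001)
print(lines.point_count(), "->", slim.point_count())
```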
def circle_arc(cx, cy, r, t, l, c: SrgbaColor):
    """
    Draw a circle arc around point (cx, cy) with radius r,
    for l*2pi radians, offset by t. Both t and l are within [0, 1].
    """

    resolution = 30
    steps = int(resolution * l)
    offset = int(resolution * t)
    pointlist: list[RenderablePoint] = []
    for i in range(offset, offset + steps):
        x = cx + math.cos(i * (2 * math.pi) / resolution) * r
        y = cy + math.sin(i * (2 * math.pi) / resolution) * r

        pointlist.append(RenderablePoint((x, y), c))

    return RenderableLine(pointlist)

def cross_points(cx, cy, r, c: SrgbaColor):
    # a cross: one vertical and one horizontal stroke through (cx, cy)
    steps = 3
    pointlist: list[RenderablePoint] = []
    for i in range(steps):
        x = int(cx)
        y = int(cy + r - i * 2 * r / steps)
        pos = (x, y)
        pointlist.append(RenderablePoint(pos, c))
    path = RenderableLine(pointlist)
    pointlist: list[RenderablePoint] = []
    for i in range(steps):
        y = int(cy)
        x = int(cx + r - i * 2 * r / steps)
        pos = (x, y)
        pointlist.append(RenderablePoint(pos, c))
    path2 = RenderableLine(pointlist)

    return [path, path2]
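For orientation, a short sketch that composes the two helpers above into a single renderable frame (a cross inside a full circle; coordinates are invented):

```python
c = SrgbaColor(0.0, 1.0, 0.0, 1.0)
marker = RenderableLines([circle_arc(0, 0, 1.0, t=0, l=1.0, c=c)])  # full circle
for stroke in cross_points(0, 0, 0.5, c):
    marker.append(stroke)
print(marker.point_count())
```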
@@ -1,65 +0,0 @@
from argparse import ArgumentParser
import time
from trap.counter import CounterListerner
from trap.node import Node


class Monitor(Node):
    """
    Periodically snapshot the counters exposed by the various nodes
    and log their statistics.
    """

    FPS = 1

    def setup(self):
        self.counter_listener = CounterListerner()

    def run(self):
        prev_time = time.perf_counter()
        while self.is_running.is_set():
            # self.tick()  # don't pollute the stats with our own data

            self.counter_listener.snapshot()
            stats = self.counter_listener.to_string()
            if len(stats):
                self.logger.info(stats)

            # calculate latency for the desired FPS
            now = time.perf_counter()
            time_diff = (now - prev_time)
            if time_diff < 1 / self.FPS:
                time.sleep(1 / self.FPS - time_diff)
                now += 1 / self.FPS - time_diff

            prev_time = now

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        argparser = ArgumentParser()
        return argparser
144 trap/node.py
@@ -1,144 +0,0 @@
import logging
from logging.handlers import QueueHandler, QueueListener, SocketHandler
import multiprocessing
from multiprocessing.synchronize import Event as BaseEvent
from argparse import ArgumentParser, Namespace
import time
from typing import Optional

import zmq

from trap.counter import CounterFpsSender, CounterSender
from trap.timer import Timer


class Node():
    def __init__(self, config: Namespace, is_running: BaseEvent, fps_counter: CounterFpsSender):
        self.config = config
        self.is_running = is_running
        self.fps_counter = fps_counter
        self.zmq_context = zmq.Context()
        self.logger = self._logger()

        self._prev_loop_time = 0

        self.setup()

    @classmethod
    def _logger(cls):
        return logging.getLogger(f"trap.{cls.__name__}")

    def tick(self):
        self.fps_counter.tick()

    def setup(self):
        raise RuntimeError("Not implemented setup()")

    def run(self):
        raise RuntimeError("Not implemented run()")

    def run_loop(self):
        """Use in run() to check whether the loop should continue.
        Takes care of tick()'ing the iterations/second counter.
        """
        self.tick()
        return self.is_running.is_set()

    def run_loop_capped_fps(self, max_fps: float):
        """Use in run() to check whether the loop should continue,
        while capping the loop frequency at max_fps.
        Takes care of tick()'ing the iterations/second counter.
        """
        now = time.perf_counter()
        time_diff = (now - self._prev_loop_time)
        if time_diff < 1 / max_fps:
            time.sleep(1 / max_fps - time_diff)
            now += 1 / max_fps - time_diff
        self._prev_loop_time = now

        return self.run_loop()
    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        raise RuntimeError("Not implemented arg_parser()")

    @classmethod
    def _get_arg_parser(cls) -> ArgumentParser:
        parser = cls.arg_parser()
        # add some defaults
        parser.add_argument(
            '--verbose',
            '-v',
            help="Increase verbosity. Add multiple times to increase further.",
            action='count', default=0
        )
        parser.add_argument(
            '--remote-log-addr',
            help="Connect to a remote logger like cutelog. Specify the ip",
            type=str,
            default="100.72.38.82"
        )
        parser.add_argument(
            '--remote-log-port',
            help="Connect to a remote logger like cutelog. Specify the port",
            type=int,
            default=19996
        )
        return parser

    def sub(self, addr: str):
        "Default zmq sub configuration"
        sock = self.zmq_context.socket(zmq.SUB)
        sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame. NB: set BEFORE connect, otherwise it is ignored!
        sock.setsockopt(zmq.SUBSCRIBE, b'')
        sock.connect(addr)
        return sock

    def pub(self, addr: str):
        "Default zmq pub configuration"
        sock = self.zmq_context.socket(zmq.PUB)
        sock.setsockopt(zmq.CONFLATE, 1)  # only keep latest frame
        sock.bind(addr)
        return sock

    @classmethod
    def start(cls, config: Namespace, is_running: BaseEvent, timer_counter: Optional[Timer]):
        instance = cls(config, is_running, timer_counter)
        instance.run()
        instance.logger.info("Stopping")

    @classmethod
    def parse_and_start(cls):
        """To start the node from CLI/supervisor"""
        config = cls._get_arg_parser().parse_args()
        setup_logging(config)  # running from cli, we need to set up logging
        is_running = multiprocessing.Event()
        is_running.set()
        statsender = CounterSender()
        counter = CounterFpsSender(f"trap.{cls.__name__}", statsender)

        cls.start(config, is_running, counter)


def setup_logging(config: Namespace):
    loglevel = logging.NOTSET if config.verbose > 1 else logging.DEBUG if config.verbose > 0 else logging.INFO
    stream_handler = logging.StreamHandler()
    log_handlers = [stream_handler]

    if config.remote_log_addr:
        logging.captureWarnings(True)
        # root_logger.setLevel(logging.NOTSET)  # to send all records to cutelog
        socket_handler = SocketHandler(config.remote_log_addr, config.remote_log_port)
        print(socket_handler.host, socket_handler.port)
        socket_handler.setLevel(logging.NOTSET)
        log_handlers.append(socket_handler)

    logging.basicConfig(
        level=loglevel,
        handlers=log_handlers  # [queue_handler]
    )
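To see how the pieces of this base class fit together, a minimal hypothetical subclass (the class name and address are invented for illustration):

```python
class Heartbeat(Node):
    def setup(self):
        self.sock = self.pub("ipc:///tmp/feeds_heartbeat")  # hypothetical address

    def run(self):
        # capped at 1 iteration/second; run_loop_capped_fps also ticks the fps counter
        while self.run_loop_capped_fps(1):
            self.sock.send_string("ping")

    @classmethod
    def arg_parser(cls) -> ArgumentParser:
        return ArgumentParser()

# Heartbeat.parse_and_start()  # when started from the CLI/supervisor
```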
@@ -7,16 +7,12 @@ import signal
|
|||
import sys
|
||||
import time
|
||||
from trap.config import parser
|
||||
from trap.counter import CounterListerner
|
||||
from trap.cv_renderer import run_cv_renderer
|
||||
from trap.face_detector import run_detector
|
||||
from trap.frame_emitter import run_frame_emitter
|
||||
from trap.laser_renderer import run_laser_renderer
|
||||
from trap.prediction_server import run_prediction_server
|
||||
from trap.preview_renderer import run_preview_renderer
|
||||
from trap.animation_renderer import run_animation_renderer
|
||||
from trap.socket_forwarder import run_ws_forwarder
|
||||
from trap.stage import Stage
|
||||
from trap.timer import TimerCollection
|
||||
from trap.tracker import run_tracker
|
||||
|
||||
|
@@ -91,16 +87,12 @@ def start():
|
|||
timers = TimerCollection()
|
||||
timer_fe = timers.new('frame_emitter')
|
||||
timer_tracker = timers.new('tracker')
|
||||
timer_faces = timers.new('faces')
|
||||
timer_stage = timers.new('stage')
|
||||
|
||||
# instantiating process with arguments
|
||||
procs = [
|
||||
# ExceptionHandlingProcess(target=run_ws_forwarder, kwargs={'config': args, 'is_running': isRunning}, name='forwarder'),
|
||||
ExceptionHandlingProcess(target=run_frame_emitter, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_fe.iterations}, name='frame_emitter'),
|
||||
ExceptionHandlingProcess(target=run_tracker, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_tracker.iterations}, name='tracker'),
|
||||
# ExceptionHandlingProcess(target=run_detector, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_faces.iterations}, name='detector'),
|
||||
ExceptionHandlingProcess(target=Stage.start, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_stage.iterations}, name='stage'),
|
||||
]
|
||||
|
||||
# if args.render_file or args.render_url or args.render_window:
|
||||
|
@@ -114,10 +106,6 @@ def start():
|
|||
procs.append(
|
||||
ExceptionHandlingProcess(target=run_animation_renderer, kwargs={'config': args, 'is_running': isRunning}, name='renderer')
|
||||
)
|
||||
if args.render_laser:
|
||||
procs.append(
|
||||
ExceptionHandlingProcess(target=run_laser_renderer, kwargs={'config': args, 'is_running': isRunning, 'timer_counter': timer_preview.iterations}, name='renderer')
|
||||
)
|
||||
|
||||
if not args.bypass_prediction:
|
||||
timer_predict = timers.new('predict')
|
||||
|
@@ -126,14 +114,10 @@ def start():
|
|||
)
|
||||
|
||||
def timer_process(timers: TimerCollection, is_running: Event):
|
||||
counter_listener = CounterListerner()
|
||||
|
||||
while is_running.is_set():
|
||||
time.sleep(1)
|
||||
timers.snapshot()
|
||||
counter_listener.snapshot()
|
||||
print(timers.to_string(), counter_listener.to_string())
|
||||
|
||||
print(timers.to_string())
|
||||
|
||||
procs.append(
|
||||
ExceptionHandlingProcess(target=timer_process, kwargs={'is_running':isRunning, 'timers': timers}, name='timer'),
|
||||
|
|
|
@@ -1,27 +1,33 @@
|
|||
# adapted from Trajectron++ online_server.py
|
||||
import json
|
||||
from argparse import Namespace
|
||||
import logging
|
||||
from multiprocessing import Event, Queue
|
||||
import os
|
||||
import pathlib
|
||||
import pickle
|
||||
import random
|
||||
import sys
|
||||
import time
|
||||
import json
|
||||
import traceback
|
||||
import warnings
|
||||
from argparse import ArgumentParser, Namespace
|
||||
from multiprocessing import Event
|
||||
|
||||
import dill
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
import torch
|
||||
import zmq
|
||||
from trajectron.environment import Environment, Scene
|
||||
from trajectron.model.model_registrar import ModelRegistrar
|
||||
from trajectron.model.online.online_trajectron import OnlineTrajectron
|
||||
import dill
|
||||
import random
|
||||
import pathlib
|
||||
import numpy as np
|
||||
from trajectron.environment.data_utils import derivative_of
|
||||
from trajectron.utils import prediction_output_to_trajectories
|
||||
from trajectron.model.online.online_trajectron import OnlineTrajectron
|
||||
from trajectron.model.model_registrar import ModelRegistrar
|
||||
from trajectron.environment import Environment, Scene
|
||||
from trajectron.environment.node import Node
|
||||
from trajectron.environment.node_type import NodeType
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
import zmq
|
||||
|
||||
from trap.frame_emitter import DataclassJSONEncoder, Frame
|
||||
from trap.node import Node
|
||||
from trap.tracker import Smoother
|
||||
from trap.tracker import Track, Smoother
|
||||
|
||||
logger = logging.getLogger("trap.prediction")
|
||||
|
||||
|
@@ -140,18 +146,27 @@ def offset_trajectron_dict(source, x, y):
|
|||
source[t][node][:,1] += y
|
||||
return source
|
||||
|
||||
class PredictionServer(Node):
|
||||
def setup(self):
|
||||
class PredictionServer:
|
||||
def __init__(self, config: Namespace, is_running: Event):
|
||||
self.config = config
|
||||
self.is_running = is_running
|
||||
|
||||
if self.config.eval_device == 'cpu':
|
||||
logger.warning("Running on CPU. Specifying --eval_device cuda:0 should dramatically speed up prediction")
|
||||
|
||||
if self.config.smooth_predictions:
|
||||
self.smoother = Smoother(window_len=12, convolution=True) # convolution seems fine for predictions
|
||||
|
||||
self.trajectory_socket = self.sub(self.config.zmq_trajectory_addr)
|
||||
self.prediction_socket = self.pub(self.config.zmq_prediction_addr)
|
||||
self.external_predictions = not self.config.zmq_prediction_addr.startswith("ipc://")
|
||||
context = zmq.Context()
|
||||
self.trajectory_socket: zmq.Socket = context.socket(zmq.SUB)
|
||||
self.trajectory_socket.setsockopt(zmq.SUBSCRIBE, b'')
|
||||
self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep last msg. Set BEFORE connect!
|
||||
self.trajectory_socket.connect(config.zmq_trajectory_addr)
|
||||
|
||||
self.prediction_socket: zmq.Socket = context.socket(zmq.PUB)
|
||||
self.prediction_socket.bind(config.zmq_prediction_addr)
|
||||
self.external_predictions = not self.config.zmq_prediction_addr.startswith("ipc://")
|
||||
# print(self.prediction_socket)
|
||||
|
||||
def send_frame(self, frame: Frame):
|
||||
if self.external_predictions:
|
||||
|
@@ -160,7 +175,8 @@ class PredictionServer(Node):
|
|||
else:
|
||||
self.prediction_socket.send_pyobj(frame)
|
||||
|
||||
def run(self):
|
||||
def run(self, timer_counter):
|
||||
print(self.config)
|
||||
if self.config.seed is not None:
|
||||
random.seed(self.config.seed)
|
||||
np.random.seed(self.config.seed)
|
||||
|
@@ -234,9 +250,17 @@ class PredictionServer(Node):
|
|||
trajectron.set_environment(online_env, init_timestep)
|
||||
|
||||
timestep = init_timestep + 1
|
||||
|
||||
while self.run_loop():
|
||||
prev_run_time = 0
|
||||
while self.is_running.is_set():
|
||||
timestep += 1
|
||||
with timer_counter.get_lock():
|
||||
timer_counter.value+=1
|
||||
|
||||
# this_run_time = time.time()
|
||||
# logger.debug(f'test {prev_run_time - this_run_time}')
|
||||
# time.sleep(max(0, prev_run_time - this_run_time + .5))
|
||||
# prev_run_time = time.time()
|
||||
|
||||
|
||||
# TODO: see process_data.py on how to create a node, the provide nodes + incoming data columns
|
||||
# data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
|
||||
|
@@ -259,6 +283,7 @@ class PredictionServer(Node):
|
|||
if self.config.predict_training_data:
|
||||
input_dict = eval_scene.get_clipped_input_dict(timestep, hyperparams['state'])
|
||||
else:
|
||||
# print('await', self.config.zmq_trajectory_addr)
|
||||
zmq_ev = self.trajectory_socket.poll(timeout=2000)
|
||||
if not zmq_ev:
|
||||
# on no data loop so that is_running is checked
|
||||
|
@@ -269,14 +294,6 @@ class PredictionServer(Node):
|
|||
data = self.trajectory_socket.recv()
|
||||
# print('recv tracker frame')
|
||||
frame: Frame = pickle.loads(data)
|
||||
|
||||
# add settings to log
|
||||
frame.log['predictor'] = {}
|
||||
for option in ['prediction_horizon','num_samples','full_dist','gmm_mode','z_mode', 'model_dir']:
|
||||
frame.log['predictor'][option] = self.config.__dict__[option]
|
||||
|
||||
|
||||
# print('indexrecv', [frame.tracks[t].frame_index for t in frame.tracks])
|
||||
# trajectory_data = {t.track_id: t.get_projected_history_as_dict(frame.H) for t in frame.tracks.values()}
|
||||
# trajectory_data = json.loads(data)
|
||||
# logger.debug(f"Receive {frame.index}")
|
||||
|
@@ -303,7 +320,7 @@ class PredictionServer(Node):
|
|||
if len(track.history) < 2:
|
||||
continue
|
||||
|
||||
node = track.to_trajectron_node(frame.camera, online_env)
|
||||
node = track.to_trajectron_node(self.config.camera, online_env)
|
||||
# print(node.data.data[-1])
|
||||
input_dict[node] = np.array(object=node.data.data[-1])
|
||||
# print("history", node.data.data[-10:])
|
||||
|
@@ -465,165 +482,9 @@ class PredictionServer(Node):
|
|||
|
||||
frame.maps = list([m.cpu().numpy() for m in maps.values()]) if maps else None
|
||||
|
||||
# print('index', [frame.tracks[t].frame_index for t in frame.tracks])
|
||||
|
||||
self.send_frame(frame)
|
||||
|
||||
logger.info('Stopping')
|
||||
|
||||
@classmethod
|
||||
def arg_parser(cls) -> ArgumentParser:
|
||||
inference_parser = ArgumentParser()
|
||||
inference_parser.add_argument('--zmq-trajectory-addr',
|
||||
help='Manually specify communication addr for the trajectory messages',
|
||||
type=str,
|
||||
default="ipc:///tmp/feeds_traj")
|
||||
inference_parser.add_argument('--zmq-prediction-addr',
|
||||
help='Manually specify communication addr for the prediction messages',
|
||||
type=str,
|
||||
default="ipc:///tmp/feeds_preds")
|
||||
|
||||
|
||||
inference_parser.add_argument("--step-size",
|
||||
# TODO)) Make dataset/model metadata
|
||||
help="sample step size (should be the same as for data processing and augmentation)",
|
||||
type=int,
|
||||
default=1,
|
||||
)
|
||||
inference_parser.add_argument("--model_dir",
|
||||
help="directory with the model to use for inference",
|
||||
type=str, # TODO: make into Path
|
||||
default='../Trajectron-plus-plus/experiments/trap/models/models_18_Oct_2023_19_56_22_virat_vel_ar3/')
|
||||
# default='../Trajectron-plus-plus/experiments/pedestrians/models/models_04_Oct_2023_21_04_48_eth_vel_ar3')
|
||||
|
||||
inference_parser.add_argument("--conf",
|
||||
help="path to json config file for hyperparameters, relative to model_dir",
|
||||
type=str,
|
||||
default='config.json')
|
||||
|
||||
# Model Parameters (hyperparameters)
|
||||
inference_parser.add_argument("--offline_scene_graph",
|
||||
help="whether to precompute the scene graphs offline, options are 'no' and 'yes'",
|
||||
type=str,
|
||||
default='yes')
|
||||
|
||||
inference_parser.add_argument("--dynamic_edges",
|
||||
help="whether to use dynamic edges or not, options are 'no' and 'yes'",
|
||||
type=str,
|
||||
default='yes')
|
||||
|
||||
inference_parser.add_argument("--edge_state_combine_method",
|
||||
help="the method to use for combining edges of the same type",
|
||||
type=str,
|
||||
default='sum')
|
||||
|
||||
inference_parser.add_argument("--edge_influence_combine_method",
|
||||
help="the method to use for combining edge influences",
|
||||
type=str,
|
||||
default='attention')
|
||||
|
||||
inference_parser.add_argument('--edge_addition_filter',
|
||||
nargs='+',
|
||||
help="what scaling to use for edges as they're created",
|
||||
type=float,
|
||||
default=[0.25, 0.5, 0.75, 1.0]) # We don't automatically pad left with 0.0, if you want a sharp
|
||||
# and short edge addition, then you need to have a 0.0 at the
|
||||
# beginning, e.g. [0.0, 1.0].
|
||||
|
||||
inference_parser.add_argument('--edge_removal_filter',
|
||||
nargs='+',
|
||||
help="what scaling to use for edges as they're removed",
|
||||
type=float,
|
||||
default=[1.0, 0.0]) # We don't automatically pad right with 0.0, if you want a sharp drop off like
|
||||
# the default, then you need to have a 0.0 at the end.
|
||||
|
||||
|
||||
inference_parser.add_argument('--incl_robot_node',
|
||||
help="whether to include a robot node in the graph or simply model all agents",
|
||||
action='store_true')
|
||||
|
||||
inference_parser.add_argument('--map_encoding',
|
||||
help="Whether to use map encoding or not",
|
||||
action='store_true')
|
||||
|
||||
inference_parser.add_argument('--no_edge_encoding',
|
||||
help="Whether to use neighbors edge encoding",
|
||||
action='store_true')
|
||||
|
||||
|
||||
inference_parser.add_argument('--batch_size',
|
||||
help='training batch size',
|
||||
type=int,
|
||||
default=256)
|
||||
|
||||
inference_parser.add_argument('--k_eval',
|
||||
help='how many samples to take during evaluation',
|
||||
type=int,
|
||||
default=25)
|
||||
|
||||
# Data Parameters
|
||||
inference_parser.add_argument("--eval_data_dict",
|
||||
help="what file to load for evaluation data (WHEN NOT USING LIVE DATA)",
|
||||
type=str,
|
||||
default='../Trajectron-plus-plus/experiments/processed/eth_test.pkl')
|
||||
|
||||
inference_parser.add_argument("--output_dir",
|
||||
help="what dir to save output (i.e., saved models, logs, etc) (WHEN NOT USING LIVE OUTPUT)",
|
||||
type=pathlib.Path,
|
||||
default='./OUT/test_inference')
|
||||
|
||||
|
||||
# inference_parser.add_argument('--device',
|
||||
# help='what device to perform training on',
|
||||
# type=str,
|
||||
# default='cuda:0')
|
||||
|
||||
inference_parser.add_argument("--eval_device",
|
||||
help="what device to use during inference",
|
||||
type=str,
|
||||
default="cpu")
|
||||
|
||||
|
||||
inference_parser.add_argument('--seed',
|
||||
help='manual seed to use, default is 123',
|
||||
type=int,
|
||||
default=123)
|
||||
|
||||
inference_parser.add_argument('--predict_training_data',
|
||||
help='Ignore tracker and predict data from the training dataset',
|
||||
action='store_true')
|
||||
|
||||
inference_parser.add_argument("--smooth-predictions",
|
||||
help="Smooth the predicted tracks",
|
||||
action='store_true')
|
||||
|
||||
inference_parser.add_argument('--prediction-horizon',
|
||||
help='Trajectron.incremental_forward parameter',
|
||||
type=int,
|
||||
default=30)
|
||||
inference_parser.add_argument('--num-samples',
|
||||
help='Trajectron.incremental_forward parameter',
|
||||
type=int,
|
||||
default=5)
|
||||
inference_parser.add_argument("--full-dist",
|
||||
help="Trajectron.incremental_forward parameter",
|
||||
action='store_true')
|
||||
inference_parser.add_argument("--gmm-mode",
|
||||
help="Trajectron.incremental_forward parameter",
|
||||
type=bool,
|
||||
default=True)
|
||||
inference_parser.add_argument("--z-mode",
|
||||
help="Trajectron.incremental_forward parameter",
|
||||
action='store_true')
|
||||
inference_parser.add_argument('--cm-to-m',
|
||||
help="Correct for homography that is in cm (i.e. {x,y}/100). Should also be used when processing data",
|
||||
action='store_true')
|
||||
inference_parser.add_argument('--center-data',
|
||||
help="Center data around cx and cy. Should also be used when processing data",
|
||||
action='store_true')
|
||||
|
||||
|
||||
return inference_parser
|
||||
|
||||
|
||||
|
||||
|
|
|
@@ -24,7 +24,7 @@ from typing import List, Optional
|
|||
from pyglet import shapes
|
||||
from PIL import Image
|
||||
|
||||
from trap.utils import convert_world_points_to_img_points, exponentialDecay, relativePointToPolar, relativePolarToPoint
|
||||
from trap.utils import convert_world_points_to_img_points
|
||||
from trap.frame_emitter import DetectionState, Frame, Track, Camera
|
||||
|
||||
|
||||
|
@@ -45,6 +45,18 @@ class FrameAnimation:
|
|||
def done(self):
|
||||
return (time.time() - self.start_time) > 5
|
||||
|
||||
def exponentialDecay(a, b, decay, dt):
|
||||
"""Exponential decay as alternative to Lerp
|
||||
Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
|
||||
"""
|
||||
return b + (a-b) * math.exp(-decay * dt)
|
||||
|
||||
def relativePointToPolar(origin, point) -> tuple[float, float]:
|
||||
x, y = point[0] - origin[0], point[1] - origin[1]
|
||||
return np.sqrt(x**2 + y**2), np.arctan2(y, x)
|
||||
|
||||
def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
|
||||
return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
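A tiny illustration of the decay helper added above: stepping a value toward a target this way is frame-rate independent (invented numbers):

```python
x, target = 0.0, 10.0
for _ in range(5):
    x = exponentialDecay(x, target, decay=8, dt=1/60)  # covers ~12% of the gap per frame
    print(round(x, 3))
```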
|
||||
|
||||
PROJECTION_IMG = 0
|
||||
PROJECTION_UNDISTORT = 1
|
||||
|
@@ -55,8 +67,7 @@ class DrawnTrack:
|
|||
def __init__(self, track_id, track: Track, renderer: PreviewRenderer, H, draw_projection = PROJECTION_IMG, camera: Optional[Camera] = None):
|
||||
# self.created_at = time.time()
|
||||
self.draw_projection = draw_projection
|
||||
self.update_at = self.created_at = self.update_predictions_at = time.time()
|
||||
self.last_update_t = time.perf_counter()
|
||||
self.update_at = self.created_at = time.time()
|
||||
self.track_id = track_id
|
||||
self.renderer = renderer
|
||||
self.camera = camera
|
||||
|
@@ -82,7 +93,6 @@ class DrawnTrack:
|
|||
self.inv_H = np.linalg.pinv(self.H)
|
||||
|
||||
def set_predictions(self, track: Track, H = None):
|
||||
self.update_predictions_at = time.time()
|
||||
|
||||
pred_coords = []
|
||||
pred_history_coords = []
|
||||
|
@@ -102,7 +112,7 @@ class DrawnTrack:
|
|||
# color = (128,0,128) if pred_i else (128,
|
||||
|
||||
|
||||
def update_drawn_positions(self, dt: float|None, no_shapes=False) -> List:
|
||||
def update_drawn_positions(self, dt) -> List:
|
||||
'''
|
||||
use dt to lerp the drawn positions in the direction of current prediction
|
||||
'''
|
||||
|
@@ -112,11 +122,6 @@ class DrawnTrack:
|
|||
"""quick wrapper to toggle int'ing"""
|
||||
return v
|
||||
# return int(v)
|
||||
|
||||
if dt is None:
|
||||
t = time.perf_counter()
|
||||
dt = t - self.last_update_t
|
||||
self.last_update_t = t
|
||||
|
||||
# 1. track history
|
||||
for i, pos in enumerate(self.drawn_positions):
|
||||
|
@@ -165,8 +170,7 @@ class DrawnTrack:
|
|||
|
||||
|
||||
# finally: update shapes from coordinates
|
||||
if not no_shapes: # to be used when not rendering to pyglet (e.g. laser renderer)
|
||||
self.update_shapes(dt)
|
||||
self.update_shapes(dt)
|
||||
return self.drawn_positions
|
||||
|
||||
def update_shapes(self, dt):
|
||||
|
@@ -201,9 +205,7 @@ class DrawnTrack:
|
|||
if draw_dot:
|
||||
line = pyglet.shapes.Arc(x2, y2, 10, thickness=2, color=color, batch=self.renderer.batch_anim)
|
||||
else:
|
||||
# line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
||||
line = pyglet.shapes.Line(x, y, x2, y2, 3, color, batch=self.renderer.batch_anim)
|
||||
# line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
||||
line = self.renderer.gradientLine(x, y, x2, y2, 3, color, color, batch=self.renderer.batch_anim)
|
||||
line.opacity = 20 if not for_laser else 255
|
||||
self.shapes.append(line)
|
||||
|
||||
|
@@ -298,9 +300,9 @@ class FrameWriter:
|
|||
framerate.
|
||||
See https://video.stackexchange.com/questions/25811/ffmpeg-make-video-with-non-constant-framerate-from-image-filenames
|
||||
"""
|
||||
def __init__(self, filename: str, fps: float, frame_size: Optional[tuple] = None) -> None:
|
||||
def __init__(self, filename: str, fps: float, frame_size: tuple) -> None:
|
||||
self.filename = filename
|
||||
self._fps = fps
|
||||
self.fps = fps
|
||||
self.frame_size = frame_size
|
||||
|
||||
self.tmp_dir = tempfile.TemporaryDirectory(prefix="trap-output-")
|
||||
|
|
File diff suppressed because one or more lines are too long
1085 trap/stage.py
File diff suppressed because it is too large
|
@@ -1,13 +1,13 @@
|
|||
import collections
|
||||
from re import A
|
||||
import time
|
||||
from multiprocessing.sharedctypes import Value
|
||||
from multiprocessing.sharedctypes import RawValue, Value, Array
|
||||
from ctypes import c_double
|
||||
from typing import MutableSequence
|
||||
|
||||
|
||||
class Timer():
|
||||
"""
|
||||
Multiprocess timer. Count iterations in one process, while converting that
|
||||
to fps in the other.
|
||||
Measure 2 independent things: the frequency of tic, and the duration of tic->toc
|
||||
Note that indeed these don't need to be equal
|
||||
"""
|
||||
|
@@ -40,6 +40,7 @@ class Timer():
|
|||
|
||||
@property
|
||||
def fps(self):
|
||||
fpses = []
|
||||
if len(self.tocs) < 2:
|
||||
return 0
|
||||
dt = self.tocs[-1][0] - self.tocs[0][0]
|
||||
|
|
221 trap/tools.py
|
@@ -1,6 +1,4 @@
|
|||
from __future__ import annotations
|
||||
from argparse import Namespace
|
||||
from dataclasses import dataclass
|
||||
import json
|
||||
import math
|
||||
from pathlib import Path
|
||||
|
@@ -10,13 +8,10 @@ from tempfile import mktemp
|
|||
import jsonlines
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
import shapely
|
||||
from shapely.ops import split
|
||||
from trap.preview_renderer import DrawnTrack
|
||||
import trap.tracker
|
||||
from trap.config import parser
|
||||
from trap.frame_emitter import Camera, Detection, DetectionState, video_src_from_config, Frame
|
||||
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _ultralytics_track, Track, TrainingDataWriter, Tracker, read_tracks_json
|
||||
from trap.tracker import DETECTOR_YOLOv8, FinalDisplacementFilter, Smoother, TrackReader, _yolov8_track, Track, TrainingDataWriter, Tracker, read_tracks_json
|
||||
from collections import defaultdict
|
||||
|
||||
import logging
|
||||
|
@@ -221,117 +216,25 @@ def transition_path_points(path: np.array, t: float):
|
|||
break
|
||||
return np.array(new_path)
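As a rough illustration of what `transition_path_points` produces (assuming, per the interpolation logic above, that it walks the path up to fraction `t` of its total length; the path values are invented):

```python
path = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
half = transition_path_points(path, 0.5)  # first half of the path
print(half)  # expected to end around [1.0, 0.0]
```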
|
||||
|
||||
from shapely.geometry import LineString
|
||||
from shapely.geometry import Point
|
||||
from sklearn.cluster import AgglomerativeClustering
|
||||
|
||||
@dataclass
|
||||
class PointCluster:
|
||||
point: np.ndarray
|
||||
start: np.ndarray
|
||||
source_points: List[np.ndarray]
|
||||
probability: float
|
||||
next_point_clusters: List[PointCluster]
|
||||
|
||||
|
||||
def cluster_predictions_by_radius(start_point, lines: Iterable[np.ndarray] | LineString, radius = .5, p_factor = 1.) -> List[PointCluster]:
|
||||
# start = lines[0][0]
|
||||
p0 = Point(*start_point)
|
||||
# print(lines[0][0], start_point)
|
||||
circle = p0.buffer(radius).boundary
|
||||
|
||||
# print(lines)
|
||||
# print([line.tolist() for line in lines])
|
||||
intersections = []
|
||||
remaining_lines = []
|
||||
for line in lines:
|
||||
linestring = line if type(line) is LineString else LineString(line.tolist())
|
||||
intersection = circle.intersection(linestring)
|
||||
if type(intersection) is LineString and intersection.is_empty:
|
||||
# No intersection with circle, a dangling endpoint that we can skip
|
||||
continue
|
||||
|
||||
if type(intersection) is not Point:
|
||||
# with multiple intersections: use only the first one
|
||||
intersection = intersection.geoms[0]
|
||||
|
||||
# set a buffer around the intersection to ensure a match is found on the line
|
||||
split_line = split(linestring, intersection.buffer(.01))
|
||||
remaining_line = split_line.geoms[2] if len(split_line.geoms) > 2 else None
|
||||
# print(intersection, split_line)
|
||||
|
||||
intersections.append(intersection)
|
||||
remaining_lines.append(remaining_line)
|
||||
|
||||
if len(intersections) < 1:
|
||||
return []
|
||||
|
||||
# linestrings = [LineString(line.tolist()) for line in lines]
|
||||
# intersections = [circle.intersection(line) for line in linestrings]
|
||||
# dangling_lines = [(type(i) is LineString and i.is_empty) for i in intersections]
|
||||
|
||||
# intersections = [False if is_end else (p if type(p) is Point else p.geoms[0]) for p, is_end in zip(intersections, dangling_lines)]
|
||||
|
||||
|
||||
# as all intersections are on the same circle we can estimate the angle between
# them from their distance: circumference is 2*pi*r, so distance is proportional to angle.
|
||||
if len(intersections) > 1:
|
||||
clustering = AgglomerativeClustering(None, linkage="ward", distance_threshold=2*math.pi * radius / 6)
|
||||
coords = np.asarray([i.coords for i in intersections]).reshape((-1,2))
|
||||
assigned_clusters = clustering.fit_predict(coords)
|
||||
else:
|
||||
assigned_clusters = [0] # only one item
|
||||
|
||||
clusters = defaultdict(lambda: [])
|
||||
cluster_remainders = defaultdict(lambda: [])
|
||||
for point, line, c in zip(intersections, remaining_lines, assigned_clusters):
|
||||
clusters[c].append(point)
|
||||
cluster_remainders[c].append(line)
|
||||
|
||||
line_clusters = []
|
||||
for c, points in clusters.items():
|
||||
mean = np.mean(points, axis=0)
|
||||
prob = p_factor * len(points) / len(assigned_clusters)
|
||||
|
||||
remaining_lines = cluster_remainders[c]
|
||||
remaining_lines = list(filter(None, remaining_lines))
|
||||
|
||||
|
||||
next_points = cluster_predictions_by_radius(mean, remaining_lines, radius, prob)
|
||||
|
||||
line_clusters.append(PointCluster(mean, start_point, points, prob, next_points))
|
||||
|
||||
|
||||
|
||||
# split_lines = [shapely.ops.split(line, point) for line, point in zip(linestrings, intersections)]
|
||||
# remaining_lines = [l[1] for l in split_lines if len(l) > 1]
|
||||
|
||||
|
||||
# print(line_clusters)
|
||||
return line_clusters
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
# def cosine_similarity(point1, point2):
|
||||
# dot_product = np.dot(point1, point2)
|
||||
# norm1 = np.linalg.norm(point1)
|
||||
# norm2 = np.linalg.norm(point2)
|
||||
# return dot_product / (norm1 * norm2)
|
||||
|
||||
# p = Point(5,5)
|
||||
# c = p.buffer(3).boundary
|
||||
# l = LineString([(0,0), (10, 10)])
|
||||
# i = c.intersection(l)
|
||||
|
||||
def track_predictions_to_lines(track: Track, camera:Camera, anim_position=.8):
|
||||
def draw_track_predictions(img: cv2.Mat, track: Track, color_index: int, camera:Camera, convert_points: Optional[Callable], anim_position=.8):
|
||||
"""
|
||||
anim_position: 0-1
|
||||
"""
|
||||
if not track.predictions:
|
||||
return
|
||||
|
||||
|
||||
current_point = track.get_projected_history(camera=camera)[-1]
|
||||
|
||||
opacity = 1-min(1, max(0, inv_lerp(0.8, 1, anim_position))) # fade out
|
||||
slide_t = min(1, max(0, inv_lerp(0, 0.8, anim_position))) # slide_position
|
||||
|
||||
|
||||
# if convert_points:
|
||||
# current_point = convert_points([current_point])[0]
|
||||
|
||||
lines = []
|
||||
for pred_i, pred in enumerate(track.predictions):
|
||||
pred_coords = pred #cv2.perspectiveTransform(np.array([pred]), inv_H)[0].tolist()
|
||||
|
@@ -339,86 +242,23 @@ def track_predictions_to_lines(track: Track, camera:Camera, anim_position=.8):
|
|||
line_points = np.concatenate(([current_point], pred_coords))  # 'current point' is a moving target
|
||||
# print(pred_coords, current_point, line_points)
|
||||
line_points = transition_path_points(line_points, slide_t)
|
||||
lines.append(line_points)
|
||||
# print("prediction line", len(line_points))
|
||||
# break # TODO: only one
|
||||
return lines

def drawntrack_predictions_to_lines(drawn_track: DrawnTrack, camera:Camera, anim_position=.8):
    if not drawn_track.drawn_predictions:
        return

    # current_point = drawn_track.pred_track.get_projected_history(camera=camera)[-1] # not guaranteed to be up to date
    current_point = drawn_track.drawn_predictions[0][0]
    # print(current_point)
    slide_t = min(1, max(0, inv_lerp(0, 0.8, anim_position))) # slide_position

    lines = []
    for pred_i, pred in enumerate(drawn_track.drawn_predictions):
        pred_coords = pred # cv2.perspectiveTransform(np.array([pred]), inv_H)[0].tolist()
        # line_points = pred_coords
        line_points = np.concatenate(([current_point], pred_coords)) # 'current point' is a moving target
        # print(pred_coords, current_point, line_points)
        line_points = transition_path_points(line_points, slide_t)
        lines.append(line_points)
        # print("prediction line", len(line_points))
        # break # TODO: only one
    return lines

def draw_track_predictions(img: cv2.Mat, track: Track, color_index: int, camera:Camera, convert_points: Optional[Callable], anim_position=.8, as_clusters=False):
    """
    anim_position: 0-1
    """

    lines = track_predictions_to_lines(track, camera, anim_position)

    if not lines:
        return

    opacity = 1-min(1, max(0, inv_lerp(0.8, 1, anim_position))) # fade out

    # if convert_points:
    #     current_point = convert_points([current_point])[0]

    color = bgr_colors[color_index % len(bgr_colors)]
    color = tuple([int(c*opacity) for c in color])

    if as_clusters:
        clusters = cluster_predictions_by_radius(current_point, lines, 1.5)
        def draw_cluster(img, cluster: PointCluster):
            points = convert_points([cluster.start, cluster.point])
            # cv2 only draws to integer coordinates
            points = np.rint(points).astype(int)
            thickness = max(1, int(cluster.probability * 6))
            thickness = 1
            # if len(cluster.next_point_clusters) == 1:
            #     not a final point, nor a split:
            cv2.line(img, points[0], points[1], color, thickness, lineType=cv2.LINE_AA)
            # else:
            #     cv2.arrowedLine(img, points[0], points[1], color, thickness, cv2.LINE_AA)

            for sub in cluster.next_point_clusters:
                draw_cluster(img, sub)
            # pass
            # # cv2.circle(img, end, 2, color, 1, lineType=cv2.LINE_AA)
        # print(clusters)

        for cluster in clusters:
            draw_cluster(img, cluster)

    else:
        # convert function (e.g. to project points to img space)
        if convert_points:
            lines = [convert_points(points) for points in lines]

        # cv2 only draws to integer coordinates
        lines = [np.rint(points).astype(int) for points in lines]

        # draw in a single pass
        # line_points = line_points.reshape((1, -1,1,2)) # TODO)) Seems to do nothing..
        cv2.polylines(img, lines, False, color, 2, cv2.LINE_AA)
        line_points = convert_points(line_points)
        line_points = np.rint(line_points).astype(int)
        # color = (128,0,128) if pred_i else (128,128,0)

        color = bgr_colors[color_index % len(bgr_colors)]
        color = tuple([int(c*opacity) for c in color])

        line_points = line_points.reshape((-1,1,2))
        lines.append(line_points)

        # draw in a single pass
        cv2.polylines(img, lines, False, color, 2, cv2.LINE_AA)
        # for start, end in zip(line_points[:-1], line_points[1:]):
        #     cv2.line(img, start, end, color, 2, lineType=cv2.LINE_AA)
        # pass
        # # cv2.circle(img, end, 2, color, 1, lineType=cv2.LINE_AA)

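Editor's note: a minimal, self-contained sketch of the non-cluster drawing branch, with an assumed world-to-image scale standing in for `convert_points`. It shows the projection to image space, the rounding to integer pixel coordinates that cv2 requires, and a single `cv2.polylines` call for all lines:

```python
import cv2
import numpy as np

img = np.zeros((240, 320, 3), dtype=np.uint8)
lines = [np.array([[1.0, 1.0], [2.0, 1.5], [2.5, 2.2]])]  # world-space metres

scale = 100  # stand-in world→image projection
lines = [np.rint(points * scale).astype(np.int32) for points in lines]

# one draw call for all polylines is cheaper than a cv2.line per segment
cv2.polylines(img, lines, False, (128, 0, 128), 2, cv2.LINE_AA)
```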
def draw_trackjectron_history(img: cv2.Mat, track: Track, color_index: int, convert_points: Optional[Callable]):
    if not track.predictor_history:

@@ -461,12 +301,9 @@ def draw_track_projected(img: cv2.Mat, track: Track, color_index: int, camera: C
    for j in range(len(history)-1):
        # a = history[j]
        b = history[j+1]
        detection = track.history[j+1]

        color = point_color if detection.state == DetectionState.Confirmed else (100,100,100)

        # cv2.line(img, to_point(a), to_point(b), point_color, 1)
        cv2.circle(img, to_point(b), 3, color, 2)
        cv2.circle(img, to_point(b), 3, point_color, 2)


def draw_track(img: cv2.Mat, track: Track, color_index: int):
265 trap/tracker.py
@@ -1,42 +1,36 @@
import argparse
import csv
import json
import logging
import multiprocessing
import pickle
import time
from argparse import Namespace
from collections import defaultdict
from datetime import datetime, timedelta
import csv
from dataclasses import dataclass, field
import json
import logging
from math import nan
from multiprocessing import Event
from pathlib import Path
from typing import DefaultDict, Dict, List, Optional

import cv2
import pickle
import time
from typing import Dict, Optional, List
import jsonlines
import numpy as np
import torch
import torchvision
import zmq
from bytetracker import BYTETracker
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack
from deep_sort_realtime.deepsort_tracker import DeepSort
from torchvision.models.detection import (FasterRCNN_ResNet50_FPN_V2_Weights,
                                          KeypointRCNN_ResNet50_FPN_Weights,
                                          MaskRCNN_ResNet50_FPN_V2_Weights,
                                          fasterrcnn_resnet50_fpn_v2,
                                          keypointrcnn_resnet50_fpn,
                                          maskrcnn_resnet50_fpn_v2)
from tsmoothie.smoother import ConvolutionSmoother, KalmanSmoother
from ultralytics import YOLO, RTDETR
from ultralytics.engine.model import Model as UltralyticsModel
from ultralytics.engine.results import Results as UltralyticsResult
import cv2

from trap import timer
from trap.frame_emitter import (Camera, DataclassJSONEncoder, Detection,
                                DetectionState, Frame, Track)
from trap.gemma import ImgMovementFilter
from trap.node import Node
from torchvision.models.detection import retinanet_resnet50_fpn_v2, RetinaNet_ResNet50_FPN_V2_Weights, keypointrcnn_resnet50_fpn, KeypointRCNN_ResNet50_FPN_Weights, maskrcnn_resnet50_fpn_v2, MaskRCNN_ResNet50_FPN_V2_Weights, FasterRCNN_ResNet50_FPN_V2_Weights, fasterrcnn_resnet50_fpn_v2
from deep_sort_realtime.deepsort_tracker import DeepSort
from torchvision.models import ResNet50_Weights
from deep_sort_realtime.deep_sort.track import Track as DeepsortTrack

from ultralytics import YOLO
from ultralytics.engine.results import Results as YOLOResult

from trap.frame_emitter import Camera, DataclassJSONEncoder, DetectionState, Frame, Detection, Track
from bytetracker import BYTETracker

from tsmoothie.smoother import KalmanSmoother, ConvolutionSmoother
import tsmoothie.smoother
from datetime import datetime, timedelta

# Detection = [int, int, int, int, float, int]
# Detections = [Detection]

@@ -53,12 +47,11 @@ DETECTOR_RETINANET = 'retinanet'
DETECTOR_MASKRCNN = 'maskrcnn'
DETECTOR_FASTERRCNN = 'fasterrcnn'
DETECTOR_YOLOv8 = 'ultralytics'
DETECTOR_RTDETR = 'rtdetr'

TRACKER_DEEPSORT = 'deepsort'
TRACKER_BYTETRACK = 'bytetrack'

DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8, DETECTOR_RTDETR]
DETECTORS = [DETECTOR_RETINANET, DETECTOR_MASKRCNN, DETECTOR_FASTERRCNN, DETECTOR_YOLOv8]
TRACKERS = [TRACKER_DEEPSORT, TRACKER_BYTETRACK]

TRACKER_CONFIDENCE_MINIMUM = .2
@@ -66,10 +59,9 @@ TRACKER_BYTETRACK_MINIMUM = .1 # bytetrack can track items with lower threshold
NON_MAXIMUM_SUPRESSION = 1
RCNN_SCALE = .4 # seems to have no impact on detections in the corners

def _ultralytics_track(img: cv2.Mat, frame_idx: int, model: UltralyticsModel, **kwargs) -> List[Detection]:

    results: List[UltralyticsResult] = list(model.track(img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.000001, **kwargs))
def _yolov8_track(frame: Frame, model: YOLO, **kwargs) -> List[Detection]:

    results: List[YOLOResult] = list(model.track(frame.img, persist=True, tracker="custom_bytetrack.yaml", verbose=False, conf=0.00001, **kwargs))
    if results[0].boxes is None or results[0].boxes.id is None:
        # work around https://github.com/ultralytics/ultralytics/issues/5968
        return []

@@ -77,7 +69,7 @@ def _ultralytics_track(img: cv2.Mat, frame_idx: int, model: UltralyticsModel, **
    boxes = results[0].boxes.xywh.cpu()
    track_ids = results[0].boxes.id.int().cpu().tolist()
    classes = results[0].boxes.cls.int().cpu().tolist()
    return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame_idx, class_id) for bbox, track_id, class_id in zip(boxes, track_ids, classes)]
    return [Detection(track_id, bbox[0]-.5*bbox[2], bbox[1]-.5*bbox[3], bbox[2], bbox[3], 1, DetectionState.Confirmed, frame.index, class_id) for bbox, track_id, class_id in zip(boxes, track_ids, classes)]

class Multifile():
    def __init__(self, srcs: List[Path]):
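Editor's note: the ultralytics `xywh` box format is centre-x, centre-y, width, height, while the `Detection` above takes the top-left corner, hence the half-width/half-height subtraction. A tiny sketch with plain floats instead of tensors:

```python
def xywh_to_ltwh(bbox):
    cx, cy, w, h = bbox
    return (cx - .5 * w, cy - .5 * h, w, h)

print(xywh_to_ltwh((100.0, 50.0, 20.0, 40.0)))  # (90.0, 30.0, 20.0, 40.0)
```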

@@ -184,9 +176,6 @@ class TrackReader:
        for track_id in self._tracks:
            yield self.get(track_id)

    def track_ids(self):
        return list(self._tracks.keys())

def read_tracks_json(path: Path, fps):
    """
    Reader for tracks.json produced by TrainingDataWriter

@@ -383,25 +372,22 @@ class ByteTrackWrapper(TrackerWrapper):
            detections = np.ndarray((0,0)) # needs to be 2-D

        _ = self.tracker.update(detections)
        removed_tracks = self.tracker.removed_stracks
        active_tracks = [track for track in self.tracker.tracked_stracks if track.is_activated]
        # TODO)) why was this in here:
        # active_tracks = [track for track in active_tracks if track.start_frame < (self.tracker.frame_id - 5)]
        active_tracks = [track for track in active_tracks if track.start_frame < (self.tracker.frame_id - 5)]
        return [Detection.from_bytetrack(track, frame_idx) for track in active_tracks]


class Tracker(Node):
    def setup(self):
class Tracker:
    def __init__(self, config: Namespace):
        self.config = config

        # # TODO: config device
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        self.frame_preprocess = ImgMovementFilter()

        # TODO: support removal
        self.tracks: DefaultDict[str, Track] = defaultdict(lambda: Track())
        self.tracks = defaultdict(lambda: Track())

        logger.debug(f"Load tracker: {self.config.detector}")

@@ -441,22 +427,14 @@ class Tracker(Node):
            self.mot_tracker = TrackerWrapper.init_type(self.config.tracker)
        elif self.config.detector == DETECTOR_YOLOv8:
            # self.model = YOLO('EXPERIMENTS/yolov8x.pt')
            # best from arsen:
            # self.model = YOLO('./tracker/all_yolo11-2-20-15-41/weights')
            # self.model = YOLO('models/yolo11x-pose.pt')
            # self.model = YOLO("models/yolo12l.pt")
            self.model = YOLO("models/yolo12x.pt")
            # NOTE: changing the model, also tweak imgsz in
        elif self.config.detector == DETECTOR_RTDETR:
            # self.model = RTDETR('models/rtdetr-x.pt') # drops frames
            self.model = RTDETR('models/rtdetr-l.pt') # somewhat less good in corners, but less frame dropping == better tracking
            self.model = YOLO('yolo11x.pt')
        else:
            raise RuntimeError(f"{self.config.detector} is not implemented yet. See --help")

        # homography = list(source.glob('*img2world.txt'))[0]

        # self.H = self.config.H
        self.H = self.config.H

        if self.config.smooth_tracks:
            logger.info("Smoother enabled")

@@ -466,50 +444,71 @@ class Tracker(Node):
            logger.info("Smoother Disabled (enable with --smooth-tracks)")

        self.frame_sock = self.sub(self.config.zmq_frame_addr)
        self.trajectory_socket = self.pub(self.config.zmq_trajectory_addr)
        self.detection_socket = self.pub(self.config.zmq_detection_addr)

        logger.debug("Set up tracker")

    def track_frame(self, frame: Frame):
        det_img = frame.img
        # det_img = self.frame_preprocess.apply(frame.img)

        if self.config.detector in [DETECTOR_YOLOv8, DETECTOR_RTDETR]:
            # both ultralytics
            detections: List[Detection] = _ultralytics_track(det_img, frame.index, self.model, classes=[0, 15, 16], imgsz=self.config.imgsz)
        if self.config.detector == DETECTOR_YOLOv8:
            detections: List[Detection] = _yolov8_track(frame, self.model, classes=[0, 15, 16], imgsz=[1152, 640])
        else:
            detections: List[Detection] = self._resnet_track(det_img, frame.index, scale = RCNN_SCALE)

            # emit raw detections
            self.detection_socket.send_pyobj(detections)
            detections: List[Detection] = self._resnet_track(frame, scale = RCNN_SCALE)

        for detection in detections:
            track = self.tracks[detection.track_id]
            track.track_id = detection.track_id # for new tracks
            track.fps = frame.camera.fps
            track.frame_index = frame.index
            track.updated_at = time.time()
            # track.fps = self.config.camera.fps # for new tracks
            track.fps = self.config.camera.fps # for new tracks

            track.history.append(detection) # add to history

        return detections

    def run(self):

    def track(self, is_running: Event, timer_counter: int = 0):
        """
        Live tracking of frames coming in over zmq
        """

        self.is_running = is_running

        context = zmq.Context()
        self.frame_sock = context.socket(zmq.SUB)
        self.frame_sock.setsockopt(zmq.CONFLATE, 1) # only keep latest frame. NB. make sure this comes BEFORE connect, otherwise it's ignored!!
        self.frame_sock.setsockopt(zmq.SUBSCRIBE, b'')
        self.frame_sock.connect(self.config.zmq_frame_addr)

        self.trajectory_socket = context.socket(zmq.PUB)
        self.trajectory_socket.setsockopt(zmq.CONFLATE, 1) # only keep latest frame
        self.trajectory_socket.bind(self.config.zmq_trajectory_addr)

        prev_run_time = 0

        # training_fp = None
        # training_csv = None
        # training_frames = 0

        # if self.config.save_for_training is not None:
        #     if not isinstance(self.config.save_for_training, Path):
        #         raise ValueError("save-for-training should be a path")
        #     if not self.config.save_for_training.exists():
        #         logger.info(f"Making path for training data: {self.config.save_for_training}")
        #         self.config.save_for_training.mkdir(parents=True, exist_ok=False)
        #     else:
        #         logger.warning(f"Path for training-data exists: {self.config.save_for_training}. Continuing assuming that's ok.")
        #     training_fp = open(self.config.save_for_training / 'all.txt', 'w')
        #     # following https://github.com/StanfordASL/Trajectron-plus-plus/blob/master/experiments/pedestrians/process_data.py
        #     training_csv = csv.DictWriter(training_fp, fieldnames=['frame_id', 'track_id', 'l', 't', 'w', 'h', 'x', 'y', 'state'], delimiter='\t', quoting=csv.QUOTE_NONE)

        prev_frame_i = -1

        with TrainingDataWriter(self.config.save_for_training) as writer:
            end_time = None
            tracker_dt = None
            w_time = None
            displacement_filter = FinalDisplacementFilter(.8)
            displacement_filter = FinalDisplacementFilter(.2)
            while self.is_running.is_set():

                with timer_counter.get_lock():
                    timer_counter.value += 1
                # this waiting for target_dt causes frame loss. E.g. with target_dt at .1, it
                # skips exactly 1 frame on a 10 fps video (which, it obviously should not do)
                # so for now, timing should move to emitter
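Editor's note: the SUB-socket configuration above is order-sensitive, so here is a minimal, self-contained sketch of the pattern. `CONFLATE` keeps only the newest message and must be set before `connect()`; polling with a timeout lets a shutdown flag be checked between frames:

```python
import zmq

context = zmq.Context()
frame_sock = context.socket(zmq.SUB)
frame_sock.setsockopt(zmq.CONFLATE, 1)     # drop all but the latest frame
frame_sock.setsockopt(zmq.SUBSCRIBE, b'')  # subscribe to everything
frame_sock.connect("ipc:///tmp/feeds_frame")

# poll with a timeout so is_running can be re-checked when no frame arrives
if frame_sock.poll(timeout=2000):
    frame = frame_sock.recv_pyobj()
```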

@@ -521,12 +520,10 @@ class Tracker(Node):
                poll_time = time.time()
                zmq_ev = self.frame_sock.poll(timeout=2000)
                if not zmq_ev:
                    logger.warning('no frame for 2000ms')
                    logger.warning('skip poll after 2000ms')
                    # when there's no data after timeout, loop so that is_running is checked
                    continue

                self.tick() # only tick if something is actually received

                start_time = time.time()
                frame: Frame = self.frame_sock.recv_pyobj() # frame delivery in current setup: 0.012-0.03s

@@ -540,15 +537,15 @@ class Tracker(Node):
                prev_frame_i = frame.index
                # load homography into frame (TODO: should this be done in emitter?)
                if frame.H is None:
                    raise RuntimeError('Tracker no longer configures H')
                    # logger.warning('Falling back to default H')
                    # fallback: load configured H
                    # frame.H = self.H
                    frame.H = self.H

                # logger.info(f"Frame delivery delay = {time.time()-frame.time}s")

                detections: List[Detection] = self.track_frame(frame)

                # Store detections into tracklets
                projected_coordinates = []
                # now in track_frame()

@@ -578,18 +575,8 @@ class Tracker(Node):
                active_track_ids = [d.track_id for d in detections]
                active_tracks = {t.track_id: t.get_with_interpolated_history() for t in self.tracks.values() if t.track_id in active_track_ids}
                active_tracks = displacement_filter.apply_to_dict(active_tracks, frame.camera) # a filter to drop tracks that merely detect static objects
                # print(len(detections), len(active_tracks))

                removable_tracks = []
                for track_id, track in self.tracks.items():
                    if not len(track.history):
                        continue
                    detection: Detection = track.history[-1]
                    if detection.frame_nr < (frame.index - frame.camera.fps * 5):
                        removable_tracks.append(track_id)
                for track_id in removable_tracks:
                    del self.tracks[track_id]

                # active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
                # active_tracks = {t.track_id: t for t in self.tracks.values() if t.track_id in active_track_ids}
                # logger.info(f"{trajectories}")

@@ -631,12 +618,13 @@ class Tracker(Node):
        logger.info('Stopping')

    def _resnet_track(self, img: cv2.Mat, frame_idx: int, scale: float = 1) -> List[Detection]:
    def _resnet_track(self, frame: Frame, scale: float = 1) -> List[Detection]:
        img = frame.img
        if scale != 1:
            dsize = (int(img.shape[1] * scale), int(img.shape[0] * scale))
            img = cv2.resize(img, dsize)
        detections = self._resnet_detect_persons(img)
        tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame_idx)
        tracks: List[Detection] = self.mot_tracker.track_detections(detections, img, frame.index)
        # active_tracks = [t for t in tracks if t.is_confirmed()]
        return [d.get_scaled(1/scale) for d in tracks]

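Editor's note: a sketch (with assumed box semantics, plain tuples instead of the project's `Detection`) of the scale compensation above: detect on a downscaled copy for speed, then multiply the boxes by `1/scale` to return to full-resolution pixel coordinates.

```python
scale = 0.4
box = (40.0, 20.0, 8.0, 16.0)  # l, t, w, h found on the downscaled image

def get_scaled(box, factor):
    return tuple(v * factor for v in box)

print(get_scaled(box, 1 / scale))  # (100.0, 50.0, 20.0, 40.0) in full-res pixels
```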

@@ -686,90 +674,12 @@ class Tracker(Node):
        different nesting
        """
        return [([d[0], d[1], d[2]-d[0], d[3]-d[1]], d[4], d[5]) for d in detections]

    @classmethod
    def arg_parser(cls):
        argparser = argparse.ArgumentParser()
        argparser.add_argument('--zmq-frame-addr',
                            help='Manually specify communication addr for the frame messages',
                            type=str,
                            default="ipc:///tmp/feeds_frame")
        argparser.add_argument('--zmq-trajectory-addr',
                            help='Manually specify communication addr for the trajectory messages',
                            type=str,
                            default="ipc:///tmp/feeds_traj")

        argparser.add_argument('--zmq-detection-addr',
                            help='Manually specify communication addr for the detection messages',
                            type=str,
                            default="ipc:///tmp/feeds_dets")

        argparser.add_argument("--save-for-training",
                            help="Specify the path in which to save",
                            type=Path,
                            default=None)
        argparser.add_argument("--detector",
                            help="Specify the detector to use",
                            type=str,
                            default=DETECTOR_YOLOv8,
                            choices=DETECTORS)
        argparser.add_argument("--tracker",
                            help="Specify the tracker to use",
                            type=str,
                            default=TRACKER_BYTETRACK,
                            choices=TRACKERS)
        argparser.add_argument("--smooth-tracks",
                            help="Smooth the tracker tracks before sending them to the predictor",
                            action='store_true')
        argparser.add_argument("--imgsz",
                            help="Detector imgsz parameter (applicable to ultralytics detectors)",
                            type=int,
                            default=960)
        return argparser


def run_tracker(config: Namespace, is_running: Event, timer_counter):
    router = Tracker(config)
    router.run(is_running, timer_counter)
    router.track(is_running, timer_counter)

def run():
    # Frame emitter
    import argparse
    argparser = argparse.ArgumentParser()
    argparser.add_argument('--zmq-frame-addr',
                        help='Manually specify communication addr for the frame messages',
                        type=str,
                        default="ipc:///tmp/feeds_frame")
    argparser.add_argument('--zmq-trajectory-addr',
                        help='Manually specify communication addr for the trajectory messages',
                        type=str,
                        default="ipc:///tmp/feeds_traj")

    argparser.add_argument("--save-for-training",
                        help="Specify the path in which to save",
                        type=Path,
                        default=None)
    argparser.add_argument("--detector",
                        help="Specify the detector to use",
                        type=str,
                        default=DETECTOR_YOLOv8,
                        choices=DETECTORS)
    argparser.add_argument("--tracker",
                        help="Specify the tracker to use",
                        type=str,
                        default=TRACKER_BYTETRACK,
                        choices=TRACKERS)
    argparser.add_argument("--smooth-tracks",
                        help="Smooth the tracker tracks before sending them to the predictor",
                        action='store_true')
    config = argparser.parse_args()
    is_running = multiprocessing.Event()
    is_running.set()
    timer_counter = timer.Timer('frame_emitter')

    router = Tracker(config)
    router.run(is_running, timer_counter.iterations)
    is_running.clear()


class Smoother:

@@ -803,8 +713,7 @@ class Smoother:
        self.smoother.smooth(hs)
        hs = self.smoother.smooth_data[0]
        new_history = [Detection(d.track_id, l, t, w, h, d.conf, d.state, d.frame_nr, d.det_class) for l, t, w, h, d in zip(ls,ts,ws,hs, track.history)]
        return track.get_with_new_history(new_history)
        # return Track(track.track_id, new_history, track.predictor_history, track.predictions, track.fps)
        return Track(track.track_id, new_history, track.predictor_history, track.predictions, track.fps)

    def smooth_frame_tracks(self, frame: Frame) -> Frame:
        new_tracks = []

@@ -814,7 +723,7 @@ class Smoother:
        frame.tracks = {t.track_id: t for t in new_tracks}
        return frame

    def smooth_frame_predictions(self, frame: Frame) -> Frame:
    def smooth_frame_predictions(self, frame) -> Frame:

        for track in frame.tracks.values():
            new_predictions = []

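Editor's note: a standalone sketch of the tsmoothie smoothing that `Smoother` applies per bounding-box coordinate above; the window length here is an assumption, not the project's setting.

```python
import numpy as np
from tsmoothie.smoother import ConvolutionSmoother

xs = np.array([0.0, 1.2, 1.9, 3.4, 3.8, 5.1])  # a noisy 1-D track coordinate
smoother = ConvolutionSmoother(window_len=3, window_type='ones')
smoother.smooth(xs)
print(smoother.smooth_data[0])  # smoothed series, same length as the input
```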

@@ -1,6 +1,5 @@
# lerp & inverse lerp from https://gist.github.com/laundmo/b224b1f4c8ef6ca5fe47e132c8deab56
import linecache
import math
import os
from pathlib import Path
import tracemalloc

@@ -29,78 +28,23 @@ def inv_lerp(a: float, b: float, v: float) -> float:
    return (v - a) / (b - a)


def exponentialDecayRounded(a, b, decay, dt, abs_tolerance):
    """Exponential decay as alternative to Lerp
    Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
    """
    c = b + (a-b) * math.exp(-decay * dt)
    if abs(b-c) < abs_tolerance:
        return b
    return c

def exponentialDecay(a, b, decay, dt):
    """Exponential decay as alternative to Lerp
    Introduced by Freya Holmér: https://www.youtube.com/watch?v=LSNQuFEDOyQ
    """
    return b + (a-b) * math.exp(-decay * dt)
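Editor's note: a small sketch of why `exponentialDecay` is preferred over a plain lerp for animation: since `dt` sits in the exponent, the easing is framerate-independent, and uneven frame times still converge consistently toward the target.

```python
import math

def exponentialDecay(a, b, decay, dt):
    return b + (a - b) * math.exp(-decay * dt)

pos = 0.0
target = 10.0
for dt in [0.016, 0.033, 0.016]:  # uneven frame times
    pos = exponentialDecay(pos, target, decay=8, dt=dt)
    print(round(pos, 3))  # approaches 10.0 without overshooting
```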

def relativePointToPolar(origin, point) -> tuple[float, float]:
    x, y = point[0] - origin[0], point[1] - origin[1]
    return np.sqrt(x**2 + y**2), np.arctan2(y, x)

def relativePolarToPoint(origin, r, angle) -> tuple[float, float]:
    return r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1]
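Editor's note: a round-trip sketch for the two polar helpers above: converting a point to polar coordinates relative to an origin and back should reproduce the original point.

```python
import numpy as np

origin, point = (2.0, 3.0), (5.0, 7.0)
x, y = point[0] - origin[0], point[1] - origin[1]
r, angle = np.sqrt(x**2 + y**2), np.arctan2(y, x)
restored = (r * np.cos(angle) + origin[0], r * np.sin(angle) + origin[1])
print(restored)  # (5.0, 7.0), up to floating-point error
```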

# def line_intersection(line1, line2):
#     xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
#     ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])

#     def det(a, b):
#         return a[0] * b[1] - a[1] * b[0]

#     div = det(xdiff, ydiff)
#     if div == 0:
#         return None

#     d = (det(*line1), det(*line2))
#     x = det(d, xdiff) / div
#     y = det(d, ydiff) / div
#     return x, y

# def polyline_intersection(poly1, poly2):
#     for i, p1_first_point in enumerate(poly1[:-1]):
#         p1_second_point = poly1[i + 1]

#         for j, p2_first_point in enumerate(poly2[:-1]):
#             p2_second_point = poly2[j + 1]

#             intersection = line_intersection((p1_first_point, p1_second_point), (p2_first_point, p2_second_point))
#             if intersection:
#                 return intersection # returns x,y

#     return None


def get_bins(bin_size: float):
    return [[bin_size, 0], [bin_size, bin_size], [0, bin_size], [-bin_size, bin_size], [-bin_size, 0], [-bin_size, -bin_size], [0, -bin_size], [bin_size, -bin_size]]

def convert_world_space_to_img_space(H: cv2.Mat, scale=100):
def convert_world_space_to_img_space(H: cv2.Mat):
    """Transform the given matrix so that it immediately converts
    the points to img space"""
    new_H = H.copy()
    new_H[:2] = H[:2] * scale
    new_H[:2] = H[:2] * 100
    return new_H

def convert_world_points_to_img_points(points: Iterable, scale=100):
def convert_world_points_to_img_points(points: Iterable):
    """Scale the given world-space points so that they land in img space"""
    if isinstance(points, np.ndarray):
        return np.array(points) * scale
    return [[p[0]*scale, p[1]*scale] for p in points]
    return np.array(points) * 100
    return [[p[0]*100, p[1]*100] for p in points]
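Editor's note: a sketch of the idea behind `convert_world_space_to_img_space`. Multiplying the first two rows of a homography by the scale bakes the metres-to-pixels conversion into the matrix itself, so a single `cv2.perspectiveTransform` does both steps. The identity homography here is a stand-in.

```python
import cv2
import numpy as np

H = np.eye(3)  # stand-in homography (world → world)
scale = 100    # 1 m = 100 px

new_H = H.copy()
new_H[:2] = H[:2] * scale

pts = np.array([[[1.5, 2.0]]], dtype=np.float64)  # world-space point in metres
print(cv2.perspectiveTransform(pts, new_H))       # [[[150. 200.]]] in pixels
```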

def display_top(snapshot: tracemalloc.Snapshot, key_type='lineno', limit=5):
    snapshot = snapshot.filter_traces((

@@ -1,239 +0,0 @@
from dataclasses import dataclass
from itertools import cycle
import json
import logging
import math
from os import PathLike
from pathlib import Path
import time
from typing import Any, Generator, Iterable, List, Literal, Optional, Tuple
import neoapi
import cv2
import numpy as np

from trap.base import Camera, UrlOrPath

logger = logging.getLogger('video_source')
class VideoSource:
    """Video Frame generator
    """
    def recv(self) -> Generator[Optional[cv2.typing.MatLike], Any, None]:
        raise RuntimeError("Not implemented")

    def __iter__(self):
        for i in self.recv():
            yield i

BinningValue = Literal[1, 2]
Coordinate = Tuple[int, int]

@dataclass
class GigEConfig:
    identifier: Optional[str] = None
    binning_h: BinningValue = 1
    binning_v: BinningValue = 1
    pixel_format: int = neoapi.PixelFormat_BayerRG8

    post_crop_tl: Optional[Coordinate] = None
    post_crop_br: Optional[Coordinate] = None

    @classmethod
    def from_file(cls, file: PathLike):
        with open(file, 'r') as fp:
            return cls(**json.load(fp))

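Editor's note: `GigEConfig.from_file` loads the dataclass straight from JSON, so a plausible config file mirrors the field names above. The values here are illustrative (the identifier is taken from a comment in this file), not project settings; `pixel_format` is omitted to use the dataclass default.

```json
{
  "identifier": "-B127",
  "binning_h": 1,
  "binning_v": 1,
  "post_crop_tl": [0, 0],
  "post_crop_br": [1920, 1080]
}
```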
class GigE(VideoSource):
    def __init__(self, config: GigEConfig):

        self.config = config

        self.camera = neoapi.Cam()
        # self.camera.Connect('-B127')
        self.camera.Connect(self.config.identifier)
        # Default buffer mode, streaming, always returns latest frame
        self.camera.SetImageBufferCount(10)
        # neoAPI docs: Setting the neoapi.Cam.SetImageBufferCycleCount() to one ensures that all buffers but one are given back to the neoAPI to be re-cycled and never given to the user by the neoapi.Cam.GetImage() method.
        self.camera.SetImageBufferCycleCount(1)
        self.setPixelFormat(self.config.pixel_format)

        if self.camera.IsConnected():
            # self.camera.f.PixelFormat.Set(neoapi.PixelFormat_RGB8)
            self.camera.f.BinningHorizontal.Set(self.config.binning_h)
            self.camera.f.BinningVertical.Set(self.config.binning_v)
            # print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(20000)) # shutter 1/50
            print('exposure time', self.camera.f.ExposureAutoMaxValue.Set(25000))
            print('brightness target', self.camera.f.BrightnessAutoNominalValue.Get())
            print('brightness target', self.camera.f.BrightnessAutoNominalValue.Set(30))
            print('exposure time', self.camera.f.ExposureTime.Get())
            print('Gamma', self.camera.f.Gamma.Set(0.39))
            # print('LUT', self.camera.f.LUTIndex.Get())
            # print('LUT', self.camera.f.LUTEnable.Get())
            # print('exposure time max', self.camera.f.ExposureTimeGapMax.Get())
            # print('exposure time min', self.camera.f.ExposureTimeGapMin.Get())
            # self.pixfmt = self.camera.f.PixelFormat.Get()
    def setPixelFormat(self, pixfmt):
        self.pixfmt = pixfmt
        self.camera.f.PixelFormat.Set(pixfmt)
        # self.pixfmt = self.camera.f.PixelFormat.Get()

    def recv(self):
        while True:
            if not self.camera.IsConnected():
                return

            i = self.camera.GetImage(0)
            if i.IsEmpty():
                time.sleep(.01)
                continue

            imgarray = i.GetNPArray()
            if self.pixfmt == neoapi.PixelFormat_BayerRG12:
                img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
            elif self.pixfmt == neoapi.PixelFormat_BayerRG8:
                img = cv2.cvtColor(imgarray, cv2.COLOR_BayerRG2RGB)
            else:
                img = cv2.cvtColor(imgarray, cv2.COLOR_BGR2RGB)

            if img.dtype == np.uint16:
                img = cv2.convertScaleAbs(img, alpha=(255.0/65535.0))
            img = self._crop(img)
            yield img

    def _crop(self, img):
        tl = self.config.post_crop_tl or (0,0)
        br = self.config.post_crop_br or (img.shape[1], img.shape[0])

        return img[tl[1]:br[1], tl[0]:br[0], :]
class SingleCvVideoSource(VideoSource):
    def recv(self):
        while True:
            ret, img = self.video.read()
            self.frame_idx += 1

            # seek to 0 if video has finished. Infinite loop
            if not ret:
                # now loading multiple files
                break

            # frame = Frame(index=self.n, img=img, H=self.camera.H, camera=self.camera)
            yield img

class RtspSource(SingleCvVideoSource):
    def __init__(self, video_url: str | Path, camera: Camera = None):
        # keep max 1 frame in app-buffer (0 = unlimited)
        # With gstreamer 1.28 drop=true is deprecated; use leaky-type=2 to choose which frames to drop: https://gstreamer.freedesktop.org/documentation/applib/gstappsrc.html?gi-language=c

        gst = f"rtspsrc location={video_url} latency=0 buffer-mode=auto ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true"
        logger.info(f"Capture gstreamer (gst-launch-1.0): {gst}")
        self.video = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
        self.frame_idx = 0

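Editor's note: roughly the same pipeline can be tried outside OpenCV for debugging; this is a sketch with a placeholder URL, and `appsink` swapped for `autovideosink` so the stream is shown on screen instead of handed to an application:

```sh
gst-launch-1.0 rtspsrc location=rtsp://USER:PW@ADDRESS/STREAM latency=0 buffer-mode=auto \
  ! decodebin ! videoconvert ! autovideosink
```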
class FilelistSource(SingleCvVideoSource):
    def __init__(self, video_sources: Iterable[UrlOrPath], camera: Camera = None, delay = True, offset = 0, end: Optional[int] = None, loop=False):
        # store current position
        self.video_sources = video_sources if not loop else cycle(video_sources)
        self.camera = camera
        self.video_path = None
        self.video_nr = None
        self.frame_count = None
        self.frame_idx = None
        self.n = 0
        self.delay_generation = delay
        self.offset = offset
        self.end = end

    def recv(self):
        prev_time = time.time()

        for video_nr, video_path in enumerate(self.video_sources):
            self.video_path = video_path
            self.video_nr = video_nr
            logger.info(f"Play from '{str(video_path)}'")
            video = cv2.VideoCapture(str(video_path))
            fps = video.get(cv2.CAP_PROP_FPS)
            target_frame_duration = 1./fps
            self.frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
            if self.frame_count < 0:
                self.frame_count = math.inf
            self.frame_idx = 0
            # TODO)) Video offset
            if self.offset:
                logger.info(f"Start at frame {self.offset}")
                video.set(cv2.CAP_PROP_POS_FRAMES, self.offset)
                self.frame_idx = self.offset

            while True:
                ret, img = video.read()
                self.frame_idx += 1
                self.n += 1

                # seek to 0 if video has finished. Infinite loop
                if not ret:
                    # now loading multiple files
                    break

                if "DATASETS/hof/" in str(video_path):
                    # hack to mask out area
                    cv2.rectangle(img, (0,0), (800,200), (0,0,0), -1)

                # frame = Frame(index=self.n, img=img, H=self.camera.H, camera=self.camera)
                yield img

                if self.end is not None and self.frame_idx >= self.end:
                    logger.info(f"Reached frame {self.end}")
                    break

                if self.delay_generation:
                    # defer next loop
                    now = time.time()
                    time_diff = (now - prev_time)
                    if time_diff < target_frame_duration:
                        time.sleep(target_frame_duration - time_diff)
                        now += target_frame_duration - time_diff

                    prev_time = now

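Editor's note: a standalone sketch of the pacing logic at the end of `FilelistSource.recv`: it sleeps only for the remainder of the frame budget, so iterations that already took long are not delayed a second time.

```python
import time

fps = 10
target_frame_duration = 1. / fps
prev_time = time.time()

for _ in range(3):
    # ... produce/consume a frame here ...
    now = time.time()
    time_diff = now - prev_time
    if time_diff < target_frame_duration:
        time.sleep(target_frame_duration - time_diff)
        now += target_frame_duration - time_diff
    prev_time = now
```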
class CameraSource(SingleCvVideoSource):
    def __init__(self, identifier: int, camera: Camera):
        self.video = cv2.VideoCapture(identifier)
        self.camera = camera

        # TODO: make config variables
        self.video.set(cv2.CAP_PROP_FRAME_WIDTH, int(self.camera.w))
        self.video.set(cv2.CAP_PROP_FRAME_HEIGHT, int(self.camera.h))
        # print("exposure!", video.get(cv2.CAP_PROP_AUTO_EXPOSURE))
        self.video.set(cv2.CAP_PROP_FPS, self.camera.fps)
        self.frame_idx = 0

def get_video_source(video_sources: List[UrlOrPath], camera: Optional[Camera] = None, frame_offset=0, frame_end:Optional[int]=None, loop=False):

    if str(video_sources[0]).isdigit():
        # numeric input is a CV camera
        if frame_offset:
            logger.info("video-offset ignored for camera source")
        return CameraSource(int(str(video_sources[0])), camera)
    elif video_sources[0].url.scheme == 'rtsp':
        # video_sources[0].url.hostname
        if frame_offset:
            logger.info("video-offset ignored for rtsp source")
        return RtspSource(video_sources[0])
    elif video_sources[0].url.scheme == 'gige':
        if frame_offset:
            logger.info("video-offset ignored for gige source")
        config = GigEConfig.from_file(Path(video_sources[0].url.netloc + video_sources[0].url.path))
        return GigE(config)
    else:
        return FilelistSource(video_sources, offset = frame_offset, end=frame_end, loop=loop)
    # os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "fflags;nobuffer|flags;low_delay|avioflags;direct|rtsp_transport;udp"


def get_video_source_from_str(video_sources: List[str]):
    paths = [UrlOrPath(s) for s in video_sources]
    return get_video_source(paths)
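Editor's note: a usage sketch for the dispatch above (paths and URLs are placeholders). The source type is inferred from the first entry's scheme or form, and every `VideoSource` is iterable, yielding cv2 images:

```python
frames = get_video_source_from_str(["rtsp://USER:PW@ADDRESS/STREAM"])  # RtspSource
# frames = get_video_source_from_str(["0"])                            # CameraSource
# frames = get_video_source_from_str(["../DATASETS/NAME/video.mp4"])   # FilelistSource

for img in frames:
    pass  # each iteration yields one frame as a numpy array
```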