# Some tools to facilitate trajectory prediction
_See also [trap](https://git.rubenvandeven.com/security_vision/trap)_
## 1. Camera calibration
Find the camera intrinsics and the lens distortion matrices. This helps to remove lens curvature from the image, so that points map to a linear space.
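
As a rough sketch of what this step amounts to with OpenCV (not necessarily how the actual script works), assuming a printed chessboard pattern and placeholder paths and filenames:

```python
import glob
import cv2
import numpy as np

# Assumption: a set of snapshots of a 9x6 chessboard, taken with the same camera.
BOARD_SIZE = (9, 6)  # inner corners per row/column (placeholder)

# 3D positions of the board corners in the board's own plane (z = 0).
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calibration/*.jpg"):  # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix (intrinsics) and lens distortion coefficients.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
np.savez("calibration.npz", mtx=mtx, dist=dist)  # placeholder filename
```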
## 2. Test calibration and draw points
Apply the camera matrix obtained in step 1 to undistort a snapshot, and check that the result looks good.
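
Assuming the intrinsics and distortion coefficients were saved as in the sketch above, undistorting a snapshot can look like this (paths are placeholders):

```python
import cv2
import numpy as np

data = np.load("calibration.npz")  # placeholder: intrinsics from step 1
mtx, dist = data["mtx"], data["dist"]

img = cv2.imread("snapshot.jpg")  # placeholder snapshot
h, w = img.shape[:2]

# Refine the camera matrix for this resolution, then undistort.
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
cv2.imwrite("snapshot_undistorted.jpg", undistorted)
```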
Now we can obtain the coordinates to map for the homography. Draw points on the floor (I used chalk) and measure the distances between them. I then used SolveSpace to go from these distances to positions on a plane.
Then, with a camera snapshot of these points, click with the cursor in the source image to mark each of these points in the image.
The clicked points are saved to `points.json`. If they are right, rename the file to `img_points.json` for the homography step.
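
A minimal sketch of how the clicking step could work with an OpenCV mouse callback; the window name, input image and output format are assumptions, not necessarily what the actual script does:

```python
import json
import cv2

points = []

def on_click(event, x, y, flags, param):
    # Record and draw every left click on the (undistorted) snapshot.
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append([x, y])
        cv2.circle(img, (x, y), 5, (0, 0, 255), -1)

img = cv2.imread("snapshot_undistorted.jpg")  # placeholder
cv2.namedWindow("mark points")
cv2.setMouseCallback("mark points", on_click)

while True:
    cv2.imshow("mark points", img)
    if cv2.waitKey(20) & 0xFF == 27:  # Esc to finish
        break
cv2.destroyAllWindows()

with open("points.json", "w") as f:
    json.dump(points, f)
```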
## 3. Homography
With the camera intrinsics known, the perspective of the camera can be undone by mapping points to a 'top down' space. This way, the distances between points correspond to their distances IRL.
This script reads the camera intrinsics & distortion matrices, `img_points.json` (obtained in step 2) and the corresponding `irl_points.json`, which I created based on the coordinates obtained with SolveSpace.
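
A sketch of what that computation amounts to, assuming both JSON files contain lists of `[x, y]` pairs in matching order and that the points were clicked on an already-undistorted snapshot (so the intrinsics are not re-applied here); the actual script's file format may differ:

```python
import json
import cv2
import numpy as np

# Clicked pixel coordinates (step 2) and their measured real-world positions.
img_points = np.array(json.load(open("img_points.json")), dtype=np.float32)
irl_points = np.array(json.load(open("irl_points.json")), dtype=np.float32)

# Homography mapping image coordinates to the top-down, real-world plane.
H, _ = cv2.findHomography(img_points, irl_points)

# Example: project a detected image point into the top-down space.
pt = np.array([[[320.0, 240.0]]], dtype=np.float32)  # placeholder pixel coordinate
top_down = cv2.perspectiveTransform(pt, H)
print(top_down)
```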