Updated codebase following the paper update.

This commit is contained in:
BorisIvanovic 2020-04-05 21:43:49 -04:00
parent 3588a3ebea
commit 40dec06e9c
231 changed files with 531586 additions and 5210 deletions

.gitignore vendored Normal file

@ -0,0 +1,411 @@
pred_figs/
processed/
public-code/
.idea/
## Core latex/pdflatex auxiliary files:
*.aux
*.lof
*.log
*.lot
*.fls
*.out
*.toc
*.fmt
*.fot
*.cb
*.cb2
.*.lb
## Intermediate documents:
*.dvi
*.xdv
*-converted-to.*
# these rules might exclude image files for figures etc.
# *.ps
# *.eps
# *.pdf
## Generated if empty string is given at "Please type another file name for output:"
.pdf
## Bibliography auxiliary files (bibtex/biblatex/biber):
*.bbl
*.bcf
*.blg
*-blx.aux
*-blx.bib
*.run.xml
## Build tool auxiliary files:
*.fdb_latexmk
*.synctex
*.synctex(busy)
*.synctex.gz
*.synctex.gz(busy)
*.pdfsync
## Build tool directories for auxiliary files
# latexrun
latex.out/
## Auxiliary and intermediate files from other packages:
# algorithms
*.alg
*.loa
# achemso
acs-*.bib
# amsthm
*.thm
# beamer
*.nav
*.pre
*.snm
*.vrb
# changes
*.soc
# comment
*.cut
# cprotect
*.cpt
# elsarticle (documentclass of Elsevier journals)
*.spl
# endnotes
*.ent
# fixme
*.lox
# feynmf/feynmp
*.mf
*.mp
*.t[1-9]
*.t[1-9][0-9]
*.tfm
#(r)(e)ledmac/(r)(e)ledpar
*.end
*.?end
*.[1-9]
*.[1-9][0-9]
*.[1-9][0-9][0-9]
*.[1-9]R
*.[1-9][0-9]R
*.[1-9][0-9][0-9]R
*.eledsec[1-9]
*.eledsec[1-9]R
*.eledsec[1-9][0-9]
*.eledsec[1-9][0-9]R
*.eledsec[1-9][0-9][0-9]
*.eledsec[1-9][0-9][0-9]R
# glossaries
*.acn
*.acr
*.glg
*.glo
*.gls
*.glsdefs
*.lzo
*.lzs
# uncomment this for glossaries-extra (will ignore makeindex's style files!)
# *.ist
# gnuplottex
*-gnuplottex-*
# gregoriotex
*.gaux
*.gtex
# htlatex
*.4ct
*.4tc
*.idv
*.lg
*.trc
*.xref
# hyperref
*.brf
# knitr
*-concordance.tex
# TODO Comment the next line if you want to keep your tikz graphics files
*.tikz
*-tikzDictionary
# listings
*.lol
# luatexja-ruby
*.ltjruby
# makeidx
*.idx
*.ilg
*.ind
# minitoc
*.maf
*.mlf
*.mlt
*.mtc[0-9]*
*.slf[0-9]*
*.slt[0-9]*
*.stc[0-9]*
# minted
_minted*
*.pyg
# morewrites
*.mw
# nomencl
*.nlg
*.nlo
*.nls
# pax
*.pax
# pdfpcnotes
*.pdfpc
# sagetex
*.sagetex.sage
*.sagetex.py
*.sagetex.scmd
# scrwfile
*.wrt
# sympy
*.sout
*.sympy
sympy-plots-for-*.tex/
# pdfcomment
*.upa
*.upb
# pythontex
*.pytxcode
pythontex-files-*/
# tcolorbox
*.listing
# thmtools
*.loe
# TikZ & PGF
*.dpth
*.md5
*.auxlock
# todonotes
*.tdo
# vhistory
*.hst
*.ver
# easy-todo
*.lod
# xcolor
*.xcp
# xmpincl
*.xmpi
# xindy
*.xdy
# xypic precompiled matrices and outlines
*.xyc
*.xyd
# endfloat
*.ttt
*.fff
# Latexian
TSWLatexianTemp*
## Editors:
# WinEdt
*.bak
*.sav
# Texpad
.texpadtmp
# LyX
*.lyx~
# Kile
*.backup
# gummi
.*.swp
# KBibTeX
*~[0-9]*
# TeXnicCenter
*.tps
# auto folder when using emacs and auctex
./auto/*
*.el
# expex forward references with \gathertags
*-tags.tex
# standalone packages
*.sta
# Makeindex log files
*.lpz
logs/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/

.gitmodules vendored

@ -1,3 +1,3 @@
[submodule "data/nuScenes/nuscenes-devkit"]
path = data/nuScenes/nuscenes-devkit
[submodule "experiments/nuScenes/devkit"]
path = experiments/nuScenes/devkit
url = https://github.com/nutonomy/nuscenes-devkit

README.md

@ -1,13 +1,15 @@
# Trajectron++: Multi-Agent Generative Trajectory Forecasting With Heterogeneous Data for Control #
<p align="center"><img width="100%" src="img/Trajectron++.png"/></p>
This repository contains the code for [Trajectron++: Multi-Agent Generative Trajectory Forecasting With Heterogeneous Data for Control](https://arxiv.org/abs/2001.03093) by Tim Salzmann\*, Boris Ivanovic\*, Punarjay Chakravarty, and Marco Pavone (\* denotes equal contribution).
Things to fix/add:
- keep pedestrians branch as pedestrians_old and make this one master.
Specifically, this branch is for the Trajectron++ applied to the nuScenes autonomous driving dataset.
# Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data #
This repository contains the code for [Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data](img/paper.pdf) by Tim Salzmann\*, Boris Ivanovic\*, Punarjay Chakravarty, and Marco Pavone (\* denotes equal contribution).
## Installation ##
### Note about Submodules ###
When cloning this branch, make sure you clone the submodules as well, with the following command:
### Cloning ###
When cloning this repository, make sure you clone the submodules as well, with the following command:
```
git clone --recurse-submodules <repository cloning URL>
```
@ -18,7 +20,6 @@ git submodule update # Fetching all of the data from the submodules at the speci
```
### Environment Setup ###
First, we'll create a conda environment to hold the dependencies.
```
conda create --name trajectron++ python=3.6 -y
@ -28,29 +29,116 @@ pip install -r requirements.txt
Then, since this project uses IPython notebooks, we'll install this conda environment as a kernel.
```
python -m ipykernel install --user --name trajectron++ --display-name "Python 3.6 (Trajectron++)"
python -m ipykernel install --user --name trajectronpp --display-name "Python 3.6 (Trajectron++)"
```
Now, you can start a Jupyter session and view/run all the notebooks in `code/notebooks` with
### Data Setup ###
#### Pedestrian Datasets ####
We've already included preprocessed data splits for the ETH and UCY Pedestrian datasets in this repository; you can see them in `experiments/pedestrians/raw`. In order to process them into a data format that our model can work with, execute the following.
```
jupyter notebook
cd experiments/pedestrians
python process_data.py # This will take around 10-15 minutes, depending on your computer.
```
When you're done, don't forget to deactivate the conda environment with
#### nuScenes Dataset ####
Download the nuScenes dataset (this requires signing up on [their website](https://www.nuscenes.org/)). Note that the full dataset is very large, so if you only wish to test out the codebase and model then you can just download the nuScenes "mini" dataset which only requires around 4 GB of space. Extract the downloaded zip file's contents and place them in the `experiments/nuScenes` directory. Then, download the map expansion pack (v1.1) and copy the contents of the extracted `maps` folder into the `experiments/nuScenes/v1.0-mini/maps` folder. Finally, process them into a data format that our model can work with.
```
source deactivate
cd experiments/nuScenes
# For the mini nuScenes dataset, use the following
python process_data.py --data=./v1.0-mini --version="v1.0-mini" --output_path=../processed
# For the full nuScenes dataset, use the following
python process_data.py --data=./v1.0 --version="v1.0" --output_path=../processed
```
In case you also want a validation set generated (by default this will just produce the training and test sets), replace line 406 in `process_data.py` with:
```
val_scene_names = val_scenes
```
## Scripts ##
## Model Training ##
### Pedestrian Dataset ###
To train a model on the ETH and UCY Pedestrian datasets, you can execute a version of the following command from within the `trajectron/` directory.
```
python train.py --eval_every 10 --vis_every 1 --train_data_dict <dataset>_train.pkl --eval_data_dict <dataset>_val.pkl --offline_scene_graph yes --preprocess_workers 5 --log_dir ../experiments/pedestrians/models --log_tag <desired tag> --train_epochs 100 --augment --conf <desired model configuration>
```
Run any of these with a `-h` or `--help` flag to see all available command arguments.
* `code/train.py` - Trains a new Trajectron++ model.
* `code/notebooks/run_eval.bash` - Evaluates the performance of the Trajectron++. This script mainly collects evaluation data, which can then be visualized with `code/notebooks/NuScenes Quantitative.ipynb`.
* `data/nuScenes/process_nuScenes.py` - Processes the nuScenes dataset into a format that the Trajectron++ can directly work with, following our internal structures for handling data (see `code/data` for more information).
* `code/notebooks/NuScenes Qualitative.ipynb` - Visualizes the predictions that the Trajectron++ makes.
For example, a fully fleshed-out version of this command to train a model without dynamics integration for evaluation on the ETH - University scene would look like:
```
python train.py --eval_every 10 --vis_every 1 --train_data_dict eth_train.pkl --eval_data_dict eth_val.pkl --offline_scene_graph yes --preprocess_workers 5 --log_dir ../experiments/pedestrians/models --log_tag _eth_vel_ar3 --train_epochs 100 --augment --conf ../experiments/pedestrians/models/eth_vel/config.json
```
This command trains a new Trajectron++ model which will be evaluated every 10 epochs, has a few outputs visualized in Tensorboard every epoch, uses the `eth_train.pkl` file as the source of training data (which actually contains the four other datasets, since we train using a leave-one-out scheme), and evaluates the partially-trained models on the data within `eth_val.pkl`. Further options specify that we want to perform a bit of preprocessing to make training as fast as possible (`--offline_scene_graph yes`), use 5 threads to parallelize data loading, save trained models and Tensorboard logs to `../experiments/pedestrians/models`, mark the created log directory with an additional `_eth_vel_ar3` at the end, run training for 100 epochs, augment the dataset with rotations (`--augment`), and use the same model configuration as in the model we previously trained for the ETH dataset without any dynamics integration (`--conf ../experiments/pedestrians/models/eth_vel/config.json`).
If you wanted to train a model _with_ dynamics integration for the ETH - University scene, then you would instead run:
```
python train.py --eval_every 10 --vis_every 1 --train_data_dict eth_train.pkl --eval_data_dict eth_val.pkl --offline_scene_graph yes --preprocess_workers 5 --log_dir ../experiments/pedestrians/models --log_tag _eth_ar3 --train_epochs 100 --augment --conf ../experiments/pedestrians/models/eth_attention_radius_3/config.json
```
where the only difference is the sourced model configuration (now from `../experiments/pedestrians/models/eth_attention_radius_3/config.json`). Our codebase is set up such that hyperparameters are saved in a json file every time a model is trained, so that you don't have to remember what settings you use when you end up training many models in parallel!
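As a minimal sketch of how such a saved configuration can be reused (assuming only that it is a plain JSON file, like the `config.json` shown elsewhere in this commit), you could load and inspect it like this; the path and keys in the usage comment are illustrative:

```python
import json


def load_model_config(path):
    """Load a saved Trajectron++ hyperparameter file (plain JSON)."""
    with open(path, "r") as f:
        return json.load(f)


# Illustrative usage; point this at a config.json saved by train.py.
# hyperparams = load_model_config("../experiments/pedestrians/models/eth_vel/config.json")
# print(hyperparams["batch_size"], hyperparams["prediction_horizon"])
```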
Commands like these would be used for all of the scenes in the ETH and UCY datasets (the options being `eth`, `hotel`, `univ`, `zara1`, and `zara2`). The only change would be what `train_data_dict`, `eval_data_dict`, `log_tag`, and configuration file (`conf`) you wish to use.
### nuScenes Dataset ###
To train a model on the nuScenes dataset, you can execute one of the following commands from within the `trajectron/` directory, depending on the model version you desire.
| Model | Command |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Base | `python train.py --eval_every 1 --vis_every 1 --conf ../experiments/nuScenes/models/vel_ee/config.json --train_data_dict nuScenes_train_full.pkl --eval_data_dict nuScenes_val_full.pkl --offline_scene_graph yes --preprocess_workers 10 --batch_size 256 --log_dir ../experiments/nuScenes/models --train_epochs 20 --node_freq_mult_train --log_tag _vel_ee --augment` |
| +Dynamics Integration | `python train.py --eval_every 1 --vis_every 1 --conf ../experiments/nuScenes/models/int_ee/config.json --train_data_dict nuScenes_train_full.pkl --eval_data_dict nuScenes_val_full.pkl --offline_scene_graph yes --preprocess_workers 10 --batch_size 256 --log_dir ../experiments/nuScenes/models --train_epochs 20 --node_freq_mult_train --log_tag _int_ee --augment` |
| +Dynamics Integration, Maps | `python train.py --eval_every 1 --vis_every 1 --conf ../experiments/nuScenes/models/int_ee_me/config.json --train_data_dict nuScenes_train_full.pkl --eval_data_dict nuScenes_val_full.pkl --offline_scene_graph yes --preprocess_workers 10 --batch_size 256 --log_dir ../experiments/nuScenes/models --train_epochs 20 --node_freq_mult_train --log_tag _int_ee_me --map_encoding --augment` |
| +Dynamics Integration, Maps, Robot Future | `python train.py --eval_every 1 --vis_every 1 --conf ../experiments/nuScenes/models/robot/config.json --train_data_dict nuScenes_train_full.pkl --eval_data_dict nuScenes_val_full.pkl --offline_scene_graph yes --preprocess_workers 10 --batch_size 256 --log_dir ../experiments/nuScenes/models --train_epochs 20 --node_freq_mult_train --log_tag _robot --incl_robot_node --map_encoding` |
In case you also want to produce the version of our model that was trained without the ego-vehicle (first row of Table 4 (b) in the paper), then run the command from the third row of the table above, but change line 132 of `train.py` to:
```
return_robot=False)
```
### CPU Training ###
By default, our training script assumes access to a GPU. If you want to train on a CPU, comment out line 38 in `train.py` and add `--device cpu` to the training command.
## Model Evaluation ##
### Pedestrian Datasets ###
To evaluate a trained model, you can execute a version of the following command from within the `experiments/pedestrians` directory.
```
python evaluate.py --model <model directory> --checkpoint <epoch number> --data ../processed/<dataset>_test.pkl --output_path results --output_tag <dataset>_<vel if no integration>_12 --node_type PEDESTRIAN
```
For example, a fully fleshed-out version of this command to evaluate a model without dynamics integration on the ETH - University scene would look like:
```
python evaluate.py --model models/eth_vel --checkpoint 100 --data ../processed/eth_test.pkl --output_path results --output_tag eth_vel_12 --node_type PEDESTRIAN
```
The same for a model with dynamics integration would look like:
```
python evaluate.py --model models/eth_attention_radius_3 --checkpoint 100 --data ../processed/eth_test.pkl --output_path results --output_tag eth_12 --node_type PEDESTRIAN
```
These scripts will produce csv files in the `results` directory which can then be analyzed in the `Result Analysis.ipynb` notebook.
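If you prefer to inspect the csv files outside the notebook, a minimal sketch like the following works; the column names (`metric`, `value`) are an assumption and should be checked against the headers that `evaluate.py` actually writes:

```python
import csv
import statistics


def mean_metric(csv_path, metric_name):
    """Average one metric (e.g. displacement error) over all rows of a results csv."""
    values = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["metric"] == metric_name:  # hypothetical column names
                values.append(float(row["value"]))
    return statistics.mean(values)
```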
### nuScenes Dataset ###
If you just want to use a trained model to generate trajectories and plot them, you can do this in the `NuScenes Qualitative.ipynb` notebook.
To evaluate a trained model's performance on forecasting vehicles, you can execute one of the following commands from within the `experiments/nuScenes` directory.
| Model | Command |
|-------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Base | `python evaluate.py --model models/vel_ee --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag vel_ee --node_type VEHICLE --prediction_horizon 6` |
| +Dynamics Integration | `python evaluate.py --model models/int_ee --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag int_ee --node_type VEHICLE --prediction_horizon 6` |
| +Dynamics Integration, Maps | `python evaluate.py --model models/int_ee_me --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag int_ee_me --node_type VEHICLE --prediction_horizon 6` |
| +Dynamics Integration, Maps, Robot Future | `python evaluate.py --model models/robot --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag robot --node_type VEHICLE --prediction_horizon 6` |
If you instead wanted to evaluate a trained model's performance on forecasting pedestrians, you can execute one of the following.
| Model | Command |
|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Base | `python evaluate.py --model models/ee_vel --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag vel_ee_ped --node_type PEDESTRIAN --prediction_horizon 6` |
| +Dynamics Integration, Maps | `python evaluate.py --model models/int_ee_me --checkpoint=12 --data ../processed/nuScenes_test_full.pkl --output_path results --output_tag int_ee_me_ped --node_type PEDESTRIAN --prediction_horizon 6` |
These scripts will produce csv files in the `results` directory which can then be analyzed in the `NuScenes Quantitative.ipynb` notebook.
## Datasets ##
A sample of fully-processed scenes from the nuScenes dataset is available in this repository, in `data/processed`.
### ETH and UCY Pedestrian Datasets ###
Preprocessed ETH and UCY datasets are available in this repository, under `experiments/pedestrians/raw` (e.g., `raw/eth/train`). The train/validation/test splits are the same as those found in [Social GAN](https://github.com/agrimgupta92/sgan).
If you want the *original* nuScenes dataset, you can find it here: [nuScenes Dataset](https://www.nuscenes.org/).
If you want the *original* ETH or UCY datasets, you can find them here: [ETH Dataset](http://www.vision.ee.ethz.ch/en/datasets/) and [UCY Dataset](https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data).
### nuScenes Dataset ###
If you only want to evaluate models (e.g., produce trajectories and plot them), then the nuScenes mini dataset should be fine. If you want to train a model, then the full nuScenes dataset is required. In either case, you can find them on the [dataset website](https://www.nuscenes.org/).


@ -1,108 +0,0 @@
{
"batch_size": 256,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.002,
"min_learning_rate": 0.0005,
"learning_decay_rate": 0.9995,
"prediction_horizon": 6,
"minimum_history_length": 1,
"maximum_history_length": 8,
"map_context": 120,
"map_enc_num_layers": 4,
"map_enc_hidden_size": 512,
"map_enc_output_size": 512,
"map_enc_dropout": 0.5,
"alpha": 1,
"k": 30,
"k_eval": 200,
"use_iwae": false,
"kl_exact": true,
"kl_min": 0.07,
"kl_weight": 5.0,
"kl_weight_start": 0,
"kl_decay_rate": 0.99995,
"kl_crossover": 500,
"kl_sigmoid_divisor": 4,
"inf_warmup": 1.0,
"inf_warmup_start": 1.0,
"inf_warmup_crossover": 1500,
"inf_warmup_sigmoid_divisor": 4,
"rnn_kwargs": {
"dropout_keep_prob": 0.5
},
"MLP_dropout_keep_prob": 0.9,
"rnn_io_dropout_keep_prob": 1.0,
"enc_rnn_dim_multiple_inputs": 8,
"enc_rnn_dim_edge": 8,
"enc_rnn_dim_edge_influence": 8,
"enc_rnn_dim_history": 32,
"enc_rnn_dim_future": 32,
"dec_rnn_dim": 512,
"dec_GMM_proj_MLP_dims": null,
"sample_model_during_dec": true,
"dec_sample_model_prob_start": 1.00,
"dec_sample_model_prob_final": 1.00,
"dec_sample_model_prob_crossover": 200,
"dec_sample_model_prob_divisor": 4,
"q_z_xy_MLP_dims": null,
"p_z_x_MLP_dims": 32,
"fuzz_factor": 0.05,
"GMM_components": 12,
"log_sigma_min": -10,
"log_sigma_max": 10,
"log_p_yt_xz_max": 50,
"N": 2,
"K": 5,
"tau_init": 2.0,
"tau_final": 0.05,
"tau_decay_rate": 0.997,
"use_z_logit_clipping": true,
"z_logit_clip_start": 0.05,
"z_logit_clip_final": 5.0,
"z_logit_clip_crossover": 500,
"z_logit_clip_divisor": 5,
"state": {
"PEDESTRIAN": {
"position": ["x", "y"],
"velocity": ["x", "y"],
"acceleration": ["x", "y"],
"heading": ["value"]
},
"BICYCLE": {
"position": ["x", "y"],
"velocity": ["x", "y", "m"],
"acceleration": ["x", "y", "m"],
"heading": ["value"]
},
"VEHICLE": {
"position": ["x", "y"],
"velocity": ["x", "y", "m"],
"acceleration": ["x", "y", "m"],
"heading": ["value"]
}
},
"pred_state": {
"PEDESTRIAN": {
"velocity": ["x", "y"]
},
"BICYCLE": {
"velocity": ["x", "y"]
},
"VEHICLE": {
"velocity": ["x", "y"]
}
},
"log_histograms": false
}


@ -1,5 +0,0 @@
from .data_structures import Position, Velocity, Acceleration, Orientation, Map, ActuatorAngle, Scalar
from .scene import Scene
from .node import Node, BicycleNode
from .scene_graph import TemporalSceneGraph
from .environment import Environment


@ -1,163 +0,0 @@
import numpy as np
from scipy.ndimage.interpolation import rotate
class MotionEntity(object):
def __init__(self, x, y, z=None):
self.x = x
self.y = y
self.z = z
self.m = None
@property
def l(self):
if self.z is not None:
return np.linalg.norm(np.vstack((self.x, self.y, self.z)), axis=0)
else:
return np.linalg.norm(np.vstack((self.x, self.y)), axis=0)
class Position(MotionEntity):
pass
class Velocity(MotionEntity):
@staticmethod
def from_position(position, dt=1):
dx = np.zeros_like(position.x) * np.nan
dx[~np.isnan(position.x)] = np.gradient(position.x[~np.isnan(position.x)], dt)
dy = np.zeros_like(position.y) * np.nan
dy[~np.isnan(position.y)] = np.gradient(position.y[~np.isnan(position.y)], dt)
if position.z is not None:
dz = np.zeros_like(position.z) * np.nan
dz[~np.isnan(position.z)] = np.gradient(position.z[~np.isnan(position.z)], dt)
else:
dz = None
return Velocity(dx, dy, dz)
class Acceleration(MotionEntity):
@staticmethod
def from_velocity(velocity, dt=1):
ddx = np.zeros_like(velocity.x) * np.nan
ddx[~np.isnan(velocity.x)] = np.gradient(velocity.x[~np.isnan(velocity.x)], dt)
ddy = np.zeros_like(velocity.y) * np.nan
ddy[~np.isnan(velocity.y)] = np.gradient(velocity.y[~np.isnan(velocity.y)], dt)
if velocity.z is not None:
ddz = np.zeros_like(velocity.z) * np.nan
ddz[~np.isnan(velocity.z)] = np.gradient(velocity.z[~np.isnan(velocity.z)], dt)
else:
ddz = None
return Acceleration(ddx, ddy, ddz)
class ActuatorAngle(object):
def __init__(self):
pass
class Scalar(object):
def __init__(self, value):
self.value = value
self.derivative = None
# TODO Finish
class Orientation(object):
def __init__(self, x, y, z, w):
self.x = x
self.y = y
self.z = z
self.w = w
class Map(object):
def __init__(self, data=None, homography=None, description=None, data_file=""):
self.data = data
self.homography = homography
self.description = description
self.uint = False
self.data_file = data_file
self.rotated_maps_origin = None
self.rotated_maps = None
if self.data.dtype == np.uint8:
self.uint = True
@property
def fdata(self):
if self.uint:
return self.data / 255.
else:
return self.data
def to_map_points(self, world_pts):
org_shape = None
if len(world_pts.shape) > 2:
org_shape = world_pts.shape
world_pts = world_pts.reshape((-1, 2))
N, dims = world_pts.shape
points_with_one = np.ones((dims + 1, N))
points_with_one[:dims] = world_pts.T
map_points = (self.homography @ points_with_one).T[..., :dims] # TODO There was np.fliplr here for pedestrian dataset. WHY?
if org_shape is not None:
map_points = map_points.reshape(org_shape)
return map_points
def to_rotated_map_points(self, world_pts, rotation_angle):
rotation_rad = -rotation_angle * np.pi / 180
rot_mat = np.array([[np.cos(rotation_rad), np.sin(rotation_rad), 0.],
[-np.sin(rotation_rad), np.cos(rotation_rad), 0.],
[0., 0., 1.]])
org_map_points = self.to_map_points(world_pts) + 1
org_shape = None
if len(org_map_points.shape) > 2:
org_shape = org_map_points.shape
org_map_points = org_map_points.reshape((-1, 2))
N, dims = org_map_points.shape
points_with_one = np.ones((dims + 1, N))
points_with_one[:dims] = org_map_points.T
org_map_pts_rot = (rot_mat @ points_with_one).T[..., :dims]
if org_shape is not None:
org_map_pts_rot = org_map_pts_rot.reshape(org_shape)
map_pts_rot = self.rotated_maps_origin + org_map_pts_rot
return map_pts_rot
def calculate_rotations(self):
org_shape = self.data.shape
l = (np.ceil(np.sqrt(org_shape[0]**2 + org_shape[1]**2)) * 2).astype(int) + 1
rotated_maps = np.zeros((360, l, l, org_shape[2]), dtype=np.uint8)
o = np.array([l // 2, l // 2])
rotated_maps[0, o[0]+1:o[0]+org_shape[0]+1, o[1]+1:o[1]+org_shape[1]+1] = self.data
for i in range(1, 360):
rotated_maps[i] = rotate(rotated_maps[0], reshape=False, angle=i, prefilter=False)
rotated_maps[0] = rotate(rotated_maps[0], reshape=False, angle=0, prefilter=False)
self.rotated_maps_origin = o
self.rotated_maps = rotated_maps
# def __getstate__(self):
# with open(self.data_file, 'w') as f:
# np.save(f, self.rotated_maps)
# self.rotated_maps = None
# state = self.__dict__.copy()
# return state
#
# def __setstate__(self, state):
# self.__dict__.update(state)
# with open(self.data_file, 'r') as f:
# self.rotated_maps = np.load(f)
if __name__ == "__main__":
img = np.zeros((103, 107, 3))
img[57, 84] = 255.
homography = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
m = Map(data=img, homography=homography)
m.calculate_rotations()
t = m.to_rotated_map_points(np.array([[57, 84]]), 0).astype(int)
print(m.rotated_maps[0, t[0, 0], t[0, 1]])


@ -1,264 +0,0 @@
import numpy as np
from pyquaternion import Quaternion
from . import ActuatorAngle
from scipy.interpolate import splrep, splev, CubicSpline
from scipy.integrate import cumtrapz
class Node(object):
def __init__(self, type, position=None, velocity=None, acceleration=None, heading=None, orientation=None,
length=None, width=None, height=None, first_timestep=0, is_robot=False):
self.type = type
self.position = position
self.heading = heading
self.length = length
self.width = width
self.height = height
self.orientation = orientation
self.velocity = velocity
self.acceleration = acceleration
self.first_timestep = first_timestep
self.dimensions = ['x', 'y', 'z']
self.is_robot = is_robot
self._last_timestep = None
self.description = ""
def __repr__(self):
return self.type.name
def scene_ts_to_node_ts(self, scene_ts):
"""
Transforms timestamp from scene into timeframe of node data.
:param scene_ts: Scene timesteps
:return: ts: Transformed timesteps, paddingl: Number of timesteps in scene range which are not available in
node data before data is available. paddingu: Number of timesteps in scene range which are not
available in node data after data is available.
"""
paddingl = (self.first_timestep - scene_ts[0]).clip(0)
paddingu = (scene_ts[1] - self.last_timestep).clip(0)
ts = np.array(scene_ts).clip(min=self.first_timestep, max=self.last_timestep) - self.first_timestep
return ts, paddingl, paddingu
def history_points_at(self, ts):
"""
Number of history points in trajectory. Timestep is exclusive.
:param ts: Scene timestep where the number of history points are queried.
:return: Number of history timesteps.
"""
return ts - self.first_timestep
def get_entity(self, ts_scene, entity, dims, padding=np.nan):
if ts_scene.size == 1:
ts_scene = np.array([ts_scene, ts_scene])
length = ts_scene[1] - ts_scene[0] + 1 # ts is inclusive
entity_array = np.zeros((length, len(dims))) * padding
ts, paddingl, paddingu = self.scene_ts_to_node_ts(ts_scene)
entity_array[paddingl:length - paddingu] = np.array([getattr(getattr(self, entity), d)[ts[0]:ts[1]+1] for d in dims]).T
return entity_array
def get(self, ts_scene, state, padding=np.nan):
return np.hstack([self.get_entity(ts_scene, entity, dims, padding) for entity, dims in state.items()])
@property
def timesteps(self):
return self.position.x.size
@property
def last_timestep(self):
if self._last_timestep is None:
self._last_timestep = self.first_timestep + self.timesteps - 1
return self._last_timestep
class BicycleNode(Node):
def __init__(self, type, position=None, velocity=None, acceleration=None, heading=None, orientation=None,
length=None, width=None, height=None, first_timestep=0, actuator_angle=None):
super().__init__(type, position=position, velocity=velocity, acceleration=acceleration, heading=heading,
orientation=orientation, length=length, width=width, height=height,
first_timestep=first_timestep)
self.actuator_angle = actuator_angle
# TODO Probably wrong. The differential of the magnitude is not equal to the magnitude of the differentials
def calculate_steering_angle_old(self, vel_tresh=0.0):
vel = np.linalg.norm(np.hstack((np.expand_dims(self.velocity.x, 1), np.expand_dims(self.velocity.y, 1))), axis=1)
beta = np.arctan2(self.velocity.y, self.velocity.x) - self.heading.value
beta[vel < vel_tresh] = 0.
steering_angle = np.arctan2(2 * np.sin(beta), np.cos(beta))
steering_angle[np.abs(steering_angle) > np.pi / 2] = 0 # Velocity Outlier
aa = ActuatorAngle()
aa.steering_angle = np.zeros_like(np.arctan2(2 * np.sin(beta), np.cos(beta)))
self.actuator_angle = aa
def calculate_steering_angle(self, dt, steering_tresh=0.0, vel_tresh=0.0):
t = np.arange(0, self.timesteps * dt, dt)
s = 0.01 * len(t)
#c_pos_x_g_x_tck = CubicSpline(t, np.array(pos_x_filtert))
#c_pos_y_g_x_tck = CubicSpline(t, np.array(pos_y_filtert))
#c_pos_x_g_x_tck = splrep(t, self.position.x, s=s)
#c_pos_y_g_x_tck = splrep(t, self.position.y, s=s)
#vel_x_g = c_pos_x_g_x_tck(t, 1)
#vel_y_g = c_pos_y_g_x_tck(t, 1)
#vel_x_g = splev(t, c_pos_x_g_x_tck, der=1)
#vel_y_g = splev(t, c_pos_y_g_x_tck, der=1)
vel_x_g = self.velocity.x
vel_y_g = self.velocity.y
v_x_ego = []
h = []
for t in range(self.timesteps):
dh_max = 1.2 / self.length * (np.linalg.norm(np.array([vel_x_g[t], vel_y_g[t]])))
heading = np.arctan2(vel_y_g[t], vel_x_g[t])
#if len(h) > 0 and np.abs(heading - h[-1]) > dh_max:
# heading = h[-1]
h.append(heading)
q = Quaternion(axis=(0.0, 0.0, 1.0), radians=heading)
v_x_ego_t = q.inverse.rotate(np.array([vel_x_g[t], vel_y_g[t], 1]))[0]
if v_x_ego_t < 0.0:
v_x_ego_t = 0.
v_x_ego.append(v_x_ego_t)
v_x_ego = np.stack(v_x_ego, axis=0)
h = np.stack(h, axis=0)
dh = np.gradient(h, dt)
sa = np.arctan2(dh * self.length, v_x_ego)
sa[(dh == 0.) | (v_x_ego == 0.)] = 0.
sa = sa.clip(min=-steering_tresh, max=steering_tresh)
a = np.gradient(v_x_ego, dt)
# int = self.integrate_bicycle_model(np.array([a]),
# sa,
# np.array([h[0]]),
# np.array([self.position.x[0],
# self.position.y[0]]),
# v_x_ego[0],
# self.length, 0.5)
# p = np.stack((self.position.x, self.position.y), axis=1)
#
# #assert ((int[0] - p) < 1.0).all()
aa = ActuatorAngle()
aa.steering_angle = sa
self.acceleration.m = a
self.actuator_angle = aa
def inverse_np_gradient(self, f, dx, F0=0.):
N = f.shape[0]
l = f.shape[-1]
l2 = np.ceil(l / 2).astype(int)
return (F0 +
((2 * dx) *
np.c_['-1',
np.r_['-1', np.zeros((N, 1)), f[..., 1:-1:2].cumsum(axis=-1)],
f[..., ::2].cumsum(axis=-1) - f[..., [0]] / 2]
).reshape((N, 2, l2)).reshape(N, 2 * l2, order='F')[:, :l]
)
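`inverse_np_gradient` above inverts `np.gradient` along the last axis by interleaving two cumulative sums. A self-contained sketch of the same trick, round-tripped against `np.gradient` (for this construction the recovery is exact):

```python
import numpy as np

def inverse_np_gradient(f, dx, F0=0.0):
    # Invert np.gradient along the last axis. Even output samples accumulate the
    # interior odd-index differences; odd output samples accumulate the
    # even-index differences, offset by -f[0]/2. Interleave the two streams.
    N, l = f.shape[0], f.shape[-1]
    l2 = int(np.ceil(l / 2))
    odd = np.concatenate([np.zeros((N, 1)), f[..., 1:-1:2].cumsum(axis=-1)], axis=-1)
    even = f[..., ::2].cumsum(axis=-1) - f[..., [0]] / 2
    interleaved = np.concatenate([odd, even], axis=-1)
    return F0 + 2 * dx * interleaved.reshape((N, 2, l2)).reshape(N, 2 * l2, order='F')[:, :l]

# Round trip: differentiate a trajectory, then integrate it back.
t = np.linspace(0.0, 1.0, 10)
F = np.sin(2 * np.pi * t)[np.newaxis, :]            # shape (1, 10)
f = np.gradient(F, t[1] - t[0], axis=-1)            # central differences inside, one-sided at the ends
F_rec = inverse_np_gradient(f, dx=t[1] - t[0], F0=F[0, 0])
print(np.allclose(F, F_rec))                        # True
```

The Fortran-order reshape is what interleaves the two cumulative-sum rows back into a single time series per batch element.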
def integrate_trajectory(self, v, x0, dt):
xd_ = self.inverse_np_gradient(v[..., 0], dx=dt, F0=x0[0])
yd_ = self.inverse_np_gradient(v[..., 1], dx=dt, F0=x0[1])
integrated = np.stack([xd_, yd_], axis=2)
return integrated
def integrate_bicycle_model(self, a, sa, h0, x0, v0, l, dt):
v_m = self.inverse_np_gradient(a, dx=dt, F0=v0)  # use the supplied timestep, not a hard-coded 0.5
dh = (np.tan(sa) / l) * v_m[0]
h = self.inverse_np_gradient(np.array([dh]), dx=dt, F0=h0)
vx = np.cos(h) * v_m
vy = np.sin(h) * v_m
v = np.stack((vx, vy), axis=2)
return self.integrate_trajectory(v, x0, dt)
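`integrate_bicycle_model` integrates acceleration and steering through the kinematic bicycle model (heading rate `dh = v * tan(sa) / wheelbase`). A forward-Euler sketch of the same dynamics; `simulate_bicycle` is a hypothetical helper for illustration, not part of the codebase:

```python
import numpy as np

def simulate_bicycle(x0, y0, h0, v0, a, sa, length, dt):
    # Forward-Euler rollout of the kinematic bicycle model:
    # v' = a, h' = v * tan(steering_angle) / wheelbase, x' = v cos(h), y' = v sin(h).
    x, y, h, v = x0, y0, h0, v0
    xs, ys = [x], [y]
    for a_t, sa_t in zip(a, sa):
        v += a_t * dt
        h += (v * np.tan(sa_t) / length) * dt
        x += v * np.cos(h) * dt
        y += v * np.sin(h) * dt
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Zero steering at constant speed yields a straight line along the heading.
xs, ys = simulate_bicycle(0.0, 0.0, 0.0, 5.0,
                          a=np.zeros(10), sa=np.zeros(10), length=4.0, dt=0.5)
print(xs[-1], ys[-1])  # 25.0 0.0
```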
def calculate_steering_angle_keep(self, dt, steering_tresh=0.0, vel_tresh=0.0):
vel_approx = np.linalg.norm(np.stack((self.velocity.x, self.velocity.y), axis=0), axis=0)
mask = np.ones_like(vel_approx)
mask[vel_approx < vel_tresh] = 0
t = np.arange(0, self.timesteps * dt, dt)
pos_x_filtert = []
pos_y_filtert = []
s = None
for i in range(mask.size):
if mask[i] == 0 and s is None:
s = i
elif mask[i] != 0 and s is not None:
t_start = t[s-1]
pos_x_start = self.position.x[s-1]
pos_y_start = self.position.y[s-1]
t_mean = t[s:i].mean()
pos_x_mean = self.position.x[s:i].mean()
pos_y_mean = self.position.y[s:i].mean()
t_end = t[i]
pos_x_end = self.position.x[i]
pos_y_end = self.position.y[i]
for step in range(s, i+1):
if t[step] <= t_mean:
pos_x_filtert.append(pos_x_start + ((t[step] - t_start) / (t_mean - t_start)) * (pos_x_mean - pos_x_start))
pos_y_filtert.append(pos_y_start + ((t[step] - t_start) / (t_mean - t_start)) * (pos_y_mean - pos_y_start))
else:
pos_x_filtert.append(pos_x_mean + ((t[step] - t_end) / (t_end - t_mean)) * (pos_x_end - pos_x_mean))
pos_y_filtert.append(pos_y_mean + ((t[step] - t_end) / (t_end - t_mean)) * (pos_y_end - pos_y_mean))
s = None
elif mask[i] != 0 and s is None:
pos_x_filtert.append(self.position.x[i])
pos_y_filtert.append(self.position.y[i])
if s is not None:
t_start = t[s - 1]
pos_x_start = self.position.x[s - 1]
pos_y_start = self.position.y[s - 1]
t_mean = t[s:i].mean()  # match the .mean() used in the main loop branch
pos_x_mean = self.position.x[s:i].mean()
pos_y_mean = self.position.y[s:i].mean()
for step in range(s, i+1):
pos_x_filtert.append(
pos_x_start + ((t[step] - t_start) / (t_mean - t_start)) * (pos_x_mean - pos_x_start))
pos_y_filtert.append(
pos_y_start + ((t[step] - t_start) / (t_mean - t_start)) * (pos_y_mean - pos_y_start))
s = 0.001 * len(t)
#c_pos_x_g_x_tck = CubicSpline(t, np.array(pos_x_filtert))
#c_pos_y_g_x_tck = CubicSpline(t, np.array(pos_y_filtert))
c_pos_x_g_x_tck = splrep(t, np.array(pos_x_filtert), s=s)
c_pos_y_g_x_tck = splrep(t, np.array(pos_y_filtert), s=s)
#vel_x_g = c_pos_x_g_x_tck(t, 1)
#vel_y_g = c_pos_y_g_x_tck(t, 1)
vel_x_g = splev(t, c_pos_x_g_x_tck, der=1)
vel_y_g = splev(t, c_pos_y_g_x_tck, der=1)
v_x_ego = []
h = []
for ts in range(self.timesteps):  # ts, to avoid shadowing the time array t used by splev above
dh_max = 1.19 / self.length * np.linalg.norm(np.array([vel_x_g[ts], vel_y_g[ts]]))
heading = np.arctan2(vel_y_g[ts], vel_x_g[ts])
if len(h) > 0 and np.abs(heading - h[-1]) > dh_max:
heading = h[-1]
h.append(heading)
q = Quaternion(axis=(0.0, 0.0, 1.0), radians=heading)
v_x_ego_t = q.inverse.rotate(np.array([vel_x_g[ts], vel_y_g[ts], 1]))[0]
if v_x_ego_t < 0.0:
v_x_ego_t = 0.
v_x_ego.append(v_x_ego_t)
v_x_ego = np.stack(v_x_ego, axis=0)
h = np.stack(h, axis=0)
dh = np.gradient(h, dt)
sa = np.arctan2(dh * self.length, v_x_ego)
sa[dh == 0.] = 0.
aa = ActuatorAngle()
aa.steering_angle = sa
self.actuator_angle = aa
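Both steering-angle routines recover the steering angle from the bicycle-model relation `dh = v * tan(sa) / L`, i.e. `sa = arctan2(dh * L, v)`. A quick numeric sanity check on a circular path (all numbers are illustrative):

```python
import numpy as np

L_wheelbase = 4.0   # wheelbase (m), illustrative
R = 20.0            # turn radius (m)
v = 10.0            # longitudinal speed (m/s)
dh = v / R          # yaw rate on a circle of radius R

# Same inversion as in calculate_steering_angle above.
sa = np.arctan2(dh * L_wheelbase, v)
print(np.isclose(sa, np.arctan(L_wheelbase / R)))  # True: sa = arctan(L / R) on a circle
```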


@@ -1,102 +0,0 @@
import numpy as np
from .scene_graph import TemporalSceneGraph
class Scene(object):
def __init__(self, map=None, timesteps=0, dt=1, name=""):
self.map = map
self.timesteps = timesteps
self.dt = dt
self.name = name
self.nodes = []
self.robot = None
self.temporal_scene_graph = None
self.description = ""
def get_scene_graph(self, timestep, attention_radius=None, edge_addition_filter=None, edge_removal_filter=None):
if self.temporal_scene_graph is None:
timestep_range = np.array([timestep - len(edge_addition_filter), timestep + len(edge_removal_filter)])
node_pos_dict = dict()
present_nodes = self.present_nodes(np.array([timestep]))
for node in present_nodes[timestep]:
node_pos_dict[node] = np.squeeze(node.get(timestep_range, {'position': ['x', 'y']}))
tsg = TemporalSceneGraph.create_from_temp_scene_dict(node_pos_dict,
attention_radius,
duration=(len(edge_addition_filter) +
len(edge_removal_filter) + 1),
edge_addition_filter=edge_addition_filter,
edge_removal_filter=edge_removal_filter
)
return tsg.to_scene_graph(t=len(edge_addition_filter),
t_hist=len(edge_addition_filter),
t_fut=len(edge_removal_filter))
else:
return self.temporal_scene_graph.to_scene_graph(timestep,
len(edge_addition_filter),
len(edge_removal_filter))
def calculate_scene_graph(self, attention_radius, state, edge_addition_filter=None, edge_removal_filter=None):
timestep_range = np.array([0, self.timesteps-1])
node_pos_dict = dict()
for node in self.nodes:
node_pos_dict[node] = np.squeeze(node.get(timestep_range, {'position': ['x', 'y']}))
self.temporal_scene_graph = TemporalSceneGraph.create_from_temp_scene_dict(node_pos_dict,
attention_radius,
duration=self.timesteps,
edge_addition_filter=edge_addition_filter,
edge_removal_filter=edge_removal_filter)
def length(self):
return self.timesteps * self.dt
def present_nodes(self, timesteps, type=None, min_history_timesteps=0, min_future_timesteps=0, include_robot=True, max_nodes=None, curve=False): # TODO REMOVE
present_nodes = {}
picked_nodes = 0
rand_idx = np.random.choice(len(self.nodes), len(self.nodes), replace=False)
for i in rand_idx:
node = self.nodes[i]
if node.is_robot and not include_robot:
continue
if type is None or node.type == type:
if curve and node.type.name == 'VEHICLE':
if 'curve' not in node.description and np.random.rand() > 0.1:
continue
lower_bound = timesteps - min_history_timesteps
upper_bound = timesteps + min_future_timesteps
mask = (node.first_timestep <= lower_bound) & (upper_bound <= node.last_timestep)
if mask.any():
timestep_indices_present = np.nonzero(mask)[0]
for timestep_index_present in timestep_indices_present:
if timesteps[timestep_index_present] in present_nodes.keys():
present_nodes[timesteps[timestep_index_present]].append(node)
else:
present_nodes[timesteps[timestep_index_present]] = [node]
picked_nodes += 1
if max_nodes is not None and picked_nodes >= max_nodes:
break
if max_nodes is not None and picked_nodes >= max_nodes:
break
return present_nodes
def sample_timesteps(self, batch_size, min_future_timesteps=0):
if batch_size > self.timesteps:
batch_size = self.timesteps
return np.random.choice(np.arange(0, self.timesteps-min_future_timesteps), size=batch_size, replace=False)
def __repr__(self):
return f"Scene: Duration: {self.length()}s," \
f" Nodes: {len(self.nodes)}," \
f" Map: {'Yes' if self.map is not None else 'No'}."


@@ -1,173 +0,0 @@
import numpy as np
from scipy.spatial.distance import pdist, squareform
import scipy.signal as ss
from collections import defaultdict
import warnings
class TemporalSceneGraph(object):
def __init__(self,
edge_radius,
nodes=None,
adj_cube=np.zeros((1, 0, 0)),
weight_cube=np.zeros((1, 0, 0)),
node_type_mat=np.zeros((0, 0)),
edge_scaling=None):
self.edge_radius = edge_radius
self.nodes = nodes
if nodes is None:
self.nodes = np.array([])
self.adj_cube = adj_cube
self.weight_cube = weight_cube
self.node_type_mat = node_type_mat
self.adj_mat = np.max(self.adj_cube, axis=0).clip(max=1.0)
self.edge_scaling = edge_scaling
self.node_index_lookup = None
self.calculate_node_index_lookup()
def calculate_node_index_lookup(self):
node_index_lookup = dict()
for i, node in enumerate(self.nodes):
node_index_lookup[node] = i
self.node_index_lookup = node_index_lookup
def get_num_edges(self, t=0):
return np.sum(self.adj_cube[t]) // 2
def get_index(self, node):
return self.node_index_lookup[node]
@staticmethod
def get_edge_type(n1, n2):
return '-'.join(sorted([str(n1), str(n2)]))
@classmethod
def create_from_temp_scene_dict(cls,
scene_temp_dict,
attention_radius,
duration=1,
edge_addition_filter=None,
edge_removal_filter=None):
"""
Construct a spatiotemporal graph from agent positions in a dataset.
:returns: A TemporalSceneGraph aggregated over the given duration.
"""
nodes = scene_temp_dict.keys()
N = len(nodes)
total_timesteps = duration
position_cube = np.zeros((total_timesteps, N, 2))
adj_cube = np.zeros((total_timesteps, N, N), dtype=np.int8)
dist_cube = np.zeros((total_timesteps, N, N), dtype=float)  # np.float is deprecated
node_type_mat = np.zeros((N, N), dtype=np.int8)
node_attention_mat = np.zeros((N, N), dtype=float)
for node_idx, node in enumerate(nodes):
position_cube[:, node_idx] = scene_temp_dict[node]
node_type_mat[:, node_idx] = node.type.value
for node_idx_from, node_from in enumerate(nodes):
node_attention_mat[node_idx_from, node_idx] = attention_radius[(node_from.type, node.type)]
np.fill_diagonal(node_type_mat, 0)
agg_adj_matrix = np.zeros((N, N), dtype=np.int8)
for timestep in range(position_cube.shape[0]):
dists = squareform(pdist(position_cube[timestep], metric='euclidean'))
# Mark with a 1 every agent pair closer than its pairwise attention radius.
# The comparison can emit a warning because dists may be NaN when a node has
# no data; that is acceptable since `nan <= x` evaluates to False.
with warnings.catch_warnings():
warnings.simplefilter("ignore")
adj_matrix = (dists <= node_attention_mat).astype(np.int8) * node_type_mat
# Remove self-loops.
np.fill_diagonal(adj_matrix, 0)
agg_adj_matrix |= adj_matrix
adj_cube[timestep] = adj_matrix
dist_cube[timestep] = dists
dist_cube[np.isnan(dist_cube)] = 0.
weight_cube = np.divide(1.,
dist_cube,
out=np.zeros_like(dist_cube),
where=(dist_cube > 0.))
edge_scaling = None
if edge_addition_filter is not None and edge_removal_filter is not None:
edge_scaling = cls.calculate_edge_scaling(adj_cube, edge_addition_filter, edge_removal_filter)
sg = cls(attention_radius, np.array(list(nodes)), adj_cube, weight_cube, node_type_mat, edge_scaling=edge_scaling)
return sg
@staticmethod
def calculate_edge_scaling(adj_cube, edge_addition_filter, edge_removal_filter):
new_edges = np.minimum(
ss.convolve(adj_cube, np.reshape(edge_addition_filter, (-1, 1, 1)), 'same'), 1.
)
old_edges = np.minimum(
ss.convolve(adj_cube, np.reshape(edge_removal_filter, (-1, 1, 1)), 'same'), 1.
)
return np.minimum(new_edges + old_edges, 1.)
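`calculate_edge_scaling` convolves the binary adjacency over time with the addition/removal filters so edge influence ramps in and lingers out. A 1-D numpy sketch of the same computation for a single edge; the filter values are illustrative assumptions, and `same_conv` mirrors `scipy.signal.convolve(..., 'same')` alignment:

```python
import numpy as np

e = np.array([0., 0., 1., 1., 1., 1., 0., 0.])  # edge active at timesteps 2..5
addition = [0.25, 0.5, 0.75, 1.0]               # ramp edges in over 4 steps
removal = [1.0, 0.0]                            # keep edges only while present

def same_conv(x, k):
    # 'same'-mode convolution, centered like scipy.signal.convolve.
    full = np.convolve(x, k)
    start = (len(k) - 1) // 2
    return full[start:start + len(x)]

new_edges = np.minimum(same_conv(e, addition), 1.0)
old_edges = np.minimum(same_conv(e, removal), 1.0)
edge_scaling = np.minimum(new_edges + old_edges, 1.0)
print(edge_scaling)  # ramps in at 0.25 before the edge appears, then saturates at 1
```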
def to_scene_graph(self, t, t_hist=0, t_fut=0):
lower_t = np.clip(t-t_hist, a_min=0, a_max=None)
higher_t = np.clip(t + t_fut + 1, a_min=None, a_max=self.adj_cube.shape[0] + 1)
adj_mat = np.max(self.adj_cube[lower_t:higher_t], axis=0)
weight_mat = np.max(self.weight_cube[lower_t:higher_t], axis=0)
return SceneGraph(self.edge_radius,
self.nodes,
adj_mat,
weight_mat,
self.node_type_mat,
self.node_index_lookup,
edge_scaling=self.edge_scaling[t] if self.edge_scaling is not None else None)
class SceneGraph(object):
def __init__(self,
edge_radius,
nodes=None,
adj_mat=np.zeros((0, 0)),
weight_mat=np.zeros((0, 0)),
node_type_mat=np.zeros((0, 0)),
node_index_lookup=None,
edge_scaling=None):
self.edge_radius = edge_radius
self.nodes = nodes
if nodes is None:
self.nodes = np.array([])
self.node_type_mat = node_type_mat
self.adj_mat = adj_mat
self.weight_mat = weight_mat
self.edge_scaling = edge_scaling
self.node_index_lookup = node_index_lookup
def get_index(self, node):
return self.node_index_lookup[node]
def get_neighbors(self, node, type):
node_index = self.get_index(node)
connection_mask = self.adj_mat[node_index].astype(bool)
mask = ((self.node_type_mat[node_index] == type.value) * connection_mask)
return self.nodes[mask]
def get_edge_scaling(self, node=None):
if node is None:
return self.edge_scaling
else:
node_index = self.get_index(node)
return self.edge_scaling[node_index, self.adj_mat[node_index] > 0.]
def get_edge_weight(self, node=None):
if node is None:
return self.weight_mat
else:
node_index = self.get_index(node)
return self.weight_mat[node_index, self.adj_mat[node_index] > 0.]


@@ -1,61 +0,0 @@
import torch
import torch.distributions as td
import numpy as np
from model.model_utils import to_one_hot
class GMM2D(object):
def __init__(self, log_pis, mus, log_sigmas, corrs, pred_state_length, device,
clip_lo=-10, clip_hi=10):
self.device = device
self.pred_state_length = pred_state_length
# input shapes
# pis: [..., GMM_c]
# mus: [..., GMM_c*2]
# sigmas: [..., GMM_c*2]
# corrs: [..., GMM_c]
GMM_c = log_pis.shape[-1]
# Per-component covariance and its Cholesky factor:
#   Sigma = [[s1^2,    p*s1*s2],      L = [[s1,   0              ],
#            [p*s1*s2, s2^2   ]]           [p*s2, sqrt(1-p^2)*s2 ]]
log_pis = log_pis - torch.logsumexp(log_pis, dim=-1, keepdim=True)
mus = self.reshape_to_components(mus, GMM_c) # [..., GMM_c, 2]
log_sigmas = self.reshape_to_components(torch.clamp(log_sigmas, min=clip_lo, max=clip_hi), GMM_c)
sigmas = torch.exp(log_sigmas) # [..., GMM_c, 2]
one_minus_rho2 = 1 - corrs**2 # [..., GMM_c]
self.L1 = sigmas*torch.stack([torch.ones_like(corrs, device=self.device), corrs], dim=-1)
self.L2 = sigmas*torch.stack([torch.zeros_like(corrs, device=self.device), torch.sqrt(one_minus_rho2)], dim=-1)
self.batch_shape = log_pis.shape[:-1]
self.GMM_c = GMM_c
self.log_pis = log_pis # [..., GMM_c]
self.mus = mus # [..., GMM_c, 2]
self.log_sigmas = log_sigmas # [..., GMM_c, 2]
self.sigmas = sigmas # [..., GMM_c, 2]
self.corrs = corrs # [..., GMM_c]
self.one_minus_rho2 = one_minus_rho2 # [..., GMM_c]
self.cat = td.Categorical(logits=log_pis)
def sample(self):
MVN_samples = (self.mus
+ self.L1*torch.unsqueeze(torch.randn_like(self.corrs, device=self.device), dim=-1) # [..., GMM_c, 2]
+ self.L2*torch.unsqueeze(torch.randn_like(self.corrs, device=self.device), dim=-1)) # (manual 2x2 matmul)
cat_samples = self.cat.sample() # [...]
selector = torch.unsqueeze(to_one_hot(cat_samples, self.GMM_c, self.device), dim=-1)
return torch.sum(MVN_samples*selector, dim=-2)
def log_prob(self, x):
# x: [..., 2]
x = torch.unsqueeze(x, dim=-2) # [..., 1, 2]
dx = x - self.mus # [..., GMM_c, 2]
z = (torch.sum((dx/self.sigmas)**2, dim=-1) -
2*self.corrs*torch.prod(dx, dim=-1)/torch.prod(self.sigmas, dim=-1)) # [..., GMM_c]
component_log_p = -(torch.log(self.one_minus_rho2) + 2*torch.sum(self.log_sigmas, dim=-1) +
z/self.one_minus_rho2 +
2*np.log(2*np.pi))/2
return torch.logsumexp(self.log_pis + component_log_p, dim=-1)
def reshape_to_components(self, tensor, GMM_c):
return torch.reshape(tensor, list(tensor.shape[:-1]) + [GMM_c, self.pred_state_length])
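`log_prob` evaluates the bivariate normal mixture density in closed form. A single-component numpy check of that closed form against the generic multivariate normal log-density (the parameter values are arbitrary, chosen only for the check):

```python
import numpy as np

# Bivariate normal parameters (illustrative values).
mu = np.array([1.0, -2.0])
s1, s2, rho = 0.5, 1.5, 0.3
x = np.array([1.3, -1.1])

# Closed form used in GMM2D.log_prob for one component (log_pi = 0).
dx = x - mu
z = (dx[0] / s1) ** 2 + (dx[1] / s2) ** 2 - 2 * rho * dx[0] * dx[1] / (s1 * s2)
log_p = -(np.log(1 - rho ** 2) + 2 * (np.log(s1) + np.log(s2))
          + z / (1 - rho ** 2) + 2 * np.log(2 * np.pi)) / 2

# Reference: generic multivariate normal log-density from the covariance matrix.
cov = np.array([[s1 ** 2, rho * s1 * s2],
                [rho * s1 * s2, s2 ** 2]])
log_ref = (-np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(cov))
           - 0.5 * dx @ np.linalg.inv(cov) @ dx)
print(np.isclose(log_p, log_ref))  # True
```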


@@ -1,20 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
class CNNMapEncoder(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(CNNMapEncoder, self).__init__()
self.conv1 = nn.Conv2d(3, 128, 5, stride=2)
self.conv2 = nn.Conv2d(128, 256, 5, stride=3)
self.conv3 = nn.Conv2d(256, 64, 5, stride=2)
self.fc = nn.Linear(7 * 7 * 64, 512)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = x.view(-1, 7 * 7 * 64)
x = F.relu(self.fc(x))
return x
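The fully connected layer in `CNNMapEncoder` assumes the conv stack flattens to 7 * 7 * 64, which pins the input resolution (note the constructor's `input_size`, `hidden_size`, and `output_size` arguments are unused; all layer sizes are hard-coded). A quick sketch of the unpadded-convolution size arithmetic; 109 is one compatible input size, an assumption rather than a value from the source:

```python
def conv_out(size, kernel, stride):
    # Output size of an unpadded convolution: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

h = 109  # one input resolution consistent with the hard-coded 7*7*64 flatten
for kernel, stride in [(5, 2), (5, 3), (5, 2)]:  # conv1, conv2, conv3 above
    h = conv_out(h, kernel, stride)
print(h)  # 7
```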


@@ -1,454 +0,0 @@
import numpy as np
import torch
from model.node_model import MultimodalGenerativeCVAE
class SpatioTemporalGraphCVAEModel(object):
def __init__(self, model_registrar,
hyperparams, log_writer,
device):
super(SpatioTemporalGraphCVAEModel, self).__init__()
self.hyperparams = hyperparams
self.log_writer = log_writer
self.device = device
self.curr_iter = 0
self.model_registrar = model_registrar
self.node_models_dict = dict()
self.nodes = set()
self.env = None
self.min_hl = self.hyperparams['minimum_history_length']
self.max_hl = self.hyperparams['maximum_history_length']
self.ph = self.hyperparams['prediction_horizon']
self.state = self.hyperparams['state']
self.state_length = dict()
for type in self.state.keys():
self.state_length[type] = int(np.sum([len(entity_dims) for entity_dims in self.state[type].values()]))
self.pred_state = self.hyperparams['pred_state']
def set_scene_graph(self, env):
self.env = env
self.node_models_dict.clear()
edge_types = env.get_edge_types()
for node_type in env.NodeType:
self.node_models_dict[node_type] = MultimodalGenerativeCVAE(env,
node_type,
self.model_registrar,
self.hyperparams,
self.device,
edge_types,
log_writer=self.log_writer)
def set_curr_iter(self, curr_iter):
self.curr_iter = curr_iter
for node_str, model in self.node_models_dict.items():
model.set_curr_iter(curr_iter)
def set_annealing_params(self):
for node_str, model in self.node_models_dict.items():
model.set_annealing_params()
def step_annealers(self):
for node in self.node_models_dict:
self.node_models_dict[node].step_annealers()
def get_input(self, scene, timesteps, node_type, min_future_timesteps, max_nodes=None, curve=False): # curve oversamples curved vehicle trajectories during training
inputs = list()
labels = list()
first_history_indices = list()
nodes = list()
node_scene_graph_batched = list()
timesteps_in_scene = list()
nodes_per_ts = scene.present_nodes(timesteps,
type=node_type,
min_history_timesteps=self.min_hl,
min_future_timesteps=min_future_timesteps,
include_robot=not self.hyperparams['incl_robot_node'],
max_nodes=max_nodes,
curve=curve)
# Get Inputs for each node present in Scene
for timestep in timesteps:
if timestep in nodes_per_ts.keys():
present_nodes = nodes_per_ts[timestep]
timestep_range = np.array([timestep - self.max_hl, timestep + min_future_timesteps])
scene_graph_t = scene.get_scene_graph(timestep,
self.env.attention_radius,
self.hyperparams['edge_addition_filter'],
self.hyperparams['edge_removal_filter'])
for node in present_nodes:
timesteps_in_scene.append(timestep)
input = node.get(timestep_range, self.state[node.type.name])
label = node.get(timestep_range, self.pred_state[node.type.name])
first_history_index = (self.max_hl - node.history_points_at(timestep)).clip(0)
inputs.append(input)
labels.append(label)
first_history_indices.append(first_history_index)
nodes.append(node)
node_scene_graph_batched.append((node, scene_graph_t))
return inputs, labels, first_history_indices, timesteps_in_scene, node_scene_graph_batched, nodes
def train_loss(self, scene, timesteps, max_nodes=None):
losses = dict()
for node_type in self.env.NodeType:
losses[node_type] = []
for node_type in self.env.NodeType:
# Get Input data for node type and given timesteps
(inputs,
labels,
first_history_indices,
timesteps_in_scene,
node_scene_graph_batched, _) = self.get_input(scene,
timesteps,
node_type,
self.ph,
max_nodes=max_nodes,
curve=True)  # oversample curved vehicle trajectories during training
# No nodes of this type are present at the given timesteps
if len(inputs) == 0:
continue
uniform_t = self.max_hl
inputs = np.array(inputs)
labels = np.array(labels)
# Vehicles are rotated into an ego frame so the heading aligns with the x axis
if node_type == self.env.NodeType.VEHICLE:
# transform x y to ego.
pos = inputs[..., 0:2]
pos_org = pos.copy()
vel = inputs[..., 2:4]
acc = inputs[..., 5:7]
heading = inputs[:, uniform_t, -1]
rot_mat = np.zeros((pos.shape[0], pos.shape[1], 3, 3))
rot_mat[:, :, 0, 0] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 0, 1] = np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 0] = -np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 1] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 2, 2] = 1.
pos = pos - pos[:, uniform_t, np.newaxis, :]
pos_with_one = np.ones((pos.shape[0], pos.shape[1], 3, 1))
pos_with_one[:, :, :2] = pos[..., np.newaxis]
pos_rot = np.squeeze(rot_mat @ pos_with_one, axis=-1)[..., :2]
vel_with_one = np.ones((vel.shape[0], vel.shape[1], 3, 1))
vel_with_one[:, :, :2] = vel[..., np.newaxis]
vel_rot = np.squeeze(rot_mat @ vel_with_one, axis=-1)[..., :2]
acc_with_one = np.ones((acc.shape[0], acc.shape[1], 3, 1))
acc_with_one[:, :, :2] = acc[..., np.newaxis]
acc_rot = np.squeeze(rot_mat @ acc_with_one, axis=-1)[..., :2]
inputs[..., 0:2] = pos_rot
inputs[..., 2:4] = vel_rot
inputs[..., 5:7] = acc_rot
l_vel_with_one = np.ones((labels.shape[0], labels.shape[1], 3, 1))
l_vel_with_one[:, :, :2] = labels[..., np.newaxis]
labels = np.squeeze(rot_mat @ l_vel_with_one, axis=-1)[..., :2]
# Standardize, Position is standardized relative to current pos and attention_radius for node_type-node_type
_, std = self.env.get_standardize_params(self.state[node_type.name], node_type=node_type)
# std[0:2] = self.env.attention_radius[(node_type, node_type)]
rel_state = np.array(inputs)[:, uniform_t]
rel_state = np.hstack((rel_state, np.zeros_like(rel_state)))
rel_state = np.expand_dims(rel_state, 1)
std = np.tile(std, 2)
inputs = np.tile(inputs, 2)
inputs[..., self.state_length[node_type.name]:self.state_length[node_type.name]+2] = 0.
inputs_st = self.env.standardize(inputs,
self.state[node_type.name],
mean=rel_state,
std=std,
node_type=node_type)
labels_st = self.env.standardize(labels, self.pred_state[node_type.name], node_type=node_type)
if node_type == self.env.NodeType.VEHICLE:
inputs[..., 0:2] = pos_org
# Convert to torch tensors
inputs = torch.tensor(inputs).float().to(self.device)
inputs_st = torch.tensor(inputs_st).float().to(self.device)
first_history_indices = torch.tensor(first_history_indices).long().to(self.device)
labels = torch.tensor(labels).float().to(self.device)
labels_st = torch.tensor(labels_st).float().to(self.device)
# Run forward pass
model = self.node_models_dict[node_type]
loss = model.train_loss(inputs,
inputs_st,
first_history_indices,
labels,
labels_st,
scene,
node_scene_graph_batched,
timestep=uniform_t,
timesteps_in_scene=timesteps_in_scene,
prediction_horizon=self.ph)
losses[node_type].append(loss)
for node_type in self.env.NodeType:
losses[node_type] = torch.mean(torch.stack(losses[node_type])) if len(losses[node_type]) > 0 else None
return losses
def eval_loss(self, scene, timesteps, max_nodes=None):
losses = dict()
for node_type in self.env.NodeType:
losses[node_type] = {'nll_q_is': list(), 'nll_p': list(), 'nll_exact': list(), 'nll_sampled': list()}
for node_type in self.env.NodeType:
# Get Input data for node type and given timesteps
(inputs,
labels,
first_history_indices,
timesteps_in_scene,
node_scene_graph_batched, _) = self.get_input(scene, timesteps, node_type, self.ph, max_nodes=max_nodes)
# No nodes of this type are present at the given timesteps
if len(inputs) == 0:
continue
uniform_t = self.max_hl
inputs = np.array(inputs)
labels = np.array(labels)
# Vehicles are rotated into an ego frame so the heading aligns with the x axis
if node_type == self.env.NodeType.VEHICLE:
# transform x y to ego.
pos = inputs[..., 0:2]
pos_org = pos.copy()
vel = inputs[..., 2:4]
acc = inputs[..., 5:7]
heading = inputs[:, uniform_t, -1]
rot_mat = np.zeros((pos.shape[0], pos.shape[1], 3, 3))
rot_mat[:, :, 0, 0] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 0, 1] = np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 0] = -np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 1] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 2, 2] = 1.
pos = pos - pos[:, uniform_t, np.newaxis, :]
pos_with_one = np.ones((pos.shape[0], pos.shape[1], 3, 1))
pos_with_one[:, :, :2] = pos[..., np.newaxis]
pos_rot = np.squeeze(rot_mat @ pos_with_one, axis=-1)[..., :2]
vel_with_one = np.ones((vel.shape[0], vel.shape[1], 3, 1))
vel_with_one[:, :, :2] = vel[..., np.newaxis]
vel_rot = np.squeeze(rot_mat @ vel_with_one, axis=-1)[..., :2]
acc_with_one = np.ones((acc.shape[0], acc.shape[1], 3, 1))
acc_with_one[:, :, :2] = acc[..., np.newaxis]
acc_rot = np.squeeze(rot_mat @ acc_with_one, axis=-1)[..., :2]
inputs[..., 0:2] = pos_rot
inputs[..., 2:4] = vel_rot
inputs[..., 5:7] = acc_rot
l_vel_with_one = np.ones((labels.shape[0], labels.shape[1], 3, 1))
l_vel_with_one[:, :, :2] = labels[..., np.newaxis]
labels = np.squeeze(rot_mat @ l_vel_with_one, axis=-1)[..., :2]
# Standardize, Position is standardized relative to current pos and attention_radius for node_type-node_type
_, std = self.env.get_standardize_params(self.state[node_type.name], node_type=node_type)
rel_state = np.array(inputs)[:, uniform_t]
rel_state = np.hstack((rel_state, np.zeros_like(rel_state)))
rel_state = np.expand_dims(rel_state, 1)
std = np.tile(std, 2)
inputs = np.tile(inputs, 2)
inputs[..., self.state_length[node_type.name]:self.state_length[node_type.name]+2] = 0.
inputs_st = self.env.standardize(inputs,
self.state[node_type.name],
mean=rel_state,
std=std,
node_type=node_type)
labels_st = self.env.standardize(labels, self.pred_state[node_type.name], node_type=node_type)
if node_type == self.env.NodeType.VEHICLE:
inputs[..., 0:2] = pos_org
# Convert to torch tensors
inputs = torch.tensor(inputs).float().to(self.device)
inputs_st = torch.tensor(inputs_st).float().to(self.device)
first_history_indices = torch.tensor(first_history_indices).long().to(self.device)
labels = torch.tensor(labels).float().to(self.device)
labels_st = torch.tensor(labels_st).float().to(self.device)
# Run forward pass
model = self.node_models_dict[node_type]
(nll_q_is, nll_p, nll_exact, nll_sampled) = model.eval_loss(inputs,
inputs_st,
first_history_indices,
labels,
labels_st,
scene,
node_scene_graph_batched,
timestep=uniform_t,
timesteps_in_scene=timesteps_in_scene,
prediction_horizon=self.ph)
if nll_q_is is not None:
losses[node_type]['nll_q_is'].append(nll_q_is.cpu().numpy())
losses[node_type]['nll_p'].append(nll_p.cpu().numpy())
losses[node_type]['nll_exact'].append(nll_exact.cpu().numpy())
losses[node_type]['nll_sampled'].append(nll_sampled.cpu().numpy())
return losses
def predict(self,
scene,
timesteps,
ph,
num_samples_z=1,
num_samples_gmm=1,
min_future_timesteps=0,
most_likely_z=False,
most_likely_gmm=False,
all_z=False,
max_nodes=None):
predictions_dict = {}
for node_type in self.env.NodeType:
# Get Input data for node type and given timesteps
(inputs,
labels,
first_history_indices,
timesteps_in_scene,
node_scene_graph_batched,
nodes) = self.get_input(scene, timesteps, node_type, min_future_timesteps, max_nodes=max_nodes)
# No nodes of this type are present at the given timesteps
if len(inputs) == 0:
continue
uniform_t = self.max_hl
inputs = np.array(inputs)
labels = np.array(labels)
# Vehicles are rotated into an ego frame so the heading aligns with the x axis
if node_type == self.env.NodeType.VEHICLE:
# transform x y to ego.
pos = inputs[..., 0:2]
pos_org = pos.copy()
vel = inputs[..., 2:4]
acc = inputs[..., 5:7]
heading = inputs[:, uniform_t, -1]
rot_mat = np.zeros((pos.shape[0], pos.shape[1], 3, 3))
rot_mat[:, :, 0, 0] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 0, 1] = np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 0] = -np.sin(heading)[:, np.newaxis]
rot_mat[:, :, 1, 1] = np.cos(heading)[:, np.newaxis]
rot_mat[:, :, 2, 2] = 1.
pos = pos - pos[:, uniform_t, np.newaxis, :]
pos_with_one = np.ones((pos.shape[0], pos.shape[1], 3, 1))
pos_with_one[:, :, :2] = pos[..., np.newaxis]
pos_rot = np.squeeze(rot_mat @ pos_with_one, axis=-1)[..., :2]
vel_with_one = np.ones((vel.shape[0], vel.shape[1], 3, 1))
vel_with_one[:, :, :2] = vel[..., np.newaxis]
vel_rot = np.squeeze(rot_mat @ vel_with_one, axis=-1)[..., :2]
acc_with_one = np.ones((acc.shape[0], acc.shape[1], 3, 1))
acc_with_one[:, :, :2] = acc[..., np.newaxis]
acc_rot = np.squeeze(rot_mat @ acc_with_one, axis=-1)[..., :2]
inputs[..., 0:2] = pos_rot
inputs[..., 2:4] = vel_rot
inputs[..., 5:7] = acc_rot
l_vel_with_one = np.ones((labels.shape[0], labels.shape[1], 3, 1))
l_vel_with_one[:, :, :2] = labels[..., np.newaxis]
labels = np.squeeze(rot_mat @ l_vel_with_one, axis=-1)[..., :2]
# Standardize, Position is standardized relative to current pos and attention_radius for node_type-node_type
_, std = self.env.get_standardize_params(self.state[node_type.name], node_type=node_type)
rel_state = np.array(inputs)[:, uniform_t]
rel_state = np.hstack((rel_state, np.zeros_like(rel_state)))
rel_state = np.expand_dims(rel_state, 1)
std = np.tile(std, 2)
inputs = np.tile(inputs, 2)
inputs[..., self.state_length[node_type.name]:self.state_length[node_type.name]+2] = 0.
inputs_st = self.env.standardize(inputs,
self.state[node_type.name],
mean=rel_state,
std=std,
node_type=node_type)
labels_st = self.env.standardize(labels, self.pred_state[node_type.name], node_type=node_type)
if node_type == self.env.NodeType.VEHICLE:
inputs[..., 0:2] = pos_org
# Convert to torch tensors
inputs = torch.tensor(inputs).float().to(self.device)
inputs_st = torch.tensor(inputs_st).float().to(self.device)
first_history_indices = torch.tensor(first_history_indices).long().to(self.device)
labels = torch.tensor(labels).float().to(self.device)
labels_st = torch.tensor(labels_st).float().to(self.device)
# Run forward pass
model = self.node_models_dict[node_type]
predictions = model.predict(inputs,
inputs_st,
labels,
labels_st,
first_history_indices,
scene,
node_scene_graph_batched,
timestep=uniform_t,
timesteps_in_scene=timesteps_in_scene,
prediction_horizon=ph,
num_samples_z=num_samples_z,
num_samples_gmm=num_samples_gmm,
most_likely_z=most_likely_z,
most_likely_gmm=most_likely_gmm,
all_z=all_z)
predictions_uns = self.env.unstandardize(predictions.cpu().detach().numpy(),
self.pred_state[node_type.name],
node_type)
# Vehicles were rotated so the heading aligns with the x axis; reverse that rotation for the output
if node_type == self.env.NodeType.VEHICLE:
heading = inputs.cpu().detach().numpy()[:, uniform_t, -1]
rot_mat = np.zeros((predictions_uns.shape[0],
predictions_uns.shape[1],
predictions_uns.shape[2],
predictions_uns.shape[3], 3, 3))
rot_mat[:, :, :, :, 0, 0] = np.cos(-heading)[:, np.newaxis]
rot_mat[:, :, :, :, 0, 1] = np.sin(-heading)[:, np.newaxis]
rot_mat[:, :, :, :, 1, 0] = -np.sin(-heading)[:, np.newaxis]
rot_mat[:, :, :, :, 1, 1] = np.cos(-heading)[:, np.newaxis]
rot_mat[:, :, :, :, 2, 2] = 1.
p_vel_with_one = np.ones((predictions_uns.shape[0],
predictions_uns.shape[1],
predictions_uns.shape[2],
predictions_uns.shape[3], 3, 1))
p_vel_with_one[:, :, :, :, :2] = predictions_uns[..., np.newaxis]
predictions_uns = np.squeeze(rot_mat @ p_vel_with_one, axis=-1)[..., :2]
# Assign predictions to node
for i, ts in enumerate(timesteps_in_scene):
if ts not in predictions_dict:
predictions_dict[ts] = dict()
predictions_dict[ts][nodes[i]] = predictions_uns[:, :, i]
return predictions_dict
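`train_loss`, `eval_loss`, and `predict` all rotate vehicle states by R(-heading) in homogeneous coordinates before standardizing. A minimal numpy check that this maps the heading direction onto the +x axis (the speed and heading values are illustrative):

```python
import numpy as np

heading = np.deg2rad(30.0)
vel = 5.0 * np.array([np.cos(heading), np.sin(heading)])  # 5 m/s along the heading

# R(-heading) in homogeneous coordinates, matching rot_mat above; the appended 1
# is inert for vectors because the matrix carries no translation.
rot = np.array([[np.cos(heading),  np.sin(heading), 0.0],
                [-np.sin(heading), np.cos(heading), 0.0],
                [0.0,              0.0,             1.0]])
vel_ego = (rot @ np.append(vel, 1.0))[:2]
print(np.allclose(vel_ego, [5.0, 0.0]))  # True: heading maps onto the +x axis
```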

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,182 +0,0 @@
import sys
sys.path.append('../../code')
import os
import pickle
import json
import argparse
import torch
import numpy as np
import pandas as pd
from tqdm import tqdm
from model.model_registrar import ModelRegistrar
from model.dyn_stg import SpatioTemporalGraphCVAEModel
import evaluation
from utils import prediction_output_to_trajectories
from scipy.interpolate import RectBivariateSpline
parser = argparse.ArgumentParser()
parser.add_argument("--model", help="model full path", type=str)
parser.add_argument("--checkpoint", help="model checkpoint to evaluate", type=int)
parser.add_argument("--data", help="full path to data file", type=str)
parser.add_argument("--output", help="full path to output csv file", type=str)
parser.add_argument("--node_type", help="Node Type to evaluate", type=str)
parser.add_argument("--prediction_horizon", nargs='+', help="prediction horizon", type=int, default=None)
args = parser.parse_args()
def compute_obs_violations(predicted_trajs, map):
obs_map = 1 - map.fdata[..., 0]
interp_obs_map = RectBivariateSpline(range(obs_map.shape[0]),
range(obs_map.shape[1]),
obs_map,
kx=1, ky=1)
old_shape = predicted_trajs.shape
pred_trajs_map = map.to_map_points(predicted_trajs.reshape((-1, 2)))
traj_obs_values = interp_obs_map(pred_trajs_map[:, 0], pred_trajs_map[:, 1], grid=False)
traj_obs_values = traj_obs_values.reshape((old_shape[0], old_shape[1]))
num_viol_trajs = np.sum(traj_obs_values.max(axis=1) > 0, dtype=float)
return num_viol_trajs
def compute_heading_error(prediction_output_dict, dt, max_hl, ph, node_type_enum, kde=True, obs=False, map=None):
heading_error = list()
for t in prediction_output_dict.keys():
for node in prediction_output_dict[t].keys():
if node.type.name == 'VEHICLE':
gt_vel = node.get(t + ph - 1, {'velocity': ['x', 'y']})[0]
gt_heading = np.arctan2(gt_vel[1], gt_vel[0])
our_heading = np.arctan2(prediction_output_dict[t][node][..., -2, 1], prediction_output_dict[t][node][..., -2, 0])
# Wrap each angular difference to [-pi, pi] before averaging.
he = np.mean(np.abs((gt_heading - our_heading + np.pi) % (2 * np.pi) - np.pi))
heading_error.append(he)
return heading_error
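Heading differences are circular quantities, so wrapping each difference into [-pi, pi) before taking absolute values avoids spurious near-2*pi errors at the branch cut. A small standalone illustration (the helper name is ours):

```python
import numpy as np

def angle_diff(a, b):
    """Smallest signed difference a - b, wrapped into [-pi, pi)."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

# Headings of 179 deg and -179 deg are only 2 deg apart on the circle,
# but a naive absolute difference reports ~358 deg.
a, b = np.deg2rad(179.0), np.deg2rad(-179.0)
naive = np.abs(a - b)               # ~6.25 rad (misleading)
wrapped = np.abs(angle_diff(a, b))  # ~0.035 rad (= 2 deg)
```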
def load_model(model_dir, env, ts=99):
model_registrar = ModelRegistrar(model_dir, 'cpu')
model_registrar.load_models(ts)
with open(os.path.join(model_dir, 'config.json'), 'r') as config_json:
hyperparams = json.load(config_json)
stg = SpatioTemporalGraphCVAEModel(model_registrar,
hyperparams,
None, 'cuda:0')
hyperparams['incl_robot_node'] = False
stg.set_scene_graph(env)
stg.set_annealing_params()
return stg, hyperparams
if __name__ == "__main__":
with open(args.data, 'rb') as f:
env = pickle.load(f, encoding='latin1')
scenes = env.scenes
eval_stg, hyperparams = load_model(args.model, env, ts=args.checkpoint)
print("-- Preparing Node Graph")
for scene in tqdm(scenes):
scene.calculate_scene_graph(hyperparams['edge_radius'],
hyperparams['state'],
hyperparams['edge_addition_filter'],
hyperparams['edge_removal_filter'])
if args.prediction_horizon is None:
args.prediction_horizon = [hyperparams['prediction_horizon']]
for ph in args.prediction_horizon:
print(f"Prediction Horizon: {ph}")
max_hl = hyperparams['maximum_history_length']
node_type = env.NodeType[args.node_type]
print(f"Node Type: {node_type.name}")
print(f"Edge Radius: {hyperparams['edge_radius']}")
with torch.no_grad():
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
eval_kde_nll = np.array([])
eval_obs_viols = np.array([])
print("-- Evaluating Full")
for i, scene in enumerate(tqdm(scenes)):
for timestep in range(scene.timesteps):
predictions = eval_stg.predict(scene,
np.array([timestep]),
ph,
num_samples_z=2000,
most_likely_z=False,
min_future_timesteps=8)
if not predictions:
continue
eval_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
node_type_enum=env.NodeType,
max_hl=max_hl,
ph=ph,
map=scene.map[node_type.name],
obs=True)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, eval_error_dict[node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, eval_error_dict[node_type]['fde']))
eval_kde_nll = np.hstack((eval_kde_nll, eval_error_dict[node_type]['kde']))
eval_obs_viols = np.hstack((eval_obs_viols, eval_error_dict[node_type]['obs_viols']))
del predictions
del eval_error_dict
            print(f"Mean Final Displacement Error @{ph * scene.dt}s: {np.mean(eval_fde_batch_errors)}")
print(f"Road Violations @{ph * scene.dt}s: {100 * np.sum(eval_obs_viols) / (eval_obs_viols.shape[0] * 2000)}%")
pd.DataFrame({'error_value': eval_ade_batch_errors, 'error_type': 'ade', 'type': 'full', 'ph': ph}).to_csv(args.output + '_ade_full_' + str(ph)+'ph' + '.csv')
            pd.DataFrame({'error_value': eval_fde_batch_errors, 'error_type': 'fde', 'type': 'full', 'ph': ph}).to_csv(args.output + '_fde_full_' + str(ph)+'ph' + '.csv')
            pd.DataFrame({'error_value': eval_kde_nll, 'error_type': 'kde', 'type': 'full', 'ph': ph}).to_csv(args.output + '_kde_full_' + str(ph)+'ph' + '.csv')
            pd.DataFrame({'error_value': eval_obs_viols, 'error_type': 'obs', 'type': 'full', 'ph': ph}).to_csv(args.output + '_obs_full_' + str(ph)+'ph' + '.csv')
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
eval_heading_err = np.array([])
eval_obs_viols = np.array([])
print("-- Evaluating most likely Z and GMM")
for i, scene in enumerate(scenes):
print(f"---- Evaluating Scene {i+1}/{len(scenes)}")
for t in np.arange(0, scene.timesteps, 20):
timesteps = np.arange(t, t+20)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples_z=1,
most_likely_z=True,
most_likely_gmm=True,
min_future_timesteps=8)
eval_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
node_type_enum=env.NodeType,
max_hl=max_hl,
ph=ph,
map=1 - scene.map[node_type.name].fdata[..., 0],
kde=False)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, eval_error_dict[node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, eval_error_dict[node_type]['fde']))
eval_obs_viols = np.hstack((eval_obs_viols, eval_error_dict[node_type]['obs_viols']))
heading_error = compute_heading_error(predictions,
scene.dt,
node_type_enum=env.NodeType,
max_hl=max_hl,
ph=ph,
map=1 - scene.map[node_type.name].fdata[..., 0],
kde=False)
eval_heading_err = np.hstack((eval_heading_err, heading_error))
print(f"Final Displacement Error @{ph * scene.dt}s: {np.mean(eval_fde_batch_errors)}")
pd.DataFrame({'error_value': eval_ade_batch_errors, 'error_type': 'ade', 'type': 'mm', 'ph': ph}).to_csv(args.output + '_ade_mm' + str(ph)+'ph' + '.csv')
pd.DataFrame({'error_value': eval_fde_batch_errors, 'error_type': 'fde', 'type': 'mm', 'ph': ph}).to_csv(args.output + '_fde_mm' + str(ph)+'ph' + '.csv')
            pd.DataFrame({'error_value': eval_obs_viols, 'error_type': 'obs', 'type': 'mm', 'ph': ph}).to_csv(args.output + '_obs_mm' + str(ph)+'ph' + '.csv')


@ -1,6 +0,0 @@
#!/bin/bash
python model_to_metric_nuScenes.py --model "../../data/nuScenes/models/full" --checkpoint 1 --data "../../data/processed/nuScenes_test.pkl" --output "./csv/full_veh" --node_type "VEHICLE" --prediction_horizon 2 4 6 8 > full_out.txt
python model_to_metric_nuScenes.py --model "../../data/nuScenes/models/me_demo" --checkpoint 1 --data "../../data/processed/nuScenes_test.pkl" --output "./csv/me_veh" --node_type "VEHICLE" --prediction_horizon 2 4 6 8 > me_out.txt
python model_to_metric_nuScenes.py --model "../../data/nuScenes/models/edge" --checkpoint 1 --data "../../data/processed/nuScenes_test.pkl" --output "./csv/edge_veh" --node_type "VEHICLE" --prediction_horizon 2 4 6 8 > edge_out.txt
python model_to_metric_nuScenes.py --model "../../data/nuScenes/models/baseline" --checkpoint 1 --data "../../data/processed/nuScenes_test.pkl" --output "./csv/baseline_veh" --node_type "VEHICLE" --prediction_horizon 2 4 6 8 > baseline_out.txt
python model_to_metric_nuScenes.py --model "../../data/nuScenes/models/full" --checkpoint 1 --data "../../data/processed/nuScenes_test.pkl" --output "./csv/full_ped" --node_type "PEDESTRIAN" --prediction_horizon 2 4 6 8 > full_out_ped.txt


@ -1,476 +0,0 @@
import torch
from torch import nn, optim
import numpy as np
import os
import time
import psutil
import pickle
import json
import random
import argparse
import pathlib
import visualization
import evaluation
import matplotlib.pyplot as plt
from model.dyn_stg import SpatioTemporalGraphCVAEModel
from model.model_registrar import ModelRegistrar
from model.model_utils import cyclical_lr
from tensorboardX import SummaryWriter
#torch.autograd.set_detect_anomaly(True) # TODO Remove for speed
parser = argparse.ArgumentParser()
parser.add_argument("--conf", help="path to json config file for hyperparameters",
type=str, default='config.json')
parser.add_argument("--offline_scene_graph", help="whether to precompute the scene graphs offline, options are 'no' and 'yes'",
type=str, default='yes')
parser.add_argument("--dynamic_edges", help="whether to use dynamic edges or not, options are 'no' and 'yes'",
type=str, default='yes')
parser.add_argument("--edge_radius", help="the radius (in meters) within which two nodes will be connected by an edge",
type=float, default=3.0)
parser.add_argument("--edge_state_combine_method", help="the method to use for combining edges of the same type",
type=str, default='sum')
parser.add_argument("--edge_influence_combine_method", help="the method to use for combining edge influences",
type=str, default='attention')
parser.add_argument('--edge_addition_filter', nargs='+', help="what scaling to use for edges as they're created",
type=float, default=[0.25, 0.5, 0.75, 1.0]) # We automatically pad left with 0.0
parser.add_argument('--edge_removal_filter', nargs='+', help="what scaling to use for edges as they're removed",
type=float, default=[1.0, 0.0]) # We automatically pad right with 0.0
parser.add_argument('--incl_robot_node', help="whether to include a robot node in the graph or simply model all agents",
action='store_true')
parser.add_argument('--use_map_encoding', help="Whether to use map encoding or not",
action='store_true')
parser.add_argument("--data_dir", help="what dir to look in for data",
type=str, default='../data/processed')
parser.add_argument("--train_data_dict", help="what file to load for training data",
type=str, default='nuScenes_train.pkl')
parser.add_argument("--eval_data_dict", help="what file to load for evaluation data",
type=str, default='nuScenes_val.pkl')
parser.add_argument("--log_dir", help="what dir to save training information (i.e., saved models, logs, etc)",
type=str, default='../data/nuScenes/logs')
parser.add_argument("--log_tag", help="tag for the log folder",
type=str, default='')
parser.add_argument('--device', help='what device to perform training on',
type=str, default='cuda:1')
parser.add_argument("--eval_device", help="what device to use during evaluation",
type=str, default=None)
parser.add_argument("--num_iters", help="number of iterations to train for",
type=int, default=2000)
parser.add_argument('--batch_multiplier', help='how many minibatches to run per iteration of training',
type=int, default=1)
parser.add_argument('--batch_size', help='training batch size',
type=int, default=256)
parser.add_argument('--eval_batch_size', help='evaluation batch size',
type=int, default=256)
parser.add_argument('--k_eval', help='how many samples to take during evaluation',
type=int, default=50)
parser.add_argument('--seed', help='manual seed to use, default is 123',
type=int, default=123)
parser.add_argument('--eval_every', help='how often to evaluate during training, never if None',
type=int, default=50)
parser.add_argument('--vis_every', help='how often to visualize during training, never if None',
type=int, default=50)
parser.add_argument('--save_every', help='how often to save during training, never if None',
type=int, default=100)
args = parser.parse_args()
if not torch.cuda.is_available() or args.device == 'cpu':
args.device = torch.device('cpu')
else:
if torch.cuda.device_count() == 1:
# If you have CUDA_VISIBLE_DEVICES set, which you should,
# then this will prevent leftover flag arguments from
# messing with the device allocation.
args.device = 'cuda:0'
args.device = torch.device(args.device)
if args.eval_device is None:
args.eval_device = 'cpu'
if args.seed is not None:
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(args.seed)
def main():
# Load hyperparameters from json
if not os.path.exists(args.conf):
        raise ValueError(f'Config json not found at {args.conf}!')
with open(args.conf, 'r') as conf_json:
hyperparams = json.load(conf_json)
# Add hyperparams from arguments
hyperparams['dynamic_edges'] = args.dynamic_edges
hyperparams['edge_state_combine_method'] = args.edge_state_combine_method
hyperparams['edge_influence_combine_method'] = args.edge_influence_combine_method
hyperparams['edge_radius'] = args.edge_radius
hyperparams['use_map_encoding'] = args.use_map_encoding
hyperparams['edge_addition_filter'] = args.edge_addition_filter
hyperparams['edge_removal_filter'] = args.edge_removal_filter
hyperparams['batch_size'] = args.batch_size
hyperparams['k_eval'] = args.k_eval
hyperparams['offline_scene_graph'] = args.offline_scene_graph
hyperparams['incl_robot_node'] = args.incl_robot_node
print('-----------------------')
print('| TRAINING PARAMETERS |')
print('-----------------------')
print('| iterations: %d' % args.num_iters)
print('| batch_size: %d' % args.batch_size)
print('| batch_multiplier: %d' % args.batch_multiplier)
print('| effective batch size: %d (= %d * %d)' % (args.batch_size * args.batch_multiplier, args.batch_size, args.batch_multiplier))
print('| device: %s' % args.device)
print('| eval_device: %s' % args.eval_device)
print('| Offline Scene Graph Calculation: %s' % args.offline_scene_graph)
print('| edge_radius: %s' % args.edge_radius)
print('| EE state_combine_method: %s' % args.edge_state_combine_method)
print('| EIE scheme: %s' % args.edge_influence_combine_method)
print('| dynamic_edges: %s' % args.dynamic_edges)
print('| robot node: %s' % args.incl_robot_node)
print('| map encoding: %s' % args.use_map_encoding)
print('| edge_addition_filter: %s' % args.edge_addition_filter)
print('| edge_removal_filter: %s' % args.edge_removal_filter)
print('| MHL: %s' % hyperparams['minimum_history_length'])
print('| PH: %s' % hyperparams['prediction_horizon'])
print('-----------------------')
    # Create the log and model directory if they're not present.
model_dir = os.path.join(args.log_dir, 'models_' + time.strftime('%d_%b_%Y_%H_%M_%S', time.localtime()) + args.log_tag)
pathlib.Path(model_dir).mkdir(parents=True, exist_ok=True)
# Save config to model directory
with open(os.path.join(model_dir, 'config.json'), 'w') as conf_json:
json.dump(hyperparams, conf_json)
log_writer = SummaryWriter(log_dir=model_dir)
train_scenes = []
train_data_path = os.path.join(args.data_dir, args.train_data_dict)
with open(train_data_path, 'rb') as f:
train_env = pickle.load(f, encoding='latin1')
train_scenes = train_env.scenes
print('Loaded training data from %s' % (train_data_path,))
eval_scenes = []
if args.eval_every is not None:
eval_data_path = os.path.join(args.data_dir, args.eval_data_dict)
with open(eval_data_path, 'rb') as f:
eval_env = pickle.load(f, encoding='latin1')
eval_scenes = eval_env.scenes
print('Loaded evaluation data from %s' % (eval_data_path, ))
# Calculate Scene Graph
if hyperparams['offline_scene_graph'] == 'yes':
print(f"Offline calculating scene graphs")
for i, scene in enumerate(train_scenes):
scene.calculate_scene_graph(train_env.attention_radius,
hyperparams['state'],
hyperparams['edge_addition_filter'],
hyperparams['edge_removal_filter'])
print(f"Created Scene Graph for Scene {i}")
for i, scene in enumerate(eval_scenes):
scene.calculate_scene_graph(eval_env.attention_radius,
hyperparams['state'],
hyperparams['edge_addition_filter'],
hyperparams['edge_removal_filter'])
print(f"Created Scene Graph for Scene {i}")
model_registrar = ModelRegistrar(model_dir, args.device)
    # We use pre-trained weights for the map CNN.
if args.use_map_encoding:
inf_encoder_registrar = os.path.join(args.log_dir, 'weight_trans/model_registrar-1499.pt')
model_dict = torch.load(inf_encoder_registrar, map_location=args.device)
for key in model_dict.keys():
if 'map_encoder' in key:
model_registrar.model_dict[key] = model_dict[key]
assert model_registrar.get_model(key) is model_dict[key]
stg = SpatioTemporalGraphCVAEModel(model_registrar,
hyperparams,
log_writer, args.device)
stg.set_scene_graph(train_env)
stg.set_annealing_params()
print('Created training STG model.')
eval_stg = None
    if args.eval_every is not None or args.vis_every is not None:
eval_stg = SpatioTemporalGraphCVAEModel(model_registrar,
hyperparams,
log_writer, args.device)
eval_stg.set_scene_graph(eval_env)
eval_stg.set_annealing_params() # TODO Check if necessary
if hyperparams['learning_rate_style'] == 'const':
optimizer = optim.Adam(model_registrar.parameters(), lr=hyperparams['learning_rate'])
lr_scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=1.0)
elif hyperparams['learning_rate_style'] == 'exp':
optimizer = optim.Adam(model_registrar.parameters(), lr=hyperparams['learning_rate'])
lr_scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=hyperparams['learning_decay_rate'])
elif hyperparams['learning_rate_style'] == 'triangle':
optimizer = optim.Adam(model_registrar.parameters(), lr=1.0)
clr = cyclical_lr(100, min_lr=hyperparams['min_learning_rate'], max_lr=hyperparams['learning_rate'], decay=hyperparams['learning_decay_rate'])
lr_scheduler = optim.lr_scheduler.LambdaLR(optimizer, [clr])
print_training_header(newline_start=True)
for curr_iter in range(args.num_iters):
        # Necessary because we sometimes move model weights between GPU and CPU.
model_registrar.to(args.device)
# Setting the current iterator value for internal logging.
stg.set_curr_iter(curr_iter)
if args.vis_every is not None:
eval_stg.set_curr_iter(curr_iter)
# Stepping forward the learning rate scheduler and annealers.
lr_scheduler.step()
log_writer.add_scalar('train/learning_rate',
lr_scheduler.get_lr()[0],
curr_iter)
stg.step_annealers()
# Zeroing gradients for the upcoming iteration.
optimizer.zero_grad()
train_losses = dict()
for node_type in train_env.NodeType:
train_losses[node_type] = []
for scene in np.random.choice(train_scenes, 10):
for mb_num in range(args.batch_multiplier):
# Obtaining the batch's training loss.
timesteps = scene.sample_timesteps(hyperparams['batch_size'])
# Compute the training loss.
train_loss_by_type = stg.train_loss(scene, timesteps, max_nodes=hyperparams['batch_size'])
for node_type, train_loss in train_loss_by_type.items():
if train_loss is not None:
train_loss = train_loss / (args.batch_multiplier * 10)
train_losses[node_type].append(train_loss.item())
# Calculating gradients.
train_loss.backward()
# Print training information. Also, no newline here. It's added in at a later line.
print('{:9} | '.format(curr_iter), end='', flush=True)
for node_type in train_env.NodeType:
print('{}:{:10} | '.format(node_type.name[0], '%.2f' % sum(train_losses[node_type])), end='', flush=True)
for node_type in train_env.NodeType:
if len(train_losses[node_type]) > 0:
log_writer.add_histogram(f"{node_type.name}/train/minibatch_losses", np.asarray(train_losses[node_type]), curr_iter)
log_writer.add_scalar(f"{node_type.name}/train/loss", sum(train_losses[node_type]), curr_iter)
# Clipping gradients.
if hyperparams['grad_clip'] is not None:
nn.utils.clip_grad_value_(model_registrar.parameters(), hyperparams['grad_clip'])
# Performing a gradient step.
optimizer.step()
del train_loss # TODO Necessary?
if args.vis_every is not None and (curr_iter + 1) % args.vis_every == 0:
max_hl = hyperparams['maximum_history_length']
ph = hyperparams['prediction_horizon']
with torch.no_grad():
# Predict random timestep to plot for train data set
scene = np.random.choice(train_scenes)
timestep = scene.sample_timesteps(1, min_future_timesteps=ph)
predictions = stg.predict(scene,
timestep,
ph,
num_samples_z=100,
most_likely_z=False,
all_z=False)
# Plot predicted timestep for random scene
fig, ax = plt.subplots(figsize=(5, 5))
visualization.visualize_prediction(ax,
predictions,
scene.dt,
max_hl=max_hl,
ph=ph)
ax.set_title(f"{scene.name}-t: {timestep}")
log_writer.add_figure('train/prediction', fig, curr_iter)
# Predict random timestep to plot for eval data set
scene = np.random.choice(eval_scenes)
timestep = scene.sample_timesteps(1, min_future_timesteps=ph)
predictions = eval_stg.predict(scene,
timestep,
ph,
num_samples_z=100,
most_likely_z=False,
all_z=False,
max_nodes=4 * args.eval_batch_size)
# Plot predicted timestep for random scene
fig, ax = plt.subplots(figsize=(5, 5))
visualization.visualize_prediction(ax,
predictions,
scene.dt,
max_hl=max_hl,
ph=ph)
ax.set_title(f"{scene.name}-t: {timestep}")
log_writer.add_figure('eval/prediction', fig, curr_iter)
# Plot predicted timestep for random scene in map
fig, ax = plt.subplots(figsize=(15, 15))
visualization.visualize_prediction(ax,
predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
map=scene.map['PLOT'])
ax.set_title(f"{scene.name}-t: {timestep}")
log_writer.add_figure('eval/prediction_map', fig, curr_iter)
# Predict random timestep to plot for eval data set
predictions = eval_stg.predict(scene,
timestep,
ph,
num_samples_gmm=50,
most_likely_z=False,
all_z=True,
max_nodes=4 * args.eval_batch_size)
# Plot predicted timestep for random scene
fig, ax = plt.subplots(figsize=(5, 5))
visualization.visualize_prediction(ax,
predictions,
scene.dt,
max_hl=max_hl,
ph=ph)
ax.set_title(f"{scene.name}-t: {timestep}")
log_writer.add_figure('eval/prediction_all_z', fig, curr_iter)
if args.eval_every is not None and (curr_iter + 1) % args.eval_every == 0:
max_hl = hyperparams['maximum_history_length']
ph = hyperparams['prediction_horizon']
with torch.no_grad():
# Predict batch timesteps for training dataset evaluation
train_batch_errors = []
max_scenes = np.min([len(train_scenes), 5])
for scene in np.random.choice(train_scenes, max_scenes):
timesteps = scene.sample_timesteps(args.eval_batch_size)
predictions = stg.predict(scene,
timesteps,
ph,
num_samples_z=100,
min_future_timesteps=ph,
max_nodes=4*args.eval_batch_size)
train_batch_errors.append(evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=train_env.NodeType,
map=scene.map))
evaluation.log_batch_errors(train_batch_errors,
log_writer,
'train',
curr_iter,
bar_plot=['kde'],
box_plot=['ade', 'fde'])
# Predict batch timesteps for evaluation dataset evaluation
eval_batch_errors = []
for scene in eval_scenes:
timesteps = scene.sample_timesteps(args.eval_batch_size)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples_z=100,
min_future_timesteps=ph,
max_nodes=4 * args.eval_batch_size)
eval_batch_errors.append(evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=eval_env.NodeType,
map=scene.map))
evaluation.log_batch_errors(eval_batch_errors,
log_writer,
'eval',
curr_iter,
bar_plot=['kde'],
box_plot=['ade', 'fde'])
# Predict maximum likelihood batch timesteps for evaluation dataset evaluation
eval_batch_errors_ml = []
for scene in eval_scenes:
timesteps = scene.sample_timesteps(scene.timesteps)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples_z=1,
min_future_timesteps=ph,
most_likely_z=True,
most_likely_gmm=True)
eval_batch_errors_ml.append(evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
map=scene.map,
node_type_enum=eval_env.NodeType,
kde=False))
evaluation.log_batch_errors(eval_batch_errors_ml,
log_writer,
'eval/ml',
curr_iter)
eval_loss = []
max_scenes = np.min([len(eval_scenes), 25])
                for scene in np.random.choice(eval_scenes, max_scenes):
                    timesteps = scene.sample_timesteps(args.eval_batch_size)
                    eval_loss.append(eval_stg.eval_loss(scene, timesteps))
evaluation.log_batch_errors(eval_loss,
log_writer,
'eval/loss',
curr_iter)
else:
print('{:15} | {:10} | {:14}'.format('', '', ''),
end='', flush=True)
# Here's the newline that ends the current training information printing.
print('')
if args.save_every is not None and (curr_iter + 1) % args.save_every == 0:
model_registrar.save_models(curr_iter)
print_training_header()
def print_training_header(newline_start=False):
if newline_start:
print('')
print('Iteration | Train Loss | Eval NLL Q (IS) | Eval NLL P | Eval NLL Exact')
print('----------------------------------------------------------------------')
def memInUse():
pid = os.getpid()
py = psutil.Process(pid)
memoryUse = py.memory_info()[0] / 2. ** 30 # memory use in GB...I think
print('memory GB:', memoryUse)
if __name__ == '__main__':
main()
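The 'triangle' learning-rate branch above relies on cyclical_lr from model.model_utils, which is not shown in this diff. A minimal sketch of a triangular cyclical schedule with per-iteration exponential decay, under the assumption that it follows Smith's CLR formulation (the signature and decay handling here are ours):

```python
import math

def cyclical_lr_sketch(stepsize, min_lr=1e-5, max_lr=3e-3, decay=1.0):
    """Return an iteration -> LR callable with a triangular cycle spanning
    2 * stepsize iterations, scaled by `decay` ** iteration."""
    def relative(it):
        cycle = math.floor(1 + it / (2 * stepsize))
        x = abs(it / stepsize - 2 * cycle + 1)
        return max(0.0, 1.0 - x) * (decay ** it)
    return lambda it: min_lr + (max_lr - min_lr) * relative(it)

clr = cyclical_lr_sketch(100, min_lr=0.1, max_lr=1.0)
# clr(0) == 0.1 (cycle bottom), clr(100) == 1.0 (peak), clr(200) == 0.1
```

With LambdaLR, the optimizer's base lr is set to 1.0 (as in the branch above) so that the returned factor becomes the effective learning rate each step.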


@ -1 +0,0 @@
from .trajectory_utils import integrate_trajectory, prediction_output_to_trajectories

config/config.json Normal file

@ -0,0 +1,90 @@
{
"batch_size": 256,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.001,
"min_learning_rate": 0.00001,
"learning_decay_rate": 0.9999,
"prediction_horizon": 12,
"minimum_history_length": 1,
"maximum_history_length": 8,
"map_encoder": {
"PEDESTRIAN": {
"heading_state_index": 6,
"patch_size": [50, 10, 50, 90],
"map_channels": 3,
"hidden_channels": [10, 20, 10, 1],
"output_size": 32,
"masks": [5, 5, 5, 5],
"strides": [1, 1, 1, 1],
"dropout": 0.5
}
},
"k": 1,
"k_eval": 1,
"kl_min": 0.07,
"kl_weight": 100.0,
"kl_weight_start": 0,
"kl_decay_rate": 0.99995,
"kl_crossover": 400,
"kl_sigmoid_divisor": 4,
"rnn_kwargs": {
"dropout_keep_prob": 0.75
},
"MLP_dropout_keep_prob": 0.9,
"enc_rnn_dim_edge": 32,
"enc_rnn_dim_edge_influence": 32,
"enc_rnn_dim_history": 32,
"enc_rnn_dim_future": 32,
"dec_rnn_dim": 128,
"q_z_xy_MLP_dims": null,
"p_z_x_MLP_dims": 32,
"GMM_components": 1,
"log_p_yt_xz_max": 6,
"N": 1,
"K": 25,
"tau_init": 2.0,
"tau_final": 0.05,
"tau_decay_rate": 0.997,
"use_z_logit_clipping": true,
"z_logit_clip_start": 0.05,
"z_logit_clip_final": 5.0,
"z_logit_clip_crossover": 300,
"z_logit_clip_divisor": 5,
"dynamic": {
"PEDESTRIAN": {
"name": "SingleIntegrator",
"distribution": true,
"limits": {}
}
},
"state": {
"PEDESTRIAN": {
"position": ["x", "y"],
"velocity": ["x", "y"],
"acceleration": ["x", "y"]
}
},
"pred_state": {
"PEDESTRIAN": {
"position": ["x", "y"]
}
},
"log_histograms": false
}

config/nuScenes.json Normal file

@ -0,0 +1,109 @@
{
"batch_size": 256,
"grad_clip": 1.0,
"learning_rate_style": "exp",
"learning_rate": 0.003,
"min_learning_rate": 0.00001,
"learning_decay_rate": 0.9999,
"prediction_horizon": 6,
"minimum_history_length": 1,
"maximum_history_length": 8,
"map_encoder": {
"VEHICLE": {
"heading_state_index": 6,
"patch_size": [50, 10, 50, 90],
"map_channels": 3,
"hidden_channels": [10, 20, 10, 1],
"output_size": 32,
"masks": [5, 5, 5, 3],
"strides": [2, 2, 1, 1],
"dropout": 0.5
}
},
"k": 1,
"k_eval": 1,
"kl_min": 0.07,
"kl_weight": 100.0,
"kl_weight_start": 0,
"kl_decay_rate": 0.99995,
"kl_crossover": 400,
"kl_sigmoid_divisor": 4,
"rnn_kwargs": {
"dropout_keep_prob": 0.75
},
"MLP_dropout_keep_prob": 0.9,
"enc_rnn_dim_edge": 32,
"enc_rnn_dim_edge_influence": 32,
"enc_rnn_dim_history": 32,
"enc_rnn_dim_future": 32,
"dec_rnn_dim": 128,
"q_z_xy_MLP_dims": null,
"p_z_x_MLP_dims": 32,
"GMM_components": 1,
"log_p_yt_xz_max": 6,
"N": 1,
"K": 25,
"tau_init": 2.0,
"tau_final": 0.05,
"tau_decay_rate": 0.997,
"use_z_logit_clipping": true,
"z_logit_clip_start": 0.05,
"z_logit_clip_final": 5.0,
"z_logit_clip_crossover": 300,
"z_logit_clip_divisor": 5,
"dynamic": {
"PEDESTRIAN": {
"name": "SingleIntegrator",
"distribution": true,
"limits": {}
},
"VEHICLE": {
"name": "Unicycle",
"distribution": true,
"limits": {
"max_a": 4,
"min_a": -5,
"max_heading_change": 0.7,
"min_heading_change": -0.7
}
}
},
"state": {
"PEDESTRIAN": {
"position": ["x", "y"],
"velocity": ["x", "y"],
"acceleration": ["x", "y"]
},
"VEHICLE": {
"position": ["x", "y"],
"velocity": ["x", "y"],
"acceleration": ["x", "y"],
"heading": ["°", "d°"]
}
},
"pred_state": {
"VEHICLE": {
"position": ["x", "y"]
},
"PEDESTRIAN": {
"position": ["x", "y"]
}
},
"log_histograms": false
}


@ -1,11 +0,0 @@
#!/usr/bin/python
import sys
import urllib.request
def main():
# print command line arguments
urllib.request.urlretrieve(sys.argv[1], sys.argv[2])
if __name__ == "__main__":
main()


@ -1,191 +0,0 @@
import numpy as np
class LinearPointMass:
"""Linear Kalman Filter for an autonomous point mass system, assuming constant velocity"""
def __init__(self, dt, sPos=None, sVel=None, sMeasurement=None):
"""
input matrices must be numpy arrays
:param A: state transition matrix
:param B: state control matrix
:param C: measurement matrix
:param Q: covariance of the Gaussian error in state transition
:param R: covariance of the Gaussain error in measurement
"""
self.dt = dt
# matrices of state transition and measurement
self.A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
self.B = np.array([[0, 0], [dt, 0], [0, 0], [0, dt]])
self.C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
# default noise covariance
if (sPos is None) and (sVel is None) and (sMeasurement is None):
# sPos = 0.5 * 5 * dt ** 2 # assume 5m/s2 as maximum acceleration
# sVel = 5.0 * dt # assume 8.8m/s2 as maximum acceleration
sPos = 1.3*self.dt # assume 5m/s2 as maximum acceleration
sVel = 4*self.dt # assume 8.8m/s2 as maximum acceleration
sMeasurement = 0.2 # 68% of the measurement is within [-sMeasurement, sMeasurement]
# state transition noise
self.Q = np.diag([sPos ** 2, sVel ** 2, sPos ** 2, sVel ** 2])
# measurement noise
self.R = np.diag([sMeasurement ** 2, sMeasurement ** 2])
def predict_and_update(self, x_vec_est, u_vec, P_matrix, z_new):
"""
for background please refer to wikipedia: https://en.wikipedia.org/wiki/Kalman_filter
:param x_vec_est:
:param u_vec:
:param P_matrix:
:param z_new:
:return:
"""
## Prediction Step
# predicted state estimate
x_pred = self.A.dot(x_vec_est) + self.B.dot(u_vec)
# predicted error covariance
P_pred = self.A.dot(P_matrix.dot(self.A.transpose())) + self.Q
## Update Step
# innovation or measurement pre-fit residual
y_telda = z_new - self.C.dot(x_pred)
# innovation covariance
S = self.C.dot(P_pred.dot(self.C.transpose())) + self.R
# optimal Kalman gain
K = P_pred.dot(self.C.transpose().dot(np.linalg.inv(S)))
# updated (a posteriori) state estimate
x_vec_est_new = x_pred + K.dot(y_telda)
# updated (a posteriori) estimate covariance
P_matrix_new = np.dot((np.identity(4) - K.dot(self.C)), P_pred)
return x_vec_est_new, P_matrix_new
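One predict/update cycle of the constant-velocity filter above, written out standalone with numpy (dt = 0.5, the noise levels, and the measurement are assumptions, chosen to match the class defaults sPos = 1.3*dt and sVel = 4*dt):

```python
import numpy as np

dt = 0.5
A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
Q = np.diag([0.65**2, 2.0**2, 0.65**2, 2.0**2])  # process noise (pos/vel)
R = np.diag([0.2**2, 0.2**2])                    # measurement noise

x = np.array([0.0, 1.0, 0.0, 0.0])  # [px, vx, py, vy], moving along x
P = np.eye(4)
z = np.array([0.55, 0.0])           # noisy position measurement at t + dt

# Predict (no control input), then update with the measurement.
x_pred = A @ x                      # predicted px = 0.5
P_pred = A @ P @ A.T + Q
y = z - C @ x_pred                  # innovation
S = C @ P_pred @ C.T + R            # innovation covariance
K = P_pred @ C.T @ np.linalg.inv(S) # Kalman gain
x_new = x_pred + K @ y              # estimate pulled toward z
P_new = (np.eye(4) - K @ C) @ P_pred
```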
class NonlinearKinematicBicycle:
"""
Nonlinear Kalman Filter for a kinematic bicycle model, assuming constant longitudinal speed
and constant heading angle
"""
def __init__(self, lf, lr, dt, sPos=None, sHeading=None, sVel=None, sMeasurement=None):
self.dt = dt
# params for state transition
self.lf = lf
self.lr = lr
# measurement matrix
self.C = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
# default noise covariance
if (sPos is None) and (sHeading is None) and (sVel is None) and (sMeasurement is None):
# TODO need to further check
# sPos = 0.5 * 8.8 * dt ** 2 # assume 8.8m/s2 as maximum acceleration
# sHeading = 0.5 * dt # assume 0.5rad/s as maximum turn rate
# sVel = 8.8 * dt # assume 8.8m/s2 as maximum acceleration
# sMeasurement = 1.0
sPos = 16 * self.dt # assume 8.8m/s2 as maximum acceleration
sHeading = np.pi/2 * self.dt # assume 0.5rad/s as maximum turn rate
sVel = 8 * self.dt # assume 8.8m/s2 as maximum acceleration
sMeasurement = 0.8
# state transition noise
self.Q = np.diag([sPos ** 2, sPos ** 2, sHeading ** 2, sVel ** 2])
# measurement noise
self.R = np.diag([sMeasurement ** 2, sMeasurement ** 2, sMeasurement ** 2, sMeasurement ** 2])
def predict_and_update(self, x_vec_est, u_vec, P_matrix, z_new):
"""
for background please refer to wikipedia: https://en.wikipedia.org/wiki/Extended_Kalman_filter
:param x_vec_est:
:param u_vec:
:param P_matrix:
:param z_new:
:return:
"""
## Prediction Step
# predicted state estimate
x_pred = self._kinematic_bicycle_model_rearCG(x_vec_est, u_vec)
# Compute Jacobian to obtain the state transition matrix
A = self._cal_state_Jacobian(x_vec_est, u_vec)
# predicted error covariance
P_pred = A.dot(P_matrix.dot(A.transpose())) + self.Q
## Update Step
# innovation or measurement pre-fit residual
y_telda = z_new - self.C.dot(x_pred)
# innovation covariance
S = self.C.dot(P_pred.dot(self.C.transpose())) + self.R
# near-optimal Kalman gain
K = P_pred.dot(self.C.transpose().dot(np.linalg.inv(S)))
# updated (a posteriori) state estimate
x_vec_est_new = x_pred + K.dot(y_telda)
# updated (a posteriori) estimate covariance
P_matrix_new = np.dot((np.identity(4) - K.dot(self.C)), P_pred)
return x_vec_est_new, P_matrix_new
def _kinematic_bicycle_model_rearCG(self, x_old, u):
"""
:param x: vehicle state vector = [x position, y position, heading, velocity]
:param u: control vector = [acceleration, steering angle]
:param dt:
:return:
"""
acc = u[0]
delta = u[1]
x = x_old[0]
y = x_old[1]
psi = x_old[2]
vel = x_old[3]
x_new = np.array([[0.], [0.], [0.], [0.]])
beta = np.arctan(self.lr * np.tan(delta) / (self.lf + self.lr))
x_new[0] = x + self.dt * vel * np.cos(psi + beta)
x_new[1] = y + self.dt * vel * np.sin(psi + beta)
x_new[2] = psi + self.dt * vel * np.cos(beta) / (self.lf + self.lr) * np.tan(delta)
#x_new[2] = _heading_angle_correction(x_new[2])
x_new[3] = vel + self.dt * acc
return x_new
def _cal_state_Jacobian(self, x_vec, u_vec):
acc = u_vec[0]
delta = u_vec[1]
x = x_vec[0]
y = x_vec[1]
psi = x_vec[2]
vel = x_vec[3]
beta = np.arctan(self.lr * np.tan(delta) / (self.lf + self.lr))
a13 = -self.dt * vel * np.sin(psi + beta)
a14 = self.dt * np.cos(psi + beta)
a23 = self.dt * vel * np.cos(psi + beta)
a24 = self.dt * np.sin(psi + beta)
a34 = self.dt * np.cos(beta) / (self.lf + self.lr) * np.tan(delta)
JA = np.array([[1.0, 0.0, a13[0], a14[0]],
[0.0, 1.0, a23[0], a24[0]],
[0.0, 0.0, 1.0, a34[0]],
[0.0, 0.0, 0.0, 1.0]])
return JA
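The analytic Jacobian can be sanity-checked against central finite differences of the bicycle step; a standalone check (dt, lf, lr, and the test state are assumptions):

```python
import numpy as np

dt, lf, lr = 0.5, 1.2, 1.4

def step(x, u):
    # Kinematic bicycle step, mirroring _kinematic_bicycle_model_rearCG.
    acc, delta = u
    px, py, psi, vel = x
    beta = np.arctan(lr * np.tan(delta) / (lf + lr))
    return np.array([
        px + dt * vel * np.cos(psi + beta),
        py + dt * vel * np.sin(psi + beta),
        psi + dt * vel * np.cos(beta) / (lf + lr) * np.tan(delta),
        vel + dt * acc,
    ])

def jacobian(x, u):
    # Analytic state Jacobian, mirroring _cal_state_Jacobian.
    acc, delta = u
    px, py, psi, vel = x
    beta = np.arctan(lr * np.tan(delta) / (lf + lr))
    a13 = -dt * vel * np.sin(psi + beta)
    a14 = dt * np.cos(psi + beta)
    a23 = dt * vel * np.cos(psi + beta)
    a24 = dt * np.sin(psi + beta)
    a34 = dt * np.cos(beta) / (lf + lr) * np.tan(delta)
    return np.array([[1, 0, a13, a14],
                     [0, 1, a23, a24],
                     [0, 0, 1, a34],
                     [0, 0, 0, 1.0]])

x0 = np.array([1.0, 2.0, 0.3, 5.0])
u0 = np.array([0.5, 0.1])
eps = 1e-6
J_fd = np.zeros((4, 4))
for j in range(4):
    dx = np.zeros(4)
    dx[j] = eps
    J_fd[:, j] = (step(x0 + dx, u0) - step(x0 - dx, u0)) / (2 * eps)

assert np.allclose(J_fd, jacobian(x0, u0), atol=1e-5)
```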
def _heading_angle_correction(theta):
"""
correct heading angle so that it always remains in [-pi, pi]
:param theta:
:return:
"""
theta_corrected = (theta + np.pi) % (2.0 * np.pi) - np.pi
return theta_corrected


@@ -1 +0,0 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 0.0005, "learning_decay_rate": 0.9995, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_context": 120, "map_enc_num_layers": 4, "map_enc_hidden_size": 512, "map_enc_output_size": 512, "map_enc_dropout": 0.3, "alpha": 1, "k": 30, "k_eval": 50, "use_iwae": false, "kl_exact": true, "kl_min": 0.07, "kl_weight": 5.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 500, "kl_sigmoid_divisor": 4, "inf_warmup": 1.0, "inf_warmup_start": 1.0, "inf_warmup_crossover": 1500, "inf_warmup_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.7}, "MLP_dropout_keep_prob": 0.9, "rnn_io_dropout_keep_prob": 1.0, "enc_rnn_dim_multiple_inputs": 8, "enc_rnn_dim_edge": 8, "enc_rnn_dim_edge_influence": 8, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 512, "dec_GMM_proj_MLP_dims": null, "sample_model_during_dec": true, "dec_sample_model_prob_start": 1.0, "dec_sample_model_prob_final": 1.0, "dec_sample_model_prob_crossover": 200, "dec_sample_model_prob_divisor": 4, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "fuzz_factor": 0.05, "GMM_components": 12, "log_sigma_min": -10, "log_sigma_max": 10, "log_p_yt_xz_max": 50, "N": 2, "K": 5, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 500, "z_logit_clip_divisor": 5, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["value"]}, "BICYCLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}, "BICYCLE": {"velocity": ["x", "y"]}, "VEHICLE": 
{"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_radius": 0.0, "use_map_encoding": false, "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes"}


@@ -1 +0,0 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 0.0005, "learning_decay_rate": 0.9995, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_context": 120, "map_enc_num_layers": 4, "map_enc_hidden_size": 512, "map_enc_output_size": 512, "map_enc_dropout": 0.15, "alpha": 1, "k": 30, "k_eval": 50, "use_iwae": false, "kl_exact": true, "kl_min": 0.07, "kl_weight": 5.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 500, "kl_sigmoid_divisor": 4, "inf_warmup": 1.0, "inf_warmup_start": 1.0, "inf_warmup_crossover": 1500, "inf_warmup_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.7}, "MLP_dropout_keep_prob": 0.9, "rnn_io_dropout_keep_prob": 1.0, "enc_rnn_dim_multiple_inputs": 8, "enc_rnn_dim_edge": 8, "enc_rnn_dim_edge_influence": 8, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 512, "dec_GMM_proj_MLP_dims": null, "sample_model_during_dec": true, "dec_sample_model_prob_start": 1.0, "dec_sample_model_prob_final": 1.0, "dec_sample_model_prob_crossover": 200, "dec_sample_model_prob_divisor": 4, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "fuzz_factor": 0.05, "GMM_components": 12, "log_sigma_min": -10, "log_sigma_max": 10, "log_p_yt_xz_max": 50, "N": 2, "K": 5, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 500, "z_logit_clip_divisor": 5, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["value"]}, "BICYCLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}, "BICYCLE": {"velocity": ["x", "y"]}, "VEHICLE": 
{"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_radius": 20.0, "use_map_encoding": false, "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes"}


@@ -1 +0,0 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "const", "learning_rate": 0.002, "min_learning_rate": 0.0005, "learning_decay_rate": 0.9995, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_context": 120, "map_enc_num_layers": 4, "map_enc_hidden_size": 512, "map_enc_output_size": 512, "map_enc_dropout": 0.5, "alpha": 1, "k": 30, "k_eval": 50, "use_iwae": false, "kl_exact": true, "kl_min": 0.07, "kl_weight": 5.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 500, "kl_sigmoid_divisor": 4, "inf_warmup": 1.0, "inf_warmup_start": 1.0, "inf_warmup_crossover": 1500, "inf_warmup_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.5}, "MLP_dropout_keep_prob": 0.9, "rnn_io_dropout_keep_prob": 1.0, "enc_rnn_dim_multiple_inputs": 8, "enc_rnn_dim_edge": 8, "enc_rnn_dim_edge_influence": 8, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 512, "dec_GMM_proj_MLP_dims": null, "sample_model_during_dec": true, "dec_sample_model_prob_start": 1.0, "dec_sample_model_prob_final": 1.0, "dec_sample_model_prob_crossover": 200, "dec_sample_model_prob_divisor": 4, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "fuzz_factor": 0.05, "GMM_components": 12, "log_sigma_min": -10, "log_sigma_max": 10, "log_p_yt_xz_max": 50, "N": 2, "K": 5, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 500, "z_logit_clip_divisor": 5, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["value"]}, "BICYCLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}, "BICYCLE": {"velocity": ["x", "y"]}, "VEHICLE": 
{"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_radius": 20.0, "use_map_encoding": true, "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false}


@@ -1 +0,0 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.002, "min_learning_rate": 0.0005, "learning_decay_rate": 0.9995, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_context": 120, "map_enc_num_layers": 4, "map_enc_hidden_size": 512, "map_enc_output_size": 512, "map_enc_dropout": 0.5, "alpha": 1, "k": 30, "k_eval": 50, "use_iwae": false, "kl_exact": true, "kl_min": 0.07, "kl_weight": 5.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 500, "kl_sigmoid_divisor": 4, "inf_warmup": 1.0, "inf_warmup_start": 1.0, "inf_warmup_crossover": 1500, "inf_warmup_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.5}, "MLP_dropout_keep_prob": 0.9, "rnn_io_dropout_keep_prob": 1.0, "enc_rnn_dim_multiple_inputs": 8, "enc_rnn_dim_edge": 8, "enc_rnn_dim_edge_influence": 8, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 512, "dec_GMM_proj_MLP_dims": null, "sample_model_during_dec": true, "dec_sample_model_prob_start": 1.0, "dec_sample_model_prob_final": 1.0, "dec_sample_model_prob_crossover": 200, "dec_sample_model_prob_divisor": 4, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "fuzz_factor": 0.05, "GMM_components": 12, "log_sigma_min": -10, "log_sigma_max": 10, "log_p_yt_xz_max": 50, "N": 2, "K": 5, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 500, "z_logit_clip_divisor": 5, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["value"]}, "BICYCLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}, "BICYCLE": {"velocity": ["x", "y"]}, "VEHICLE": 
{"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_radius": 0.0, "use_map_encoding": true, "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes"}


@@ -1 +0,0 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 0.0005, "learning_decay_rate": 0.9995, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_context": 120, "map_enc_num_layers": 4, "map_enc_hidden_size": 512, "map_enc_output_size": 512, "map_enc_dropout": 0.5, "alpha": 1, "k": 30, "k_eval": 50, "use_iwae": false, "kl_exact": true, "kl_min": 0.07, "kl_weight": 5.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 500, "kl_sigmoid_divisor": 4, "inf_warmup": 1.0, "inf_warmup_start": 1.0, "inf_warmup_crossover": 1500, "inf_warmup_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.5}, "MLP_dropout_keep_prob": 0.9, "rnn_io_dropout_keep_prob": 1.0, "enc_rnn_dim_multiple_inputs": 8, "enc_rnn_dim_edge": 8, "enc_rnn_dim_edge_influence": 8, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 512, "dec_GMM_proj_MLP_dims": null, "sample_model_during_dec": true, "dec_sample_model_prob_start": 1.0, "dec_sample_model_prob_final": 1.0, "dec_sample_model_prob_crossover": 200, "dec_sample_model_prob_divisor": 4, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "fuzz_factor": 0.05, "GMM_components": 12, "log_sigma_min": -10, "log_sigma_max": 10, "log_p_yt_xz_max": 50, "N": 2, "K": 5, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 500, "z_logit_clip_divisor": 5, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["value"]}, "BICYCLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y", "m"], "acceleration": ["x", "y", "m"], "heading": ["value"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}, "BICYCLE": {"velocity": ["x", "y"]}, "VEHICLE": 
{"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_radius": 20.0, "use_map_encoding": false, "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": true}

@@ -1 +0,0 @@
Subproject commit f3594b967cbf42396da5c6cb08bd714437b53111


@@ -1,491 +0,0 @@
import sys
import os
import numpy as np
import pandas as pd
import pickle
import json
from tqdm import tqdm
from pyquaternion import Quaternion
from kalman_filter import LinearPointMass, NonlinearKinematicBicycle
from scipy.integrate import cumtrapz
from scipy.ndimage.morphology import binary_dilation, generate_binary_structure
nu_path = './nuscenes-devkit/python-sdk/'
#op_path = './pytorch-openpose/python/'
sys.path.append(nu_path)
sys.path.append("../../code")
#sys.path.append(op_path)
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap
from data import Environment, Scene, Node, BicycleNode, Position, Velocity, Acceleration, ActuatorAngle, Map, Scalar
scene_blacklist = [3, 12, 18, 19, 33, 35, 36, 41, 45, 50, 54, 55, 61, 120, 121, 123, 126, 132, 133, 134, 149,
154, 159, 196, 268, 278, 351, 365, 367, 368, 369, 372, 376, 377, 382, 385, 499, 515, 517,
945, 947, 952, 955, 962, 963, 968] + [969]
types = ['PEDESTRIAN',
'BICYCLE',
'VEHICLE']
standardization = {
'PEDESTRIAN': {
'position': {
'x': {'mean': 0, 'std': 25},
'y': {'mean': 0, 'std': 25}
},
'velocity': {
'x': {'mean': 0, 'std': 2},
'y': {'mean': 0, 'std': 2}
},
'acceleration': {
'x': {'mean': 0, 'std': 1},
'y': {'mean': 0, 'std': 1}
},
'heading': {
'value': {'mean': 0, 'std': np.pi},
'derivative': {'mean': 0, 'std': np.pi / 4}
}
},
'BICYCLE': {
'position': {
'x': {'mean': 0, 'std': 50},
'y': {'mean': 0, 'std': 50}
},
'velocity': {
'x': {'mean': 0, 'std': 6},
'y': {'mean': 0, 'std': 6},
'm': {'mean': 0, 'std': 6}
},
'acceleration': {
'x': {'mean': 0, 'std': 4},
'y': {'mean': 0, 'std': 4},
'm': {'mean': 0, 'std': 4}
},
'actuator_angle': {
'steering_angle': {'mean': 0, 'std': np.pi/2}
},
'heading': {
'value': {'mean': 0, 'std': np.pi},
'derivative': {'mean': 0, 'std': np.pi / 4}
}
},
'VEHICLE': {
'position': {
'x': {'mean': 0, 'std': 100},
'y': {'mean': 0, 'std': 100}
},
'velocity': {
'x': {'mean': 0, 'std': 20},
'y': {'mean': 0, 'std': 20},
'm': {'mean': 0, 'std': 20}
},
'acceleration': {
'x': {'mean': 0, 'std': 4},
'y': {'mean': 0, 'std': 4},
'm': {'mean': 0, 'std': 4}
},
'actuator_angle': {
'steering_angle': {'mean': 0, 'std': np.pi/2}
},
'heading': {
'value': {'mean': 0, 'std': np.pi},
'derivative': {'mean': 0, 'std': np.pi / 4}
}
}
}
def inverse_np_gradient(f, dx, F0=0.):
N = f.shape[0]
return F0 + np.hstack((np.zeros((N, 1)), cumtrapz(f, axis=1, dx=dx)))
def integrate_trajectory(v, x0, dt):
xd_ = inverse_np_gradient(v[..., 0], dx=dt, F0=x0[0])
yd_ = inverse_np_gradient(v[..., 1], dx=dt, F0=x0[1])
integrated = np.stack([xd_, yd_], axis=2)
return integrated
def integrate_heading_model(a, dh, h0, x0, v0, dt):
h = inverse_np_gradient(dh, dx=dt, F0=h0)
v_m = inverse_np_gradient(a, dx=dt, F0=v0)
vx = np.cos(h) * v_m
vy = np.sin(h) * v_m
v = np.stack((vx, vy), axis=2)
return integrate_trajectory(v, x0, dt)
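inverse_np_gradient above undoes a sampled derivative via cumulative trapezoidal integration. The sketch below replaces scipy's cumtrapz with an explicit numpy cumulative sum (so the example is self-contained) and shows the round trip on a quadratic, for which trapezoidal integration of the linear derivative is exact at the sample points:

```python
import numpy as np

def cumtrapz_np(f, dx):
    # Cumulative trapezoidal integral along the last axis (like scipy's cumtrapz).
    return np.cumsum((f[..., 1:] + f[..., :-1]) * 0.5 * dx, axis=-1)

def inverse_np_gradient(f, dx, F0=0.0):
    # Same shape convention as the function above: rows are trajectories.
    N = f.shape[0]
    return F0 + np.hstack((np.zeros((N, 1)), cumtrapz_np(f, dx)))

dt = 0.5
t = np.arange(0, 5, dt)
v = 2.0 * t[None, :]                 # derivative of x(t) = t^2
x = inverse_np_gradient(v, dx=dt, F0=0.0)
# The trapezoid rule is exact for a linear integrand, so x(t) = t^2 is recovered.
assert np.allclose(x[0], t ** 2)
```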
if __name__ == "__main__":
num_global_straight = 0
num_global_curve = 0
test = False
if sys.argv[1] == 'mini':
data_path = './raw_data/mini'
nusc = NuScenes(version='v1.0-mini', dataroot=data_path, verbose=True)
add = "_mini"
train_scenes = nusc.scene[0:7]
val_scenes = nusc.scene[7:]
test_scenes = []
elif sys.argv[1] == 'test':
test = True
data_path = './raw_data'
nusc = NuScenes(version='v1.0-test', dataroot=data_path, verbose=True)
train_scenes = []
val_scenes = []
test_scenes = nusc.scene
with open(os.path.join('./raw_data/results_test_megvii.json'), 'r') as test_json:
test_annotations = json.load(test_json)
else:
data_path = '/home/timsal/Documents/code/GenTrajectron_nuScenes_ssh/data/nuScenes/raw_data'
nusc = NuScenes(version='v1.0-trainval', dataroot=data_path, verbose=True)
add = ""
train_scenes = nusc.scene[0:700]
val_scenes = nusc.scene[700:]
test_scenes = []
for data_class, nuscenes in [('train', train_scenes), ('val', val_scenes), ('test', test_scenes)]:
print(f"Processing data class {data_class}")
data_dict_path = os.path.join('../processed', '_'.join(['nuScenes', data_class])+ 'samp.pkl')
env = Environment(node_type_list=types, standardization=standardization)
attention_radius = dict()
attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.PEDESTRIAN)] = 3.0
attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.VEHICLE)] = 20.0
attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.BICYCLE)] = 10.0
attention_radius[(env.NodeType.VEHICLE, env.NodeType.PEDESTRIAN)] = 20.0
attention_radius[(env.NodeType.VEHICLE, env.NodeType.VEHICLE)] = 20.0
attention_radius[(env.NodeType.VEHICLE, env.NodeType.BICYCLE)] = 20.0
attention_radius[(env.NodeType.BICYCLE, env.NodeType.PEDESTRIAN)] = 10.0
attention_radius[(env.NodeType.BICYCLE, env.NodeType.VEHICLE)] = 20.0
attention_radius[(env.NodeType.BICYCLE, env.NodeType.BICYCLE)] = 10.0
env.attention_radius = attention_radius
scenes = []
pbar = tqdm(nuscenes, ncols=100)
for nuscene in pbar:
scene_id = int(nuscene['name'].replace('scene-', ''))
if scene_id in scene_blacklist: # Some scenes have bad localization
continue
if not (scene_id == 1002 or scene_id == 234):  # NOTE: debug filter; only these two scenes are processed
continue
data = pd.DataFrame(columns=['frame_id',
'type',
'node_id',
'robot',
'x', 'y', 'z',
'length',
'width',
'height',
'heading',
'orientation'])
sample_token = nuscene['first_sample_token']
sample = nusc.get('sample', sample_token)
frame_id = 0
while sample['next']:
if not test:
annotation_tokens = sample['anns']
else:
annotation_tokens = test_annotations['results'][sample['token']]
for annotation_token in annotation_tokens:
if not test:
annotation = nusc.get('sample_annotation', annotation_token)
category = annotation['category_name']
if len(annotation['attribute_tokens']):
attribute = nusc.get('attribute', annotation['attribute_tokens'][0])['name']
if 'pedestrian' in category and not 'stroller' in category and not 'wheelchair' in category:
our_category = env.NodeType.PEDESTRIAN
elif ('vehicle.bicycle' in category) and 'with_rider' in attribute:
continue  # NOTE: bicycles are skipped here, so the assignment below never runs
# our_category = env.NodeType.BICYCLE
elif 'vehicle' in category and 'bicycle' not in category and 'motorcycle' not in category and 'parked' not in attribute:
our_category = env.NodeType.VEHICLE
# elif ('vehicle.motorcycle' in category) and 'with_rider' in attribute:
# our_category = env.NodeType.VEHICLE
else:
continue
else:
annotation = annotation_token
category = annotation['tracking_name']
attribute = ""#annotation['attribute_name']
if 'pedestrian' in category :
our_category = env.NodeType.PEDESTRIAN
elif (('car' in category or 'bus' in category or 'construction_vehicle' in category) and 'parked' not in attribute):
our_category = env.NodeType.VEHICLE
# elif ('vehicle.motorcycle' in category) and 'with_rider' in attribute:
# our_category = env.NodeType.VEHICLE
else:
continue
data_point = pd.Series({'frame_id': frame_id,
'type': our_category,
'node_id': annotation['instance_token'] if not test else annotation['tracking_id'],
'robot': False,
'x': annotation['translation'][0],
'y': annotation['translation'][1],
'z': annotation['translation'][2],
'length': annotation['size'][0],
'width': annotation['size'][1],
'height': annotation['size'][2],
'heading': Quaternion(annotation['rotation']).yaw_pitch_roll[0],
'orientation': None})
data = data.append(data_point, ignore_index=True)
# Ego Vehicle
our_category = env.NodeType.VEHICLE
sample_data = nusc.get('sample_data', sample['data']['CAM_FRONT'])
annotation = nusc.get('ego_pose', sample_data['ego_pose_token'])
data_point = pd.Series({'frame_id': frame_id,
'type': our_category,
'node_id': 'ego',
'robot': True,
'x': annotation['translation'][0],
'y': annotation['translation'][1],
'z': annotation['translation'][2],
'length': 4,
'width': 1.7,
'height': 1.5,
'heading': Quaternion(annotation['rotation']).yaw_pitch_roll[0],
'orientation': None})
data = data.append(data_point, ignore_index=True)
sample = nusc.get('sample', sample['next'])
frame_id += 1
if len(data.index) == 0:
continue
data.sort_values('frame_id', inplace=True)
max_timesteps = data['frame_id'].max()
x_min = np.round(data['x'].min() - 50)
x_max = np.round(data['x'].max() + 50)
y_min = np.round(data['y'].min() - 50)
y_max = np.round(data['y'].max() + 50)
data['x'] = data['x'] - x_min
data['y'] = data['y'] - y_min
scene = Scene(timesteps=max_timesteps + 1, dt=0.5, name=str(scene_id))
# Generate Maps
map_name = nusc.get('log', nuscene['log_token'])['location']
nusc_map = NuScenesMap(dataroot=data_path, map_name=map_name)
type_map = dict()
x_size = x_max - x_min
y_size = y_max - y_min
patch_box = (x_min + 0.5 * (x_max - x_min), y_min + 0.5 * (y_max - y_min), y_size, x_size)
patch_angle = 0 # Default orientation where North is up
canvas_size = (np.round(3 * y_size).astype(int), np.round(3 * x_size).astype(int))
homography = np.array([[3., 0., 0.], [0., 3., 0.], [0., 0., 3.]])
layer_names = ['lane', 'road_segment', 'drivable_area', 'road_divider', 'lane_divider', 'stop_line',
'ped_crossing', 'stop_line', 'ped_crossing', 'walkway']  # NOTE: duplicate entries kept; map_mask indices below assume this 10-layer order
map_mask = (nusc_map.get_map_mask(patch_box, patch_angle, layer_names, canvas_size) * 255.0).astype(
np.uint8)
map_mask = np.swapaxes(map_mask, 1, 2) # x axis comes first
# PEDESTRIANS
map_mask_pedestrian = np.stack((map_mask[9], map_mask[8], np.max(map_mask[:3], axis=0)), axis=2)
type_map['PEDESTRIAN'] = Map(data=map_mask_pedestrian, homography=homography,
description=', '.join(layer_names))
# Bicycles
map_mask_bicycles = np.stack((map_mask[9], map_mask[8], np.max(map_mask[:3], axis=0)), axis=2)
type_map['BICYCLE'] = Map(data=map_mask_bicycles, homography=homography, description=', '.join(layer_names))
# VEHICLES
map_mask_vehicle = np.stack((np.max(map_mask[:3], axis=0), map_mask[3], map_mask[4]), axis=2)
type_map['VEHICLE'] = Map(data=map_mask_vehicle, homography=homography, description=', '.join(layer_names))
map_mask_plot = np.stack(((np.max(map_mask[:3], axis=0) - (map_mask[3] + 0.5 * map_mask[4]).clip(
max=255)).clip(min=0).astype(np.uint8), map_mask[8], map_mask[9]), axis=2)
type_map['PLOT'] = Map(data=map_mask_plot, homography=homography, description=', '.join(layer_names))
scene.map = type_map
del map_mask
del map_mask_pedestrian
del map_mask_vehicle
del map_mask_bicycles
del map_mask_plot
for node_id in pd.unique(data['node_id']):
node_df = data[data['node_id'] == node_id]
if node_df['x'].shape[0] < 2:
continue
if not np.all(np.diff(node_df['frame_id']) == 1):
#print('Occlusion')
continue # TODO Make better
node_values = node_df['x'].values
if node_df.iloc[0]['type'] == env.NodeType.PEDESTRIAN:
node = Node(type=node_df.iloc[0]['type'])
else:
node = BicycleNode(type=node_df.iloc[0]['type'])
node.first_timestep = node_df['frame_id'].iloc[0]
node.position = Position(node_df['x'].values, node_df['y'].values)
node.velocity = Velocity.from_position(node.position, scene.dt)
node.velocity.m = np.linalg.norm(np.vstack((node.velocity.x, node.velocity.y)), axis=0)
node.acceleration = Acceleration.from_velocity(node.velocity, scene.dt)
node.heading = Scalar(node_df['heading'].values)
heading_t = node_df['heading'].values.copy()
shifted_heading = np.zeros_like(node.heading.value)
shifted_heading[0] = node.heading.value[0]
for i in range(1, len(node.heading.value)):
if not (np.sign(node.heading.value[i]) == np.sign(node.heading.value[i - 1])) and np.abs(
node.heading.value[i]) > np.pi / 2:
shifted_heading[i] = shifted_heading[i - 1] + (
node.heading.value[i] - node.heading.value[i - 1]) - np.sign(
(node.heading.value[i] - node.heading.value[i - 1])) * 2 * np.pi
else:
shifted_heading[i] = shifted_heading[i - 1] + (
node.heading.value[i] - node.heading.value[i - 1])
node.heading.value = shifted_heading
node.length = node_df.iloc[0]['length']
node.width = node_df.iloc[0]['width']
if node_df.iloc[0]['robot'] == True:
node.is_robot = True
if node_df.iloc[0]['type'] == env.NodeType.PEDESTRIAN:
filter_ped = LinearPointMass(dt=scene.dt)
for i in range(len(node.position.x)):
if i == 0:  # initialize KF
P_matrix = np.identity(4)
elif i < len(node.position.x):
# assign new est values
node.position.x[i] = x_vec_est_new[0][0]
node.velocity.x[i] = x_vec_est_new[1][0]
node.position.y[i] = x_vec_est_new[2][0]
node.velocity.y[i] = x_vec_est_new[3][0]
if i < len(node.position.x) - 1: # no action on last data
# filtering
x_vec_est = np.array([[node.position.x[i]],
[node.velocity.x[i]],
[node.position.y[i]],
[node.velocity.y[i]]])
z_new = np.array([[node.position.x[i+1]],
[node.position.y[i+1]]])
x_vec_est_new, P_matrix_new = filter_ped.predict_and_update(
x_vec_est=x_vec_est,
u_vec=np.array([[0.], [0.]]),
P_matrix=P_matrix,
z_new=z_new
)
P_matrix = P_matrix_new
else:
filter_veh = NonlinearKinematicBicycle(lf=node.length*0.6, lr=node.length*0.4, dt=scene.dt)
for i in range(len(node.position.x)):
if i == 0:  # initialize KF
# initial P_matrix
P_matrix = np.identity(4)
elif i < len(node.position.x):
# assign new est values
node.position.x[i] = x_vec_est_new[0][0]
node.position.y[i] = x_vec_est_new[1][0]
node.heading.value[i] = x_vec_est_new[2][0]
node.velocity.m[i] = x_vec_est_new[3][0]
if i < len(node.position.x) - 1: # no action on last data
# filtering
x_vec_est = np.array([[node.position.x[i]],
[node.position.y[i]],
[node.heading.value[i]],
[node.velocity.m[i]]])
z_new = np.array([[node.position.x[i+1]],
[node.position.y[i+1]],
[node.heading.value[i+1]],
[node.velocity.m[i+1]]])
x_vec_est_new, P_matrix_new = filter_veh.predict_and_update(
x_vec_est=x_vec_est,
u_vec=np.array([[0.], [0.]]),
P_matrix=P_matrix,
z_new=z_new
)
P_matrix = P_matrix_new
v_tmp = node.velocity.m
node.velocity = Velocity.from_position(node.position, scene.dt)
node.velocity.m = v_tmp
#if (np.abs(np.linalg.norm(np.vstack((node.velocity.x, node.velocity.y)), axis=0) - v_tmp) > 0.4).any():
# print(np.abs(np.linalg.norm(np.vstack((node.velocity.x, node.velocity.y)), axis=0) - v_tmp))
node.acceleration = Acceleration.from_velocity(node.velocity, scene.dt)
node.acceleration.m = np.gradient(v_tmp, scene.dt)
node.heading.derivative = np.gradient(node.heading.value, scene.dt)
node.heading.value = (node.heading.value + np.pi) % (2.0 * np.pi) - np.pi
if node_df.iloc[0]['type'] == env.NodeType.VEHICLE:
node_pos = np.stack((node.position.x, node.position.y), axis=1)
node_pos_map = scene.map[env.NodeType.VEHICLE.name].to_map_points(node_pos)
node_pos_int = np.round(node_pos_map).astype(int)
dilated_map = binary_dilation(scene.map[env.NodeType.VEHICLE.name].data[..., 0], generate_binary_structure(2, 2))
if np.sum((dilated_map[node_pos_int[:, 0], node_pos_int[:, 1]] == 0))/node_pos_int.shape[0] > 0.1:
del node
continue # Out of map
if not node_df.iloc[0]['type'] == env.NodeType.PEDESTRIAN:
# Re Integrate:
i_pos = integrate_heading_model(np.array([node.acceleration.m[1:]]),
np.array([node.heading.derivative[1:]]),
node.heading.value[0],
np.vstack((node.position.x[0], node.position.y[0])),
node.velocity.m[0], 0.5)
#if (np.abs(node.heading.derivative) > np.pi/8).any():
# print(np.abs(node.heading.derivative).max())
scene.nodes.append(node)
if node.is_robot is True:
scene.robot = node
robot = False
num_heading_changed = 0
num_moving_vehicles = 0
for node in scene.nodes:
node.description = "straight"
num_global_straight += 1
if node.type == env.NodeType.VEHICLE:
if np.linalg.norm((node.position.x[0] - node.position.x[-1], node.position.y[0] - node.position.y[-1])) > 10:
num_moving_vehicles += 1
if np.abs(node.heading.value[0] - node.heading.value[-1]) > np.pi / 6:
if not np.sign(node.heading.value[0]) == np.sign(node.heading.value[-1]) and np.abs(node.heading.value[0]) > 1/2 * np.pi:
if (node.heading.value[0] - node.heading.value[-1]) - np.sign((node.heading.value[0] - node.heading.value[-1])) * 2 * np.pi > np.pi / 6:
node.description = "curve"
num_global_curve += 1
num_global_straight -= 1
num_heading_changed += 1
else:
node.description = "curve"
num_global_curve += 1
num_global_straight -= 1
num_heading_changed += 1
if node.is_robot:
robot = True
if num_moving_vehicles > 0 and num_heading_changed / num_moving_vehicles > 0.4:
scene.description = "curvy"
else:
scene.description = "straight"
if robot:  # If we don't have an ego vehicle, there was bad localization
pbar.set_description(str(scene))
scenes.append(scene)
del data
env.scenes = scenes
if len(scenes) > 0:
with open(data_dict_path, 'wb') as f:
pickle.dump(env, f, protocol=pickle.HIGHEST_PROTOCOL)
print(num_global_straight)
print(num_global_curve)

Binary file not shown.

File diff suppressed because one or more lines are too long


@@ -0,0 +1,456 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import glob\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.ticker as ticker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vehicles"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"FDE Results for: int_ee\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.15211091398370916\n",
"RB Viols @1.0s: 0.0024947383125349227\n",
"FDE @1.0s: 0.06731242196606527\n",
"KDE @1.0s: -4.282745726122304\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.6665500898672814\n",
"RB Viols @2.0s: 0.006711305643509033\n",
"FDE @2.0s: 0.4448624482250238\n",
"KDE @2.0s: -2.817182897766484\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.6002716865301074\n",
"RB Viols @3.0s: 0.03183521139877072\n",
"FDE @3.0s: 1.1269273173570473\n",
"KDE @3.0s: -1.672070802164807\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 2.9889876939803504\n",
"RB Viols @4.0s: 0.08143642515676414\n",
"FDE @4.0s: 2.1726876209006143\n",
"KDE @4.0s: -0.7623941477599536\n",
"----------------------------------------------\n",
"\n",
"FDE Results for: int_ee_me\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.17146557916606217\n",
"RB Viols @1.0s: 0.002893866020984665\n",
"FDE @1.0s: 0.06825468723974978\n",
"KDE @1.0s: -4.174627329473856\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.6874934647937203\n",
"RB Viols @2.0s: 0.006347814614763767\n",
"FDE @2.0s: 0.4473287549407142\n",
"KDE @2.0s: -2.7424043655184898\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.6150508554078604\n",
"RB Viols @3.0s: 0.027944558266592166\n",
"FDE @3.0s: 1.1370181075818808\n",
"KDE @3.0s: -1.616241617749356\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 2.9834139311814645\n",
"RB Viols @4.0s: 0.07611557086980816\n",
"FDE @4.0s: 2.2067347028461923\n",
"KDE @4.0s: -0.7050671606779637\n",
"----------------------------------------------\n",
"\n",
"FDE Results for: vel_ee_me\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.21398885662219846\n",
"RB Viols @1.0s: 0.0024283075681380767\n",
"FDE @1.0s: 0.1792272294232774\n",
"KDE @1.0s: 0.8111385940397233\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.715463329547642\n",
"RB Viols @2.0s: 0.006407897187558204\n",
"FDE @2.0s: 0.5706283482566946\n",
"KDE @2.0s: 0.051893685490453464\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.5440473025828012\n",
"RB Viols @3.0s: 0.02805111131806047\n",
"FDE @3.0s: 1.2515989489585615\n",
"KDE @3.0s: 0.371638561867866\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 2.714255228812044\n",
"RB Viols @4.0s: 0.06920216365555348\n",
"FDE @4.0s: 2.2400267464847876\n",
"KDE @4.0s: 0.8726346089263975\n",
"----------------------------------------------\n",
"\n",
"FDE Results for: robot\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.1295215269389519\n",
"RB Viols @1.0s: 0.0026757717999638924\n",
"FDE @1.0s: 0.07820393052295552\n",
"KDE @1.0s: -3.906838146881899\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.45962341869964574\n",
"RB Viols @2.0s: 0.0053363964614551365\n",
"FDE @2.0s: 0.3403511030418785\n",
"KDE @2.0s: -2.7593676749477294\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.02267032097404\n",
"RB Viols @3.0s: 0.016484509839321176\n",
"FDE @3.0s: 0.805915047871091\n",
"KDE @3.0s: -1.7502450775203158\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 1.8380306576706953\n",
"RB Viols @4.0s: 0.042144791478606246\n",
"FDE @4.0s: 1.4979755853506684\n",
"KDE @4.0s: -0.9291549495198915\n",
"----------------------------------------------\n",
"\n"
]
}
],
"source": [
"for model in ['int_ee', 'int_ee_me', 'vel_ee', 'robot']:\n",
" print(f\"FDE Results for: {model}\")\n",
" for ph in [2, 4, 6, 8]:\n",
" print(f\"-----------------PH: {ph} -------------------\")\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}_{ph}_fde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" \n",
" print(f\"FDE Mean @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].mean()}\")\n",
" del perf_df \n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}_{ph}_rv_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"RB Viols @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].sum() / (len(perf_df['value'][perf_df['type'] == 'full'].index)*2000)}\")\n",
" del perf_df\n",
"\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_fde_most_likely_z.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"FDE @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'ml'].mean()}\") \n",
" del perf_df\n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_kde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"KDE @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].mean()}\") \n",
" print(\"----------------------------------------------\")\n",
" del perf_df\n",
" print(\"\")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"FDE Results for: int_ee_me_no_ego\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.16815279412540554\n",
"RB Viols @1.0s: 0.002895014589929844\n",
"FDE @1.0s: 0.06937045846177256\n",
"KDE @1.0s: -4.262019931215572\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.6655379721067188\n",
"RB Viols @2.0s: 0.006153364996585336\n",
"FDE @2.0s: 0.4359008486971371\n",
"KDE @2.0s: -2.856656149202157\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.546091556287448\n",
"RB Viols @3.0s: 0.027780530204259017\n",
"FDE @3.0s: 1.0896218245514429\n",
"KDE @3.0s: -1.7563896369106704\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 2.8358865412257397\n",
"RB Viols @4.0s: 0.07581256596510834\n",
"FDE @4.0s: 2.0939721352439022\n",
"KDE @4.0s: -0.8690706892091696\n",
"----------------------------------------------\n",
"\n",
"FDE Results for: robot\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.1295215269389519\n",
"RB Viols @1.0s: 0.0026757717999638924\n",
"FDE @1.0s: 0.07820393052295552\n",
"KDE @1.0s: -3.906838146881899\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.45962341869964574\n",
"RB Viols @2.0s: 0.0053363964614551365\n",
"FDE @2.0s: 0.3403511030418785\n",
"KDE @2.0s: -2.7593676749477294\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 1.02267032097404\n",
"RB Viols @3.0s: 0.016484509839321176\n",
"FDE @3.0s: 0.805915047871091\n",
"KDE @3.0s: -1.7502450775203158\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 1.8380306576706953\n",
"RB Viols @4.0s: 0.042144791478606246\n",
"FDE @4.0s: 1.4979755853506684\n",
"KDE @4.0s: -0.9291549495198915\n",
"----------------------------------------------\n",
"\n"
]
}
],
"source": [
"for model in ['int_ee_me_no_ego', 'robot']:\n",
" print(f\"FDE Results for: {model}\")\n",
" for ph in [2, 4, 6, 8]:\n",
" print(f\"-----------------PH: {ph} -------------------\")\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}_{ph}_fde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" \n",
" print(f\"FDE Mean @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].mean()}\")\n",
" del perf_df \n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}_{ph}_rv_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"RB Viols @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].sum() / (len(perf_df['value'][perf_df['type'] == 'full'].index)*2000)}\")\n",
" del perf_df\n",
"\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_fde_most_likely_z.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"FDE @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'ml'].mean()}\") \n",
" del perf_df\n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_kde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"KDE @{ph*0.5}s: {perf_df['value'][perf_df['type'] == 'full'].mean()}\") \n",
" print(\"----------------------------------------------\")\n",
" del perf_df\n",
" print(\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pedestrians"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"FDE Results for: int_ee_me_ped\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.03182535279935429\n",
"ADE Mean @1.0s: 0.034975306849922005\n",
"KDE Mean @1.0s: -5.577685316351455\n",
"FDE @1.0s: 0.014470668260911932\n",
"ADE @1.0s: 0.021401672730783382\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.21879313416975887\n",
"ADE Mean @2.0s: 0.10080166010252017\n",
"KDE Mean @2.0s: -3.9582677566570568\n",
"FDE @2.0s: 0.1656927524369561\n",
"ADE @2.0s: 0.07265244240382243\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 0.48124106327369537\n",
"ADE Mean @3.0s: 0.20455084715465008\n",
"KDE Mean @3.0s: -2.768212012793919\n",
"FDE @3.0s: 0.36991744855974507\n",
"ADE @3.0s: 0.1538591151610063\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 0.7897925016736143\n",
"ADE Mean @4.0s: 0.3309282373807616\n",
"KDE Mean @4.0s: -1.891451489507079\n",
"FDE @4.0s: 0.61780508431085\n",
"ADE @4.0s: 0.2535511093237994\n",
"----------------------------------------------\n",
"\n",
"FDE Results for: vel_ee_ped\n",
"-----------------PH: 2 -------------------\n",
"FDE Mean @1.0s: 0.05470159146400349\n",
"ADE Mean @1.0s: 0.04723856023122099\n",
"KDE Mean @1.0s: -2.693286369409014\n",
"FDE @1.0s: 0.03272132837594798\n",
"ADE @1.0s: 0.03440844320849249\n",
"----------------------------------------------\n",
"-----------------PH: 4 -------------------\n",
"FDE Mean @2.0s: 0.235549582909888\n",
"ADE Mean @2.0s: 0.11606559815399368\n",
"KDE Mean @2.0s: -2.4601640447400186\n",
"FDE @2.0s: 0.17398568920641183\n",
"ADE @2.0s: 0.08409326559182477\n",
"----------------------------------------------\n",
"-----------------PH: 6 -------------------\n",
"FDE Mean @3.0s: 0.4833427705400407\n",
"ADE Mean @3.0s: 0.21676831990727596\n",
"KDE Mean @3.0s: -1.7550238928047612\n",
"FDE @3.0s: 0.3705610422470493\n",
"ADE @3.0s: 0.16234687699669642\n",
"----------------------------------------------\n",
"-----------------PH: 8 -------------------\n",
"FDE Mean @4.0s: 0.7761647665317681\n",
"ADE Mean @4.0s: 0.3376368652760976\n",
"KDE Mean @4.0s: -1.0900967343150951\n",
"FDE @4.0s: 0.6033992852865975\n",
"ADE @4.0s: 0.25754615271005243\n",
"----------------------------------------------\n",
"\n"
]
}
],
"source": [
"for model in ['int_ee_me_ped', 'vel_ee_ped']:\n",
" print(f\"FDE Results for: {model}\")\n",
" for ph in [2, 4, 6, 8]:\n",
" print(f\"-----------------PH: {ph} -------------------\")\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_fde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"FDE Mean @{ph*0.5}s: {perf_df['value'][perf_df['metric'] == 'fde'].mean()}\")\n",
" del perf_df \n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_ade_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"ADE Mean @{ph*0.5}s: {perf_df['value'][perf_df['metric'] == 'ade'].mean()}\")\n",
" del perf_df\n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_kde_full.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"KDE Mean @{ph*0.5}s: {perf_df['value'][perf_df['metric'] == 'kde'].mean()}\")\n",
" del perf_df \n",
"\n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_fde_most_likely_z.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"FDE @{ph*0.5}s: {perf_df['value'][perf_df['metric'] == 'fde'].mean()}\") \n",
" del perf_df\n",
" \n",
" perf_df = pd.DataFrame()\n",
" for f in glob.glob(f\"results/{model}*_{ph}_ade_most_likely_z.csv\"):\n",
" dataset_df = pd.read_csv(f)\n",
" dataset_df['model'] = model\n",
" perf_df = perf_df.append(dataset_df, ignore_index=True)\n",
" del perf_df['Unnamed: 0']\n",
" print(f\"ADE @{ph*0.5}s: {perf_df['value'][perf_df['metric'] == 'ade'].mean()}\") \n",
" del perf_df\n",
" print(\"----------------------------------------------\")\n",
" print(\"\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:trajectron] *",
"language": "python",
"name": "conda-env-trajectron-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
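The notebook cells above all follow the same aggregation pattern: glob the per-dataset CSVs, stack them into one DataFrame, and average (or sum) the `value` column. They use `DataFrame.append`, which was deprecated and later removed in newer pandas; the same pattern with `pd.concat`, on synthetic stand-in frames, looks like this (a sketch, not part of the commit):

```python
import pandas as pd

# stand-ins for the per-dataset CSVs read via glob in the notebook
frames = [pd.DataFrame({'value': [1.0, 2.0], 'type': 'full'}),
          pd.DataFrame({'value': [3.0], 'type': 'full'})]

# pd.concat replaces the deprecated perf_df.append(...) loop
perf_df = pd.concat(frames, ignore_index=True)
fde_mean = perf_df['value'][perf_df['type'] == 'full'].mean()
print(fde_mean)  # 2.0
```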

@@ -0,0 +1 @@
Subproject commit 1050c3d11b5413a1fc5ca4e73a9e426747263297

View File

@@ -0,0 +1,185 @@
import sys
import os
import dill
import json
import argparse
import torch
import numpy as np
import pandas as pd

sys.path.append("../../trajectron")
from tqdm import tqdm
from model.model_registrar import ModelRegistrar
from model.trajectron import Trajectron
import evaluation
import utils
from scipy.interpolate import RectBivariateSpline

seed = 0
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

parser = argparse.ArgumentParser()
parser.add_argument("--model", help="model full path", type=str)
parser.add_argument("--checkpoint", help="model checkpoint to evaluate", type=int)
parser.add_argument("--data", help="full path to data file", type=str)
parser.add_argument("--output_path", help="path to output csv file", type=str)
parser.add_argument("--output_tag", help="name tag for output file", type=str)
parser.add_argument("--node_type", help="node type to evaluate", type=str)
parser.add_argument("--prediction_horizon", nargs='+', help="prediction horizon", type=int, default=None)
args = parser.parse_args()


def compute_road_violations(predicted_trajs, map, channel):
    obs_map = 1 - map.data[..., channel, :, :] / 255

    interp_obs_map = RectBivariateSpline(range(obs_map.shape[0]),
                                         range(obs_map.shape[1]),
                                         obs_map,
                                         kx=1, ky=1)

    old_shape = predicted_trajs.shape
    pred_trajs_map = map.to_map_points(predicted_trajs.reshape((-1, 2)))

    traj_obs_values = interp_obs_map(pred_trajs_map[:, 0], pred_trajs_map[:, 1], grid=False)
    traj_obs_values = traj_obs_values.reshape((old_shape[0], old_shape[1], old_shape[2]))
    num_viol_trajs = np.sum(traj_obs_values.max(axis=2) > 0, dtype=float)

    return num_viol_trajs


def load_model(model_dir, env, ts=100):
    model_registrar = ModelRegistrar(model_dir, 'cpu')
    model_registrar.load_models(ts)
    with open(os.path.join(model_dir, 'config.json'), 'r') as config_json:
        hyperparams = json.load(config_json)

    trajectron = Trajectron(model_registrar, hyperparams, None, 'cpu')

    trajectron.set_environment(env)
    trajectron.set_annealing_params()
    return trajectron, hyperparams


if __name__ == "__main__":
    with open(args.data, 'rb') as f:
        env = dill.load(f, encoding='latin1')

    eval_stg, hyperparams = load_model(args.model, env, ts=args.checkpoint)

    if 'override_attention_radius' in hyperparams:
        for attention_radius_override in hyperparams['override_attention_radius']:
            node_type1, node_type2, attention_radius = attention_radius_override.split(' ')
            env.attention_radius[(node_type1, node_type2)] = float(attention_radius)

    scenes = env.scenes

    print("-- Preparing Node Graph")
    for scene in tqdm(scenes):
        scene.calculate_scene_graph(env.attention_radius,
                                    hyperparams['edge_addition_filter'],
                                    hyperparams['edge_removal_filter'])

    for ph in args.prediction_horizon:
        print(f"Prediction Horizon: {ph}")
        max_hl = hyperparams['maximum_history_length']

        with torch.no_grad():
            ############### MOST LIKELY Z ###############
            eval_ade_batch_errors = np.array([])
            eval_fde_batch_errors = np.array([])
            print("-- Evaluating GMM Z Mode (Most Likely)")
            for scene in tqdm(scenes):
                timesteps = np.arange(scene.timesteps)
                predictions = eval_stg.predict(scene,
                                               timesteps,
                                               ph,
                                               num_samples=1,
                                               min_future_timesteps=8,
                                               z_mode=True,
                                               gmm_mode=True,
                                               full_dist=False)  # This will trigger grid sampling

                batch_error_dict = evaluation.compute_batch_statistics(predictions,
                                                                       scene.dt,
                                                                       max_hl=max_hl,
                                                                       ph=ph,
                                                                       node_type_enum=env.NodeType,
                                                                       map=None,
                                                                       prune_ph_to_future=False,
                                                                       kde=False)

                eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
                eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))

            print(np.mean(eval_fde_batch_errors))
            pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'ml'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_ade_most_likely_z.csv'))
            pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'ml'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_fde_most_likely_z.csv'))

            ############### FULL ###############
            eval_ade_batch_errors = np.array([])
            eval_fde_batch_errors = np.array([])
            eval_kde_nll = np.array([])
            eval_road_viols = np.array([])
            print("-- Evaluating Full")
            for scene in tqdm(scenes):
                timesteps = np.arange(scene.timesteps)
                predictions = eval_stg.predict(scene,
                                               timesteps,
                                               ph,
                                               num_samples=2000,
                                               min_future_timesteps=8,
                                               z_mode=False,
                                               gmm_mode=False,
                                               full_dist=False)

                if not predictions:
                    continue

                prediction_dict, _, _ = utils.prediction_output_to_trajectories(predictions,
                                                                                scene.dt,
                                                                                max_hl,
                                                                                ph,
                                                                                prune_ph_to_future=False)

                eval_road_viols_batch = []
                for t in prediction_dict.keys():
                    for node in prediction_dict[t].keys():
                        if node.type == args.node_type:
                            viols = compute_road_violations(prediction_dict[t][node],
                                                            scene.map[args.node_type],
                                                            channel=0)
                            if viols == 2000:
                                viols = 0

                            eval_road_viols_batch.append(viols)

                eval_road_viols = np.hstack((eval_road_viols, eval_road_viols_batch))

                batch_error_dict = evaluation.compute_batch_statistics(predictions,
                                                                       scene.dt,
                                                                       max_hl=max_hl,
                                                                       ph=ph,
                                                                       node_type_enum=env.NodeType,
                                                                       map=None,
                                                                       prune_ph_to_future=False)

                eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
                eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))
                eval_kde_nll = np.hstack((eval_kde_nll, batch_error_dict[args.node_type]['kde']))

            pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'full'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_ade_full.csv'))
            pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'full'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_fde_full.csv'))
            pd.DataFrame({'value': eval_kde_nll, 'metric': 'kde', 'type': 'full'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_kde_full.csv'))
            pd.DataFrame({'value': eval_road_viols, 'metric': 'road_viols', 'type': 'full'}
                         ).to_csv(os.path.join(args.output_path, args.output_tag + "_" + str(ph) + '_rv_full.csv'))
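The `compute_road_violations` helper in this file bilinearly interpolates an inverted drivable-area map at every predicted point and counts the sampled trajectories that ever touch an off-road cell. The same idea can be restated as a toy, self-contained sketch: a plain pixel-coordinate grid stands in for the `Map` object and its `to_map_points` transform, and the samples axis is simplified to `(num_samples, timesteps, 2)` (all assumptions for illustration only):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def road_violations(trajs, obs_map):
    """trajs: (num_samples, timesteps, 2) already in pixel coordinates.
    obs_map: 1.0 where off-road, 0.0 where drivable."""
    interp = RectBivariateSpline(range(obs_map.shape[0]), range(obs_map.shape[1]),
                                 obs_map, kx=1, ky=1)
    vals = interp(trajs[..., 0].ravel(), trajs[..., 1].ravel(), grid=False)
    vals = vals.reshape(trajs.shape[:2])
    # a trajectory violates if any of its timesteps lands on an obstacle cell
    return float(np.sum(vals.max(axis=1) > 0))

obs = np.zeros((10, 10))
obs[:, 5:] = 1.0                                     # right half is off-road
on_road = np.full((1, 4, 2), 2.0)                    # stays at pixel (2, 2)
off_road = np.tile(np.array([2.0, 8.0]), (1, 4, 1))  # sits at column 8
print(road_violations(on_road, obs))   # 0.0
print(road_violations(off_road, obs))  # 1.0
```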

View File

@@ -8,20 +8,20 @@ from scipy.ndimage import rotate
 import seaborn as sns
 
 from model.model_registrar import ModelRegistrar
-from model.dyn_stg import SpatioTemporalGraphCVAEModel
+from model import Trajectron
 from utils import prediction_output_to_trajectories
 from scipy.integrate import cumtrapz
 
-line_colors = ['#80CBE5', '#375397', '#F05F78', '#ABCB51', '#C8B0B0']
+line_colors = ['#375397', '#F05F78', '#80CBE5', '#ABCB51', '#C8B0B0']
 
-cars = [plt.imread('Car TOP_VIEW 80CBE5.png'),
-        plt.imread('Car TOP_VIEW 375397.png'),
-        plt.imread('Car TOP_VIEW F05F78.png'),
-        plt.imread('Car TOP_VIEW ABCB51.png'),
-        plt.imread('Car TOP_VIEW C8B0B0.png')]
+cars = [plt.imread('icons/Car TOP_VIEW 375397.png'),
+        plt.imread('icons/Car TOP_VIEW F05F78.png'),
+        plt.imread('icons/Car TOP_VIEW 80CBE5.png'),
+        plt.imread('icons/Car TOP_VIEW ABCB51.png'),
+        plt.imread('icons/Car TOP_VIEW C8B0B0.png')]
 
-robot = plt.imread('Car TOP_VIEW ROBOT.png')
+robot = plt.imread('icons/Car TOP_VIEW ROBOT.png')
def load_model(model_dir, env, ts=3999):
@@ -34,11 +34,9 @@ def load_model(model_dir, env, ts=3999):
     if 'incl_robot_node' not in hyperparams:
         hyperparams['incl_robot_node'] = False
 
-    stg = SpatioTemporalGraphCVAEModel(model_registrar,
-                                       hyperparams,
-                                       None, 'cpu')
+    stg = Trajectron(model_registrar, hyperparams, None, 'cpu')
 
-    stg.set_scene_graph(env)
+    stg.set_environment(env)
 
     stg.set_annealing_params()
@@ -71,7 +69,7 @@ def plot_vehicle_nice(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0, y
     node_circle_size = 0.3
     a = []
     i = 0
-    node_list = sorted(histories_dict.keys(), key=lambda x: x.length)
+    node_list = sorted(histories_dict.keys(), key=lambda x: x.id)
     for node in node_list:
         history = histories_dict[node] + np.array([x_min, y_min])
         future = futures_dict[node] + np.array([x_min, y_min])
@@ -87,16 +85,16 @@ def plot_vehicle_nice(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0, y
                     zorder=650,
                     path_effects=[pe.Stroke(linewidth=5, foreground='k'), pe.Normal()])
 
-            for t in range(predictions.shape[1]):
-                sns.kdeplot(predictions[:, t, 0], predictions[:, t, 1],
+            for t in range(predictions.shape[2]):
+                sns.kdeplot(predictions[0, :, t, 0], predictions[0, :, t, 1],
                             ax=ax, shade=True, shade_lowest=False,
                             color=line_colors[i % len(line_colors)], zorder=600, alpha=0.8)
 
-            vel = node.get(ts_key, {'velocity': ['x', 'y']})
+            vel = node.get(np.array([ts_key]), {'velocity': ['x', 'y']})
             h = np.arctan2(vel[0, 1], vel[0, 0])
-            r_img = rotate(cars[i % len(cars)], node.get(ts_key, {'heading': ['value']})[0, 0] * 180 / np.pi,
+            r_img = rotate(cars[i % len(cars)], node.get(np.array([ts_key]), {'heading': ['°']})[0, 0] * 180 / np.pi,
                            reshape=True)
-            oi = OffsetImage(r_img, zoom=0.035, zorder=700)
+            oi = OffsetImage(r_img, zoom=0.025, zorder=700)
             veh_box = AnnotationBbox(oi, (history[-1, 0], history[-1, 1]), frameon=False)
             veh_box.zorder = 700
             ax.add_artist(veh_box)
@@ -104,8 +102,8 @@ def plot_vehicle_nice(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0, y
         else:
             # ax.plot(history[:, 0], history[:, 1], 'k--')
 
-            for t in range(predictions.shape[1]):
-                sns.kdeplot(predictions[:, t, 0], predictions[:, t, 1],
+            for t in range(predictions.shape[2]):
+                sns.kdeplot(predictions[0, :, t, 0], predictions[0, :, t, 1],
                             ax=ax, shade=True, shade_lowest=False,
                             color='b', zorder=600, alpha=0.8)
@@ -151,21 +149,21 @@ def plot_vehicle_mm(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0, y_m
     node_circle_size = 0.5
     a = []
     i = 0
-    node_list = sorted(histories_dict.keys(), key=lambda x: x.length)
+    node_list = sorted(histories_dict.keys(), key=lambda x: x.id)
     for node in node_list:
         history = histories_dict[node] + np.array([x_min, y_min])
         future = futures_dict[node] + np.array([x_min, y_min])
         predictions = prediction_dict[node] + np.array([x_min, y_min])
         if node.type.name == 'VEHICLE':
-            for sample_num in range(prediction_dict[node].shape[0]):
-                ax.plot(predictions[sample_num, :, 0], predictions[sample_num, :, 1], 'ko-',
+            for sample_num in range(prediction_dict[node].shape[1]):
+                ax.plot(predictions[:, sample_num, :, 0], predictions[:, sample_num, :, 1], 'ko-',
                         zorder=620,
                         markersize=5,
                         linewidth=3, alpha=0.7)
         else:
-            for sample_num in range(prediction_dict[node].shape[0]):
-                ax.plot(predictions[sample_num, :, 0], predictions[sample_num, :, 1], 'ko-',
+            for sample_num in range(prediction_dict[node].shape[1]):
+                ax.plot(predictions[:, sample_num, :, 0], predictions[:, sample_num, :, 1], 'ko-',
                         zorder=620,
                         markersize=2,
                         linewidth=1, alpha=0.7)
@@ -197,20 +195,20 @@ def plot_vehicle_nice_mv(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0
     node_circle_size = 0.3
     a = []
     i = 0
-    node_list = sorted(histories_dict.keys(), key=lambda x: x.length)
+    node_list = sorted(histories_dict.keys(), key=lambda x: x.id)
     for node in node_list:
-        h = node.get(ts_key, {'heading': ['value']})[0, 0]
+        h = node.get(np.array([ts_key]), {'heading': ['°']})[0, 0]
         history_org = histories_dict[node] + np.array([x_min, y_min])
-        history = histories_dict[node] + np.array([x_min, y_min]) + node.length * np.array([np.cos(h), np.sin(h)])
-        future = futures_dict[node] + np.array([x_min, y_min]) + node.length * np.array([np.cos(h), np.sin(h)])
-        predictions = prediction_dict[node] + np.array([x_min, y_min]) + node.length * np.array([np.cos(h), np.sin(h)])
+        history = histories_dict[node] + np.array([x_min, y_min]) + 5 * np.array([np.cos(h), np.sin(h)])
+        future = futures_dict[node] + np.array([x_min, y_min]) + 5 * np.array([np.cos(h), np.sin(h)])
+        predictions = prediction_dict[node] + np.array([x_min, y_min]) + 5 * np.array([np.cos(h), np.sin(h)])
         if node.type.name == 'VEHICLE':
-            for t in range(predictions.shape[1]):
-                sns.kdeplot(predictions[:, t, 0], predictions[:, t, 1],
+            for t in range(predictions.shape[2]):
+                sns.kdeplot(predictions[0, :, t, 0], predictions[0, :, t, 1],
                             ax=ax, shade=True, shade_lowest=False,
                             color=line_colors[i % len(line_colors)], zorder=600, alpha=1.0)
 
-            r_img = rotate(cars[i % len(cars)], node.get(ts_key, {'heading': ['value']})[0, 0] * 180 / np.pi,
+            r_img = rotate(cars[i % len(cars)], node.get(np.array([ts_key]), {'heading': ['°']})[0, 0] * 180 / np.pi,
                            reshape=True)
             oi = OffsetImage(r_img, zoom=0.08, zorder=700)
             veh_box = AnnotationBbox(oi, (history_org[-1, 0], history_org[-1, 1]), frameon=False)
@@ -219,7 +217,7 @@ def plot_vehicle_nice_mv(ax, predictions, dt, max_hl=10, ph=6, map=None, x_min=0
             i += 1
         else:
-            for t in range(predictions.shape[1]):
+            for t in range(predictions.shape[2]):
                 sns.kdeplot(predictions[:, t, 0], predictions[:, t, 1],
                             ax=ax, shade=True, shade_lowest=False,
                             color='b', zorder=600, alpha=0.8)
@@ -260,12 +258,12 @@ def plot_vehicle_nice_mv_robot(ax, predictions, dt, max_hl=10, ph=6, map=None, x
     circle_edge_width = 0.5
     node_circle_size = 0.3
 
-    node_list = sorted(histories_dict.keys(), key=lambda x: x.length)
+    node_list = sorted(histories_dict.keys(), key=lambda x: x.id)
     for node in node_list:
-        h = node.get(ts_key, {'heading': ['value']})[0, 0]
-        history_org = histories_dict[node] + np.array([x_min, y_min]) + node.length / 2 * np.array(
+        h = node.get(np.array([ts_key]), {'heading': ['°']})[0, 0]
+        history_org = histories_dict[node] + np.array([x_min, y_min]) + 5 / 2 * np.array(
             [np.cos(h), np.sin(h)])
-        future = futures_dict[node] + np.array([x_min, y_min]) + node.length * np.array([np.cos(h), np.sin(h)])
+        future = futures_dict[node] + np.array([x_min, y_min]) + 5 * np.array([np.cos(h), np.sin(h)])
 
         ax.plot(future[:, 0],
                 future[:, 1],
@@ -276,7 +274,7 @@ def plot_vehicle_nice_mv_robot(ax, predictions, dt, max_hl=10, ph=6, map=None, x
                 zorder=650,
                 path_effects=[pe.Stroke(linewidth=5, foreground='k'), pe.Normal()])
 
-        r_img = rotate(robot, node.get(ts_key, {'heading': ['value']})[0, 0] * 180 / np.pi, reshape=True)
+        r_img = rotate(robot, node.get(np.array([ts_key]), {'heading': ['°']})[0, 0] * 180 / np.pi, reshape=True)
         oi = OffsetImage(r_img, zoom=0.08, zorder=700)
         veh_box = AnnotationBbox(oi, (history_org[-1, 0], history_org[-1, 1]), frameon=False)
         veh_box.zorder = 700
@@ -285,4 +283,4 @@ def plot_vehicle_nice_mv_robot(ax, predictions, dt, max_hl=10, ph=6, map=None, x
 
 def integrate(f, dx, F0=0.):
     N = f.shape[0]
-    return F0 + np.hstack((np.zeros((N, 1)), cumtrapz(f, axis=1, dx=dx)))
+    return F0 + np.hstack((np.zeros((N, 1)), cumtrapz(f, axis=1, dx=dx)))

View File (binary image, 61 KiB, unchanged)

View File (binary image, 52 KiB, unchanged)

View File (binary image, 42 KiB, unchanged)

View File (binary image, 42 KiB, unchanged)

View File (binary image, 42 KiB, unchanged)

View File (binary image, 62 KiB, unchanged)
View File

@@ -0,0 +1,124 @@
import numpy as np


class NonlinearKinematicBicycle:
    """
    Nonlinear Kalman Filter for a kinematic bicycle model, assuming constant longitudinal speed
    and constant heading angle
    """

    def __init__(self, dt, sPos=None, sHeading=None, sVel=None, sMeasurement=None):
        self.dt = dt

        # measurement matrix
        self.C = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

        # default noise covariance
        if (sPos is None) and (sHeading is None) and (sVel is None):
            # TODO need to further check
            # sPos = 0.5 * 8.8 * dt ** 2  # assume 8.8m/s2 as maximum acceleration
            # sHeading = 0.5 * dt  # assume 0.5rad/s as maximum turn rate
            # sVel = 8.8 * dt  # assume 8.8m/s2 as maximum acceleration
            # sMeasurement = 1.0
            sPos = 12 * self.dt  # assume 12m/s2 as maximum acceleration
            sHeading = 0.5 * self.dt  # assume 0.5rad/s as maximum turn rate
            sVel = 6 * self.dt  # assume 6m/s2 as maximum acceleration
        if sMeasurement is None:
            sMeasurement = 5.0
        # state transition noise
        self.Q = np.diag([sPos ** 2, sPos ** 2, sHeading ** 2, sVel ** 2])
        # measurement noise
        self.R = np.diag([sMeasurement ** 2, sMeasurement ** 2, sMeasurement ** 2, sMeasurement ** 2])

    def predict_and_update(self, x_vec_est, u_vec, P_matrix, z_new):
        """
        For background, please refer to Wikipedia: https://en.wikipedia.org/wiki/Extended_Kalman_filter
        :param x_vec_est: current state estimate
        :param u_vec: control input
        :param P_matrix: current error covariance
        :param z_new: new measurement
        :return: updated state estimate and error covariance
        """
        ## Prediction Step
        # predicted state estimate
        x_pred = self._kinematic_bicycle_model_rearCG(x_vec_est, u_vec)
        # compute the Jacobian to obtain the state transition matrix
        A = self._cal_state_Jacobian(x_vec_est, u_vec)
        # predicted error covariance
        P_pred = A.dot(P_matrix.dot(A.transpose())) + self.Q

        ## Update Step
        # innovation or measurement pre-fit residual
        y_tilde = z_new - self.C.dot(x_pred)
        # innovation covariance
        S = self.C.dot(P_pred.dot(self.C.transpose())) + self.R
        # near-optimal Kalman gain
        K = P_pred.dot(self.C.transpose().dot(np.linalg.inv(S)))
        # updated (a posteriori) state estimate
        x_vec_est_new = x_pred + K.dot(y_tilde)
        # updated (a posteriori) estimate covariance
        P_matrix_new = np.dot((np.identity(4) - K.dot(self.C)), P_pred)

        return x_vec_est_new, P_matrix_new

    def _kinematic_bicycle_model_rearCG(self, x_old, u):
        """
        :param x_old: vehicle state vector = [x position, y position, heading, velocity]
        :param u: control vector = [acceleration, steering angle]
        :return: next state vector
        """
        acc = u[0]
        delta = u[1]

        x = x_old[0]
        y = x_old[1]
        psi = x_old[2]
        vel = x_old[3]

        x_new = np.array([[0.], [0.], [0.], [0.]])

        x_new[0] = x + self.dt * vel * np.cos(psi + delta)
        x_new[1] = y + self.dt * vel * np.sin(psi + delta)
        x_new[2] = psi + self.dt * delta
        # x_new[2] = _heading_angle_correction(x_new[2])
        x_new[3] = vel + self.dt * acc

        return x_new

    def _cal_state_Jacobian(self, x_vec, u_vec):
        acc = u_vec[0]
        delta = u_vec[1]

        x = x_vec[0]
        y = x_vec[1]
        psi = x_vec[2]
        vel = x_vec[3]

        a13 = -self.dt * vel * np.sin(psi + delta)
        a14 = self.dt * np.cos(psi + delta)
        a23 = self.dt * vel * np.cos(psi + delta)
        a24 = self.dt * np.sin(psi + delta)
        a34 = self.dt * delta

        JA = np.array([[1.0, 0.0, a13[0], a14[0]],
                       [0.0, 1.0, a23[0], a24[0]],
                       [0.0, 0.0, 1.0, a34[0]],
                       [0.0, 0.0, 0.0, 1.0]])

        return JA


def _heading_angle_correction(theta):
    """
    Correct a heading angle so that it always remains in [-pi, pi].
    :param theta: heading angle
    :return: corrected heading angle
    """
    theta_corrected = (theta + np.pi) % (2.0 * np.pi) - np.pi
    return theta_corrected
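The state transition in `_kinematic_bicycle_model_rearCG` is a single Euler step of the kinematic bicycle dynamics; stripped of the filter machinery, it reduces to the standalone sketch below (a re-statement for illustration, not part of the repository; note the model integrates the steering angle directly as a heading rate, mirroring the code above):

```python
import numpy as np

def bicycle_step(x, u, dt):
    """One Euler step of the kinematic model.
    x = [x, y, heading, speed], u = [acceleration, steering angle]."""
    px, py, psi, vel = x
    acc, delta = u
    return np.array([px + dt * vel * np.cos(psi + delta),
                     py + dt * vel * np.sin(psi + delta),
                     psi + dt * delta,
                     vel + dt * acc])

# a car at 10 m/s heading along +x, no steering, no acceleration
x = np.array([0.0, 0.0, 0.0, 10.0])
for _ in range(4):
    x = bicycle_step(x, np.array([0.0, 0.0]), dt=0.5)
print(x[0])  # 20.0 after 2 s at 10 m/s
```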

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"VEHICLE": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 3], "strides": [2, 2, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}, "VEHICLE": {"name": "Unicycle", "distribution": true, "limits": {"max_a": 4, "min_a": -5, "max_heading_change": 0.7, "min_heading_change": -0.7}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["\u00b0", "d\u00b0"]}}, "pred_state": {"VEHICLE": {"position": ["x", "y"]}, "PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": true, 
"node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}


@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"VEHICLE": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 3], "strides": [2, 2, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}, "VEHICLE": {"name": "Unicycle", "distribution": true, "limits": {"max_a": 4, "min_a": -5, "max_heading_change": 0.7, "min_heading_change": -0.7}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["\u00b0", "d\u00b0"]}}, "pred_state": {"VEHICLE": {"position": ["x", "y"]}, "PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": true, 
"node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": true, "augment": true, "override_attention_radius": []}


@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"VEHICLE": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 3], "strides": [2, 2, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}, "VEHICLE": {"name": "Unicycle", "distribution": true, "limits": {"max_a": 4, "min_a": -5, "max_heading_change": 0.7, "min_heading_change": -0.7}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["\u00b0", "d\u00b0"]}}, "pred_state": {"VEHICLE": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": true, "node_freq_mult_train": true, "node_freq_mult_eval": false, "scene_freq_mult_train": 
false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": true, "augment": false, "override_attention_radius": []}
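These JSON files are loaded verbatim as the model's hyperparameter dictionary at train/eval time; as a minimal sketch of reading the vehicle dynamics limits (the `clamp` helper is hypothetical; the field names and values are copied from the config above):

```python
import json

# Excerpt of the config above (values copied verbatim).
config_text = '''{"dynamic": {"VEHICLE": {"name": "Unicycle",
    "limits": {"max_a": 4, "min_a": -5,
               "max_heading_change": 0.7, "min_heading_change": -0.7}}}}'''
hyperparams = json.loads(config_text)
limits = hyperparams["dynamic"]["VEHICLE"]["limits"]

def clamp(value, lo, hi):
    # Hypothetical helper: how such limits are typically enforced on a
    # commanded acceleration or heading change.
    return max(lo, min(hi, value))

a = clamp(6.0, limits["min_a"], limits["max_a"])  # 6 m/s^2 exceeds max_a = 4
```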


@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.003, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 6, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"VEHICLE": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 3], "strides": [2, 2, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}, "VEHICLE": {"name": "SingleIntegrator", "distribution": false, "limits": {"max_a": 4, "min_a": -5, "max_heading_change": 0.7, "min_heading_change": -0.7}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}, "VEHICLE": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"], "heading": ["\u00b0", "d\u00b0"]}}, "pred_state": {"VEHICLE": {"velocity": ["x", "y"]}, "PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": true, 
"node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}


@@ -0,0 +1,472 @@
import sys
import os
import numpy as np
import pandas as pd
import dill
import argparse
from tqdm import tqdm
from pyquaternion import Quaternion
from kalman_filter import NonlinearKinematicBicycle
from sklearn.model_selection import train_test_split
nu_path = './devkit/python-sdk/'
sys.path.append(nu_path)
sys.path.append("../../trajectron")
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap
from nuscenes.utils.splits import create_splits_scenes
from environment import Environment, Scene, Node, GeometricMap, derivative_of
scene_blacklist = [499, 515, 517]
FREQUENCY = 2
dt = 1 / FREQUENCY
data_columns_vehicle = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration', 'heading'], ['x', 'y']])
data_columns_vehicle = data_columns_vehicle.append(pd.MultiIndex.from_tuples([('heading', '°'), ('heading', 'd°')]))
data_columns_vehicle = data_columns_vehicle.append(pd.MultiIndex.from_product([['velocity', 'acceleration'], ['norm']]))
data_columns_pedestrian = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
curv_0_2 = 0
curv_0_1 = 0
total = 0
standardization = {
'PEDESTRIAN': {
'position': {
'x': {'mean': 0, 'std': 1},
'y': {'mean': 0, 'std': 1}
},
'velocity': {
'x': {'mean': 0, 'std': 2},
'y': {'mean': 0, 'std': 2}
},
'acceleration': {
'x': {'mean': 0, 'std': 1},
'y': {'mean': 0, 'std': 1}
}
},
'VEHICLE': {
'position': {
'x': {'mean': 0, 'std': 80},
'y': {'mean': 0, 'std': 80}
},
'velocity': {
'x': {'mean': 0, 'std': 15},
'y': {'mean': 0, 'std': 15},
'norm': {'mean': 0, 'std': 15}
},
'acceleration': {
'x': {'mean': 0, 'std': 4},
'y': {'mean': 0, 'std': 4},
'norm': {'mean': 0, 'std': 4}
},
'heading': {
'x': {'mean': 0, 'std': 1},
'y': {'mean': 0, 'std': 1},
'°': {'mean': 0, 'std': np.pi},
'd°': {'mean': 0, 'std': 1}
}
}
}
def augment_scene(scene, angle):
def rotate_pc(pc, alpha):
M = np.array([[np.cos(alpha), -np.sin(alpha)],
[np.sin(alpha), np.cos(alpha)]])
return M @ pc
data_columns_vehicle = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration', 'heading'], ['x', 'y']])
data_columns_vehicle = data_columns_vehicle.append(pd.MultiIndex.from_tuples([('heading', '°'), ('heading', 'd°')]))
data_columns_vehicle = data_columns_vehicle.append(pd.MultiIndex.from_product([['velocity', 'acceleration'], ['norm']]))
data_columns_pedestrian = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])
scene_aug = Scene(timesteps=scene.timesteps, dt=scene.dt, name=scene.name, non_aug_scene=scene)
alpha = angle * np.pi / 180
for node in scene.nodes:
if node.type == 'PEDESTRIAN':
x = node.data.position.x.copy()
y = node.data.position.y.copy()
x, y = rotate_pc(np.array([x, y]), alpha)
vx = derivative_of(x, scene.dt)
vy = derivative_of(y, scene.dt)
ax = derivative_of(vx, scene.dt)
ay = derivative_of(vy, scene.dt)
data_dict = {('position', 'x'): x,
('position', 'y'): y,
('velocity', 'x'): vx,
('velocity', 'y'): vy,
('acceleration', 'x'): ax,
('acceleration', 'y'): ay}
node_data = pd.DataFrame(data_dict, columns=data_columns_pedestrian)
node = Node(node_type=node.type, node_id=node.id, data=node_data, first_timestep=node.first_timestep)
elif node.type == 'VEHICLE':
x = node.data.position.x.copy()
y = node.data.position.y.copy()
heading = getattr(node.data.heading, '°').copy()
heading += alpha
heading = (heading + np.pi) % (2.0 * np.pi) - np.pi
x, y = rotate_pc(np.array([x, y]), alpha)
vx = derivative_of(x, scene.dt)
vy = derivative_of(y, scene.dt)
ax = derivative_of(vx, scene.dt)
ay = derivative_of(vy, scene.dt)
v = np.stack((vx, vy), axis=-1)
v_norm = np.linalg.norm(np.stack((vx, vy), axis=-1), axis=-1, keepdims=True)
heading_v = np.divide(v, v_norm, out=np.zeros_like(v), where=(v_norm > 1.))
heading_x = heading_v[:, 0]
heading_y = heading_v[:, 1]
data_dict = {('position', 'x'): x,
('position', 'y'): y,
('velocity', 'x'): vx,
('velocity', 'y'): vy,
('velocity', 'norm'): np.linalg.norm(np.stack((vx, vy), axis=-1), axis=-1),
('acceleration', 'x'): ax,
('acceleration', 'y'): ay,
('acceleration', 'norm'): np.linalg.norm(np.stack((ax, ay), axis=-1), axis=-1),
('heading', 'x'): heading_x,
('heading', 'y'): heading_y,
('heading', '°'): heading,
('heading', 'd°'): derivative_of(heading, dt, radian=True)}
node_data = pd.DataFrame(data_dict, columns=data_columns_vehicle)
node = Node(node_type=node.type, node_id=node.id, data=node_data, first_timestep=node.first_timestep,
non_aug_node=node)
scene_aug.nodes.append(node)
return scene_aug
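# Sanity check of the rotation used above (illustrative, not part of the
# pipeline): rotating the point (1, 0) by 90 degrees yields (0, 1).
_alpha_check = np.pi / 2.0
_M_check = np.array([[np.cos(_alpha_check), -np.sin(_alpha_check)],
                     [np.sin(_alpha_check), np.cos(_alpha_check)]])
assert np.allclose(_M_check @ np.array([1.0, 0.0]), [0.0, 1.0])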
def augment(scene):
scene_aug = np.random.choice(scene.augmented)
scene_aug.temporal_scene_graph = scene.temporal_scene_graph
scene_aug.map = scene.map
return scene_aug
def trajectory_curvature(t):
path_distance = np.linalg.norm(t[-1] - t[0])
lengths = np.sqrt(np.sum(np.diff(t, axis=0) ** 2, axis=1)) # Length between points
path_length = np.sum(lengths)
if np.isclose(path_distance, 0.):
return 0, 0, 0
return (path_length / path_distance) - 1, path_length, path_distance
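# Worked example (illustrative): for the right-angle path
# [(0, 0), (1, 0), (1, 1)], path_length = 2 and path_distance = sqrt(2),
# so the measure above is 2 / sqrt(2) - 1 ~= 0.414; any straight path
# gives exactly 0.
assert np.isclose(2.0 / np.sqrt(2.0) - 1.0, np.sqrt(2.0) - 1.0)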
def process_scene(ns_scene, env, nusc, data_path):
scene_id = int(ns_scene['name'].replace('scene-', ''))
data = pd.DataFrame(columns=['frame_id',
'type',
'node_id',
'robot',
'x', 'y', 'z',
'length',
'width',
'height',
'heading'])
sample_token = ns_scene['first_sample_token']
sample = nusc.get('sample', sample_token)
frame_id = 0
while sample['next']:
annotation_tokens = sample['anns']
for annotation_token in annotation_tokens:
annotation = nusc.get('sample_annotation', annotation_token)
category = annotation['category_name']
if len(annotation['attribute_tokens']):
attribute = nusc.get('attribute', annotation['attribute_tokens'][0])['name']
else:
continue
if 'pedestrian' in category and 'stroller' not in category and 'wheelchair' not in category:
our_category = env.NodeType.PEDESTRIAN
elif 'vehicle' in category and 'bicycle' not in category and 'motorcycle' not in category and 'parked' not in attribute:
our_category = env.NodeType.VEHICLE
else:
continue
data_point = pd.Series({'frame_id': frame_id,
'type': our_category,
'node_id': annotation['instance_token'],
'robot': False,
'x': annotation['translation'][0],
'y': annotation['translation'][1],
'z': annotation['translation'][2],
'length': annotation['size'][0],
'width': annotation['size'][1],
'height': annotation['size'][2],
'heading': Quaternion(annotation['rotation']).yaw_pitch_roll[0]})
data = data.append(data_point, ignore_index=True)
# Ego Vehicle
our_category = env.NodeType.VEHICLE
sample_data = nusc.get('sample_data', sample['data']['CAM_FRONT'])
annotation = nusc.get('ego_pose', sample_data['ego_pose_token'])
data_point = pd.Series({'frame_id': frame_id,
'type': our_category,
'node_id': 'ego',
'robot': True,
'x': annotation['translation'][0],
'y': annotation['translation'][1],
'z': annotation['translation'][2],
'length': 4,
'width': 1.7,
'height': 1.5,
'heading': Quaternion(annotation['rotation']).yaw_pitch_roll[0],
'orientation': None})
data = data.append(data_point, ignore_index=True)
sample = nusc.get('sample', sample['next'])
frame_id += 1
if len(data.index) == 0:
return None
data.sort_values('frame_id', inplace=True)
max_timesteps = data['frame_id'].max()
x_min = np.round(data['x'].min() - 50)
x_max = np.round(data['x'].max() + 50)
y_min = np.round(data['y'].min() - 50)
y_max = np.round(data['y'].max() + 50)
data['x'] = data['x'] - x_min
data['y'] = data['y'] - y_min
scene = Scene(timesteps=max_timesteps + 1, dt=dt, name=str(scene_id), aug_func=augment)
# Generate Maps
map_name = nusc.get('log', ns_scene['log_token'])['location']
nusc_map = NuScenesMap(dataroot=data_path, map_name=map_name)
type_map = dict()
x_size = x_max - x_min
y_size = y_max - y_min
patch_box = (x_min + 0.5 * (x_max - x_min), y_min + 0.5 * (y_max - y_min), y_size, x_size)
patch_angle = 0 # Default orientation where North is up
canvas_size = (np.round(3 * y_size).astype(int), np.round(3 * x_size).astype(int))
homography = np.array([[3., 0., 0.], [0., 3., 0.], [0., 0., 3.]])
layer_names = ['lane', 'road_segment', 'drivable_area', 'road_divider', 'lane_divider', 'stop_line',
'ped_crossing', 'stop_line', 'ped_crossing', 'walkway']  # duplicates kept: the channel indices used below depend on this 10-entry order
map_mask = (nusc_map.get_map_mask(patch_box, patch_angle, layer_names, canvas_size) * 255.0).astype(
np.uint8)
map_mask = np.swapaxes(map_mask, 1, 2) # x axis comes first
# PEDESTRIANS
map_mask_pedestrian = np.stack((map_mask[9], map_mask[8], np.max(map_mask[:3], axis=0)), axis=0)
type_map['PEDESTRIAN'] = GeometricMap(data=map_mask_pedestrian, homography=homography, description=', '.join(layer_names))
# VEHICLES
map_mask_vehicle = np.stack((np.max(map_mask[:3], axis=0), map_mask[3], map_mask[4]), axis=0)
type_map['VEHICLE'] = GeometricMap(data=map_mask_vehicle, homography=homography, description=', '.join(layer_names))
map_mask_plot = np.stack(((np.max(map_mask[:3], axis=0) - (map_mask[3] + 0.5 * map_mask[4]).clip(
max=255)).clip(min=0).astype(np.uint8), map_mask[8], map_mask[9]), axis=0)
type_map['VISUALIZATION'] = GeometricMap(data=map_mask_plot, homography=homography, description=', '.join(layer_names))
scene.map = type_map
del map_mask
del map_mask_pedestrian
del map_mask_vehicle
del map_mask_plot
for node_id in pd.unique(data['node_id']):
node_frequency_multiplier = 1
node_df = data[data['node_id'] == node_id]
if node_df['x'].shape[0] < 2:
continue
if not np.all(np.diff(node_df['frame_id']) == 1):
# print('Occlusion')
continue # TODO Make better
node_values = node_df[['x', 'y']].values
x = node_values[:, 0]
y = node_values[:, 1]
heading = node_df['heading'].values
if node_df.iloc[0]['type'] == env.NodeType.VEHICLE and not node_id == 'ego':
# Kalman filter Agent
vx = derivative_of(x, scene.dt)
vy = derivative_of(y, scene.dt)
velocity = np.linalg.norm(np.stack((vx, vy), axis=-1), axis=-1)
filter_veh = NonlinearKinematicBicycle(dt=scene.dt, sMeasurement=1.0)
P_matrix = None
for i in range(len(x)):
if i == 0:  # initialize KF
# initial P_matrix
P_matrix = np.identity(4)
elif i < len(x):
# assign new est values
x[i] = x_vec_est_new[0][0]
y[i] = x_vec_est_new[1][0]
heading[i] = x_vec_est_new[2][0]
velocity[i] = x_vec_est_new[3][0]
if i < len(x) - 1: # no action on last data
# filtering
x_vec_est = np.array([[x[i]],
[y[i]],
[heading[i]],
[velocity[i]]])
z_new = np.array([[x[i + 1]],
[y[i + 1]],
[heading[i + 1]],
[velocity[i + 1]]])
x_vec_est_new, P_matrix_new = filter_veh.predict_and_update(
x_vec_est=x_vec_est,
u_vec=np.array([[0.], [0.]]),
P_matrix=P_matrix,
z_new=z_new
)
P_matrix = P_matrix_new
curvature, pl, _ = trajectory_curvature(np.stack((x, y), axis=-1))
if pl < 1.0: # vehicle is "not" moving
x = x[0].repeat(max_timesteps + 1)
y = y[0].repeat(max_timesteps + 1)
heading = heading[0].repeat(max_timesteps + 1)
global total
global curv_0_2
global curv_0_1
total += 1
if pl > 1.0:
if curvature > .2:
curv_0_2 += 1
node_frequency_multiplier = 3*int(np.floor(total/curv_0_2))
elif curvature > .1:
curv_0_1 += 1
node_frequency_multiplier = 3*int(np.floor(total/curv_0_1))
vx = derivative_of(x, scene.dt)
vy = derivative_of(y, scene.dt)
ax = derivative_of(vx, scene.dt)
ay = derivative_of(vy, scene.dt)
if node_df.iloc[0]['type'] == env.NodeType.VEHICLE:
v = np.stack((vx, vy), axis=-1)
v_norm = np.linalg.norm(np.stack((vx, vy), axis=-1), axis=-1, keepdims=True)
heading_v = np.divide(v, v_norm, out=np.zeros_like(v), where=(v_norm > 1.))
heading_x = heading_v[:, 0]
heading_y = heading_v[:, 1]
data_dict = {('position', 'x'): x,
('position', 'y'): y,
('velocity', 'x'): vx,
('velocity', 'y'): vy,
('velocity', 'norm'): np.linalg.norm(np.stack((vx, vy), axis=-1), axis=-1),
('acceleration', 'x'): ax,
('acceleration', 'y'): ay,
('acceleration', 'norm'): np.linalg.norm(np.stack((ax, ay), axis=-1), axis=-1),
('heading', 'x'): heading_x,
('heading', 'y'): heading_y,
('heading', '°'): heading,
('heading', ''): derivative_of(heading, dt, radian=True)}
node_data = pd.DataFrame(data_dict, columns=data_columns_vehicle)
else:
data_dict = {('position', 'x'): x,
('position', 'y'): y,
('velocity', 'x'): vx,
('velocity', 'y'): vy,
('acceleration', 'x'): ax,
('acceleration', 'y'): ay}
node_data = pd.DataFrame(data_dict, columns=data_columns_pedestrian)
node = Node(node_type=node_df.iloc[0]['type'], node_id=node_id, data=node_data, frequency_multiplier=node_frequency_multiplier)
node.first_timestep = node_df['frame_id'].iloc[0]
if node_df.iloc[0]['robot'] == True:
node.is_robot = True
scene.robot = node
scene.nodes.append(node)
return scene
def process_data(data_path, version, output_path, val_split):
nusc = NuScenes(version=version, dataroot=data_path, verbose=True)
splits = create_splits_scenes()
train_scenes, val_scenes = train_test_split(splits['train' if 'mini' not in version else 'mini_train'], test_size=val_split)
train_scene_names = splits['train' if 'mini' not in version else 'mini_train']
val_scene_names = []  # val_scenes
test_scene_names = splits['val' if 'mini' not in version else 'mini_val']
ns_scene_names = dict()
ns_scene_names['train'] = train_scene_names
ns_scene_names['val'] = val_scene_names
ns_scene_names['test'] = test_scene_names
for data_class in ['train', 'val', 'test']:
env = Environment(node_type_list=['VEHICLE', 'PEDESTRIAN'], standardization=standardization)
attention_radius = dict()
attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.PEDESTRIAN)] = 10.0
attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.VEHICLE)] = 20.0
attention_radius[(env.NodeType.VEHICLE, env.NodeType.PEDESTRIAN)] = 20.0
attention_radius[(env.NodeType.VEHICLE, env.NodeType.VEHICLE)] = 30.0
env.attention_radius = attention_radius
env.robot_type = env.NodeType.VEHICLE
scenes = []
for ns_scene_name in tqdm(ns_scene_names[data_class]):
ns_scene = nusc.get('scene', nusc.field2token('scene', 'name', ns_scene_name)[0])
scene_id = int(ns_scene['name'].replace('scene-', ''))
if scene_id in scene_blacklist: # Some scenes have bad localization
continue
scene = process_scene(ns_scene, env, nusc, data_path)
if scene is not None:
if data_class == 'train':
scene.augmented = list()
angles = np.arange(0, 360, 15)
for angle in angles:
scene.augmented.append(augment_scene(scene, angle))
scenes.append(scene)
print(f'Processed {len(scenes)} scenes')
env.scenes = scenes
if len(scenes) > 0:
mini_string = ''
if 'mini' in version:
mini_string = '_mini'
data_dict_path = os.path.join(output_path, 'nuScenes_' + data_class + mini_string + '_full.pkl')
with open(data_dict_path, 'wb') as f:
dill.dump(env, f, protocol=dill.HIGHEST_PROTOCOL)
print('Saved Environment!')
global total
global curv_0_2
global curv_0_1
print(f"Total Nodes: {total}")
print(f"Curvature > 0.1 Nodes: {curv_0_1}")
print(f"Curvature > 0.2 Nodes: {curv_0_2}")
total = 0
curv_0_1 = 0
curv_0_2 = 0
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--data', type=str, required=True)
parser.add_argument('--version', type=str, required=True)
parser.add_argument('--output_path', type=str, required=True)
parser.add_argument('--val_split', type=float, default=0.15)
args = parser.parse_args()
process_data(args.data, args.version, args.output_path, args.val_split)
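For intuition, `derivative_of` (imported from `environment`) produces the velocity and acceleration channels by finite differencing position over `dt`; a minimal self-contained stand-in under that assumption (the real implementation may treat boundaries differently):

```python
import numpy as np

def derivative_of_sketch(x, dt):
    # Hypothetical stand-in for environment.derivative_of: central finite
    # differences (one-sided at the ends) via np.gradient, scaled by dt.
    return np.gradient(x, dt)

x = np.array([0.0, 1.0, 2.0, 3.0])  # positions sampled every dt = 0.5 s
v = derivative_of_sketch(x, 0.5)    # uniform motion: 2 m/s at every step
```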

File diff suppressed because one or more lines are too long


@@ -0,0 +1,6 @@
python evaluate.py --model ./models/eth_vel --checkpoint 100 --data ../processed/eth_test.pkl --output_path ./results/ --output_tag eth_vel_12 --node_type PEDESTRIAN
python evaluate.py --model ./models/hotel_vel --checkpoint 100 --data ../processed/hotel_test.pkl --output_path ./results/ --output_tag hotel_vel_12 --node_type PEDESTRIAN
python evaluate.py --model ./models/zara1_vel --checkpoint 100 --data ../processed/zara1_test.pkl --output_path ./results/ --output_tag zara1_vel_12 --node_type PEDESTRIAN
python evaluate.py --model ./models/zara2_vel --checkpoint 100 --data ../processed/zara2_test.pkl --output_path ./results/ --output_tag zara2_vel_12 --node_type PEDESTRIAN
python evaluate.py --model ./models/univ_vel --checkpoint 100 --data ../processed/univ_test.pkl --output_path ./results/ --output_tag univ_vel_12 --node_type PEDESTRIAN
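The five invocations above differ only in the dataset tag, so they can be generated with a loop; a dry-run sketch (each command is printed rather than executed; drop the `echo` to run them, and the model/data paths are the same assumptions as above):

```shell
for ds in eth hotel zara1 zara2 univ; do
  echo python evaluate.py --model ./models/${ds}_vel --checkpoint 100 \
    --data ../processed/${ds}_test.pkl --output_path ./results/ \
    --output_tag ${ds}_vel_12 --node_type PEDESTRIAN
done
```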


@@ -0,0 +1,222 @@
import sys
import os
import dill
import json
import argparse
import torch
import numpy as np
import pandas as pd
sys.path.append("../../trajectron")
from tqdm import tqdm
from model.model_registrar import ModelRegistrar
from model.trajectron import Trajectron
import evaluation
seed = 0
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
parser = argparse.ArgumentParser()
parser.add_argument("--model", help="model full path", type=str)
parser.add_argument("--checkpoint", help="model checkpoint to evaluate", type=int)
parser.add_argument("--data", help="full path to data file", type=str)
parser.add_argument("--output_path", help="path to output csv file", type=str)
parser.add_argument("--output_tag", help="name tag for output file", type=str)
parser.add_argument("--node_type", help="node type to evaluate", type=str)
args = parser.parse_args()
def load_model(model_dir, env, ts=100):
model_registrar = ModelRegistrar(model_dir, 'cpu')
model_registrar.load_models(ts)
with open(os.path.join(model_dir, 'config.json'), 'r') as config_json:
hyperparams = json.load(config_json)
trajectron = Trajectron(model_registrar, hyperparams, None, 'cpu')
trajectron.set_environment(env)
trajectron.set_annealing_params()
return trajectron, hyperparams
if __name__ == "__main__":
with open(args.data, 'rb') as f:
env = dill.load(f, encoding='latin1')
eval_stg, hyperparams = load_model(args.model, env, ts=args.checkpoint)
if 'override_attention_radius' in hyperparams:
for attention_radius_override in hyperparams['override_attention_radius']:
node_type1, node_type2, attention_radius = attention_radius_override.split(' ')
env.attention_radius[(node_type1, node_type2)] = float(attention_radius)
scenes = env.scenes
print("-- Preparing Node Graph")
for scene in tqdm(scenes):
scene.calculate_scene_graph(env.attention_radius,
hyperparams['edge_addition_filter'],
hyperparams['edge_removal_filter'])
ph = hyperparams['prediction_horizon']
max_hl = hyperparams['maximum_history_length']
with torch.no_grad():
############### MOST LIKELY ###############
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
print("-- Evaluating GMM Grid Sampled (Most Likely)")
for i, scene in enumerate(scenes):
print(f"---- Evaluating Scene {i + 1}/{len(scenes)}")
timesteps = np.arange(scene.timesteps)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples=1,
min_future_timesteps=12,
z_mode=False,
gmm_mode=True,
full_dist=True) # This will trigger grid sampling
batch_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=env.NodeType,
map=None,
prune_ph_to_future=True,
kde=False)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))
print(f"Mean FDE (most likely): {np.mean(eval_fde_batch_errors)}")
pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'ml'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_ade_most_likely.csv'))
pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'ml'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_fde_most_likely.csv'))
############### MODE Z ###############
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
eval_kde_nll = np.array([])
print("-- Evaluating Mode Z")
for i, scene in enumerate(scenes):
print(f"---- Evaluating Scene {i+1}/{len(scenes)}")
for t in tqdm(range(0, scene.timesteps, 10)):
timesteps = np.arange(t, t + 10)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples=2000,
min_future_timesteps=12,
z_mode=True,
full_dist=False)
if not predictions:
continue
batch_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=env.NodeType,
map=None,
prune_ph_to_future=True)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))
eval_kde_nll = np.hstack((eval_kde_nll, batch_error_dict[args.node_type]['kde']))
pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'z_mode'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_ade_z_mode.csv'))
pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'z_mode'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_fde_z_mode.csv'))
pd.DataFrame({'value': eval_kde_nll, 'metric': 'kde', 'type': 'z_mode'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_kde_z_mode.csv'))
############### BEST OF 20 ###############
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
eval_kde_nll = np.array([])
print("-- Evaluating best of 20")
for i, scene in enumerate(scenes):
print(f"---- Evaluating Scene {i + 1}/{len(scenes)}")
for t in tqdm(range(0, scene.timesteps, 10)):
timesteps = np.arange(t, t + 10)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples=20,
min_future_timesteps=12,
z_mode=False,
gmm_mode=False,
full_dist=False)
if not predictions:
continue
batch_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=env.NodeType,
map=None,
best_of=True,
prune_ph_to_future=True)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))
eval_kde_nll = np.hstack((eval_kde_nll, batch_error_dict[args.node_type]['kde']))
pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'best_of'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_ade_best_of.csv'))
pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'best_of'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_fde_best_of.csv'))
pd.DataFrame({'value': eval_kde_nll, 'metric': 'kde', 'type': 'best_of'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_kde_best_of.csv'))
############### FULL ###############
eval_ade_batch_errors = np.array([])
eval_fde_batch_errors = np.array([])
eval_kde_nll = np.array([])
print("-- Evaluating Full")
for i, scene in enumerate(scenes):
print(f"---- Evaluating Scene {i + 1}/{len(scenes)}")
for t in tqdm(range(0, scene.timesteps, 10)):
timesteps = np.arange(t, t + 10)
predictions = eval_stg.predict(scene,
timesteps,
ph,
num_samples=2000,
min_future_timesteps=12,
z_mode=False,
gmm_mode=False,
full_dist=False)
if not predictions:
continue
batch_error_dict = evaluation.compute_batch_statistics(predictions,
scene.dt,
max_hl=max_hl,
ph=ph,
node_type_enum=env.NodeType,
map=None,
prune_ph_to_future=True)
eval_ade_batch_errors = np.hstack((eval_ade_batch_errors, batch_error_dict[args.node_type]['ade']))
eval_fde_batch_errors = np.hstack((eval_fde_batch_errors, batch_error_dict[args.node_type]['fde']))
eval_kde_nll = np.hstack((eval_kde_nll, batch_error_dict[args.node_type]['kde']))
pd.DataFrame({'value': eval_ade_batch_errors, 'metric': 'ade', 'type': 'full'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_ade_full.csv'))
pd.DataFrame({'value': eval_fde_batch_errors, 'metric': 'fde', 'type': 'full'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_fde_full.csv'))
pd.DataFrame({'value': eval_kde_nll, 'metric': 'kde', 'type': 'full'}
).to_csv(os.path.join(args.output_path, args.output_tag + '_kde_full.csv'))
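Each section above reduces sampled predictions to displacement errors via `evaluation.compute_batch_statistics`; a minimal self-contained sketch of the ADE/FDE metrics and the best-of-K reduction (array shapes and the helper name are hypothetical):

```python
import numpy as np

def best_of_k_ade_fde(samples, gt):
    # samples: (K, T, 2) sampled trajectories; gt: (T, 2) ground truth.
    err = np.linalg.norm(samples - gt[None], axis=-1)  # (K, T) per-step L2
    ade = err.mean(axis=1)        # average displacement error, per sample
    fde = err[:, -1]              # final displacement error, per sample
    return ade.min(), fde.min()   # best-of-K keeps the closest sample

gt = np.zeros((3, 2))
samples = np.stack([np.ones((3, 2)), np.zeros((3, 2))])  # 2nd sample exact
best_ade, best_fde = best_of_k_ade_fde(samples, gt)
```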


@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": true, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"position": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}

View File

@@ -0,0 +1 @@
{"batch_size": 256, "grad_clip": 1.0, "learning_rate_style": "exp", "learning_rate": 0.001, "min_learning_rate": 1e-05, "learning_decay_rate": 0.9999, "prediction_horizon": 12, "minimum_history_length": 1, "maximum_history_length": 8, "map_encoder": {"PEDESTRIAN": {"heading_state_index": 6, "patch_size": [50, 10, 50, 90], "map_channels": 3, "hidden_channels": [10, 20, 10, 1], "output_size": 32, "masks": [5, 5, 5, 5], "strides": [1, 1, 1, 1], "dropout": 0.5}}, "k": 1, "k_eval": 25, "kl_min": 0.07, "kl_weight": 100.0, "kl_weight_start": 0, "kl_decay_rate": 0.99995, "kl_crossover": 400, "kl_sigmoid_divisor": 4, "rnn_kwargs": {"dropout_keep_prob": 0.75}, "MLP_dropout_keep_prob": 0.9, "enc_rnn_dim_edge": 32, "enc_rnn_dim_edge_influence": 32, "enc_rnn_dim_history": 32, "enc_rnn_dim_future": 32, "dec_rnn_dim": 128, "q_z_xy_MLP_dims": null, "p_z_x_MLP_dims": 32, "GMM_components": 1, "log_p_yt_xz_max": 6, "N": 1, "K": 25, "tau_init": 2.0, "tau_final": 0.05, "tau_decay_rate": 0.997, "use_z_logit_clipping": true, "z_logit_clip_start": 0.05, "z_logit_clip_final": 5.0, "z_logit_clip_crossover": 300, "z_logit_clip_divisor": 5, "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false, "limits": {}}}, "state": {"PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"], "acceleration": ["x", "y"]}}, "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}}, "log_histograms": false, "dynamic_edges": "yes", "edge_state_combine_method": "sum", "edge_influence_combine_method": "attention", "edge_addition_filter": [0.25, 0.5, 0.75, 1.0], "edge_removal_filter": [1.0, 0.0], "offline_scene_graph": "yes", "incl_robot_node": false, "node_freq_mult_train": false, "node_freq_mult_eval": false, "scene_freq_mult_train": false, "scene_freq_mult_eval": false, "scene_freq_mult_viz": false, "edge_encoding": true, "use_map_encoding": false, "augment": true, "override_attention_radius": []}
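These hyperparameter files are plain JSON, so they can be loaded and spot-checked directly before training. A minimal sketch using a small inline excerpt of the structure above (the full files contain many more keys):

```python
import json

# Excerpt of the hyperparameter structure, parsed inline for illustration.
excerpt = '''{
  "prediction_horizon": 12,
  "maximum_history_length": 8,
  "pred_state": {"PEDESTRIAN": {"velocity": ["x", "y"]}},
  "dynamic": {"PEDESTRIAN": {"name": "SingleIntegrator", "distribution": false}}
}'''

hparams = json.loads(excerpt)

# The model predicts 12 future steps from up to 8 history steps.
print(hparams['prediction_horizon'], hparams['maximum_history_length'])

# In this variant the model outputs velocities, which a SingleIntegrator
# dynamic integrates into positions.
print(list(hparams['pred_state']['PEDESTRIAN'].keys()))
```

Note the two variants of these configs differ mainly in `pred_state` (position vs. velocity) and `dynamic.distribution`, which is how the integrator-based output is toggled.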

View File

@@ -0,0 +1,171 @@
import sys
import os
import numpy as np
import pandas as pd
import dill

sys.path.append("../../trajectron")
from environment import Environment, Scene, Node
from utils import maybe_makedirs
from environment import derivative_of

desired_max_time = 100
pred_indices = [2, 3]
state_dim = 6
frame_diff = 10
desired_frame_diff = 1
dt = 0.4

standardization = {
    'PEDESTRIAN': {
        'position': {
            'x': {'mean': 0, 'std': 1},
            'y': {'mean': 0, 'std': 1}
        },
        'velocity': {
            'x': {'mean': 0, 'std': 2},
            'y': {'mean': 0, 'std': 2}
        },
        'acceleration': {
            'x': {'mean': 0, 'std': 1},
            'y': {'mean': 0, 'std': 1}
        }
    }
}


def augment_scene(scene, angle):
    def rotate_pc(pc, alpha):
        M = np.array([[np.cos(alpha), -np.sin(alpha)],
                      [np.sin(alpha), np.cos(alpha)]])
        return M @ pc

    data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])

    scene_aug = Scene(timesteps=scene.timesteps, dt=scene.dt, name=scene.name)

    alpha = angle * np.pi / 180

    for node in scene.nodes:
        x = node.data.position.x.copy()
        y = node.data.position.y.copy()

        x, y = rotate_pc(np.array([x, y]), alpha)

        vx = derivative_of(x, scene.dt)
        vy = derivative_of(y, scene.dt)
        ax = derivative_of(vx, scene.dt)
        ay = derivative_of(vy, scene.dt)

        data_dict = {('position', 'x'): x,
                     ('position', 'y'): y,
                     ('velocity', 'x'): vx,
                     ('velocity', 'y'): vy,
                     ('acceleration', 'x'): ax,
                     ('acceleration', 'y'): ay}

        node_data = pd.DataFrame(data_dict, columns=data_columns)

        node = Node(node_type=node.type, node_id=node.id, data=node_data, first_timestep=node.first_timestep)

        scene_aug.nodes.append(node)

    return scene_aug


def augment(scene):
    scene_aug = np.random.choice(scene.augmented)
    scene_aug.temporal_scene_graph = scene.temporal_scene_graph
    return scene_aug


nl = 0
l = 0

maybe_makedirs('../processed')

data_columns = pd.MultiIndex.from_product([['position', 'velocity', 'acceleration'], ['x', 'y']])

for desired_source in ['eth', 'hotel', 'univ', 'zara1', 'zara2']:
    for data_class in ['train', 'val', 'test']:
        env = Environment(node_type_list=['PEDESTRIAN'], standardization=standardization)
        attention_radius = dict()
        attention_radius[(env.NodeType.PEDESTRIAN, env.NodeType.PEDESTRIAN)] = 3.0
        env.attention_radius = attention_radius

        scenes = []
        data_dict_path = os.path.join('../processed', '_'.join([desired_source, data_class]) + '.pkl')

        for subdir, dirs, files in os.walk(os.path.join('raw', desired_source, data_class)):
            for file in files:
                if file.endswith('.txt'):
                    input_data_dict = dict()
                    full_data_path = os.path.join(subdir, file)
                    print('At', full_data_path)

                    data = pd.read_csv(full_data_path, sep='\t', index_col=False, header=None)
                    data.columns = ['frame_id', 'track_id', 'pos_x', 'pos_y']
                    data['frame_id'] = pd.to_numeric(data['frame_id'], downcast='integer')
                    data['track_id'] = pd.to_numeric(data['track_id'], downcast='integer')

                    data['frame_id'] = data['frame_id'] // 10
                    data['frame_id'] -= data['frame_id'].min()

                    data['node_type'] = 'PEDESTRIAN'
                    data['node_id'] = data['track_id'].astype(str)

                    data.sort_values('frame_id', inplace=True)

                    # Center positions on the dataset mean.
                    data['pos_x'] = data['pos_x'] - data['pos_x'].mean()
                    data['pos_y'] = data['pos_y'] - data['pos_y'].mean()

                    max_timesteps = data['frame_id'].max()

                    scene = Scene(timesteps=max_timesteps + 1, dt=dt, name=desired_source + "_" + data_class,
                                  aug_func=augment if data_class == 'train' else None)

                    for node_id in pd.unique(data['node_id']):
                        node_df = data[data['node_id'] == node_id]
                        assert np.all(np.diff(node_df['frame_id']) == 1)

                        node_values = node_df[['pos_x', 'pos_y']].values

                        if node_values.shape[0] < 2:
                            continue

                        new_first_idx = node_df['frame_id'].iloc[0]

                        x = node_values[:, 0]
                        y = node_values[:, 1]
                        vx = derivative_of(x, scene.dt)
                        vy = derivative_of(y, scene.dt)
                        ax = derivative_of(vx, scene.dt)
                        ay = derivative_of(vy, scene.dt)

                        data_dict = {('position', 'x'): x,
                                     ('position', 'y'): y,
                                     ('velocity', 'x'): vx,
                                     ('velocity', 'y'): vy,
                                     ('acceleration', 'x'): ax,
                                     ('acceleration', 'y'): ay}

                        node_data = pd.DataFrame(data_dict, columns=data_columns)
                        node = Node(node_type=env.NodeType.PEDESTRIAN, node_id=node_id, data=node_data)
                        node.first_timestep = new_first_idx

                        scene.nodes.append(node)

                    if data_class == 'train':
                        scene.augmented = list()
                        angles = np.arange(0, 360, 15) if data_class == 'train' else [0]
                        for angle in angles:
                            scene.augmented.append(augment_scene(scene, angle))

                    print(scene)
                    scenes.append(scene)

        print(f'Processed {len(scenes)} scenes for data class {data_class}')

        env.scenes = scenes

        if len(scenes) > 0:
            with open(data_dict_path, 'wb') as f:
                dill.dump(env, f, protocol=dill.HIGHEST_PROTOCOL)

print(f"Linear: {l}")
print(f"Non-Linear: {nl}")
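The augmentation in the script above rotates every trajectory by a fixed angle and then recomputes velocities and accelerations by finite differences. A self-contained sketch of those two steps with NumPy, using `np.gradient` as a stand-in for the repo's `derivative_of` (an assumption about its behavior):

```python
import numpy as np

def rotate_pc(pc, alpha):
    # 2x2 rotation matrix applied to a (2, T) array of positions,
    # mirroring the inner helper in augment_scene.
    M = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    return M @ pc

dt = 0.4  # same timestep as the ETH/UCY preprocessing
t = np.arange(5) * dt
x = t.copy()             # walk along +x at 1 m/s
y = np.zeros_like(t)

# Rotate the trajectory by 90 degrees, as augment_scene does per angle.
x_r, y_r = rotate_pc(np.vstack([x, y]), np.pi / 2)

# Finite-difference velocities (stand-in for derivative_of).
vx = np.gradient(x_r, dt)
vy = np.gradient(y_r, dt)

# After a 90-degree rotation, the motion is entirely along +y at 1 m/s.
print(np.allclose(x_r, 0), np.allclose(vy, 1.0))
```

Because velocities and accelerations are rederived from the rotated positions rather than rotated directly, the derived quantities stay numerically consistent with the positions the model is trained on.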

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,846 @@
7110.0 120.0 14.404653033 4.57176712331
7110.0 121.0 14.9851158053 3.62762895564
7110.0 122.0 7.02721954529 4.18060371158
7110.0 123.0 6.97776024452 5.36626559656
7110.0 124.0 5.59563586961 4.78345836566
7110.0 125.0 5.58742773034 3.995403707
7120.0 120.0 14.9430227834 4.51067021458
7120.0 122.0 7.58032185338 4.19420732016
7120.0 123.0 7.50118697216 5.40588312331
7120.0 124.0 6.14473934063 4.85624960457
7120.0 125.0 6.14705445683 4.02786143976
7130.0 122.0 8.13363462659 4.20781092875
7130.0 123.0 7.98146835233 5.43046508268
7130.0 124.0 6.67911025397 4.89491249212
7130.0 125.0 6.70078816026 4.05626195593
7140.0 122.0 8.68673693468 4.22165319713
7140.0 123.0 8.46174973249 5.45480838225
7140.0 124.0 7.19874860963 4.89992434792
7140.0 125.0 7.20127419095 4.04552226494
7150.0 122.0 9.23268342905 4.21019752674
7150.0 123.0 8.94203111266 5.47939034162
7150.0 124.0 7.7183869653 4.90469754391
7150.0 125.0 7.70154975653 4.03502123376
7160.0 122.0 9.77694620254 4.19277536137
7160.0 123.0 9.45177760817 5.4934712698
7160.0 124.0 8.23823578607 4.90947073991
7160.0 125.0 8.20203578722 4.02428154277
7170.0 122.0 10.3209985109 4.17535319599
7170.0 123.0 9.98109735888 5.5006310638
7170.0 124.0 8.75787414174 4.9144825957
7170.0 125.0 8.70252181791 4.01378051158
7180.0 122.0 10.8650508193 4.15793103061
7180.0 123.0 10.5104171096 5.50802951759
7180.0 124.0 9.27372412543 4.92832486408
7180.0 125.0 9.21268924364 4.04528360514
7190.0 122.0 11.3819531285 4.13048515364
7190.0 123.0 11.0405787207 5.47724240342
7190.0 124.0 9.78662759759 4.94813362746
7190.0 125.0 9.72917062266 4.10494855507
7200.0 122.0 11.8870693916 4.09850474048
7200.0 123.0 11.5722135877 5.35791250357
7200.0 124.0 10.2997415349 4.96818105063
7200.0 125.0 10.2458624668 4.164613505
7210.0 122.0 12.3921856548 4.06676298712
7210.0 123.0 12.1040589197 5.23858260371
7210.0 124.0 10.812645007 4.98798981401
7210.0 125.0 10.7623438458 4.22427845493
7220.0 122.0 12.8973019179 4.03502123376
7220.0 123.0 12.6356937866 5.11925270385
7220.0 124.0 11.3257589443 5.00779857739
7220.0 125.0 11.2838763875 4.26938515707
7230.0 122.0 13.4017867856 3.97106040743
7230.0 123.0 13.1563844678 4.97796610242
7230.0 124.0 11.8590775321 4.98369393762
7230.0 125.0 11.8241403239 4.25649752789
7240.0 122.0 13.9043774674 3.77846194906
7240.0 123.0 13.6659204982 4.81472279942
7240.0 124.0 12.3976577476 4.94861094706
7240.0 125.0 12.3646147254 4.2436098987
7250.0 122.0 14.4067576841 3.58610215049
7250.0 123.0 14.1754565286 4.65124083662
7250.0 124.0 12.9362379631 4.9135279565
7250.0 125.0 12.9048786618 4.23072226952
7260.0 122.0 14.9091379008 3.39350369213
7260.0 123.0 14.684992559 4.48799753361
7260.0 124.0 13.4533507375 4.834292903
7260.0 125.0 13.3954728323 4.16867072159
7270.0 124.0 13.9386832803 4.68823310557
7270.0 125.0 13.8359763068 4.05769391473
7280.0 124.0 14.423805358 4.54241196795
7280.0 125.0 14.2764797812 3.94671710786
7290.0 124.0 14.9091379008 4.39659083032
7290.0 125.0 14.7169832557 3.83574030099
7300.0 125.0 15.1574867301 3.72476349413
7300.0 126.0 14.9893251075 5.5822527153
7310.0 126.0 14.6946739541 5.59012848869
7320.0 126.0 14.4000228006 5.59824292188
7330.0 126.0 14.1053716471 5.60611869527
7330.0 127.0 14.7700204633 3.06152791072
7330.0 129.0 14.9975332468 5.39562075192
7340.0 126.0 13.801880959 5.60850529327
7340.0 127.0 14.3257286169 3.21403152274
7340.0 128.0 14.6708913967 4.59014392789
7340.0 129.0 14.6506867461 5.331182606
7350.0 126.0 13.4948123641 5.60850529327
7350.0 127.0 13.8814367705 3.36653513476
7350.0 128.0 14.3059448966 4.55052640114
7350.0 129.0 14.3036297804 5.26674446008
7360.0 126.0 13.187533304 5.60850529327
7360.0 127.0 13.4369344589 3.51880008698
7360.0 128.0 13.9412088616 4.51067021458
7360.0 129.0 13.9567832797 5.20206765435
7360.0 130.0 14.9219762724 4.17654649499
7370.0 126.0 12.8804647091 5.60850529327
7370.0 127.0 12.9936949381 3.55412173733
7370.0 128.0 13.5478495717 4.47988310042
7370.0 129.0 13.6097263139 5.13762950843
7370.0 130.0 14.4442204736 4.00876865579
7380.0 126.0 12.5180437903 5.6614877688
7380.0 127.0 12.5512972776 3.51140163318
7380.0 128.0 13.1115553995 4.46246093504
7380.0 129.0 13.1565949329 5.13118569384
7380.0 130.0 13.9662542096 3.84122947639
7390.0 126.0 12.1556228715 5.71447024434
7390.0 127.0 12.1086891521 3.46844286924
7390.0 128.0 12.6750507621 4.44480010986
7390.0 129.0 12.6914670406 5.13118569384
7390.0 130.0 13.4884984108 3.67345163719
7400.0 126.0 11.7929914876 5.76769137967
7400.0 127.0 11.6662914917 3.42572276509
7400.0 128.0 12.2387565898 4.42737794448
7400.0 129.0 12.2265496135 5.13118569384
7400.0 130.0 12.9962205194 3.52810781916
7410.0 126.0 11.4017368489 5.80874086523
7410.0 127.0 11.235048482 3.4171310123
7410.0 128.0 11.8039356733 4.40255732531
7410.0 129.0 11.7616321863 5.13118569384
7410.0 130.0 12.4700577453 3.43526915708
7420.0 126.0 10.9673368626 5.831652206
7420.0 127.0 10.808856635 3.42309750729
7420.0 128.0 11.3724821986 4.36079186036
7420.0 129.0 11.3335461533 5.14311868382
7420.0 130.0 11.9438949712 3.34266915479
7430.0 126.0 10.5327264112 5.85480220657
7430.0 127.0 10.382664788 3.42906400228
7430.0 128.0 10.9410287239 4.31902639541
7430.0 129.0 10.9300845382 5.1629274472
7430.0 130.0 11.4177321972 3.2498304927
7440.0 126.0 10.117689215 5.88081612474
7440.0 127.0 9.95668340616 3.43503049728
7440.0 128.0 10.5095752492 4.27726093046
7440.0 129.0 10.5268333881 5.18297487038
7440.0 130.0 10.8372694248 3.11188512846
7450.0 126.0 9.74790201742 5.9130351977
7450.0 127.0 9.50734039711 3.43670111587
7450.0 128.0 10.0795950302 4.25816814649
7450.0 129.0 10.123371773 5.20278363375
7450.0 130.0 10.2336554904 2.95508564005
7460.0 126.0 9.3781148198 5.94525427066
7460.0 127.0 9.04810552791 3.43670111587
7460.0 128.0 9.65277178787 4.29134185865
7460.0 129.0 9.71991015784 5.22259239713
7460.0 130.0 9.62983109092 2.79804749184
7470.0 126.0 9.0085380873 5.97747334362
7470.0 127.0 8.58887065871 3.43670111587
7470.0 128.0 9.22594854554 4.32427691101
7470.0 129.0 9.27014621857 5.26698311988
7470.0 130.0 9.02621715651 2.64100934363
7480.0 126.0 8.59223810046 6.02973983976
7480.0 127.0 8.12963578951 3.43670111587
7480.0 128.0 8.79912530322 4.35745062317
7480.0 129.0 8.80880669827 5.31757899742
7480.0 130.0 8.51205089369 2.63528150844
7490.0 126.0 8.14499974251 6.09537128468
7490.0 127.0 7.68913231506 3.44481554906
7490.0 128.0 8.37903694441 4.40494392331
7490.0 129.0 8.34746717797 5.36793621515
7490.0 130.0 8.00798695614 2.64625985922
7500.0 126.0 7.69776138455 6.1610027296
7500.0 127.0 7.2919846532 3.47178410643
7500.0 128.0 7.96168463202 4.45840371845
7500.0 129.0 7.88591719256 5.41853209269
7500.0 130.0 7.50371255348 2.65747686981
7500.0 131.0 15.0061623163 4.22427845493
7510.0 126.0 7.2505230266 6.22663417452
7510.0 127.0 6.89483699133 3.4987526638
7510.0 128.0 7.54454278475 4.51186351358
7510.0 129.0 7.42457767226 5.46888931043
7510.0 130.0 6.99964861593 2.6684552206
7510.0 131.0 14.4799995422 4.29587639484
7520.0 126.0 6.83232885377 6.29059500085
7520.0 127.0 6.49789979458 3.52595988097
7520.0 128.0 7.12719047236 4.56532330872
7520.0 129.0 7.02869280105 5.53595271415
7520.0 130.0 6.52441839839 2.70019697396
7520.0 131.0 13.9538367682 4.36747433476
7530.0 126.0 6.43349747103 6.35336252817
7530.0 127.0 6.10075213272 3.55292843833
7530.0 128.0 6.70983815997 4.61878310385
7530.0 129.0 6.67679513776 5.61423312846
7530.0 130.0 6.06834050583 2.7457809957
7530.0 131.0 13.4276739941 4.43907227467
7540.0 126.0 6.0348765534 6.4161300555
7540.0 127.0 5.70360447085 3.5798969957
7540.0 128.0 6.29248584758 4.67248155879
7540.0 129.0 6.32468700936 5.69227488297
7540.0 130.0 5.61247307838 2.79112635765
7540.0 131.0 12.8520519193 4.47415526523
7550.0 126.0 5.63604517066 6.47889758282
7550.0 127.0 5.3329754128 3.68371400858
7550.0 128.0 5.96079283481 4.73023723032
7550.0 129.0 5.97257888095 5.77031663747
7550.0 130.0 5.15639518583 2.83671037939
7550.0 131.0 12.2762193793 4.50923825579
7560.0 126.0 5.2586812291 6.52710686236
7560.0 127.0 4.96234635475 3.78753102145
7560.0 128.0 5.63878121709 4.78870888125
7560.0 129.0 5.62047075255 5.84835839198
7560.0 130.0 4.70115915371 2.9166614123
7560.0 131.0 11.7005973045 4.54432124634
7570.0 126.0 4.8867893804 6.57149758511
7570.0 127.0 4.5917172967 3.89110937453
7570.0 128.0 5.31655913425 4.84718053218
7570.0 129.0 5.26331146152 5.92162695049
7570.0 130.0 4.24697544713 3.04864028154
7570.0 131.0 11.1855891813 4.5817908349
7580.0 126.0 4.51468706658 6.61588830786
7580.0 127.0 4.22129870376 3.9949263874
7580.0 128.0 4.99454751652 4.90565218311
7580.0 129.0 4.85963938125 5.95098210585
7580.0 130.0 3.79279174056 3.18038049098
7580.0 131.0 10.710779894 4.62045372245
7590.0 126.0 4.14279521787 6.6602790306
7590.0 127.0 3.88413359814 4.12881453504
7590.0 128.0 4.67232543369 4.96412383404
7590.0 129.0 4.45575683588 5.98009860142
7590.0 130.0 3.33860803399 3.31235936022
7590.0 131.0 10.2359706066 4.65935526981
7600.0 126.0 3.78542546172 6.70466975335
7600.0 127.0 3.59705918861 4.30780938483
7600.0 128.0 4.29054172483 5.07510064091
7600.0 129.0 4.05208475562 6.00945375678
7600.0 130.0 2.9999696726 3.47345472503
7600.0 131.0 9.76116131932 4.69825681716
7610.0 126.0 3.44278826325 6.7490604761
7610.0 127.0 3.30998477908 4.48680423461
7610.0 128.0 3.90875801597 5.18607744777
7610.0 129.0 3.64841267536 6.03857025235
7610.0 130.0 2.73815107623 3.65388153361
7610.0 131.0 9.27561831142 4.73166918912
7620.0 126.0 3.10015106478 6.79345119884
7620.0 127.0 3.02312083466 4.6657990844
7620.0 128.0 3.5269743071 5.29705425464
7620.0 129.0 3.27041733847 6.0824836555
7620.0 130.0 2.47633247985 3.8343083422
7620.0 131.0 8.76524042057 4.75195527209
7630.0 126.0 2.75751386631 6.83784192159
7630.0 127.0 2.73604642513 4.84479393418
7630.0 128.0 3.14519059824 5.40803106151
7630.0 129.0 2.9955499053 6.18463004977
7630.0 130.0 2.21451388348 4.01473515078
7630.0 131.0 8.25465206462 4.77224135507
7630.0 132.0 0.0810290672058 3.31307533962
7640.0 126.0 2.43739643457 6.89225635592
7640.0 127.0 2.49548480483 4.98608053561
7640.0 128.0 2.78276967947 5.55958003432
7640.0 129.0 2.72068247213 6.28701510385
7640.0 130.0 1.93775226432 4.21067484634
7640.0 131.0 7.74427417378 4.79252743805
7640.0 132.0 0.582567423443 3.32715626781
7650.0 126.0 2.20735806975 6.98652697681
7650.0 127.0 2.25492318453 5.12736713704
7650.0 128.0 2.42855689997 5.72831251272
7650.0 129.0 2.44560457385 6.38916149813
7650.0 130.0 1.64583715727 4.42236608869
7650.0 131.0 7.23368581783 4.81281352102
7650.0 132.0 1.08389531457 3.34123719599
7660.0 126.0 1.97731970493 7.0810362575
7660.0 127.0 2.01436156423 5.26865373847
7660.0 128.0 2.07434412047 5.89728365092
7660.0 129.0 2.17073714068 6.49154655221
7660.0 130.0 1.35413251533 4.63381867124
7660.0 131.0 6.77466141374 4.82355321201
7660.0 132.0 1.5852232057 3.35531812417
7670.0 126.0 1.74728134011 7.17530687838
7670.0 127.0 1.77401040903 5.4097016801
7670.0 128.0 1.72013134097 6.06601612932
7670.0 129.0 1.8651418015 6.61374036966
7670.0 130.0 1.06242787339 4.84527125378
7670.0 131.0 6.3497323574 4.8280877482
7670.0 132.0 2.10296737538 3.37369492875
7670.0 133.0 15.1311785914 4.62093104205
7680.0 126.0 1.51724297529 7.26957749927
7680.0 127.0 1.53681623049 5.55313621973
7680.0 128.0 1.40001390923 6.25193211329
7680.0 129.0 1.48756739483 6.78223418826
7680.0 130.0 0.777458114957 5.05338259913
7680.0 131.0 5.92480330107 4.8326222844
7680.0 132.0 2.65859526479 3.40233410471
7680.0 133.0 14.6443727929 4.65076351702
7690.0 126.0 1.29457088931 7.47649554562
7690.0 127.0 1.32992902772 5.71494756394
7690.0 128.0 1.10262670933 6.44906510786
7690.0 129.0 1.10999298817 6.95072800685
7690.0 130.0 0.553312773206 5.23046817052
7690.0 131.0 5.49987424473 4.83715682059
7690.0 132.0 3.2142231542 3.43097328068
7690.0 133.0 14.1577774594 4.68059599198
7700.0 126.0 1.07274066376 7.69582390156
7700.0 127.0 1.12304182496 5.87675890814
7700.0 128.0 0.805239509427 6.64619810242
7700.0 129.0 0.732208116387 7.11898316565
7700.0 130.0 0.329377896564 5.40755374191
7700.0 131.0 5.0749451884 4.84169135679
7700.0 132.0 3.76985104361 3.45961245665
7700.0 133.0 13.6709716608 4.71042846694
7700.0 134.0 14.9324995279 3.51880008698
7710.0 126.0 0.850910438216 7.9151522575
7710.0 127.0 0.9161546222 6.03857025235
7710.0 128.0 0.507852309526 6.84333109698
7710.0 129.0 0.602351143748 7.31778677881
7710.0 130.0 0.105232554813 5.58463931329
7710.0 131.0 4.63633589994 4.85314702717
7710.0 132.0 4.32547893302 3.48825163261
7710.0 133.0 13.1812193507 4.69133568297
7710.0 134.0 14.3537204765 3.57082792331
7720.0 126.0 0.665280211526 8.13042339684
7720.0 127.0 0.709056954328 6.20038159656
7720.0 128.0 0.210465109626 7.04046409155
7720.0 129.0 0.499644170251 7.51969296937
7720.0 131.0 4.16594637993 4.88130888354
7720.0 132.0 4.84848473044 3.51712946838
7720.0 133.0 12.6880995989 4.62307898025
7720.0 134.0 13.774941425 3.62309441945
7730.0 126.0 0.624449980259 8.32922701
7730.0 127.0 0.502169751567 6.36219294076
7730.0 129.0 0.397147661863 7.72159915993
7730.0 131.0 3.69555685991 4.90947073991
7730.0 132.0 5.29509169307 3.54624596394
7730.0 133.0 12.194979847 4.55506093733
7730.0 134.0 13.1961623735 3.67536091559
7740.0 126.0 0.583409283882 8.52803062316
7740.0 127.0 0.295282548805 6.52400428497
7740.0 129.0 0.294651153476 7.92350535049
7740.0 131.0 3.2251673399 4.93739393647
7740.0 132.0 5.7419091208 3.57560111931
7740.0 133.0 11.7018600952 4.48680423461
7740.0 134.0 12.713355412 3.73144596852
7750.0 126.0 0.542579052615 8.72707289612
7750.0 131.0 2.75456735478 4.96555579284
7750.0 132.0 6.18851608343 3.60471761487
7750.0 133.0 11.2283135985 4.53310423576
7750.0 134.0 12.2305484506 3.78776968125
7760.0 131.0 2.28438829988 4.9949109482
7760.0 132.0 6.63533351116 3.63407277024
7760.0 133.0 10.7547671019 4.5796428967
7760.0 134.0 11.7477414891 3.84385473418
7760.0 135.0 0.140801158339 2.85651914277
7770.0 131.0 1.81526157052 5.02975527896
7770.0 132.0 7.1061439614 3.61020679027
7770.0 133.0 10.2812206052 4.62594289785
7770.0 134.0 11.2352589471 3.89421195192
7770.0 135.0 0.48891044966 2.89327275193
7770.0 136.0 -0.0492488356524 3.81044236222
7780.0 131.0 1.34613484116 5.06459960972
7780.0 132.0 7.61315441048 3.50710575679
7780.0 133.0 9.84871480493 4.68107331158
7780.0 134.0 10.7034136151 3.94051195307
7780.0 135.0 0.837019740981 2.93002636108
7780.0 136.0 0.389570917917 3.8307284452
7790.0 131.0 0.87700811181 5.09944394048
7790.0 132.0 8.12016485957 3.40424338311
7790.0 133.0 9.47787528177 4.7493300143
7790.0 134.0 10.171357818 3.98705061401
7790.0 135.0 1.1851290323 2.96677997024
7790.0 136.0 0.828601136596 3.85125318798
7800.0 131.0 0.407670917345 5.13404961144
7800.0 132.0 8.62717530866 3.30114234964
7800.0 133.0 9.10703575861 4.81758671702
7800.0 134.0 9.63930202085 4.03335061516
7800.0 135.0 1.53323832362 3.00353357939
7800.0 136.0 1.26763135527 3.87177793075
7810.0 131.0 -0.0614558120107 5.16889394219
7810.0 132.0 9.13418575775 3.19804131616
7810.0 133.0 8.73640670056 4.88560475993
7810.0 134.0 9.14491947834 4.09993669928
7810.0 135.0 1.88113714983 3.04004852875
7810.0 136.0 1.70350459731 3.89110937453
7820.0 132.0 9.65677062495 3.22262327553
7820.0 133.0 8.36914508426 4.95911197825
7820.0 134.0 8.65053693583 4.1665227834
7820.0 135.0 2.27302318396 3.06152791072
7820.0 136.0 2.11117551465 3.90113308612
7830.0 132.0 10.1793554921 3.2474438947
7830.0 133.0 8.01682649075 5.05338259913
7830.0 134.0 8.15615439332 4.23310886752
7830.0 135.0 2.69395340321 3.07250626151
7830.0 136.0 2.51905689711 3.9109181379
7840.0 132.0 10.7004671036 3.27679905007
7840.0 133.0 7.66429743212 5.14789187982
7840.0 134.0 7.66198231592 4.29969495164
7840.0 135.0 3.11488362246 3.0837232721
7840.0 136.0 2.92672781445 3.92094184949
7850.0 132.0 11.2097926689 3.34839698998
7850.0 133.0 7.31197883861 5.24216250071
7850.0 134.0 7.2155858184 4.43119650128
7850.0 135.0 3.53581384171 3.09470162288
7850.0 136.0 3.33460919691 3.93096556108
7860.0 132.0 11.7191182342 3.41999492989
7860.0 133.0 6.95944977999 5.33643312159
7860.0 134.0 6.79002536674 4.59014392789
7860.0 135.0 3.95948010739 3.10854389127
7860.0 136.0 3.74627895133 3.94242123147
7870.0 132.0 12.2284437995 3.49159286981
7870.0 133.0 6.52778584015 5.48702745521
7870.0 134.0 6.36446491508 4.7493300143
7870.0 135.0 4.38924986124 3.12882997424
7870.0 136.0 4.16720917059 3.95769545865
7880.0 132.0 12.7377693648 3.56319080972
7880.0 133.0 6.08707190059 5.64358828382
7880.0 134.0 5.93890446341 4.90851610071
7880.0 135.0 4.8190196151 3.14887739742
7880.0 136.0 4.58813938984 3.97296968583
7890.0 132.0 13.2483577207 3.63001555364
7890.0 133.0 5.64635796103 5.80014911244
7890.0 134.0 5.51187075598 5.05171198053
7890.0 135.0 5.24878936895 3.16916348039
7890.0 136.0 5.00906960909 3.98824391301
7900.0 132.0 13.771153053 3.65292689441
7900.0 133.0 5.22753239288 5.95551664205
7900.0 134.0 5.07915449059 5.13118569384
7900.0 135.0 5.67876958792 3.18921090357
7900.0 136.0 5.42999982834 4.00327948039
7910.0 132.0 14.2939483853 3.67607689499
7910.0 133.0 4.81796728955 6.11040685206
7910.0 134.0 4.64664869031 5.21065940714
7910.0 135.0 6.10853934177 3.20949698655
7910.0 136.0 5.84882539649 4.02929339856
7920.0 132.0 14.8167437176 3.69922689556
7920.0 133.0 4.40840218622 6.26505840228
7920.0 134.0 4.21393242492 5.29037178025
7920.0 135.0 6.53830909563 3.22954440972
7920.0 136.0 6.26596724377 4.06222845092
7930.0 133.0 3.99883708289 6.41994861229
7930.0 134.0 3.79489639166 5.43237436108
7930.0 135.0 6.98049629095 3.2457732761
7930.0 136.0 6.68331955616 4.09516350328
7940.0 133.0 3.61200221139 6.51660583118
7940.0 134.0 3.384699893 5.61590374706
7940.0 135.0 7.47151139171 3.2457732761
7940.0 136.0 7.10067186855 4.12809855564
7950.0 133.0 3.2251673399 6.61350170986
7950.0 134.0 2.97450339434 5.79943313304
7950.0 135.0 7.96252649246 3.2457732761
7950.0 136.0 7.51802418093 4.1607949482
7960.0 133.0 2.83833246841 6.71015892874
7960.0 134.0 2.56451736079 5.98296251902
7960.0 135.0 8.45375205833 3.2457732761
7960.0 136.0 7.93516602821 4.19373000056
7970.0 133.0 2.46559875926 6.8306821276
7970.0 134.0 2.16400225717 6.14620582202
7970.0 135.0 8.94476715908 3.2457732761
7970.0 136.0 8.42533926853 4.22929031072
7980.0 133.0 2.12506621189 7.00609708039
7980.0 134.0 1.76790692085 6.30061871244
7980.0 135.0 9.43578225984 3.2457732761
7980.0 136.0 8.93361250827 4.26532794048
7990.0 133.0 1.78453366451 7.18151203318
7990.0 134.0 1.37181158454 6.45503160285
7990.0 135.0 9.9202729422 3.23407894592
7990.0 136.0 9.44188574802 4.30136557024
8000.0 133.0 1.44421158225 7.35692698597
8000.0 134.0 0.997815084735 6.67507593819
8000.0 135.0 10.3782450207 3.17536863519
8000.0 136.0 9.95015898777 4.33740319999
8010.0 133.0 1.26279065775 7.54761616594
8010.0 134.0 0.675382536788 7.04786254534
8010.0 135.0 10.8360066342 3.11689698426
8010.0 136.0 10.4584322275 4.37344082975
8010.0 137.0 8.84837413888 -0.159186086409
8010.0 138.0 8.11700788293 -0.276845367668
8020.0 133.0 1.09904880246 7.73997596451
8020.0 134.0 0.353160453952 7.42088781229
8020.0 135.0 11.2937682476 3.05818667353
8020.0 136.0 10.9488159329 4.36222381916
8020.0 137.0 9.08809389874 0.0264912377682
8020.0 138.0 8.36241020075 0.0334123719599
8030.0 133.0 0.935306947176 7.93209710328
8030.0 134.0 0.030938371115 7.79367441944
8030.0 135.0 11.7517403262 2.9994763628
8030.0 136.0 11.421310104 4.30375216823
8030.0 137.0 9.32760319349 0.212168561945
8030.0 138.0 8.60802298368 0.343670111587
8040.0 133.0 0.789244161096 8.12851411844
8040.0 135.0 12.2095019396 2.94100471187
8040.0 136.0 11.8938042752 4.2452805173
8040.0 137.0 9.56732295336 0.397845886122
8040.0 138.0 8.85363576662 0.653927851215
8050.0 133.0 0.684011606283 8.33471618539
8050.0 135.0 12.6832589014 2.92071862889
8050.0 136.0 12.3662984463 4.18680886637
8050.0 137.0 9.86029038596 0.791873215449
8050.0 138.0 9.14597180389 1.15439745121
8060.0 133.0 0.57877905147 8.54067959255
8060.0 135.0 13.1730112115 2.93909543347
8060.0 136.0 12.8387926174 4.12833721544
8060.0 137.0 10.1759880504 1.27515930987
8060.0 138.0 9.44988342219 1.70236035136
8070.0 133.0 0.473546496657 8.7468816595
8070.0 135.0 13.6627635216 2.95747223805
8070.0 136.0 13.298869347 4.11902948325
8070.0 137.0 10.4916857148 1.75844540429
8070.0 138.0 9.7607403891 2.2455500555
8080.0 133.0 0.368313941845 8.95308372645
8080.0 135.0 14.1525158317 2.97584904263
8080.0 136.0 13.7505274723 4.14217948383
8080.0 137.0 10.8073833793 2.24173149871
8080.0 138.0 10.1338950285 2.7445876967
8090.0 135.0 14.6422681418 2.9942258472
8090.0 136.0 14.2021855975 4.1655681442
8090.0 137.0 11.139076392 2.55365985693
8090.0 138.0 10.5070496678 3.24362533791
8100.0 135.0 15.1320204519 3.01260265178
8100.0 136.0 14.6540541879 4.18871814477
8100.0 137.0 11.4777147534 2.79231965665
8100.0 138.0 10.8818880281 3.47082946723
8110.0 136.0 15.1057123132 4.21210680514
8110.0 137.0 11.8232984634 3.02358100257
8110.0 138.0 11.2571473185 3.6682011216
8120.0 137.0 12.1794054289 3.24386399771
8120.0 138.0 11.7292205594 3.86867535336
8130.0 137.0 12.5357228595 3.46414699284
8130.0 138.0 12.265906589 4.07153618311
8140.0 137.0 12.9534961021 3.50829905579
8140.0 138.0 12.7480821551 4.19468463976
8150.0 137.0 13.3956832974 3.50829905579
8150.0 138.0 13.1936367922 4.26508928068
8160.0 137.0 13.9182681646 3.50829905579
8160.0 138.0 13.6657100331 4.31496917882
8170.0 137.0 14.4408530318 3.50829905579
8170.0 138.0 14.1409402506 4.36270113876
8180.0 137.0 14.963437899 3.50829905579
8180.0 138.0 14.6159600031 4.4104330987
8190.0 138.0 15.0909797555 4.45816505865
8360.0 139.0 0.244560457385 3.58825008869
8370.0 139.0 0.796399974823 3.67201967839
8380.0 139.0 1.34802902715 3.75602792789
8390.0 139.0 1.89986854459 3.83979751759
8400.0 139.0 2.44749875984 3.89707586952
8400.0 140.0 15.2246251001 6.5788960389
8410.0 139.0 2.98860455668 3.9140207153
8410.0 140.0 14.6367960489 6.44715582946
8420.0 139.0 3.52971035353 3.93096556108
8420.0 140.0 14.0489669977 6.31541562002
8430.0 139.0 4.07102661549 3.94814906666
8430.0 140.0 13.4611379465 6.18367541058
8440.0 139.0 4.61086962168 3.96294597424
8440.0 140.0 12.8499472682 6.06530014992
8450.0 139.0 5.14503006991 3.96915112903
8450.0 140.0 12.2330740319 5.95002746666
8460.0 139.0 5.67940098324 3.97511762403
8460.0 140.0 11.6162007956 5.83475478339
8470.0 139.0 6.21356143147 3.98132277882
8470.0 140.0 11.0090089543 5.72019807953
8480.0 139.0 6.67174397513 3.98991453161
8480.0 140.0 10.4028694386 5.60564137567
8490.0 139.0 6.95229396626 4.00447277939
8490.0 140.0 9.821985736 5.60778931387
8500.0 139.0 7.2330544225 4.01879236737
8500.0 140.0 9.27877528806 5.78487488525
8510.0 139.0 7.51360441363 4.03335061516
8510.0 140.0 8.73556484012 5.96196045664
8520.0 139.0 7.78994510257 4.0779799977
8520.0 140.0 8.20014160123 6.10133777967
8530.0 139.0 8.12395323155 4.12165474105
8530.0 140.0 7.68260789666 6.15288829641
8540.0 139.0 8.5448834508 4.1629428864
8540.0 140.0 7.16486372698 6.20443881315
8550.0 139.0 8.97507413487 4.20446969155
8550.0 140.0 6.64733002241 6.25622798969
8550.0 148.0 0.461549985409 8.76525846408
8560.0 139.0 9.48966132791 4.2440872183
8560.0 140.0 6.12306143434 6.3292578884
8560.0 148.0 0.573938353949 8.61394815106
8570.0 139.0 10.0040380558 4.28394340486
8570.0 140.0 5.59416261385 6.4163687153
8570.0 148.0 0.686116257379 8.46287649784
8580.0 139.0 10.5186252489 4.32379959141
8580.0 140.0 5.06547425847 6.50371820199
8580.0 141.0 7.18317419152 -0.37469588555
8580.0 148.0 0.79829416081 8.31180484462
8590.0 139.0 10.9660740719 4.34862021058
8590.0 140.0 4.53678590309 6.59106768869
8590.0 141.0 6.94113931545 0.305484543633
8590.0 148.0 0.846280205804 8.17338216079
8600.0 139.0 11.3468054552 4.35888258197
8600.0 140.0 4.04850684876 6.63474243203
8600.0 141.0 6.69910443938 0.985664972817
8600.0 142.0 7.72617417435 -0.289732996852
8600.0 148.0 0.878060437358 8.03830071415
8610.0 139.0 11.7273263734 4.36890629355
8610.0 140.0 3.60063709547 6.63474243203
8610.0 141.0 6.3672009615 1.86536499456
8610.0 142.0 7.61020789895 -0.0346056709584
8610.0 148.0 0.909840668911 7.90321926751
8620.0 139.0 12.1080577568 4.37916866494
8620.0 140.0 3.15255687708 6.63474243203
8620.0 141.0 6.01256725178 2.79494491444
8620.0 142.0 7.49403115844 0.220521654935
8620.0 143.0 15.186109985 4.79586867524
8620.0 148.0 0.941620900465 7.76813782087
8630.0 139.0 12.488578675 4.38919237653
8630.0 140.0 2.7046871238 6.63474243203
8630.0 141.0 5.73159633043 3.1662995628
8630.0 142.0 7.37785441792 0.475648980829
8630.0 143.0 14.7268751158 4.95863465865
8630.0 148.0 0.973401132018 7.63305637424
8640.0 139.0 12.7952263397 4.35124546838
8640.0 140.0 2.24503132438 6.64524346322
8640.0 141.0 5.45883354836 3.47560266323
8640.0 142.0 7.05310675377 0.856311361372
8640.0 143.0 14.2676402466 5.12140064205
8640.0 148.0 1.05001043192 7.52780740256
8650.0 139.0 13.0831426097 4.30088825064
8650.0 140.0 1.77716738568 6.6631429482
8650.0 141.0 5.23510913683 3.76676761888
8650.0 142.0 6.70499746245 1.2510546701
8650.0 143.0 13.7985135173 5.22330837653
8650.0 144.0 14.9263960398 4.59706506208
8650.0 148.0 1.17186973039 7.45239090585
8660.0 139.0 13.3874751582 4.30566144663
8660.0 140.0 1.30951391209 6.68080377338
8660.0 141.0 5.20753820746 3.98466401602
8660.0 142.0 6.35141607828 1.64150210243
8660.0 143.0 13.3194949278 5.26459652188
8660.0 144.0 14.5271437268 4.67104959999
8660.0 145.0 14.8632565069 2.38063150214
8660.0 146.0 15.0272088273 3.15030935622
8660.0 147.0 15.0602518495 5.43237436108
8660.0 148.0 1.29372902887 7.37721306894
8670.0 139.0 13.6937018927 4.31640113762
8670.0 140.0 0.98371392239 6.87459553075
8670.0 141.0 5.18017774321 4.20256041315
8670.0 142.0 5.98499632242 2.02144850357
8670.0 143.0 12.8404763383 5.30564600743
8670.0 144.0 14.1281018789 4.7452727977
8670.0 145.0 14.2470146659 2.44029645207
8670.0 146.0 14.4435890783 3.19374543977
8670.0 147.0 14.5191460526 5.52330374477
8670.0 148.0 1.41558832734 7.30179657223
8680.0 139.0 14.0853774617 4.26007742489
8680.0 140.0 0.673488350802 7.08819605149
8680.0 141.0 5.01264751595 4.28967124005
8680.0 142.0 5.61857656656 2.40163356452
8680.0 143.0 12.3711391438 5.24192384091
8680.0 144.0 13.7071716597 4.79205011845
8680.0 145.0 13.6305623598 2.499961402
8680.0 146.0 13.8601797944 3.23718152331
8680.0 147.0 13.9778297907 5.61423312846
8680.0 148.0 1.46736274431 7.18127337338
8690.0 139.0 14.5138844249 4.17487587639
8690.0 140.0 0.431453474732 7.50585070099
8690.0 141.0 4.82975333569 4.36222381916
8690.0 142.0 5.18059867343 2.59781191988
8690.0 143.0 11.9081159026 5.10827435307
8690.0 144.0 13.2353088839 4.77558259227
8690.0 145.0 13.0206344721 2.52358872217
8690.0 146.0 13.2334146979 3.28944801945
8690.0 147.0 13.5268030607 5.67318209899
8690.0 148.0 1.48925111571 7.04118007095
8700.0 139.0 14.942180923 4.08967432789
8700.0 140.0 0.189418598663 7.92350535049
8700.0 141.0 4.5369963682 4.36031454076
8700.0 142.0 4.69505566553 2.67108047839
8700.0 143.0 11.4560368471 4.98417125722
8700.0 144.0 12.7636565732 4.75911506609
8700.0 145.0 12.4168100726 2.51093975279
8700.0 146.0 12.596126346 3.34386245379
8700.0 147.0 13.0985065627 5.72449395593
8700.0 148.0 1.510929022 6.90132542831
8710.0 140.0 0.15174534404 8.13662855163
8710.0 141.0 4.17078707745 4.30828670443
8710.0 142.0 4.20930219251 2.7445876967
8710.0 143.0 11.1014031374 4.94646300886
8710.0 144.0 12.3267310057 4.73286248812
8710.0 145.0 11.8131961382 2.4982907834
8710.0 146.0 11.9586275289 3.39851554792
8710.0 147.0 12.6699995995 5.77556715307
8710.0 148.0 1.50503599893 6.86648109756
8720.0 140.0 0.114282554527 8.34975175277
8720.0 141.0 3.78184755486 4.26222536308
8720.0 142.0 3.8001580194 2.75795264549
8720.0 143.0 10.7448752417 4.90875476051
8720.0 144.0 11.9420007853 4.69181300257
8720.0 145.0 11.2095822038 2.48588047382
8720.0 146.0 11.3244961536 3.44720214706
8720.0 147.0 12.2069763583 5.7438253997
8720.0 148.0 1.49198716214 6.85788934477
8730.0 140.0 0.0766092999037 8.56287495392
8730.0 141.0 3.33923942932 4.22929031072
8730.0 142.0 3.37438710263 2.76368048068
8730.0 143.0 10.3792973463 4.87128517195
8730.0 144.0 11.55748103 4.65052485722
8730.0 145.0 10.6032322229 2.51070109299
8730.0 146.0 10.6983624525 3.48204647782
8730.0 147.0 11.7439531171 5.71184498654
8730.0 148.0 1.47893832534 6.84905893218
8740.0 141.0 2.89642083867 4.19635525836
8740.0 142.0 2.88337200187 2.73981450071
8740.0 143.0 10.0162450322 4.89157125493
8740.0 144.0 11.14475895 4.65601403261
8740.0 145.0 9.99582991655 2.55151191874
8740.0 146.0 10.0720182862 3.51712946838
8740.0 147.0 11.2809298759 5.68010323318
8740.0 148.0 1.46588948854 6.84046717939
8750.0 141.0 2.44791969005 4.14361144262
8750.0 142.0 2.39235690111 2.71594852074
8750.0 143.0 9.65950667137 5.04717744434
8750.0 144.0 10.7040450104 4.70828052875
8750.0 145.0 9.38863807529 2.59256140429
8750.0 146.0 9.44567411999 3.55197379914
8750.0 147.0 10.8665240751 5.64287230442
8750.0 148.0 1.45284065175 6.8316367668
8760.0 141.0 1.99078947195 4.06127381172
8760.0 142.0 1.91838947424 2.70258357196
8760.0 143.0 9.30255784545 5.20278363375
8760.0 144.0 10.263541536 4.76054702488
8760.0 145.0 8.78123576891 2.63337223004
8760.0 146.0 8.82985320923 3.58992070729
8760.0 147.0 10.4523287393 5.60540271587
8760.0 148.0 1.43979181495 6.82304501401
8770.0 141.0 1.53344878873 3.97893618082
8770.0 142.0 1.51282320799 2.73170006752
8770.0 143.0 8.96939157691 5.27247229527
8770.0 144.0 9.83419271236 4.80732434563
8770.0 145.0 8.17383346253 2.67442171559
8770.0 146.0 8.25507299484 3.64051658483
8770.0 147.0 10.0379229385 5.56817178711
8770.0 148.0 1.42695344326 6.81445326122
8780.0 141.0 1.08221159369 3.90566762231
8780.0 142.0 1.10725694174 2.76081656309
8780.0 143.0 8.63601484326 5.34192229699
8780.0 144.0 9.43115202743 4.84145269699
8780.0 145.0 7.56643115615 2.71523254134
8780.0 146.0 7.68029278045 3.69087380257
8780.0 147.0 9.64309039282 5.56650116852
8780.0 148.0 1.41390460646 6.80562284863
8790.0 141.0 0.637077886836 3.84146813619
8790.0 142.0 0.701690675492 2.78993305865
8790.0 143.0 8.28706369151 5.36984549355
8790.0 144.0 9.02832180761 4.87558104835
8790.0 145.0 6.95923931488 2.75628202689
8790.0 146.0 7.10572303118 3.74146968011
8790.0 147.0 9.25309854469 5.57342230271
8790.0 148.0 1.40085576967 6.79703109584
8800.0 141.0 0.191944179978 3.77726865006
8800.0 142.0 0.296124409243 2.81904955422
8800.0 143.0 7.92758928426 5.36984549355
8800.0 144.0 8.62549158778 4.90947073991
8800.0 145.0 6.38235444939 2.79399027525
8800.0 146.0 6.53094281679 3.79182689785
8800.0 147.0 8.86310669655 5.5803434369
8800.0 148.0 1.38780693287 6.78843934305
8810.0 143.0 7.57148231878 5.37581198855
8810.0 144.0 8.22076718197 4.93071146208
8810.0 145.0 5.81872888582 2.8302665648
8810.0 146.0 5.9561626024 3.84242277539
8810.0 147.0 8.46511717425 5.60659601487
8810.0 148.0 1.37475809607 6.77960893046
8820.0 143.0 7.22947651564 5.40564446351
8820.0 144.0 7.81562184594 4.94861094706
8820.0 145.0 5.25510332224 2.86678151416
8820.0 146.0 5.40369168964 3.86032226037
8820.0 147.0 8.05492067559 5.6617264286
8820.0 148.0 1.36191972439 6.77101717767
8830.0 143.0 6.8874707125 5.43547693847
8830.0 144.0 7.41047650991 4.96651043204
8830.0 145.0 4.69168822377 2.90305780371
8830.0 146.0 4.85122077687 3.87822174534
8830.0 147.0 7.64472417693 5.71685684234
8830.0 148.0 1.34887088759 6.76242542488
8840.0 143.0 6.54546490935 5.46530941344
8840.0 144.0 7.00533117389 4.98440991702
8840.0 145.0 4.1280626602 2.93957275307
8840.0 146.0 4.2987498641 3.89612123032
8840.0 147.0 7.23452767827 5.77198725607
8840.0 148.0 1.33582205079 6.75359501229
8850.0 143.0 6.18199166503 5.55218158053
8850.0 144.0 6.58755793128 5.04479084634
8850.0 145.0 3.56443709662 2.97584904263
8850.0 146.0 3.74627895133 3.9140207153
8850.0 147.0 6.84621955101 5.8302202472
8850.0 148.0 1.322773214 6.7450032595
8860.0 143.0 5.8183079556 5.63881508783
8860.0 144.0 6.16662771203 5.11567280686
8860.0 145.0 3.01554409071 2.98635007381
8860.0 146.0 3.18686268995 3.96437793304
8860.0 147.0 6.47937886493 5.89155581573
8860.0 148.0 1.3097243772 6.73617284691
8870.0 143.0 5.45483471127 5.72568725493
8870.0 144.0 5.74569749278 5.18655476737
8870.0 145.0 2.47043945678 2.99040729041
8870.0 146.0 2.62744642857 4.01449649098
8870.0 147.0 6.11253817885 5.95289138425
8870.0 148.0 1.2966755404 6.72758109412
8880.0 143.0 5.07957542081 5.84859705178
8880.0 144.0 5.32371494798 5.2469356967
8880.0 145.0 1.92533482285 2.9942258472
8880.0 146.0 2.06803016718 4.06485370872
8880.0 147.0 5.74569749278 6.01422695278
8880.0 148.0 1.28383716872 6.71898934133
8890.0 143.0 4.69947543283 5.98725839541
8890.0 144.0 4.89205100814 5.21018208754
8890.0 145.0 1.38001972381 2.998044404
8890.0 146.0 1.51134995222 4.09969803948
8890.0 147.0 5.27341378678 5.96768829183
8890.0 148.0 1.27078833192 6.71015892874
8900.0 143.0 4.31937544484 6.12591973904
8900.0 144.0 4.46017660318 5.17342847839
8900.0 145.0 0.834915089884 3.0021016206
8900.0 146.0 0.965824388072 4.07320680171
8900.0 147.0 4.80113008078 5.92114963089
8900.0 148.0 1.25773949512 6.70156717595
8910.0 143.0 3.95379754943 6.19107386437
8910.0 144.0 4.02851266334 5.13667486923
8910.0 145.0 0.289810455954 3.00592017739
8910.0 146.0 0.420088358813 4.04671556394
8910.0 147.0 4.32884637478 5.87461096995
8910.0 148.0 1.24469065833 6.69297542316
8920.0 143.0 3.59832197927 6.20754139055
8920.0 144.0 3.60905569986 5.10039857968
8920.0 146.0 -0.125437205337 4.01998566637
8920.0 147.0 3.85656266878 5.828072309
8920.0 148.0 1.23164182153 6.68414501057
8930.0 143.0 3.242635944 6.22400891673
8930.0 144.0 3.2382161767 5.06650888812
8930.0 147.0 3.44468244924 5.7438253997
8930.0 148.0 1.21880344984 6.67555325778
8940.0 143.0 2.78508479567 6.165298606
8940.0 144.0 2.86737665354 5.03238053676
8940.0 147.0 3.0393266481 5.65528261401
8940.0 148.0 1.20575461304 6.666961505
8950.0 143.0 2.28396736966 6.07436922231
8950.0 144.0 2.49653713038 4.9982521854
8950.0 147.0 2.62765689367 5.56817178711
8950.0 148.0 1.19270577625 6.65813109241
8960.0 143.0 1.78579645517 5.97938262202
8960.0 144.0 2.12569760722 4.96412383404
8960.0 147.0 2.19052086098 5.48559549641
8960.0 148.0 1.17965693945 6.64953933962
8970.0 143.0 1.28930926157 5.88153210414
8970.0 144.0 1.67467087729 4.78226506666
8970.0 147.0 1.75338482829 5.40278054591
8970.0 148.0 1.16660810265 6.64070892703
8980.0 143.0 0.792822067959 5.78344292646
8980.0 144.0 1.22364414736 4.60040629928
8980.0 147.0 1.31246042362 5.27772281086
8980.0 148.0 1.07231973354 6.5767481007
8990.0 143.0 0.296334874353 5.68559240857
8990.0 144.0 0.772617417435 4.41854753189
8990.0 147.0 0.869010437644 5.12426455965
8990.0 148.0 0.787770905328 6.38319500314
9000.0 144.0 0.321801152617 4.23668876451
9000.0 147.0 0.425349986553 4.97104496823
9000.0 148.0 0.503432542224 6.18964190557
9010.0 148.0 0.21909417912 5.996088808

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,481 @@
5940.0 95.0 14.6694181409 7.59129090929
5940.0 96.0 14.6927797681 6.77030119827
5950.0 95.0 15.195580915 7.38007698654
5950.0 96.0 15.268612308 6.60085274047
5970.0 97.0 14.796328602 8.11896772645
5970.0 98.0 14.748342557 8.76859970127
5980.0 97.0 14.2503821076 8.22326205893
5980.0 98.0 14.2747960604 8.83566310499
5990.0 97.0 13.7046460784 8.3277950512
5990.0 98.0 13.8012495637 8.90272650871
6000.0 97.0 13.1595414444 8.42612288868
6000.0 98.0 13.3041309748 8.98768939741
6010.0 97.0 12.6163309965 8.51084711758
6010.0 98.0 12.751660062 9.11465641086
6020.0 97.0 12.0733310137 8.59557134648
6020.0 98.0 12.1991891492 9.2413847645
6030.0 97.0 11.5204391707 8.69771774076
6030.0 98.0 11.6502961433 9.36859043775
6040.0 97.0 10.9284008173 8.86931413675
6040.0 98.0 11.1336042992 9.4986600286
6040.0 99.0 14.9234495282 8.73160743232
6050.0 97.0 10.336572929 9.04067187295
6050.0 98.0 10.6169124551 9.62872961944
6050.0 99.0 14.2996309433 8.83375382659
6060.0 97.0 9.74453457566 9.21226826894
6060.0 98.0 10.100431076 9.75903787008
6060.0 99.0 13.6758123584 8.93613888067
6070.0 97.0 9.19290552333 9.36047600456
6070.0 98.0 9.54817062838 9.87192395535
6070.0 99.0 13.0551507501 9.02229506837
6070.0 100.0 14.6401634907 9.11465641086
6080.0 97.0 8.64148693611 9.50844508038
6080.0 98.0 8.98686018101 9.98027550442
6080.0 99.0 12.4393298393 9.08410795649
6080.0 100.0 13.9081658394 9.06286723432
6090.0 97.0 8.08964741868 9.65164096021
6090.0 98.0 8.42554973364 10.0888657133
6090.0 99.0 11.8235089285 9.14592084462
6090.0 100.0 13.1601728398 8.9986677482
6100.0 97.0 7.53612418036 9.74877549869
6100.0 98.0 7.86844858846 10.1907734478
6100.0 99.0 11.1923240648 9.23541826951
6100.0 100.0 12.3481984468 8.88506568353
6110.0 97.0 6.98281140716 9.84614869698
6110.0 98.0 7.31513581525 10.2862373676
6110.0 99.0 10.5544043175 9.33661002459
6110.0 100.0 11.5358031237 8.77862341286
6110.0 101.0 15.1280216148 8.40249556851
6120.0 97.0 6.42928816884 9.95044302945
6120.0 98.0 6.76203350716 10.3817012875
6120.0 99.0 9.91669503534 9.43780177967
6120.0 100.0 10.7173043124 8.73900588611
6120.0 101.0 14.5191460526 8.51323371558
6130.0 97.0 5.87597539564 10.0652383931
6130.0 98.0 6.20872073395 10.4771652074
6130.0 99.0 9.28761482266 9.50558116279
6130.0 100.0 9.88028457137 8.72086774133
6130.0 101.0 13.9102704905 8.62421052245
6140.0 97.0 5.32245215732 10.1802724166
6140.0 98.0 5.66277423958 10.5890966535
6140.0 99.0 8.67116251657 9.52252600857
6140.0 100.0 9.00032994803 8.75332547409
6140.0 101.0 13.3013949283 8.73494866951
6150.0 97.0 4.79565798793 10.2721564395
6150.0 98.0 5.11682774522 10.7010280996
6150.0 99.0 8.05471021048 9.53947085435
6150.0 100.0 8.12142765023 8.78649918625
6150.0 101.0 12.6474798327 8.9139435193
6160.0 97.0 4.28043939956 10.3544940704
6160.0 98.0 4.57109171596 10.8127208858
6160.0 99.0 7.44562418322 9.61560333045
6160.0 100.0 7.25431139857 8.8246847542
6160.0 101.0 11.9935647371 9.09293836908
6170.0 97.0 3.7652208112 10.4368317013
6170.0 98.0 4.01798940786 10.9392105797
6170.0 99.0 6.84137885349 9.73111467352
6170.0 100.0 6.38719514692 8.86287032216
6170.0 101.0 11.3531194085 9.26334146608
6180.0 97.0 3.22011617727 10.5647533539
6180.0 98.0 3.46320337889 11.0695188303
6180.0 99.0 6.23713352375 9.84662601658
6180.0 100.0 5.48114284998 8.88840692073
6180.0 101.0 10.7667636131 9.39985487151
6190.0 97.0 2.6676452645 10.7038920171
6190.0 98.0 2.90820688481 11.1995884212
6190.0 99.0 5.63983354264 9.95450024605
6190.0 100.0 4.54899287945 8.90535176651
6190.0 101.0 10.1804078177 9.53612961715
6200.0 97.0 2.12464528167 10.836348206
6200.0 98.0 2.3593138789 11.3456482186
6200.0 99.0 5.04905797992 10.0544987021
6200.0 100.0 3.61705337402 8.92253527209
6200.0 101.0 9.57616248796 9.67526828038
6210.0 97.0 1.5961673914 10.9583033636
6210.0 98.0 1.81610343096 11.507220903
6210.0 99.0 4.45849288231 10.1544971582
6210.0 100.0 2.68048363619 8.98673475821
6210.0 101.0 8.94476715908 9.81846416021
6220.0 97.0 1.06747903602 11.0804971811
6220.0 98.0 1.27289298302 11.6690322472
6220.0 99.0 3.83804173913 10.2413693253
6220.0 100.0 1.74307203792 9.06262857452
6220.0 101.0 8.31337183021 9.96166004004
6230.0 97.0 0.538790680641 11.2026909985
6230.0 98.0 0.752202301802 11.8263090552
6230.0 99.0 3.17297199271 10.3081940692
6230.0 100.0 0.831337183021 9.22730383632
6230.0 101.0 7.71102068646 10.1222780852
6240.0 97.0 0.0103127903717 11.3246461562
6240.0 98.0 0.264765107909 11.9769033888
6240.0 99.0 2.5079022463 10.3750188131
6240.0 100.0 -0.0631395328877 9.45092806865
6240.0 101.0 7.15181489019 10.3086713888
6250.0 98.0 -0.222672085984 12.1277363822
6250.0 99.0 1.90155226547 10.4838476818
6250.0 101.0 6.59281955902 10.4950646924
6260.0 99.0 1.30172670303 10.5972110867
6260.0 101.0 6.0260370188 10.6805033568
6270.0 99.0 0.701901140601 10.7105744915
6270.0 101.0 5.42831610746 10.8618848046
6280.0 99.0 0.102075578168 10.8239378964
6280.0 101.0 4.83059519613 11.0432662523
6290.0 102.0 15.0482553382 8.80654660943
6290.0 103.0 3.81699522817 9.61608065005
6290.0 104.0 0.542368587505 13.7658972475
6300.0 102.0 14.2617472236 8.96978991243
6300.0 103.0 4.41471613951 9.4919775542
6300.0 104.0 1.19775693888 13.6704333276
6310.0 102.0 13.469767016 9.12396414305
6310.0 103.0 5.01243705084 9.36787445835
6310.0 104.0 1.85335575536 13.5749694077
6320.0 102.0 12.6279065775 9.19556208296
6320.0 103.0 5.604054474 9.28100229126
6320.0 104.0 2.50874410674 13.4795054878
6330.0 102.0 11.7900449761 9.27360383747
6330.0 103.0 6.19504050183 9.19842600056
6330.0 104.0 3.16118594658 13.3864281659
6340.0 102.0 10.9883833735 9.4101172429
6340.0 103.0 6.79549745959 9.11036053446
6340.0 104.0 3.8066824378 13.2988400194
6350.0 102.0 10.1865113059 9.54639198854
6350.0 103.0 7.4188951143 9.00940743918
6350.0 104.0 4.45196846391 13.2114905327
6360.0 102.0 9.36464505279 9.73970642631
6360.0 103.0 8.04229276901 8.9084543439
6360.0 104.0 5.11977425675 13.1200838294
6360.0 105.0 -0.0631395328877 13.025335889
6370.0 102.0 8.5427787997 9.93278220427
6370.0 103.0 8.66021833087 8.81012650642
6370.0 104.0 5.83935446656 13.0200853734
6370.0 105.0 0.357790686363 12.9642389802
6380.0 102.0 7.75458696415 10.0301554026
6380.0 103.0 9.27603924164 8.71275330814
6380.0 104.0 6.55914514148 12.9198482575
6380.0 105.0 0.778720905614 12.9029034117
6390.0 102.0 6.97018350058 10.1170275697
6390.0 103.0 9.8918601524 8.61561876966
6390.0 104.0 7.27788349085 12.795267842
6390.0 105.0 1.19965112487 12.841806503
6400.0 102.0 6.13379515493 10.2031837573
6400.0 103.0 10.5232554813 8.37695896994
6400.0 104.0 7.99346486358 12.5725982489
6400.0 105.0 1.62058134412 12.7804709345
6410.0 102.0 5.27509750765 10.2891012852
6410.0 103.0 11.1546508102 8.13829917023
6410.0 104.0 8.7090462363 12.3499286558
6410.0 105.0 2.07602784135 12.7193740257
6420.0 102.0 4.42292427878 10.3614152046
6420.0 103.0 11.7786798602 7.89963937052
6420.0 104.0 9.40779040026 12.1050637013
6420.0 105.0 2.54010340807 12.6580384572
6430.0 102.0 3.59726965372 10.3797920091
6430.0 103.0 12.37345426 7.6609795708
6430.0 104.0 10.0391857291 11.7728492601
6430.0 105.0 3.00417897479 12.5969415485
6430.0 106.0 14.9851158053 8.75881464948
6430.0 107.0 15.2561948665 8.16813164519
6440.0 102.0 2.77161502866 10.3981688137
6440.0 103.0 12.9682286598 7.42231977109
6440.0 104.0 10.670581058 11.4403961591
6440.0 105.0 3.46825454152 12.5358446397
6440.0 106.0 14.4042321028 8.76836104147
6440.0 107.0 14.6334286072 8.22779659512
6450.0 102.0 1.94069877586 10.4463780933
6450.0 103.0 13.6198286392 7.23735842631
6450.0 104.0 11.3051333635 11.0719054283
6450.0 105.0 4.00262545486 12.4904992778
6450.0 106.0 13.8233484002 8.77790743346
6450.0 107.0 14.0108728129 8.28746154505
6460.0 102.0 1.1064150813 10.514634796
6460.0 103.0 14.2775321068 7.05836357653
6460.0 104.0 11.9447368317 10.6497162426
6460.0 105.0 4.54478357725 12.4473018541
6460.0 106.0 13.237413535 8.84688011558
6460.0 107.0 13.3876856233 8.35380896937
6470.0 102.0 0.272131386746 10.5828914987
6470.0 103.0 14.9352355744 6.87936872674
6470.0 104.0 12.5841298347 10.2275270569
6470.0 105.0 5.08694169965 12.4038657705
6470.0 106.0 12.6464275072 8.97551774762
6470.0 107.0 12.7636565732 8.43614660027
6480.0 104.0 13.3192844627 9.90008581171
6480.0 105.0 5.60679052042 12.3434848412
6480.0 106.0 12.0556519445 9.10391671987
6480.0 107.0 12.1394170581 8.51848423117
6490.0 104.0 14.078221648 9.59627188668
6490.0 105.0 6.11190678353 12.2718869013
6490.0 106.0 11.5147566127 9.28243425005
6490.0 107.0 11.5257007984 8.64545124462
6500.0 104.0 14.8371588333 9.29245796164
6500.0 105.0 6.61702304663 12.2002889614
6500.0 106.0 10.9797543041 9.46667961543
6500.0 107.0 10.9271380266 8.83948166179
6510.0 104.0 15.5960960186 8.98888269641
6510.0 105.0 7.12213930973 12.1525570014
6510.0 106.0 10.4445415303 9.65068632101
6510.0 107.0 10.3285752549 9.03327341915
6520.0 105.0 7.62725557283 12.120815248
6520.0 106.0 9.88701945488 9.81965745921
6520.0 107.0 9.71948922762 9.21370022774
6530.0 105.0 8.13237183593 12.0888348349
6530.0 106.0 9.29645435727 9.96595591643
6530.0 107.0 9.06852064354 9.34066724118
6540.0 105.0 8.61728344851 12.0074518432
6540.0 106.0 8.70588925966 10.1122543737
6540.0 107.0 8.41734159436 9.46739559483
6550.0 105.0 9.08872529407 11.8928951393
6550.0 106.0 8.1186916038 10.2392213871
6550.0 107.0 7.76616254518 9.59412394848
6560.0 105.0 9.56227179073 11.7823956521
6560.0 106.0 7.53991255233 10.3213203582
6560.0 107.0 7.17812302889 9.67837085778
6570.0 105.0 10.0558124728 11.7084111141
6570.0 106.0 6.96113350086 10.4034193293
6570.0 107.0 6.59008351259 9.76261776708
6580.0 105.0 10.5493531549 11.6341879164
6580.0 106.0 6.37709282165 10.4833703622
6580.0 107.0 6.0020439963 9.84686467638
6580.0 108.0 0.110073252334 11.8107961682
6590.0 105.0 11.0494182553 11.5640219353
6590.0 106.0 5.74569749278 10.5449445905
6590.0 107.0 5.39548355036 9.88481158453
6590.0 108.0 0.790086021534 11.7833502913
6600.0 105.0 11.5755810294 11.5088915216
6600.0 106.0 5.1143021639 10.6065188189
6600.0 107.0 4.77692659317 9.89197137852
6600.0 108.0 1.47009879073 11.7559044143
6600.0 109.0 10.1970345614 8.43662391987
6610.0 105.0 12.1017438035 11.4537611078
6610.0 106.0 4.48290683502 10.6680930472
6610.0 107.0 4.15836963598 9.89936983231
6610.0 108.0 2.13516853715 11.7096044131
6610.0 109.0 10.0497089846 8.7468816595
6620.0 105.0 12.6051763457 11.4432600767
6620.0 106.0 3.81909987926 10.7139157287
6620.0 107.0 3.52992081864 9.90223374991
6620.0 108.0 2.785716191 11.6444502878
6620.0 109.0 9.90238340788 9.05713939913
6630.0 105.0 13.0934554 11.4623528606
6630.0 106.0 3.15529292351 10.7599770701
6630.0 107.0 2.87874176946 9.89459663632
6630.0 108.0 3.42089989185 11.5630672961
6630.0 109.0 9.68034271723 9.36954507695
6640.0 105.0 13.5865751519 11.4740471908
6640.0 106.0 2.52137201331 10.816062123
6640.0 107.0 2.22756272028 9.88719818253
6640.0 108.0 4.01988359385 11.4437373963
6640.0 109.0 9.38379737776 9.68409869297
6650.0 105.0 14.1228402512 11.4203487359
6650.0 106.0 1.93206970636 10.8876600629
6650.0 107.0 1.58374994993 9.89674457452
6650.0 108.0 4.61886729584 11.3244074964
6650.0 109.0 9.06809971333 10.0031868452
6660.0 105.0 14.6593158156 11.3664116212
6660.0 106.0 1.34276739941 10.9592580028
6660.0 107.0 1.00497089846 10.0585559187
6660.0 108.0 5.2214289047 11.2377739891
6660.0 109.0 8.70715205032 10.3327760286
6670.0 105.0 15.195580915 11.3124745064
6670.0 106.0 0.766303464147 11.0470848091
6670.0 107.0 0.426191846992 10.2206059227
6670.0 108.0 5.82525330422 11.16522141
6670.0 109.0 8.34346834088 10.6544894386
6680.0 106.0 0.209202318968 11.1594935748
6680.0 107.0 -0.152587204479 10.3826559267
6680.0 108.0 6.39582421641 11.0110471794
6680.0 109.0 7.96757765509 10.9442224355
6690.0 108.0 6.91651489762 10.7346791313
6690.0 109.0 7.61883696844 11.4556703862
6700.0 108.0 7.48603348427 10.5031791256
6700.0 109.0 7.21284977198 11.6091286375
6710.0 108.0 8.10416951124 10.316785822
6710.0 109.0 6.76013932117 11.7850209099
6720.0 108.0 8.72251600332 10.1303925184
6720.0 109.0 6.27606956903 11.9759487496
6730.0 108.0 9.34170435584 9.94185127666
6730.0 109.0 5.78168702652 12.0045879256
6740.0 108.0 9.96299735945 9.74877549869
6740.0 109.0 5.2805696005 11.9251142123
6750.0 108.0 10.5842903631 9.55546106093
6750.0 109.0 4.7796626396 11.8454018392
6750.0 110.0 10.9652322115 9.40319610871
6750.0 111.0 0.778720905614 9.23613424891
6760.0 110.0 10.3338368826 9.54472136994
6760.0 111.0 1.0910511283 9.33613270499
6770.0 110.0 9.70244155374 9.68600797137
6770.0 111.0 1.40338135098 9.43636982087
6780.0 110.0 9.098196224 9.83875024319
6780.0 111.0 1.71571157367 9.53636827695
6790.0 110.0 8.55687996205 10.017745093
6790.0 111.0 2.1187522586 9.72132962173
6800.0 110.0 8.0157741652 10.1967399428
6800.0 111.0 2.5318952688 9.91583735849
6810.0 110.0 7.43720557884 10.3833719061
6810.0 111.0 2.95198362761 10.1108224149
6820.0 110.0 6.82138466807 10.5778796429
6820.0 111.0 3.40048477622 10.3079554094
6830.0 110.0 6.20556375731 10.7723873797
6830.0 111.0 3.84877545972 10.505088404
6840.0 110.0 5.60131842757 10.9750095496
6840.0 111.0 4.33221381653 10.5248971674
6850.0 110.0 5.00001960937 11.179540998
6850.0 111.0 4.81965101042 10.5248971674
6860.0 110.0 4.4298696274 11.3270327542
6860.0 111.0 5.37359517896 10.3993621127
6870.0 110.0 3.87318941244 11.4501812109
6870.0 111.0 5.92206725464 10.2714404601
6880.0 110.0 3.31650919748 11.5735683273
6880.0 111.0 6.42318468066 10.123710044
6890.0 110.0 2.71499991417 11.670464206
6890.0 111.0 6.92430210668 9.97597962802
6890.0 112.0 14.8220053454 7.43186616308
6890.0 113.0 15.0080565023 8.0774409213
6890.0 114.0 11.259883365 6.49154655221
6900.0 112.0 14.1379937391 7.5750620429
6900.0 113.0 14.3575088484 8.20774917195
6900.0 114.0 11.6808135842 6.73020635192
6910.0 112.0 13.4539821328 7.71825792273
6910.0 113.0 13.7069611946 8.33781876279
6910.0 114.0 12.1017438035 6.96886615163
6920.0 112.0 12.8078542463 7.88293318454
6920.0 113.0 13.0982960975 8.47337752903
6920.0 114.0 12.5003647211 7.00538110099
6930.0 112.0 12.1659356619 8.04999504433
6930.0 113.0 12.5073100697 8.61108423346
6930.0 114.0 12.8575240121 7.02972440056
6940.0 112.0 11.4989717295 8.20345329555
6940.0 113.0 11.9167449721 8.7478362987
6940.0 114.0 13.0583077267 7.09463986608
6950.0 112.0 10.8294822158 8.35524092817
6950.0 113.0 11.329126386 8.87718991014
6950.0 114.0 13.2593019064 7.1597939914
6960.0 112.0 10.209872933 8.50631258139
6960.0 113.0 10.7417182651 9.00654352159
6960.0 114.0 13.2955019053 7.22972131272
6970.0 112.0 9.61152062638 8.65714557481
6970.0 113.0 10.123371773 9.10988321486
6970.0 114.0 13.3319123692 7.29964863404
6980.0 112.0 8.98686018101 8.76072392788
6980.0 113.0 9.49197644411 9.20224455735
6980.0 114.0 13.3681123681 7.36957595535
6990.0 112.0 8.35546485213 8.85236929097
6990.0 113.0 8.86058111523 9.29460589984
6990.0 114.0 13.404522832 7.43950327667
7000.0 112.0 7.7141776631 8.95141310785
7000.0 113.0 8.24728578579 9.39818425292
7000.0 114.0 13.4407228309 7.50966925778
7010.0 112.0 7.03374396369 9.0786187811
7010.0 113.0 7.64598696759 9.50892239998
7010.0 114.0 13.5036518987 7.58389245549
7020.0 112.0 6.35331026427 9.20582445435
7020.0 113.0 7.04447768428 9.61966054705
7020.0 114.0 13.6734972422 7.67649245778
7030.0 112.0 5.68403121566 9.33637136479
7030.0 113.0 6.37098933347 9.77144817967
7030.0 114.0 13.8431321205 7.76885380027
7040.0 112.0 5.0595812354 9.47956724462
7040.0 113.0 5.67961144835 9.93325952387
7040.0 114.0 14.0285518821 7.84952081258
7050.0 112.0 4.43513125514 9.62276312445
7050.0 113.0 4.98802309813 10.0953095279
7050.0 114.0 14.3543518718 7.8266094718
7060.0 112.0 3.80352546115 9.75211673589
7060.0 113.0 4.38461962883 10.1907734478
7060.0 114.0 14.6803623266 7.80345947123
7070.0 112.0 3.14308594715 9.8261012738
7070.0 113.0 3.78142662464 10.2862373676
7070.0 114.0 15.0061623163 7.78030947066
7080.0 112.0 2.48285689825 9.90032447151
7080.0 113.0 3.17802315535 10.3817012875
7090.0 112.0 1.83862319769 9.94137395706
7090.0 113.0 2.59166735993 10.5862327359
7100.0 112.0 1.23290461219 9.90509766751
7100.0 113.0 2.00552202962 10.7907641842
7100.0 115.0 14.8678867393 8.4931862924
7110.0 112.0 0.626975561574 9.86906003775
7110.0 113.0 1.43053135012 11.0022167668
7110.0 115.0 14.0719076947 8.59700330528
7120.0 112.0 0.0210465109626 9.83278374819
7120.0 113.0 0.900580204088 11.2408765665
7120.0 115.0 13.275718185 8.70082031815
7130.0 113.0 0.37083952316 11.4795363662
7130.0 115.0 12.4384879789 8.75953062888
7140.0 113.0 -0.158901157767 11.7181961659
7140.0 115.0 11.5966275404 8.81346774362
7150.0 115.0 10.7547671019 8.86740485835
7160.0 115.0 9.99141014925 8.9098863027
7170.0 115.0 9.24783691695 8.94974248925
7170.0 116.0 0.0841860438502 11.0499487267
7180.0 115.0 8.50426368464 8.98959867581
7180.0 116.0 0.602351143748 10.967372436
7180.0 117.0 0.255083712866 10.3996007725
7190.0 115.0 7.72406952326 8.99055331501
7190.0 116.0 1.12030577854 10.8847961453
7190.0 117.0 0.754938348227 10.300079636
7200.0 115.0 6.93482536216 8.98220022202
7200.0 116.0 1.60963715842 10.794582741
7200.0 117.0 1.25479298359 10.2007971594
7210.0 115.0 6.14558120107 8.97360846923
7210.0 116.0 2.05624412104 10.693390986
7210.0 117.0 1.76496040932 10.1062878787
7220.0 115.0 5.30372076256 8.93685486007
7220.0 116.0 2.50264061856 10.5921992309
7220.0 117.0 2.27596969549 10.0127332372
7230.0 115.0 4.46186032406 8.90010125091
7230.0 116.0 2.95577199958 10.489575517
7230.0 117.0 2.79160921407 9.9139280801
7240.0 115.0 3.60379407212 8.92587650928
7240.0 116.0 3.47035919262 10.3745414935
7240.0 117.0 3.34723710348 9.77073220027
7250.0 115.0 2.72152433257 9.04520640914
7250.0 116.0 3.98473592054 10.2597461299
7250.0 117.0 3.9028649929 9.62753632044
7260.0 115.0 1.83904412791 9.164536309
7260.0 116.0 4.50521613664 10.149962622
7260.0 117.0 4.41050683731 9.50653580199
7270.0 115.0 0.997604619625 9.39842291272
7270.0 116.0 5.03432542224 10.0475775679
7270.0 117.0 4.88594751996 9.40057085091
7280.0 115.0 0.166267436604 9.6609486924
7280.0 116.0 5.56343470784 9.94519251386
7280.0 117.0 5.3613882026 9.29436724004
7290.0 116.0 6.09275445855 9.84304611958
7290.0 117.0 5.90438818543 9.22611053732
7290.0 118.0 15.1713774274 8.90559042631
7300.0 116.0 6.57850793157 9.7468662203
7300.0 117.0 6.45475444711 9.1616723914
7300.0 118.0 14.5555565166 8.88792960113
7310.0 116.0 7.05963117217 9.65140230041
7310.0 117.0 6.97902303518 9.09771156508
7310.0 118.0 13.9397356058 8.87026877595
7320.0 116.0 7.54075441277 9.55593838053
7320.0 117.0 7.44204627636 9.03398939855
7320.0 118.0 13.3207577184 8.86000640456
7330.0 116.0 8.0397671877 9.45665590385
7330.0 117.0 7.90506951753 8.97050589183
7330.0 118.0 12.6725251808 8.91728475649
7340.0 116.0 8.56592996176 9.35116827237
7340.0 117.0 8.38282531638 8.88220176594
7340.0 118.0 12.0242926431 8.97456310842
7350.0 116.0 9.09209273582 9.2459193007
7350.0 117.0 8.89530785832 8.73709660771
7350.0 118.0 11.4206787087 9.14067032903
7360.0 116.0 9.61825550989 9.14067032903
7360.0 117.0 9.40779040026 8.59175278968
7360.0 118.0 10.8465298897 9.37933012874
7370.0 116.0 10.1334740982 9.00893011958
7370.0 117.0 9.95352642952 8.43256670328
7370.0 118.0 10.2456520017 9.61798992845
7380.0 116.0 10.6489031517 8.87742856994
7380.0 117.0 10.4990519937 8.27361927667
7380.0 118.0 9.60415434754 9.85664972817
7390.0 116.0 11.1679101121 8.74998423689
7390.0 117.0 11.0559426737 8.12087700485
7390.0 118.0 8.98159855327 10.0666703519
7400.0 116.0 11.7203810248 8.660486812
7400.0 117.0 11.6387205623 7.98317030041
7400.0 118.0 8.4028195018 10.2098662317
7410.0 116.0 12.2728519376 8.57098938711
7410.0 117.0 12.2214984508 7.84546359598
7410.0 118.0 7.78720905614 10.3459023176