This repository contains the code for [Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data](https://arxiv.org/abs/2001.03093) by Tim Salzmann\*, Boris Ivanovic\*, Punarjay Chakravarty, and Marco Pavone (\* denotes equal contribution).
#### Pedestrian Datasets ####
We've already included preprocessed data splits for the ETH and UCY Pedestrian datasets in this repository; you can see them in `experiments/pedestrians/raw`. In order to process them into a data format that our model can work with, execute the following.
```
cd experiments/pedestrians
python process_data.py # This will take around 10-15 minutes, depending on your computer.
```
#### nuScenes Dataset ####
Download the nuScenes dataset (this requires signing up on [their website](https://www.nuscenes.org/)). Note that the full dataset is very large, so if you only wish to test out the codebase and model then you can just download the nuScenes "mini" dataset which only requires around 4 GB of space. Extract the downloaded zip file's contents and place them in the `experiments/nuScenes` directory. Then, download the map expansion pack (v1.1) and copy the contents of the extracted `maps` folder into the `experiments/nuScenes/v1.0-mini/maps` folder. Finally, process them into a data format that our model can work with.
```
cd experiments/nuScenes

# NOTE: these invocations are assumed; double-check process_data.py's arguments.
# For the mini nuScenes dataset, use the following
python process_data.py --data=./v1.0-mini --version="v1.0-mini" --output_path=../processed

# For the full nuScenes dataset, use the following
python process_data.py --data=./v1.0 --version="v1.0-trainval" --output_path=../processed
```
In case you also want a validation set generated (by default this will just produce the training and test sets), replace line 406 in `process_data.py` with:
```
val_scene_names = val_scenes
```
## Model Training ##
### Pedestrian Dataset ###
To train a model on the ETH and UCY Pedestrian datasets, you can execute a version of the following command from within the `trajectron/` directory.
For example, a fully-fleshed out version of this command to train a model without dynamics integration for evaluation on the ETH - University scene would look like the following (a sketch reconstructed from the option descriptions below; double-check the flag names against `trajectron/train.py`):
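```
python train.py --eval_every 10 --vis_every 1 \
                --train_data_dict eth_train.pkl --eval_data_dict eth_val.pkl \
                --offline_scene_graph yes --preprocess_workers 5 \
                --log_dir ../experiments/pedestrians/models --log_tag _eth_vel_ar3 \
                --train_epochs 100 --augment \
                --conf ../experiments/pedestrians/models/eth_vel/config.json
```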
This command trains a new Trajectron++ model which will be evaluated every 10 epochs, have a few outputs visualized in Tensorboard every epoch, use the `eth_train.pkl` file as the source of training data (which actually contains the four other datasets, since we train using a leave-one-out scheme), and evaluate the partially-trained models on the data within `eth_val.pkl`. Further options specify that we want to perform a bit of preprocessing to make training as fast as possible (`--offline_scene_graph yes`), use 5 threads to parallelize data loading, save trained models and Tensorboard logs to `../experiments/pedestrians/models`, mark the created log directory with an additional `_eth_vel_ar3` at the end, run training for 100 epochs, augment the dataset with rotations (`--augment`), and use the same model configuration as in the model we previously trained for the ETH dataset without any dynamics integration (`--conf ../experiments/pedestrians/models/eth_vel/config.json`).
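To train the corresponding model *with* dynamics integration, a nearly identical command applies (again a sketch; the `--log_tag` value here is an assumption):
```
python train.py --eval_every 10 --vis_every 1 \
                --train_data_dict eth_train.pkl --eval_data_dict eth_val.pkl \
                --offline_scene_graph yes --preprocess_workers 5 \
                --log_dir ../experiments/pedestrians/models --log_tag _eth_ar3 \
                --train_epochs 100 --augment \
                --conf ../experiments/pedestrians/models/eth_attention_radius_3/config.json
```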
where the only difference is the sourced model configuration (now from `../experiments/pedestrians/models/eth_attention_radius_3/config.json`). Our codebase is set up such that hyperparameters are saved in a json file every time a model is trained, so you don't have to remember what settings you used when you end up training many models in parallel!
Commands like these would be used for all of the scenes in the ETH and UCY datasets (the options being `eth`, `hotel`, `univ`, `zara1`, and `zara2`). The only change would be what `train_data_dict`, `eval_data_dict`, `log_tag`, and configuration file (`conf`) you wish to use.
### nuScenes Dataset ###
To train a model on the nuScenes dataset, you can execute a command like the following from within the `trajectron/` directory, with the `--conf` path selecting the model version you desire. (The command below is a sketch that reuses the flag names from the pedestrian example; the data file names, tag, and epoch count are placeholders.)
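```
python train.py --eval_every 10 --vis_every 1 \
                --train_data_dict <nuScenes train .pkl> --eval_data_dict <nuScenes val .pkl> \
                --offline_scene_graph yes --preprocess_workers 5 \
                --log_dir ../experiments/nuScenes/models --log_tag <desired tag> \
                --train_epochs <number of epochs> --augment \
                --conf ../experiments/nuScenes/models/int_ee/config.json
```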
In case you also want to produce the version of our model that was trained without the ego-vehicle (first row of Table 4 (b) in the paper), run the corresponding training command, but change line 132 of `train.py` so that the ego-vehicle is excluded from training.
By default, our training script assumes access to a GPU. If you want to train on a CPU, comment out line 38 in `train.py` and add `--device cpu` to the training command.
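For example (a sketch, appending the device flag to the pedestrian training command above):
```
python train.py --device cpu --eval_every 10 --vis_every 1 <remaining options as above>
```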
## Model Evaluation ##
### Pedestrian Dataset ###
To evaluate a trained model on the ETH and UCY Pedestrian datasets, you can execute a version of the following command from within the `experiments/pedestrians` directory. For example, a fully-fleshed out version of this command to evaluate a model without dynamics integration on the ETH - University scene would look like the following (a sketch; the flags, checkpoint, and file names are assumptions to be checked against `evaluate.py`):
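```
python evaluate.py --model models/eth_vel --checkpoint 100 \
                   --data ../processed/eth_test.pkl \
                   --output_path results --output_tag eth_vel \
                   --node_type PEDESTRIAN
```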
These scripts will produce csv files in the `results` directory which can then be analyzed in the `Result Analysis.ipynb` notebook.
### nuScenes Dataset ###
If you just want to use a trained model to generate trajectories and plot them, you can do this in the `NuScenes Qualitative.ipynb` notebook.
To evaluate a trained model's performance on forecasting vehicles, you can execute one of the following commands from within the `experiments/nuScenes` directory. For example (again a sketch; the flags, checkpoint, and file names are assumptions):
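```
python evaluate.py --model models/int_ee --checkpoint <checkpoint> \
                   --data ../processed/<nuScenes test .pkl> \
                   --output_path results --output_tag int_ee \
                   --node_type VEHICLE
```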
## Online Execution ##
As of December 2020, this repository includes an "online" running capability. In addition to the regular batched mode for training and testing, Trajectron++ can now be executed online on streaming data!
The `trajectron/test_online.py` script shows how to use it, and can be run as follows, depending on the desired model (the invocation below is a sketch; check the script's argument parser for the exact model-selection arguments).
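```
cd trajectron
python test_online.py  # point it at the desired model directory, e.g. ../experiments/nuScenes/models/int_ee
```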
Further, lines 145-151 can be changed to choose different scenes and starting timesteps.
While the script runs, each prediction will be iteratively visualized and saved in a `pred_figs/` folder within the specified model folder. For example, if the script loads the `int_ee` version of Trajectron++ then the generated figures will be saved to `experiments/nuScenes/models/int_ee/pred_figs/`.
## Datasets ##
### ETH and UCY Pedestrian Datasets ###
Preprocessed ETH and UCY datasets are available in this repository, under `experiments/pedestrians/raw` (e.g., `raw/eth/train`). The train/validation/test splits are the same as those found in [Social GAN](https://github.com/agrimgupta92/sgan).
If you want the *original* ETH or UCY datasets, you can find them here: [ETH Dataset](http://www.vision.ee.ethz.ch/en/datasets/) and [UCY Dataset](https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data).
### nuScenes Dataset ###
If you only want to evaluate models (e.g., produce trajectories and plot them), then the nuScenes mini dataset should be fine. If you want to train a model, then the full nuScenes dataset is required. In either case, you can find them on the [dataset website](https://www.nuscenes.org/).