Code for the ICRA 2023 paper It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying [1].
The main branch contains code for training a Variational Recurrent Neural Network (VRNN) for the cooperative table-carrying task (see the repository for human-robot cooperative table-carrying, a custom gym environment). To execute the trained model in the environment, see the instructions in the gym environment repository.
We recommend first following the custom gym environment's instructions for creating a virtual environment and installing it. Activate the environment with conda activate [environment name]. Then, to install the remaining packages required for training the model, clone this repo and run:
$ cd cooperative_planner
$ pip install -e .
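After installation, a quick sanity check that the package is importable (a minimal sketch; the import name cooperative_planner is an assumption based on the directory name above):

```python
# Minimal install check. The import name "cooperative_planner" is assumed
# from the repo directory name; adjust if setup.py registers a different name.
import cooperative_planner
print(cooperative_planner.__file__)  # prints the installed package location
```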
Download the full dataset for [1] here. To use it, see the dataset documentation and the trained-models documentation.
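As a starting point for exploring the data, here is a hypothetical inspection snippet; the actual file layout and format are defined in the dataset documentation linked above, and the .npz assumption here is purely illustrative:

```python
from pathlib import Path
import numpy as np

# Hypothetical dataset inspection. The "datasets/" location and the .npz
# format are assumptions; consult the dataset documentation for the real layout.
data_dir = Path("datasets")
for f in sorted(data_dir.rglob("*.npz"))[:3]:
    traj = np.load(f)
    print(f.name, {k: traj[k].shape for k in traj.files})
```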
To train the model, run the following:
python3 -m scripts.run --train
See the full list of args in configs/exp_config.py.
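For intuition about what is being trained, below is a minimal sketch of a single VRNN step (Chung et al., 2015), the model family used here. All layer choices and dimensions are illustrative, not the architecture from this repo; see the paper and configs/exp_config.py for the real settings:

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    """Illustrative VRNN step: prior, posterior, decoder, and recurrence."""
    def __init__(self, x_dim=4, z_dim=16, h_dim=64):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)        # p(z_t | h_{t-1})
        self.enc = nn.Linear(x_dim + h_dim, 2 * z_dim)  # q(z_t | x_t, h_{t-1})
        self.dec = nn.Linear(z_dim + h_dim, x_dim)      # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)     # h_t = f(x_t, z_t, h_{t-1})

    def forward(self, x_t, h):
        prior_mu, prior_logvar = self.prior(h).chunk(2, dim=-1)
        post_mu, post_logvar = self.enc(torch.cat([x_t, h], -1)).chunk(2, dim=-1)
        # Reparameterization trick: sample z_t from the posterior.
        z_t = post_mu + torch.randn_like(post_mu) * (0.5 * post_logvar).exp()
        x_hat = self.dec(torch.cat([z_t, h], -1))       # predict/reconstruct x_t
        h = self.rnn(torch.cat([x_t, z_t], -1), h)      # update hidden state
        return x_hat, (post_mu, post_logvar, prior_mu, prior_logvar), h

# usage: h = torch.zeros(batch, 64); x_hat, stats, h = VRNNCell()(x_t, h)
```

The training loss for such a model combines a reconstruction term on x_hat with a per-step KL term between the posterior and the prior.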
To test the model on the dataset, run the following:
python3 -m scripts.run --restore --artifact-path [path to saved model .ckpt file] --test-data [test_holdout | unseen_map]
See the full list of args in configs/exp_config.py.
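If you want to inspect a checkpoint outside scripts.run, the .ckpt extension suggests a PyTorch Lightning checkpoint, which can be opened as a plain dictionary (a sketch under that assumption):

```python
import torch

# Assumes a PyTorch Lightning-style .ckpt file; the keys shown in the
# comments are typical Lightning entries, not guaranteed by this repo.
ckpt = torch.load("path/to/saved_model.ckpt", map_location="cpu")
print(list(ckpt.keys()))              # e.g. 'state_dict', 'hyper_parameters', ...
print(list(ckpt["state_dict"])[:5])   # first few parameter tensor names
```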
During training and testing, plots of the predictions are saved to the results/plots directory (created when the script starts), so you can monitor them while the script runs.
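To monitor new plots from a terminal while a run is in progress, a small watcher like the following works (results/plots is the directory named above; the .png extension is an assumption about how plots are saved):

```python
from pathlib import Path
import time

# Polls results/plots for new files every 10 s; stop with Ctrl-C.
# The *.png pattern is an assumption about the plot file format.
plot_dir = Path("results/plots")
seen = set()
while True:
    for f in sorted(plot_dir.glob("*.png")):
        if f not in seen:
            print("new plot:", f)
            seen.add(f)
    time.sleep(10)
```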
If you would like to use our environment, please cite us:
@article{ng2022takes,
  title={It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying},
  author={Ng, Eley and Liu, Ziang and Kennedy III, Monroe},
  journal={arXiv preprint arXiv:2209.12890},
  year={2022}
}
For issues, comments, suggestions, or anything else, please contact Eley Ng at eleyng@stanford.edu.

