Imitation Learning Baseline Implementations
Implementation of modern IRL and imitation learning algorithms.
This project aims to provide clean implementations of imitation learning algorithms. Currently we have implementations of Behavioral Cloning (BC), DAgger (with synthetic examples), Adversarial Inverse Reinforcement Learning (AIRL), and Generative Adversarial Imitation Learning (GAIL).
Installing PyPI release
```
pip install imitation
```
Install latest commit
```
git clone http://github.com/HumanCompatibleAI/imitation
cd imitation
pip install -e .
```
Optional Mujoco Dependency:
Follow the instructions here to install mujoco_py v1.5.
We provide several CLI scripts as a front-end to the algorithms implemented in imitation. These use Sacred for configuration and replicability.
```
# Train PPO agent on cartpole and collect expert demonstrations.
# Tensorboard logs saved in `quickstart/rl/`.
python -m imitation.scripts.expert_demos with fast cartpole log_dir=quickstart/rl/

# Train GAIL from demonstrations. Tensorboard logs saved in output/ (default log directory).
python -m imitation.scripts.train_adversarial with fast gail cartpole rollout_path=quickstart/rl/rollouts/final.pkl

# Train AIRL from demonstrations. Tensorboard logs saved in output/ (default log directory).
python -m imitation.scripts.train_adversarial with fast airl cartpole rollout_path=quickstart/rl/rollouts/final.pkl
```
- Remove the "fast" option from the commands above to allow training to run to completion.
python -m imitation.scripts.expert_demos print_config will list Sacred script options. These configuration options are documented in each script's docstrings.
For more information on how to configure Sacred CLI options, see the Sacred docs.
Python Interface Quickstart:
See examples/quickstart.py for an example script that loads CartPole-v1 demonstrations and trains BC, GAIL, and AIRL models on that data.
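At its core, behavioral cloning is just supervised learning on expert state-action pairs. The sketch below illustrates that idea in plain PyTorch; it does not use the imitation API, and all names, shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical expert demonstrations: 4-dim observations (as in CartPole),
# discrete actions in {0, 1}. Random data stands in for real rollouts.
obs = torch.randn(256, 4)
acts = torch.randint(0, 2, (256,))

# Small policy network mapping observations to action logits.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# BC training loop: maximize the likelihood of expert actions.
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(policy(obs), acts)
    loss.backward()
    opt.step()
```

GAIL and AIRL replace the fixed supervised loss with a learned discriminator/reward, but the policy-update machinery is similarly standard.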
BC, GAIL, and AIRL also accept as expert_data any PyTorch-style DataLoader that iterates over dictionaries containing observations, actions, and next_observations.
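Concretely, such a DataLoader can be built from any Dataset whose __getitem__ returns a dictionary with those keys. A minimal sketch, with toy stand-in data (the dataset class and its contents are illustrative, not part of the library):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class TransitionsDataset(Dataset):
    """Wraps arrays of transitions as dictionaries of tensors."""

    def __init__(self, obs, acts, next_obs):
        self.obs, self.acts, self.next_obs = obs, acts, next_obs

    def __len__(self):
        return len(self.obs)

    def __getitem__(self, i):
        # Keys match the fields named in the text above.
        return {
            "observations": self.obs[i],
            "actions": self.acts[i],
            "next_observations": self.next_obs[i],
        }

# Toy CartPole-like transitions: 4-dim observations, scalar discrete actions.
obs = torch.randn(100, 4)
acts = torch.randint(0, 2, (100,))
dataset = TransitionsDataset(obs[:-1], acts[:-1], obs[1:])
loader = DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))  # dict of batched tensors
```

DataLoader collates the per-item dictionaries into a single dictionary of batched tensors, which is the shape of input the algorithms iterate over.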
Density reward baseline
We also implement a density-based reward baseline. You can find an example notebook here.
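The idea behind a density-based reward is to fit a density model to expert data and reward the learner for visiting high-density (expert-like) states. A minimal NumPy sketch using a Gaussian kernel density estimate (entirely illustrative; it is not the notebook's code, and the bandwidth and shapes are assumptions):

```python
import numpy as np

def fit_kde(expert_obs, bandwidth=0.5):
    """Return a log-density estimator over observations via a Gaussian KDE."""
    def log_density(x):
        # Squared distances from x to every expert observation.
        d2 = ((expert_obs - x) ** 2).sum(axis=1)
        # Average of Gaussian kernels (normalizing constants dropped;
        # only relative reward matters for policy optimization).
        return np.log(np.mean(np.exp(-d2 / (2 * bandwidth ** 2))) + 1e-12)
    return log_density

rng = np.random.default_rng(0)
expert_obs = rng.normal(0.0, 0.1, size=(200, 4))  # expert states cluster near 0

reward_fn = fit_kde(expert_obs)
# States similar to the expert's get higher (less negative) reward.
r_near = reward_fn(np.zeros(4))
r_far = reward_fn(np.full(4, 3.0))
```

Training an RL agent against this log-density reward then pushes its state distribution toward the expert's.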