A Toolbox for Model-Based Reinforcement Learning in TensorFlow
Bellman
Website | Twitter | Documentation (latest)
What does Bellman do?
Bellman is a package for model-based reinforcement learning (MBRL) in Python, using TensorFlow and building on top of the model-free reinforcement learning package TensorFlow Agents (TF-Agents).
Bellman provides a framework for flexible composition of model-based reinforcement learning algorithms. It offers two major classes of algorithms: decision-time planning and background planning. Within each class, any kind of supervised learning method can easily be used to learn a component of the environment. Bellman was designed with modularity in mind - important components can be flexibly combined, such as the type of decision-time planning method (e.g. the cross-entropy method or random shooting) and the type of state-transition model (e.g. a probabilistic neural network or an ensemble of neural networks). Bellman also provides implementations of several popular state-of-the-art MBRL algorithms, such as PETS, MBPO and METRPO. The online documentation (latest) contains more details.
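To give a flavour of the TF-Agents plumbing that Bellman builds on, the minimal sketch below loads a standard OpenAI Gym control task and wraps it for TensorFlow. Only the TF-Agents calls shown here are real API; how a Bellman agent is then composed on top of such an environment is library-specific, so please refer to the online documentation for that.

# Minimal sketch: the TF-Agents environment wrapping that Bellman builds on.
from tf_agents.environments import suite_gym, tf_py_environment

# Load a standard OpenAI Gym control task and wrap it for TensorFlow.
py_env = suite_gym.load("Pendulum-v0")
tf_env = tf_py_environment.TFPyEnvironment(py_env)

# Inspect the specs that an agent and its transition model would consume.
print(tf_env.observation_spec())
print(tf_env.action_spec())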
Bellman requires Python 3.7 or later and uses TensorFlow 2.4+ for running computations, which allows fast execution on GPUs.
Maintainers
Bellman was originally created by (in alphabetical order) Vincent Adam, Jordi Grau-Moya, Felix Leibfried, John A. McLeod, Hrvoje Stojic, and Peter Vrancx, at Secondmind Labs.
It is now actively maintained by (in alphabetical order) Felix Leibfried, John A. McLeod, Hrvoje Stojic, and Peter Vrancx.
Bellman is an open source project. If you have relevant skills and are interested in contributing then please do contact us (see "The Bellman Community" section below).
We are very grateful to our Secondmind Labs colleagues, maintainers of GPflow and Trieste in particular, for their help with creating contributing guidelines, instructions for users and open-sourcing in general.
Install Bellman
For users
For the latest (stable) release from PyPI, you can use pip to install the toolbox:
$ pip install bellman
To install the toolbox from the latest source, use pip: check out the develop branch of the Bellman GitHub repository and, in the repository root, run
$ pip install -e .
This will install the toolbox in editable mode.
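For reference, the full sequence might look like this (assuming the repository is hosted at github.com/Bellman-devs/bellman - substitute the actual URL if it differs):

$ git clone https://github.com/Bellman-devs/bellman.git
$ cd bellman
$ git checkout develop
$ pip install -e .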
For contributors
If you wish to contribute, please use Poetry to manage dependencies in a local virtual environment. The Poetry configuration file specifies all the development dependencies (testing, linting, typing, docs, etc.) and makes it much easier to contribute. To install Poetry, follow the instructions in the Poetry documentation.
To install this project in editable mode, run the commands below from the root directory of the bellman repository.
poetry install
This command creates a virtual environment for this project in a hidden .venv directory under the root directory. You can easily activate it with
poetry shell
You must also run the poetry install command to install updated dependencies when the pyproject.toml file is updated, for example after a git pull.
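Putting these steps together, a typical contributor session might look like the following (the pytest invocation is an assumption about the test runner; see the contributing guidelines for the project's actual checks):

$ cd bellman
$ poetry install
$ poetry shell
$ pytest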
Installing MuJoCo (Optional)
Many continuous-control benchmarks in MBRL use the MuJoCo physics engine, and some of the TF-Agents examples have been tested against MuJoCo environments as well. MuJoCo is proprietary software that requires a license (see the MuJoCo website), so installing it is optional; however, because of its importance to the research community, it is highly recommended. Don't worry if you decide not to install MuJoCo, though - all our examples and notebooks rely on standard environments available in OpenAI Gym.
We interface with MuJoCo through the Python library mujoco-py via OpenAI Gym (see the mujoco-py GitHub page). Check the installation instructions there on how to install MuJoCo. Note that you should install MuJoCo 1.5, since that is the version OpenAI Gym supports. After that, you can install the mujoco-py library with an additional Poetry command:
poetry install -E mujoco-py
If this command fails, please check the troubleshooting sections on the mujoco-py GitHub page; you might need to satisfy other mujoco-py dependencies (e.g. Linux system libraries) or set some environment variables.
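As an illustration, on Linux mujoco-py looks for MuJoCo in default locations under ~/.mujoco. Assuming MuJoCo 1.5 is unpacked to ~/.mujoco/mjpro150 and the license key is saved as ~/.mujoco/mjkey.txt, you would typically also need to extend the library path:

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mjpro150/bin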
The Bellman Community
Getting help
Bugs, feature requests, pain points, annoying design quirks, etc: Please use GitHub issues to flag up bugs/issues/pain points, suggest new features, and discuss anything else related to the use of Bellman that in some sense involves changing the Bellman code itself. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about an issue, please feel free to add another comment to remind us.
"How-to-use" questions: Please use Stack Overflow (Bellman tag) to ask questions that relate to "how to use Bellman", i.e. questions of understanding rather than issues that require changing Bellman code. (If you are unsure where to ask, you are always welcome to open a GitHub issue; we may then ask you to move your question to Stack Overflow.)
Slack workspace
We have a public Bellman Slack workspace. Please use this invite link if you'd like to join, whether to ask short informal questions or to be involved in the discussion and future development of Bellman.
Contributing
All constructive input is very much welcome. For detailed information, see the guidelines for contributors.
Citing Bellman
To cite Bellman, please reference our arXiv paper, where we review the framework and describe its design. Sample BibTeX is given below:
@article{bellman2021,
  author  = {McLeod, John and Stojic, Hrvoje and Adam, Vincent and Kim, Dongho and Grau-Moya, Jordi and Vrancx, Peter and Leibfried, Felix},
  title   = {Bellman: A Toolbox for Model-based Reinforcement Learning in TensorFlow},
  year    = {2021},
  journal = {arXiv:2103.14407},
  url     = {https://arxiv.org/abs/2103.14407}
}
License
Bellman is open source, released under the Apache License 2.0.