
Dopamine: A framework for flexible Reinforcement Learning research

Project description

Dopamine

Getting Started | Docs | Baseline Results | Changelist



Dopamine is a research framework for fast prototyping of reinforcement learning algorithms. It aims to fill the need for a small, easily grokked codebase in which users can freely experiment with wild ideas (speculative research).

Our design principles are:

  • Easy experimentation: Make it easy for new users to run benchmark experiments.
  • Flexible development: Make it easy for new users to try out research ideas.
  • Compact and reliable: Provide implementations for a few, battle-tested algorithms.
  • Reproducible: Facilitate reproducibility in results. In particular, our setup follows the recommendations given by Machado et al. (2018).
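Concretely, experiments are specified with gin configuration files. A fragment in the spirit of Dopamine's bundled DQN config might pin down the Machado et al. (2018) evaluation protocol like this (the binding names below are illustrative assumptions; check the configs shipped with the framework):

```gin
# Illustrative gin fragment; binding names assumed from Dopamine's bundled configs.
import dopamine.discrete_domains.atari_lib

atari_lib.create_atari_environment.game_name = 'Pong'
# Sticky actions, as recommended by Machado et al. (2018).
atari_lib.create_atari_environment.sticky_actions = True

Runner.num_iterations = 200
Runner.training_steps = 250000
```

Because the full experiment is captured in one file, rerunning it with the same gin config reproduces the same setup.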

Dopamine supports a number of agents implemented with JAX. For more information on the available agents, see the docs.

Many of these agents also have a TensorFlow (legacy) implementation, though newly added agents are likely to be JAX-only.

This is not an official Google product.

Getting Started

We provide docker containers for using Dopamine. Instructions can be found here.

Alternatively, Dopamine can be installed from source (preferred) or with pip. For either method, first read the prerequisites below.

Prerequisites

Dopamine supports Atari and MuJoCo environments. Install the environments you intend to use before installing Dopamine:

Atari

  1. Atari environments should now come packaged with ale_py.
  2. You may need to manually run some steps to properly install baselines; see instructions.

MuJoCo

  1. Install MuJoCo and obtain a license from the MuJoCo website.
  2. Run pip install mujoco-py (we recommend using a virtual environment).

Installing from Source

The most common way to use Dopamine is to install it from source and modify the source code directly:

git clone https://github.com/google/dopamine

After cloning, install dependencies:

pip install -r dopamine/requirements.txt

Dopamine supports TensorFlow (legacy) and JAX (actively maintained) agents. See the TensorFlow documentation for more information on installing TensorFlow.

Note: We recommend using a virtual environment when working with Dopamine.
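For example, a disposable environment for a source checkout can be set up like this (directory names are arbitrary; this is a sketch, not the only layout):

```shell
# Create and activate an isolated environment (assumes Python 3 with the venv module).
python3 -m venv dopamine-env
source dopamine-env/bin/activate

# Install Dopamine's dependencies inside the environment.
pip install -r dopamine/requirements.txt
```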

Installing with Pip

Note: Installing with pip is simple, but Dopamine is designed to be modified directly. We strongly recommend installing from source if you plan to write your own experiments.

pip install dopamine-rl

Running tests

You can test whether the installation was successful by running the following from the dopamine root directory:

export PYTHONPATH=$PYTHONPATH:$PWD
python -m tests.dopamine.atari_init_test
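The export above simply puts the repository root on Python's module search path so the tests can import dopamine from the checkout. The same effect can be demonstrated from Python itself (a minimal stdlib sketch, not Dopamine-specific):

```python
import os
import subprocess
import sys

# Equivalent of `export PYTHONPATH=$PYTHONPATH:$PWD`: extend the child
# interpreter's module search path with the current working directory.
env = dict(os.environ)
env["PYTHONPATH"] = os.pathsep.join(
    p for p in (env.get("PYTHONPATH", ""), os.getcwd()) if p
)

# The child process can now import top-level packages that live in $PWD;
# here we just confirm the directory landed on sys.path.
check = "import os, sys; print(os.getcwd() in sys.path)"
out = subprocess.run(
    [sys.executable, "-c", check], env=env, capture_output=True, text=True
)
print(out.stdout.strip())  # True: the working directory is importable
```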

Next Steps

View the docs for more information on training agents.
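As a taste of what the docs cover, training a JAX DQN agent on Atari typically looks like the following (the entry point and gin path here follow Dopamine's documentation; double-check them against your checkout):

```shell
# Train a JAX DQN agent; logs and checkpoints go under --base_dir.
# Entry point and gin file path assumed from the Dopamine docs.
python -um dopamine.discrete_domains.train \
  --base_dir /tmp/dopamine_runs/dqn \
  --gin_files dopamine/jax/agents/dqn/configs/dqn.gin
```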

We supply baselines for each Dopamine agent.

We also provide a set of Colaboratory notebooks which demonstrate how to use Dopamine.

References

Bellemare et al., The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2013.

Machado et al., Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. Journal of Artificial Intelligence Research, 2018.

Hessel et al., Rainbow: Combining Improvements in Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

Mnih et al., Human-level Control through Deep Reinforcement Learning. Nature, 2015.

Schaul et al., Prioritized Experience Replay. Proceedings of the International Conference on Learning Representations, 2016.

Haarnoja et al., Soft Actor-Critic Algorithms and Applications. arXiv preprint arXiv:1812.05905, 2018.

Schulman et al., Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347, 2017.

Giving credit

If you use Dopamine in your work, we ask that you cite our white paper. Here is an example BibTeX entry:

@article{castro18dopamine,
  author    = {Pablo Samuel Castro and
               Subhodeep Moitra and
               Carles Gelada and
               Saurabh Kumar and
               Marc G. Bellemare},
  title     = {Dopamine: {A} {R}esearch {F}ramework for {D}eep {R}einforcement {L}earning},
  year      = {2018},
  url       = {http://arxiv.org/abs/1812.06110},
  archivePrefix = {arXiv}
}
