RL library inspired by SaLinA

Project description

BBRL

bbrl - A Flexible and Simple Library for Reinforcement Learning derived from SaLinA

BBRL stands for "BlackBoard Reinforcement Learning". Initially, this library was a fork of the SaLinA library. But SaLinA is a general model for sequential learning, whereas BBRL is dedicated to RL and thus focuses on a subset of SaLinA. Moreover, BBRL is designed for educational purposes (in particular, to teach various RL algorithms, concepts and phenomena). As a result, the fork slowly drifted away from SaLinA and became independent after a few months, even if some parts of the code are still inherited from SaLinA.

TL;DR.

bbrl is a lightweight library extending PyTorch modules for developing Reinforcement Learning models

  • It supports simultaneous training with AutoReset on multiple environments (a minimal illustration follows this list)
  • It works on multiple CPUs and GPUs
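
As a rough illustration of what AutoReset on multiple environments means, the sketch below uses plain gymnasium vector environments rather than the BBRL API itself; the environment name and the number of copies are arbitrary, and the exact reset timing depends on the gymnasium version. Sub-environments that reach the end of an episode are reset automatically while the others keep stepping.

    # Minimal sketch of AutoReset over several environments, using plain gymnasium
    # (illustrative only; BBRL wraps this kind of behavior in its own agents).
    import gymnasium as gym

    # Four copies of the same environment, stepped in lockstep.
    envs = gym.vector.SyncVectorEnv(
        [lambda: gym.make("CartPole-v1") for _ in range(4)]
    )

    obs, infos = envs.reset(seed=0)
    for _ in range(100):
        actions = envs.action_space.sample()  # batched random actions
        obs, rewards, terminated, truncated, infos = envs.step(actions)
        # Sub-environments that terminate are reset automatically (AutoReset),
        # so the loop never calls reset() by hand.
    envs.close()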

Citing bbrl

Since BBRL is inspired by SaLinA, please use the following BibTeX entry if you want to cite BBRL in your publications:

Link to the paper: SaLinA: Sequential Learning of Agents

    @misc{salina,
        author = {Ludovic Denoyer and Alfredo de la Fuente and Song Duong and Jean-Baptiste Gaya and Pierre-Alexandre Kamienny and Daniel H. Thompson},
        title = {SaLinA: Sequential Learning of Agents},
        year = {2021},
        publisher = {Arxiv},
        howpublished = {\url{https://github.com/facebookresearch/salina}},
    }

Quick Start

  • Create and activate a Python environment with your favorite tool, e.g. conda or venv (for instance, conda create -n bbrl_env; conda activate bbrl_env)
  • Then clone the repository
  • Run pip install -e . from the root of the cloned repository

News

  • April 2024:
    • Major evolution of the documentation
  • March 2024:
    • Bug fixes in the replay buffer
  • May-June 2023:
    • Integrated the use of gymnasium. Turned Google Colab notebooks into Colab-compatible Jupyter notebooks. Refactored all the notebooks.
  • August 2022:
    • Major updates of the notebook-based documentation
  • May 2022:
    • First commit of the BBRL repository
  • March 2022:
    • Forked SaLinA and started to modify the model

Documentation

Main differences to SaLinA

  • BBRL only contains core classes to implement RL algorithms.

  • Because both notations coexist in the literature, the GymAgent classes support the case where doing action $a_t$ in state $s_t$ results in reward $r_t$, and the case where it results in reward $r_{t+1}$.

  • Some output strings were corrected, some variables were renamed, and some comments were improved to improve code readability.

  • A few small bugs in SaLinA were fixed:

    • The replay buffer rejected samples that did not fit when the number of added samples exceeded its capacity. This has been corrected to implement the standard FIFO behavior of a replay buffer (see the sketch after this list).
    • When using autoreset=True without a replay buffer, transitions from one episode to the next were treated as standard steps within an episode. We added a mechanism to filter them out properly, via an additional get_transitions() function in the Workspace class (illustrated below).
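
To make the first fix concrete, here is a minimal, generic sketch of the FIFO behavior a replay buffer is expected to have once it is full. It only illustrates the principle, not BBRL's actual ReplayBuffer implementation; the class and variable names are hypothetical.

    # Generic FIFO replay buffer sketch: once the buffer is full,
    # the oldest sample is evicted to make room for the new one.
    import random
    from collections import deque

    class FifoReplayBuffer:
        def __init__(self, capacity: int):
            # A deque with maxlen silently drops the oldest element when full.
            self.storage = deque(maxlen=capacity)

        def add(self, transition):
            self.storage.append(transition)  # never rejected: the oldest entry is evicted

        def sample(self, batch_size: int):
            return random.sample(list(self.storage), min(batch_size, len(self.storage)))

    buffer = FifoReplayBuffer(capacity=3)
    for t in range(5):
        buffer.add({"step": t})
    print([tr["step"] for tr in buffer.storage])  # [2, 3, 4]: steps 0 and 1 were evicted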
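
For the second fix, the idea behind get_transitions() can be sketched on plain tensors: with autoreset, the step that follows a terminal step belongs to a new episode, so the pair (terminal step, next step) is not a valid transition and must be dropped. The function below only illustrates that masking logic; it is not the actual Workspace API.

    # Sketch of the filtering idea behind get_transitions() (illustrative only).
    import torch

    def valid_transition_mask(done: torch.Tensor) -> torch.Tensor:
        """done: (T, n_envs) boolean tensor; returns a (T-1, n_envs) mask that is
        True for the pairs (t, t+1) belonging to the same episode."""
        # A transition t -> t+1 is invalid when step t was terminal,
        # because step t+1 is then the first step of the next episode.
        return ~done[:-1]

    done = torch.tensor([[False], [False], [True], [False], [False]])
    print(valid_transition_mask(done).squeeze(-1))  # tensor([ True,  True, False,  True])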

Understanding BBRL

To help you understand how to use BBRL, we have written dedicated documentation, available here.

Learning RL on your own

If you want to learn RL on your own using BBRL, you can do so from the following online material.

Coding your first RL algorithms with BBRL

Most of the notebooks below can be run under Jupyter Notebook as well as under Google Colaboratory. In any case, download them to your disk and run them with your favorite notebook environment:

Learning RL with bbrl in your favorite coding environment

Have a look at the bbrl_algos library.

Code Documentation:

Generated with pdoc

Development

See contributing

Dependencies

bbrl relies on PyTorch, on hydra for configuring experiments, and on gymnasium for the reinforcement learning environments. See requirements.txt for more details.
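
As a minimal illustration of how hydra is typically used to drive such experiments (the configuration file name and keys below are hypothetical, not BBRL's actual schema):

    # Minimal hydra usage sketch; ./config.yaml and its keys are hypothetical.
    import hydra
    from omegaconf import DictConfig, OmegaConf

    @hydra.main(version_base=None, config_path=".", config_name="config")
    def main(cfg: DictConfig) -> None:
        # hydra loads ./config.yaml and exposes it as a nested DictConfig,
        # e.g. cfg.algorithm.learning_rate or cfg.gym_env.env_name.
        print(OmegaConf.to_yaml(cfg))

    if __name__ == "__main__":
        main()

Command-line overrides such as python run.py algorithm.learning_rate=1e-4 then take precedence over the values in the configuration file, which is how hydra-configured experiments are usually swept.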

License

bbrl is released under the MIT license. See LICENSE for additional details about it.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bbrl-1.0.1.tar.gz (96.6 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

bbrl-1.0.1-py3-none-any.whl (60.1 kB)

Uploaded Python 3

File details

Details for the file bbrl-1.0.1.tar.gz.

File metadata

  • Download URL: bbrl-1.0.1.tar.gz
  • Upload date:
  • Size: 96.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.23

File hashes

Hashes for bbrl-1.0.1.tar.gz
Algorithm Hash digest
SHA256 46b86cc93acccf2a6411f0d69dc9877a4ed6e9aebc12f66eef90f5e82f0dd2bd
MD5 46dc3c40611d60424ef858b4512e1da6
BLAKE2b-256 f32ef175a92a103a08056b4481ce3763935e7c9441bce464939c27dfa168a748

See more details on using hashes here.

File details

Details for the file bbrl-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: bbrl-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 60.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.23

File hashes

Hashes for bbrl-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 01051b2dfc0f19a7a719d52865ef99a30effe332a9aad88a2074714e6b71ee14
MD5 9e7b6207d199e61a4b724ee296498831
BLAKE2b-256 e61341068207cccf4c37caf06103f52ff80b3f6bd8310d513a8cdaf003fc7703

See more details on using hashes here.
