
unified reinforcement learning framework

Project description



OpenRL-v0.2.0 was updated on Dec 20, 2023.

The main branch is the latest version of OpenRL, which is under active development. If you just want to have a try with OpenRL, you can switch to the stable branch.

Welcome to OpenRL

Documentation | Chinese Introduction (中文介绍) | Chinese Documentation (中文文档)

Crafting reinforcement learning frameworks with passion; your valuable insights are welcome.

OpenRL is an open-source general reinforcement learning research framework that supports training for various tasks such as single-agent, multi-agent, offline RL, self-play, and natural language. Developed based on PyTorch, the goal of OpenRL is to provide a simple-to-use, flexible, efficient and sustainable platform for the reinforcement learning research community.

Currently, the features supported by OpenRL include:

  • A simple-to-use universal interface that supports training for all tasks/environments

  • Support for both single-agent and multi-agent tasks

  • Support for offline RL training with expert datasets

  • Support for self-play training

  • Reinforcement learning training support for natural language tasks (such as dialogue)

  • Support for DeepSpeed

  • Support for Arena, which allows convenient evaluation of various agents (even submissions for JiDi) in a competitive environment

  • Importing models and datasets from Hugging Face, with support for loading Stable-Baselines3 models from Hugging Face for testing and training

  • Tutorial on how to integrate user-defined environments into OpenRL.

  • Support for models such as LSTM, GRU, Transformer etc.

  • Multiple training acceleration methods, including automatic mixed-precision training and data collection with a half-precision policy network

  • User-defined training models, reward models, training data and environment support

  • Support for gymnasium environments

  • Support for Callbacks, which can be used to implement various functions such as logging, saving, and early stopping (see the sketch after this list)

  • Dictionary observation space support

  • Support for popular visualization tools such as wandb and tensorboardX

  • Serial or parallel environment training while ensuring consistent results in both modes

  • Chinese and English documentation

  • Provides unit testing and code coverage testing

  • Compliant with Black Code Style guidelines and type checking
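
To illustrate the callback support mentioned above, the sketch below saves checkpoints periodically during training. The CheckpointCallback import path and its save_freq/save_path parameters are assumptions based on the OpenRL callback documentation and may differ between versions:

# checkpoint_example.py (a hedged sketch; verify the callback API against the documentation)
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent
from openrl.utils.callbacks.checkpoint_callback import CheckpointCallback  # assumed import path

env = make("CartPole-v1", env_num=9)  # Create a vectorized CartPole environment.
agent = Agent(Net(env))  # Initialize the agent.
# Save a checkpoint every 500 training steps into ./checkpoints/ (parameter names are assumptions).
callback = CheckpointCallback(save_freq=500, save_path="./checkpoints/")
agent.train(total_time_steps=20000, callback=callback)  # Train with the callback attached.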

Algorithms currently supported by OpenRL (for more details, please refer to Gallery):

Environments currently supported by OpenRL (for more details, please refer to Gallery):

This framework has undergone multiple iterations by the OpenRL-Lab team, which has applied it in academic research, and has now become a mature reinforcement learning framework.

OpenRL-Lab will continue to maintain and update OpenRL, and we welcome everyone to join our open-source community to contribute towards the development of reinforcement learning.

For more information about OpenRL, please refer to the documentation.

Outline

Why OpenRL

Here we provide a table for the comparison of OpenRL and existing popular RL libraries. OpenRL employs a modular design and high-level abstraction, allowing users to accomplish training for various tasks through a unified and user-friendly interface.

| Library | NLP/RLHF | Multi-agent | Self-Play Training | Offline RL | DeepSpeed |
| --- | --- | --- | --- | --- | --- |
| OpenRL | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Stable Baselines3 | :x: | :x: | :x: | :x: | :x: |
| Ray/RLlib | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| DI-engine | :x: | :heavy_check_mark: | not fully supported | :heavy_check_mark: | :x: |
| Tianshou | :x: | not fully supported | not fully supported | :heavy_check_mark: | :x: |
| MARLlib | :x: | :heavy_check_mark: | not fully supported | :x: | :x: |
| MAPPO Benchmark | :x: | :heavy_check_mark: | :x: | :x: | :x: |
| RL4LMs | :heavy_check_mark: | :x: | :x: | :x: | :x: |
| trlx | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
| trl | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
| TimeChamber | :x: | :x: | :heavy_check_mark: | :x: | :x: |

Installation

Users can directly install OpenRL via pip:

pip install openrl

If users are using Anaconda or Miniconda, they can also install OpenRL via conda:

conda install -c openrl openrl

Users who want to modify the source code can also install OpenRL from the source code:

git clone https://github.com/OpenRL-Lab/openrl.git && cd openrl
pip install -e .

After installation, users can check the version of OpenRL through command line:

openrl --version
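
The installed version can also be checked from Python. This is a minimal sketch that assumes the package exposes a __version__ attribute:

import openrl  # Import the installed package.

print(openrl.__version__)  # Prints the installed OpenRL version, e.g. "0.2.0".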

Tips: No installation is required to try OpenRL online through Colab.

Use Docker

OpenRL currently provides Docker images with and without GPU support. If the user's computer does not have an NVIDIA GPU, the CPU-only image can be pulled with the following command:

sudo docker pull openrllab/openrl-cpu

If the user wants to accelerate training with a GPU, the GPU-enabled image can be pulled with the following command:

sudo docker pull openrllab/openrl

After successfully pulling the image, users can run OpenRL's Docker image using the following commands:

# Without GPU acceleration
sudo docker run -it openrllab/openrl-cpu
# With GPU acceleration 
sudo docker run -it --gpus all --net host openrllab/openrl

Once inside the Docker container, users can check OpenRL's version and then run test cases using these commands:

# Check OpenRL version in Docker container  
openrl --version  
# Run test case  
openrl --mode train --env CartPole-v1  

Quick Start

OpenRL provides a simple and easy-to-use interface for beginners in reinforcement learning. Below is an example of using the PPO algorithm to train the CartPole environment:

# train_ppo.py
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

env = make("CartPole-v1", env_num=9)  # Create an environment and set the environment parallelism to 9.
net = Net(env)  # Create neural network.
agent = Agent(net)  # Initialize the agent.
agent.train(total_time_steps=20000)  # Start training; run for a total of 20,000 environment steps.

Training an agent using OpenRL only requires four simple steps: Create Environment => Initialize Model => Initialize Agent => Start Training!
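
After training, the agent can also be saved to disk and reloaded later. The sketch below assumes the agent exposes save and load methods that take a directory path, as in the official examples; check the documentation for the exact signatures:

# save_load_ppo.py (a minimal sketch)
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

env = make("CartPole-v1", env_num=9)
agent = Agent(Net(env))
agent.train(total_time_steps=20000)
agent.save("./ppo_agent/")  # Persist the trained weights (path is only an example).
agent.load("./ppo_agent/")  # Reload the weights later for testing or further training.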

For a well-trained agent, users can also easily test the agent:

# train_ppo.py
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

agent = Agent(Net(make("CartPole-v1", env_num=9)))  # Initialize trainer.
agent.train(total_time_steps=20000)
# Create an environment for test, set the parallelism of the environment to 9, and set the rendering mode to group_human.
env = make("CartPole-v1", env_num=9, render_mode="group_human")
agent.set_env(env)  # The agent requires an interactive environment.
obs, info = env.reset()  # Initialize the environment to obtain initial observations and environmental information.
while True:
    action, _ = agent.act(obs)  # The agent predicts the next action based on environmental observations.
    # The environment takes one step according to the action, returning the next observation, the reward, whether the episode is done, and environment info.
    obs, r, done, info = env.step(action)
    if any(done): break
env.close()  # Close test environment

Executing the above code on a regular laptop takes only a few seconds to complete the training. Below is a visualization of the trained agent:

Tips: Users can also quickly train the CartPole environment by executing a single command in the terminal.

openrl --mode train --env CartPole-v1

For training tasks such as multi-agent and natural language processing, OpenRL also provides a similarly simple and easy-to-use interface.
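
For example, a multi-agent training run on the MPE simple_spread environment, with hyperparameters loaded from a YAML config and metrics logged to wandb, could look like the sketch below. The environment name, the config file name mpe_ppo.yaml, and the use_wandb flag follow the OpenRL examples but should be treated as assumptions that may differ in your version:

# train_mpe.py (a hedged sketch based on the OpenRL multi-agent examples)
from openrl.configs.config import create_config_parser
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

# Load training hyperparameters from a YAML config file (file name is an example).
cfg_parser = create_config_parser()
cfg = cfg_parser.parse_args(["--config", "mpe_ppo.yaml"])

env = make("simple_spread", env_num=100, asynchronous=True)  # Multi-agent MPE environment.
net = Net(env, cfg=cfg)  # Build the network with the loaded config.
agent = Agent(net, use_wandb=True)  # Log training metrics to wandb (assumed flag).
agent.train(total_time_steps=5000000)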

For information on how to perform multi-agent training, set hyperparameters for training, load training configurations, use wandb, save GIF animations, etc., please refer to:

For information on natural language task training, loading models/datasets on Hugging Face, customizing training models/reward models, etc., please refer to:

For more information about OpenRL, please refer to the documentation.

Gallery

To help users become familiar with the framework, we provide more examples and demos of using OpenRL in the Gallery. Users are also welcome to contribute their own training examples and demos to the Gallery.

Projects Using OpenRL

We have listed research projects that use OpenRL in the OpenRL Project. If you are using OpenRL in your research project, you are also welcome to join this list.

Feedback and Contribution

The OpenRL framework is still under continuous development and documentation. We welcome you to join us in making this project better:

Maintainers

At present, OpenRL is maintained by the following maintainers:

More contributors are welcome to join our maintenance team (send an e-mail to huangshiyu@4paradigm.com to apply to join the OpenRL team).

Supporters

↳ Contributors

↳ Stargazers


↳ Forkers


Citing OpenRL

If our work has been helpful to you, please feel free to cite us:

@misc{openrl2023,
    title = {OpenRL},
    author = {OpenRL Contributors},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/OpenRL-Lab/openrl}},
    year = {2023},
}

Star History


License

OpenRL is released under the Apache 2.0 license.

Acknowledgments

The development of the OpenRL framework has drawn on the strengths of other reinforcement learning frameworks:

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

openrl-0.2.0.tar.gz (206.2 kB)

Uploaded Source

Built Distribution

openrl-0.2.0-py3-none-any.whl (359.3 kB)

Uploaded Python 3

File details

Details for the file openrl-0.2.0.tar.gz.

File metadata

  • Download URL: openrl-0.2.0.tar.gz
  • Upload date:
  • Size: 206.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.17

File hashes

Hashes for openrl-0.2.0.tar.gz
  • SHA256: bb5d31e85d299d567904a9923032fb7093ffc95d19478f98a170588f2764b319
  • MD5: 121d4a06a1fd7ea18b8efcd76411f684
  • BLAKE2b-256: fb4ad987b7833b5bd04b3b8e68d75780b4c52dd69aa24a3da9d0115f9831f892

See more details on using hashes here.
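
To verify a downloaded archive against the SHA256 digest listed above, a standard Python check looks like this (the file name assumes the archive sits in the current directory):

import hashlib

# Compute the SHA256 digest of the downloaded archive and compare it with the published value.
with open("openrl-0.2.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest == "bb5d31e85d299d567904a9923032fb7093ffc95d19478f98a170588f2764b319")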

File details

Details for the file openrl-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: openrl-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 359.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.17

File hashes

Hashes for openrl-0.2.0-py3-none-any.whl
  • SHA256: 2a60c46c062e4c1e74a389bc52a3ecf0e19b9d75563987423ff24945640a6c6c
  • MD5: 2895945a2d15b5596d3cb7ac9821c9b4
  • BLAKE2b-256: d30cfe21a519654739aba3d9f75249d673312d96283297ab099c30a937d28679

See more details on using hashes here.
