
d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SACConfig().create(device="cuda:0")

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control
actions = sac.predict(x)  # x: a batch of observations, shape (N, observation_dim)

[!IMPORTANT] v2.x.x introduces breaking changes. If you want to keep using v1.x.x, explicitly pin the previous version (e.g. pip install d3rlpy==1.1.1).

Key features

:zap: Most Practical RL Library Ever

  • offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics, healthcare).
  • online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, so you can solve any kind of RL problem with d3rlpy alone.

:beginner: User-friendly API

  • zero knowledge of DL libraries required: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries.
  • extensive documentation: d3rlpy is fully documented and accompanied by tutorials and reproduction scripts for the original papers.

:rocket: Beyond State-of-the-art

  • distributional Q function: d3rlpy is the first library that supports distributional Q functions in all algorithms. The distributional Q function is known to be a very powerful method for achieving state-of-the-art performance (a config sketch follows this list).
  • data-parallel distributed training: d3rlpy is the first library that supports data-parallel distributed offline RL training, which allows you to scale up offline RL with multiple GPUs or nodes. See example.
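
Switching an algorithm to a distributional Q function is usually a one-line change in the algorithm config. The snippet below is a minimal sketch; the QRQFunctionFactory name and its n_quantiles argument are assumptions based on the documented model factories and may differ by version, so please verify against the documentation:

import d3rlpy

# plug a quantile-regression (distributional) Q function into DQN via the config
# (QRQFunctionFactory and n_quantiles are assumed names -- check the docs)
dqn = d3rlpy.algos.DQNConfig(
    q_func_factory=d3rlpy.models.QRQFunctionFactory(n_quantiles=32),
).create(device="cuda:0")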

Installation

d3rlpy supports Linux, macOS and Windows.

Dependencies

Installing the d3rlpy package will install or upgrade the following packages to satisfy its requirements:

  • torch>=2.5.0
  • tqdm>=4.66.3
  • gym>=0.26.0
  • gymnasium>=1.0.0
  • click
  • colorama
  • dataclasses-json
  • h5py
  • structlog
  • typing-extensions

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install conda-forge/noarch::d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

Supported algorithms

| Algorithm | Discrete control | Continuous control |
|---|---|---|
| Behavior Cloning (supervised learning) | :white_check_mark: | :white_check_mark: |
| Neural Fitted Q Iteration (NFQ) | :white_check_mark: | :no_entry: |
| Deep Q-Network (DQN) | :white_check_mark: | :no_entry: |
| Double DQN | :white_check_mark: | :no_entry: |
| Deep Deterministic Policy Gradients (DDPG) | :no_entry: | :white_check_mark: |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | :no_entry: | :white_check_mark: |
| Soft Actor-Critic (SAC) | :white_check_mark: | :white_check_mark: |
| Batch Constrained Q-learning (BCQ) | :white_check_mark: | :white_check_mark: |
| Bootstrapping Error Accumulation Reduction (BEAR) | :no_entry: | :white_check_mark: |
| Conservative Q-Learning (CQL) | :white_check_mark: | :white_check_mark: |
| Advantage Weighted Actor-Critic (AWAC) | :no_entry: | :white_check_mark: |
| Critic Regularized Regression (CRR) | :no_entry: | :white_check_mark: |
| Policy in Latent Action Space (PLAS) | :no_entry: | :white_check_mark: |
| TD3+BC | :no_entry: | :white_check_mark: |
| Implicit Q-Learning (IQL) | :no_entry: | :white_check_mark: |
| Calibrated Q-Learning (Cal-QL) | :no_entry: | :white_check_mark: |
| ReBRAC | :no_entry: | :white_check_mark: |
| Decision Transformer | :white_check_mark: | :white_check_mark: |
| Gato | :construction: | :construction: |
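
Each algorithm in the table is exposed through a Config class, with discrete-control variants typically prefixed with Discrete (as in DiscreteCQLConfig in the Atari example below). A minimal sketch; the TD3PlusBCConfig class name is an assumption to verify against the documentation:

import d3rlpy

# continuous-control algorithm (class name assumed from the TD3+BC entry above)
td3_bc = d3rlpy.algos.TD3PlusBCConfig().create(device="cpu")

# discrete-control algorithm
dqn = d3rlpy.algos.DQNConfig().create(device="cpu")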

Supported Q functions

Benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.

Examples

MuJoCo

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQLConfig().create(device='cuda:0')

# train
cql.fit(
    dataset,
    n_steps=100000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env)},
)

See more datasets at d4rl.
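
After fitting, the trained algorithm can be persisted and restored without re-training. A minimal sketch, assuming the v2 serialization API (save / load_learnable); verify the exact method names against the documentation:

# save the trained algorithm (method names assumed: save / load_learnable)
cql.save("cql_hopper.d3")

# restore it later for evaluation or deployment
cql2 = d3rlpy.load_learnable("cql_hopper.d3")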

Atari 2600

import d3rlpy

# prepare dataset (1% dataset)
dataset, env = d3rlpy.datasets.get_atari_transitions(
    'breakout',
    fraction=0.01,
    num_stack=4,
)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQLConfig(
    observation_scaler=d3rlpy.preprocessing.PixelObservationScaler(),
    reward_scaler=d3rlpy.preprocessing.ClipRewardScaler(-1.0, 1.0),
).create(device='cuda:0')

# start training
cql.fit(
    dataset,
    n_steps=1000000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env, epsilon=0.001)},
)

See more Atari datasets at d4rl-atari.
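
To sanity-check the trained policy outside of the evaluator, you can roll it out in the returned environment. A rough sketch; the reset/step signatures assume a gymnasium-style API and may need adjustment depending on the gym version in use:

import numpy as np

observation, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    # predict() expects a batch, so add a leading batch dimension
    action = cql.predict(np.expand_dims(observation, axis=0))[0]
    observation, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward}")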

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('Hopper-v3')
eval_env = gym.make('Hopper-v3')

# prepare algorithm
sac = d3rlpy.algos.SACConfig().create(device='cuda:0')

# prepare replay buffer
buffer = d3rlpy.dataset.create_fifo_replay_buffer(limit=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
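
Once online training finishes, the greedy policy can be exported for inference outside of d3rlpy. A minimal sketch, assuming the save_policy export (TorchScript, or ONNX selected by file extension) carried over from earlier releases:

# export the greedy policy for deployment
# (save_policy and the extension-based ONNX export are assumptions to verify)
sac.save_policy("policy.pt")    # TorchScript
sac.save_policy("policy.onnx")  # ONNX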

Tutorials

Try the cartpole examples on Google Colaboratory!

  • offline RL tutorial: Open In Colab
  • online RL tutorial: Open In Colab

More tutorial documentation is available here.

Contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

Community

| Channel | Link |
|---|---|
| Issues | GitHub Issues |

[!IMPORTANT] Please do NOT email any contributors, including the owner of this project, to ask for technical support. Such emails will be ignored without a reply. Use GitHub Issues to report your problems.

Projects using d3rlpy

| Project | Description |
|---|---|
| MINERVA | An out-of-the-box GUI tool for offline RL |
| SCOPE-RL | An off-policy evaluation and selection library |

Roadmap

The roadmap to the future release is available in ROADMAP.md.

Citation

The paper is available here.

@article{d3rlpy,
  author  = {Takuma Seno and Michita Imai},
  title   = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {315},
  pages   = {1--20},
  url     = {http://jmlr.org/papers/v23/22-0017.html}
}

Acknowledgement

This work started as a part of Takuma Seno's Ph.D project at Keio University in 2020.

This work is supported by Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in the fiscal year 2020.
