Algorithms and utilities for deep reinforcement learning
# Rainy

Reinforcement learning utilities and algorithm implementations using PyTorch.
## Example

Rainy has a `main` decorator which converts a function that returns a `rainy.Config` into a command-line app. All of the function's arguments are re-interpreted as command-line arguments.
```python
import os

from torch.optim import RMSprop

import rainy
from rainy import Config, net
from rainy.agents import DQNAgent
from rainy.envs import Atari
from rainy.lib.explore import EpsGreedy, LinearCooler


@rainy.main(DQNAgent, script_path=os.path.realpath(__file__))
def main(
    envname: str = "Breakout",
    max_steps: int = int(2e7),
    replay_size: int = int(1e6),
    replay_batch_size: int = 32,
) -> Config:
    c = Config()
    c.set_env(lambda: Atari(envname))
    c.set_optimizer(
        lambda params: RMSprop(params, lr=0.00025, alpha=0.95, eps=0.01, centered=True)
    )
    c.set_explorer(lambda: EpsGreedy(1.0, LinearCooler(1.0, 0.1, int(1e6))))
    c.set_net_fn("dqn", net.value.dqn_conv())
    c.replay_size = replay_size
    c.replay_batch_size = replay_batch_size
    c.train_start = 50000
    c.sync_freq = 10000
    c.max_steps = max_steps
    c.eval_env = Atari(envname)
    c.eval_freq = None
    return c


if __name__ == "__main__":
    main()
```
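The explorer configured above anneals epsilon linearly from 1.0 down to 0.1 over the first million steps, then holds it there. As a rough illustration of that schedule (a standalone sketch of the assumed `LinearCooler` semantics, not Rainy's actual implementation):

```python
def linear_epsilon(step: int, initial: float = 1.0, minimum: float = 0.1,
                   span: int = 1_000_000) -> float:
    """Linearly anneal epsilon from `initial` to `minimum` over `span` steps."""
    frac = min(step / span, 1.0)
    return initial - (initial - minimum) * frac

# With these defaults: 1.0 at step 0, 0.55 halfway through,
# and a constant 0.1 after one million steps.
```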
Then you can run this script like:

```bash
python dqn.py --replay-batch-size=64 train --eval-render
```
See the examples directory for more.
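The decorator's behavior can be approximated outside Rainy by inspecting the function's signature and mapping each keyword argument to an argparse flag. A hypothetical minimal version of that pattern (`cli_main` is an illustrative name, not Rainy's code):

```python
import argparse
import inspect


def cli_main(func):
    """Turn a function's keyword arguments into CLI flags (minimal sketch)."""
    def run(argv=None):
        parser = argparse.ArgumentParser()
        for name, param in inspect.signature(func).parameters.items():
            # --replay-batch-size style flags, typed from the default value.
            flag = "--" + name.replace("_", "-")
            parser.add_argument(flag, type=type(param.default), default=param.default)
        args = parser.parse_args(argv)
        return func(**vars(args))
    return run


@cli_main
def train(envname: str = "Breakout", replay_batch_size: int = 32) -> dict:
    return {"envname": envname, "batch": replay_batch_size}

# train(["--replay-batch-size", "64"]) -> {"envname": "Breakout", "batch": 64}
```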
## API documentation
COMING SOON
## Supported Python version
Python >= 3.6.1
## Implementation Status
Algorithm | Multi Worker (Sync) | Recurrent | Discrete Action | Continuous Action | MPI support |
---|---|---|---|---|---|
DQN/Double DQN | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | :x: |
BootDQN/RPF | :x: | :x: | :heavy_check_mark: | :x: | :x: |
DDPG | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :x: |
TD3 | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :x: |
SAC | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :x: |
PPO | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
A2C | :heavy_check_mark: | :small_red_triangle:(1) | :heavy_check_mark: | :heavy_check_mark: | :x: |
ACKTR | :heavy_check_mark: | :x:(2) | :heavy_check_mark: | :heavy_check_mark: | :x: |
AOC | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
PPOC | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
ACTC(3) | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
(1): Very unstable
(2): Needs the method proposed in https://openreview.net/forum?id=HyMTkQZAb to be implemented
(3): Incomplete implementation. β is often too high.
## Sub packages

- intrinsic-rewards
  - Contains an implementation of RND (Random Network Distillation)
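RND's core idea is to train a predictor network to match a fixed, randomly initialized target network; the prediction error serves as an intrinsic (novelty) reward that shrinks for observations the agent has seen often. A toy one-layer sketch of that idea (illustrative only, not the sub-package's API):

```python
import random

random.seed(0)
DIM = 4
w_target = [random.gauss(0, 1) for _ in range(DIM)]  # frozen random "network"
w_pred = [0.0] * DIM                                  # trainable predictor


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def intrinsic_reward(obs):
    # Squared prediction error: high for unfamiliar observations.
    return (dot(obs, w_pred) - dot(obs, w_target)) ** 2


def update(obs, lr=0.05):
    # One gradient step shrinking the predictor's error on this observation
    # (constant factors folded into lr).
    err = dot(obs, w_pred) - dot(obs, w_target)
    for i in range(DIM):
        w_pred[i] -= lr * err * obs[i]
```

After a few calls to `update` on the same observation, its intrinsic reward decays toward zero, which is exactly the novelty signal RND exploits.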
## References
DQN (Deep Q Network)
DDQN (Double DQN)
Bootstrapped DQN
RPF (Randomized Prior Functions)
DDPG (Deep Deterministic Policy Gradient)
TD3 (Twin Delayed Deep Deterministic Policy Gradient)
SAC (Soft Actor Critic)
A2C (Advantage Actor Critic)
- http://proceedings.mlr.press/v48/mniha16.pdf , https://arxiv.org/abs/1602.01783 (A3C, original version)
- https://blog.openai.com/baselines-acktr-a2c/ (A2C, synchronized version)
ACKTR (Actor Critic using Kronecker-Factored Trust Region)
PPO (Proximal Policy Optimization)
AOC (Advantage Option Critic)
- https://arxiv.org/abs/1609.05140 (DQN-like option critic)
- https://arxiv.org/abs/1709.04571 (A3C-like option critic called A2OC)
PPOC (Proximal Option Critic)
ACTC (Actor Critic Termination Critic)
## Implementations I referenced

Thank you!
- https://github.com/openai/baselines
- https://github.com/ikostrikov/pytorch-a2c-ppo-acktr
- https://github.com/ShangtongZhang/DeepRL
- https://github.com/chainer/chainerrl
- https://github.com/Thrandis/EKFAC-pytorch (for ACKTR)
- https://github.com/jeanharb/a2oc_delib (for AOC)
- https://github.com/mklissa/PPOC (for PPOC)
- https://github.com/sfujim/TD3 (for DDPG and TD3)
- https://github.com/vitchyr/rlkit (for SAC)
## License

This project is licensed under the Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0).