
A PyTorch reinforcement learning library for generalizable and reproducible algorithm implementations.

Project description

GenRL is a PyTorch reinforcement learning library centered around reproducible and generalizable algorithm implementations.

Reinforcement learning research is moving faster than ever before. To keep pace with this growth and ensure that RL research remains reproducible, GenRL aims to aid faster paper reproduction and benchmarking by providing the following main features:

  • PyTorch-first: Modular, Extensible and Idiomatic Python
  • Unified Trainer and Logging class: code reusability and high-level UI
  • Ready-made algorithm implementations: popular RL algorithms available out of the box
  • Faster Benchmarking: automated hyperparameter tuning, environment implementations, etc.

By integrating these features into GenRL, we aim to eventually support any new algorithm implementation in fewer than 100 lines of code.

If you're interested in contributing, feel free to go through the issues and open PRs for code, docs, tests, etc. If you have any questions, please check out the Contributing Guidelines.

Installation

GenRL is compatible with Python 3.6 or later and also depends on PyTorch and OpenAI Gym. The easiest way to install GenRL is with pip, Python's preferred package installer.

$ pip install genrl

Note that GenRL is an active project and routinely publishes new releases. In order to upgrade GenRL to the latest version, use pip as follows.

$ pip install -U genrl

If you intend to install the latest unreleased version of the library (i.e., from source), you can simply do:

$ git clone https://github.com/SforAiDl/genrl.git
$ cd genrl
$ python setup.py install

Usage

To train a Soft Actor-Critic model from scratch on the Pendulum-v0 gym environment and log rewards on TensorBoard:

import gym  # used for the FrozenLake example below

from genrl import SAC, QLearning
from genrl.classical.common import Trainer  # Trainer and QLearning are used in the Dyna-Q example below
from genrl.deep.common import OffPolicyTrainer
from genrl.environments import VectorEnv

env = VectorEnv("Pendulum-v0")  # vectorized Pendulum-v0 environment
agent = SAC('mlp', env)  # Soft Actor-Critic with an MLP policy
trainer = OffPolicyTrainer(agent, env, log_mode=['stdout', 'tensorboard'])
trainer.train()

To train a Tabular Dyna-Q model from scratch on the FrozenLake-v0 gym environment and plot rewards:

env = gym.make("FrozenLake-v0")
agent = QLearning(env)  # tabular Q-Learning agent
trainer = Trainer(agent, env, mode="dyna", model="tabular", n_episodes=10000)  # Dyna-Q: tabular model-based updates
episode_rewards = trainer.train()
trainer.plot(episode_rewards)  # plot rewards per episode

Algorithms

Deep RL

  • DQN (Deep Q Networks)
    • DQN
    • Double DQN
    • Dueling DQN
    • Noisy DQN
    • Categorical DQN
  • VPG (Vanilla Policy Gradients)
  • A2C (Advantage Actor-Critic)
  • PPO (Proximal Policy Optimization)
  • DDPG (Deep Deterministic Policy Gradients)
  • TD3 (Twin Delayed DDPG)
  • SAC (Soft Actor Critic)
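
As a rough sketch only (not verified against the library): a DQN agent would presumably be trained with the same OffPolicyTrainer pattern shown in the Usage section. The DQN import and the ('mlp', env) constructor signature below are assumptions modeled on the SAC example.

from genrl import DQN  # assumption: DQN is exported at the top level like SAC
from genrl.deep.common import OffPolicyTrainer
from genrl.environments import VectorEnv

env = VectorEnv("CartPole-v0")  # a discrete-action environment suited to DQN
agent = DQN('mlp', env)  # assumption: same ('mlp', env) signature as the SAC example
trainer = OffPolicyTrainer(agent, env, log_mode=['stdout'])
trainer.train()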

Classical RL

  • SARSA
  • Q Learning
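
As with Q-Learning in the Usage section, a SARSA agent would presumably be driven by the classical Trainer. The SARSA import, its constructor, and the Trainer arguments below are assumptions modeled on the Dyna-Q example, not confirmed API.

import gym

from genrl import SARSA  # assumption: SARSA is exported alongside QLearning
from genrl.classical.common import Trainer

env = gym.make("FrozenLake-v0")
agent = SARSA(env)  # assumption: same (env) constructor as QLearning
trainer = Trainer(agent, env, model="tabular", n_episodes=10000)  # arguments mirror the Dyna-Q example
episode_rewards = trainer.train()
trainer.plot(episode_rewards)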

Bandit RL

  • Multi-Armed Bandits
    • Eps Greedy
    • UCB
    • Thompson Sampling
    • Bayesian Bandits
    • Softmax Explorer
  • Contextual Bandits
    • Eps Greedy
    • UCB
    • Thompson Sampling
    • Bayesian Bandits
    • Softmax Explorer
  • Deep Contextual Bandits
    • Variational Inference
    • Noise sampling for neural network parameters
    • Epsilon greedy with a neural network
    • Bayesian Regression for posterior inference
    • Bootstrapped Ensemble
