
raylab


Reinforcement learning algorithms in RLlib and PyTorch.

Introduction

Raylab provides agents and environments to be used with a standard RLlib/Tune setup:

import ray
from ray import tune
import raylab

def main():
    # Make raylab's agents and environments visible to RLlib/Tune
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir=...,
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise",
                ])
            },
            # ...
        },
    )

if __name__ == "__main__":
    main()
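Passing a list to tune.grid_search makes Tune launch one trial per value, so the config above produces two trials: one per exploration type. The stdlib-only sketch below illustrates the expansion conceptually; the expand helper and the flattened "exploration_config/type" key are illustrative only, not part of raylab or Tune (Tune performs this expansion internally):

```python
import itertools

base = {"env": "CartPoleSwingUp-v0"}
grid = {
    # Flattened key, for illustration only
    "exploration_config/type": [
        "raylab.utils.exploration.GaussianNoise",
        "raylab.utils.exploration.ParameterNoise",
    ],
}

def expand(base, grid):
    """Expand a grid-search spec into one concrete config per combination."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        trial = dict(base)  # shared settings
        trial.update(zip(keys, values))  # one grid point
        yield trial

trials = list(expand(base, grid))
# One trial per exploration type, each carrying the shared "env" setting
```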

One can then visualize the results using the raylab dashboard command:

https://i.imgur.com/bVc6WC5.png

Installation

pip install raylab

Algorithms

Paper                                                Agent Name
Actor Critic using Kronecker-factored Trust Region   ACKTR
Trust Region Policy Optimization                     TRPO
Normalized Advantage Function                        NAF
Stochastic Value Gradients                           SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic                                    SoftAC
Model-Based Policy Optimization                      MBPO
Streamlined Off-Policy (DDPG)                        SOP

Command-line interface

For a high-level description of the available utilities, run raylab --help

Usage: raylab [OPTIONS] COMMAND [ARGS]...

RayLab: Reinforcement learning algorithms in RLlib.

Options:
--help  Show this message and exit.

Commands:
dashboard    Launch the experiment dashboard to monitor training progress.
episodes     Launch the episode dashboard to monitor state and action...
experiment   Launch a Tune experiment from a config file.
find-best    Find the best experiment checkpoint as measured by a metric.
rollout      Wrap `rllib rollout` with customized options.
test-module  Launch dashboard to test generative models from a checkpoint.

Packages

The project is structured as follows

raylab
├── agents            # Trainer and Policy classes
├── cli               # Command line utilities
├── envs              # Gym environment registry and utilities
├── losses            # RL loss functions
├── logger            # Tune loggers
├── modules           # PyTorch neural network modules for algorithms
├── policy            # Extensions and customizations of RLlib's policy API
├── pytorch           # PyTorch extensions
└── utils             # Miscellaneous utilities

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.6.5 (2020-05-21)

  • First release on PyPI.
