raylab

Reinforcement learning algorithms in RLlib and PyTorch.

Introduction

Raylab provides agents and environments to be used with a normal RLlib/Tune setup.

import ray
from ray import tune
import raylab

def main():
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir=...,
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise"
                ])
            },
            ...
        },
    )

if __name__ == "__main__":
    main()

One can then visualize the results using the raylab dashboard command:

(dashboard screenshot: https://i.imgur.com/bVc6WC5.png)

Installation

pip install raylab

Algorithms

Paper                                               Agent Name
-----                                               ----------
Actor Critic using Kronecker-factored Trust Region  ACKTR
Trust Region Policy Optimization                    TRPO
Normalized Advantage Function                       NAF
Stochastic Value Gradients                          SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic                                   SoftAC
Model-Based Policy Optimization                     MBPO
Streamlined Off-Policy (DDPG)                       SOP
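Any agent name from the table can be passed as the first argument to tune.run, just like "NAF" in the introduction. A minimal sketch of an experiment spec for SoftAC (the environment name and stopping criterion are illustrative assumptions carried over from the example above):

```python
# Hypothetical experiment spec for the SoftAC agent. The env name and
# stopping criterion are copied from the introduction example; adjust
# them for your own setup.
experiment = {
    "run": "SoftAC",  # any agent name from the table above works here
    "stop": {"timesteps_total": 100_000},
    "config": {"env": "CartPoleSwingUp-v0"},
}

# After raylab.register_all_agents() and raylab.register_all_environments(),
# this spec can be launched with:
#   tune.run(experiment["run"], stop=experiment["stop"], config=experiment["config"])
print(experiment["run"], "on", experiment["config"]["env"])
```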

Command-line interface

For a high-level description of the available utilities, run raylab --help

Usage: raylab [OPTIONS] COMMAND [ARGS]...

RayLab: Reinforcement learning algorithms in RLlib.

Options:
--help  Show this message and exit.

Commands:
dashboard    Launch the experiment dashboard to monitor training progress.
episodes     Launch the episode dashboard to monitor state and action...
experiment   Launch a Tune experiment from a config file.
find-best    Find the best experiment checkpoint as measured by a metric.
rollout      Wrap `rllib rollout` with customized options.
test-module  Launch dashboard to test generative models from a checkpoint.

Packages

The project is structured as follows:

raylab
├── agents            # Trainer and Policy classes
├── cli               # Command line utilities
├── envs              # Gym environment registry and utilities
├── losses            # RL loss functions
├── logger            # Tune loggers
├── modules           # PyTorch neural network modules for algorithms
├── policy            # Extensions and customizations of RLlib's policy API
├── pytorch           # PyTorch extensions
└── utils             # Miscellaneous utilities

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.6.5 (2020-05-21)

  • First release on PyPI.
