
Project description

raylab


Reinforcement learning algorithms in RLlib and PyTorch.

Introduction

Raylab provides agents and environments to be used with a standard RLlib/Tune setup.

import ray
from ray import tune
import raylab

def main():
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir=...,
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise"
                ])
            },
            ...
        },
    )

if __name__ == "__main__":
    main()

One can then visualize the results using the raylab dashboard command.

https://i.imgur.com/bVc6WC5.png
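
The dashboard reads the metrics that Tune logs under local_dir. If you prefer to inspect results programmatically instead, the progress.csv file that Tune writes in each trial directory can be loaded directly; the sketch below assumes a hypothetical trial path and uses pandas, which is not part of raylab itself.

import pandas as pd

# Minimal sketch: load one trial's progress.csv written by Tune.
# The path is hypothetical; substitute the trial directory created under
# the local_dir passed to tune.run above.
progress = pd.read_csv("data/NAF/NAF_CartPoleSwingUp-v0_0/progress.csv")

# Tune logs one row per training iteration; RLlib results include columns
# such as "timesteps_total" and "episode_reward_mean".
print(progress[["timesteps_total", "episode_reward_mean"]].tail())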

Installation

pip install raylab
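
As a quick sanity check after installing, the registration helpers used in the example above can be called from a Python shell; this sketch simply verifies that the package imports and registers its agents and environments without errors.

import raylab

# Registering everything should complete without raising if the
# installation succeeded.
raylab.register_all_agents()
raylab.register_all_environments()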

Algorithms

Paper                                                 Agent Name
Actor Critic using Kronecker-factored Trust Region    ACKTR
Trust Region Policy Optimization                      TRPO
Normalized Advantage Function                         NAF
Stochastic Value Gradients                            SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic                                     SoftAC
Model-Based Policy Optimization                       MBPO
Streamlined Off-Policy (DDPG)                         SOP
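
Any agent name in the table can be passed to tune.run once raylab.register_all_agents() has been called, following the same pattern as the NAF example above. The snippet below is only a sketch: the environment and stopping criterion are reused from that example, and SoftAC's full configuration is omitted.

import ray
from ray import tune
import raylab

# Sketch: launch another registered agent (SoftAC) with the same pattern
# used for NAF above. Config values are illustrative placeholders.
raylab.register_all_agents()
raylab.register_all_environments()
ray.init()
tune.run(
    "SoftAC",
    stop={"timesteps_total": 100000},
    config={"env": "CartPoleSwingUp-v0"},
)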

Command-line interface

For a high-level description of the available utilities, run raylab --help

Usage: raylab [OPTIONS] COMMAND [ARGS]...

RayLab: Reinforcement learning algorithms in RLlib.

Options:
--help  Show this message and exit.

Commands:
dashboard    Launch the experiment dashboard to monitor training progress.
episodes     Launch the episode dashboard to monitor state and action...
experiment   Launch a Tune experiment from a config file.
find-best    Find the best experiment checkpoint as measured by a metric.
rollout      Wrap `rllib rollout` with customized options.
test-module  Launch dashboard to test generative models from a checkpoint.

Packages

The project is structured as follows:

raylab
├── agents            # Trainer and Policy classes
├── cli               # Command line utilities
├── envs              # Gym environment registry and utilities
├── losses            # RL loss functions
├── logger            # Tune loggers
├── modules           # PyTorch neural network modules for algorithms
├── policy            # Extensions and customizations of RLlib's policy API
├── pytorch           # PyTorch extensions
├── utils             # Miscellaneous utilities
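
For instance, the exploration strategies referenced by string in the Tune config above live in the utils subpackage; the import below is a sketch based on those string paths.

# The class paths used in exploration_config above resolve to the utils
# subpackage shown in the tree.
from raylab.utils.exploration import GaussianNoise, ParameterNoise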

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.6.5 (2020-05-21)

  • First release on PyPI.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

raylab-0.7.5.tar.gz (142.0 kB)

File details

Details for the file raylab-0.7.5.tar.gz.

File metadata

  • Download URL: raylab-0.7.5.tar.gz
  • Upload date:
  • Size: 142.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/47.3.0 requests-toolbelt/0.9.1 tqdm/4.46.1 CPython/3.7.1

File hashes

Hashes for raylab-0.7.5.tar.gz
Algorithm Hash digest
SHA256 c9b350debd8e1da5d5e903e2976f20b39d23049034d7896b611422d4c1e7f2ba
MD5 c449f19bfd8e1d1d365fa6a9e70e939e
BLAKE2b-256 a11ec7de452c07538712997ceaa97603c49a9bd547266292f4c2bba6d3ba0575
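
To check a downloaded copy of the sdist against the SHA256 digest above, a standard-library sketch like the following works (the file is assumed to be in the current directory):

import hashlib

# Compare the local file's SHA256 digest with the value published above.
expected = "c9b350debd8e1da5d5e903e2976f20b39d23049034d7896b611422d4c1e7f2ba"
with open("raylab-0.7.5.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("match" if actual == expected else "mismatch")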

