Reinforcement learning algorithms in RLlib and PyTorch.

Installation

pip install raylab

Quickstart

Raylab provides agents and environments to be used with a normal RLlib/Tune setup. You can pass an agent's name (from the Algorithms section below) to raylab info list to list its top-level configurations:

raylab info list SoftAC
learning_starts: 0
    Hold this number of timesteps before first training operation.
policy: {}
    Sub-configurations for the policy class.
wandb: {}
    Configs for integration with Weights & Biases.

    Accepts arbitrary keyword arguments to pass to `wandb.init`.
    The defaults for `wandb.init` are:
    * name: `_name` property of the trainer.
    * config: full `config` attribute of the trainer
    * config_exclude_keys: `wandb` and `callbacks` configs
    * reinit: True

    Don't forget to:
      * install `wandb` via pip
      * login to W&B with the appropriate API key for your
        team/project.
      * set the `wandb/project` name in the config dict

    Check out the Quickstart for more information:
    `https://docs.wandb.com/quickstart`
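
Following the note above, a config dict enabling the W&B integration might look like this (a sketch; the project and entity names are placeholders, not values from this page):

config = {
    "env": "CartPoleSwingUp-v0",
    "wandb": {
        # Keyword arguments forwarded to `wandb.init`
        "project": "raylab-naf",  # hypothetical project name
        "entity": "my-team",      # hypothetical W&B team/entity
    },
}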

You can add the --rllib flag to get descriptions of all the options common to RLlib agents (or Trainers).
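
For example, extending the command above:

raylab info list SoftAC --rllib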

Experiments can be launched from the command line with raylab experiment, passing the path of a file with an agent's configuration through the --config flag. The following command uses the cartpole example configuration file to launch an experiment with the vanilla Policy Gradient agent from the RLlib library.

raylab experiment PG --name PG -s training_iteration 10 --config examples/PG/cartpole_defaults.py
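
A config file is just a Python module; a minimal sketch of one, assuming it exposes a get_config function returning the trainer configuration (the function name and all values here are assumptions, not the contents of the actual example file), could be:

# Sketch of a config module such as examples/PG/cartpole_defaults.py
def get_config():
    # Hypothetical values; the real example file may differ
    return {
        "env": "CartPole-v0",
        "num_workers": 0,
        "lr": 1e-3,
    }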

You can also launch an experiment from a Python script, using Ray and Tune as usual. The following shows how you might use Raylab to run an experiment comparing different types of exploration for the NAF agent.

import ray
from ray import tune
import raylab

def main():
    # Register Raylab's agents and environments with Tune/RLlib
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir="data/NAF",
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            # Grid-search over action-noise vs. parameter-noise exploration
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise"
                ])
            }
        },
        num_samples=10,
    )

if __name__ == "__main__":
    main()

You can then visualize the results using raylab dashboard, passing the local_dir used in the experiment. The dashboard lets you quickly filter and group results.

raylab dashboard data/NAF/
[Dashboard screenshot: https://i.imgur.com/bVc6WC5.png]

You can find the best checkpoint according to a metric (episode_reward_mean by default) using raylab find-best.

raylab find-best data/NAF/

Finally, you can pass a checkpoint to raylab rollout to see the returns collected by the agent and render it if the environment supports a visual render() method. For example, you can use the output of the find-best command to see the best agent in action.

raylab rollout $(raylab find-best data/NAF/) --agent NAF

Algorithms

Paper                                                Agent Name
Actor Critic using Kronecker-factored Trust Region   ACKTR
Trust Region Policy Optimization                     TRPO
Normalized Advantage Function                        NAF
Stochastic Value Gradients                           SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic                                    SoftAC
Streamlined Off-Policy (DDPG)                        SOP
Model-Based Policy Optimization                      MBPO
Model-based Action-Gradient-Estimator                MAGE

Command-line interface

For a high-level description of the available utilities, run raylab --help:

Usage: raylab [OPTIONS] COMMAND [ARGS]...

  RayLab: Reinforcement learning algorithms in RLlib.

Options:
  --help  Show this message and exit.

Commands:
  dashboard    Launch the experiment dashboard to monitor training progress.
  episodes     Launch the episode dashboard to monitor state and action...
  experiment   Launch a Tune experiment from a config file.
  find-best    Find the best experiment checkpoint as measured by a metric.
  info         View information about an agent's config parameters.
  rollout      Wrap `rllib rollout` with customized options.
  test-module  Launch dashboard to test generative models from a checkpoint.
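
Each subcommand accepts --help for its full usage, for example:

raylab experiment --help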

Packages

The project is structured as follows:

raylab
|-- agents            # Trainer and Policy classes
|-- cli               # Command line utilities
|-- envs              # Gym environment registry and utilities
|-- logger            # Tune loggers
|-- policy            # Extensions and customizations of RLlib's policy API
|   |-- losses        # RL loss functions
|   |-- modules       # PyTorch neural network modules for TorchPolicy
|-- pytorch           # PyTorch extensions
|-- utils             # Miscellaneous utilities
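
The subpackages map directly to import paths; for instance (a sketch based only on the tree above):

import raylab.agents         # Trainer and Policy classes
import raylab.envs           # environment registry and utilities
import raylab.policy.losses  # RL loss functions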


