Reinforcement learning algorithms in RLlib and PyTorch.
Installation
pip install raylab
Quickstart
Raylab provides agents and environments to be used with a normal RLlib/Tune setup. You can pass an agent's name (from the Algorithms section) to `raylab info list` to list its top-level configurations:
raylab info list SoftAC
learning_starts: 0
    Hold this number of timesteps before first training operation.
policy: {}
    Sub-configurations for the policy class.
wandb: {}
    Configs for integration with Weights & Biases.
    Accepts arbitrary keyword arguments to pass to `wandb.init`.
    The defaults for `wandb.init` are:
    * name: `_name` property of the trainer.
    * config: full `config` attribute of the trainer
    * config_exclude_keys: `wandb` and `callbacks` configs
    * reinit: True
    Don't forget to:
    * install `wandb` via pip
    * login to W&B with the appropriate API key for your team/project.
    * set the `wandb/project` name in the config dict
    Check out the Quickstart for more information:
    `https://docs.wandb.com/quickstart`
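Per the notes above, enabling the W&B integration amounts to adding a `wandb` sub-dict to the trainer config. A minimal sketch follows; the `project`, `entity`, and `tags` values are made-up placeholders, and any key in the `wandb` dict is forwarded to `wandb.init` as a keyword argument:

```python
# Sketch of a trainer config enabling the W&B integration described above.
# "project" and "entity" are hypothetical placeholders for your own setup.
config = {
    "env": "CartPoleSwingUp-v0",
    "wandb": {
        "project": "raylab-demo",  # assumed project name (set this yourself)
        "entity": "my-team",       # assumed W&B team/username
        "tags": ["naf", "sweep"],  # extra kwargs pass through to wandb.init
    },
}
```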
You can add the `--rllib` flag to get the descriptions for all the options common to RLlib agents (or `Trainer`s).
Launching experiments can be done via the command line using `raylab experiment`, passing a file path with an agent's configuration through the `--config` flag.
The following command uses the cartpole example configuration file
to launch an experiment using the vanilla Policy Gradient agent from the RLlib library.
raylab experiment PG --name PG -s training_iteration 10 --config examples/PG/cartpole_defaults.py
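The file passed via `--config` is a plain Python module holding the agent's configuration. The sketch below shows what such a module might look like; the `get_config` entry point and all values are assumptions for illustration, not copied from `examples/PG/cartpole_defaults.py`:

```python
"""Hypothetical PG configuration module for a cartpole experiment."""


def get_config():
    # Illustrative defaults only; not the repository's actual values.
    return {
        "env": "CartPole-v1",  # assumed Gym environment id
        "lr": 1e-3,            # assumed learning rate
        "num_workers": 0,      # run the sampler in the driver process
    }
```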
You can also launch an experiment from a Python script, as you normally would with Ray and Tune. The following shows how you might use Raylab to run an experiment comparing different types of exploration for the NAF agent.
import ray
from ray import tune
import raylab


def main():
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir="data/NAF",
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise",
                ])
            },
        },
        num_samples=10,
    )


if __name__ == "__main__":
    main()
One can then visualize the results using `raylab dashboard`, passing the `local_dir` used in the experiment. The dashboard lets you quickly filter and group results.

raylab dashboard data/NAF/

You can find the best checkpoint according to a metric (`episode_reward_mean` by default) using `raylab find-best`.

raylab find-best data/NAF/
Finally, you can pass a checkpoint to `raylab rollout` to see the returns collected by the agent and render it if the environment supports a visual `render()` method. For example, you can use the output of the `find-best` command to see the best agent in action.

raylab rollout $(raylab find-best data/NAF/) --agent NAF
Algorithms
Paper | Agent Name
---|---
ACKTR | ACKTR
TRPO | TRPO
NAF | NAF
SVG(inf)/SVG(1)/SoftSVG | SVG(inf), SVG(1), SoftSVG
SoftAC | SoftAC
Streamlined Off-Policy (DDPG) | SOP
MBPO | MBPO
MAGE | MAGE
Command-line interface
For a high-level description of the available utilities, run `raylab --help`:
Usage: raylab [OPTIONS] COMMAND [ARGS]...
RayLab: Reinforcement learning algorithms in RLlib.
Options:
--help Show this message and exit.
Commands:
dashboard Launch the experiment dashboard to monitor training progress.
episodes Launch the episode dashboard to monitor state and action...
experiment Launch a Tune experiment from a config file.
find-best Find the best experiment checkpoint as measured by a metric.
info View information about an agent's config parameters.
rollout Wrap `rllib rollout` with customized options.
test-module Launch dashboard to test generative models from a checkpoint.
Packages
The project is structured as follows:

raylab
|-- agents   # Trainer and Policy classes
|-- cli      # Command line utilities
|-- envs     # Gym environment registry and utilities
|-- logger   # Tune loggers
|-- policy   # Extensions and customizations of RLlib's policy API
|   |-- losses   # RL loss functions
|   |-- modules  # PyTorch neural network modules for TorchPolicy
|-- pytorch  # PyTorch extensions
|-- utils    # Miscellaneous utilities