Reinforcement learning algorithms in RLlib and PyTorch.
Installation
pip install raylab
Quickstart
Raylab provides agents and environments to be used with a normal RLlib/Tune setup. You can pass an agent's name (from the Algorithms section) to raylab info list to list its top-level configurations:
raylab info list SoftAC

learning_starts: 0
    Hold this number of timesteps before first training operation.
policy: {}
    Sub-configurations for the policy class.
wandb: {}
    Configs for integration with Weights & Biases.
    Accepts arbitrary keyword arguments to pass to `wandb.init`.
    The defaults for `wandb.init` are:
    * name: `_name` property of the trainer.
    * config: full `config` attribute of the trainer
    * config_exclude_keys: `wandb` and `callbacks` configs
    * reinit: True
    Don't forget to:
    * install `wandb` via pip
    * login to W&B with the appropriate API key for your team/project.
    * set the `wandb/project` name in the config dict
    Check out the Quickstart for more information:
    `https://docs.wandb.com/quickstart`
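For instance, a minimal sketch of enabling the W&B integration described above (the project name below is a hypothetical placeholder, not a real project):

# Sketch: enabling Weights & Biases logging through the trainer config.
# Everything under "wandb" is forwarded as keyword arguments to `wandb.init`.
config = {
    "wandb": {"project": "my-raylab-project"},  # hypothetical project name
    # ... other trainer options ...
}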
You can add the --rllib flag to get descriptions for all the options common to RLlib agents (or Trainers).
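For example:

raylab info list SoftAC --rllib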
Launching experiments can be done via the command line using raylab experiment, passing a file path with an agent's configuration through the --config flag.
The following command uses the cartpole example configuration file
to launch an experiment using the vanilla Policy Gradient agent from the RLlib library.
raylab experiment PG --name PG -s training_iteration 10 --config examples/PG/cartpole_defaults.py
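The config file is a regular Python module. As a rough sketch, assuming a get_config entry point (the function name is an assumption based on the bundled examples, not confirmed by this page):

# Hypothetical sketch of a configuration file like examples/PG/cartpole_defaults.py.
# The get_config entry point is an assumption; check the bundled examples for
# the exact convention expected by `raylab experiment`.
def get_config():
    return {
        "env": "CartPole-v1",  # hypothetical environment choice
        "lr": 0.001,           # hypothetical learning rate
    }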
You can also launch an experiment from a Python script, using Ray and Tune as usual. The following shows how you might use Raylab to run an experiment comparing different types of exploration for the NAF agent.
import ray
from ray import tune

import raylab


def main():
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir="data/NAF",
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise",
                ])
            },
        },
        num_samples=10,
    )


if __name__ == "__main__":
    main()
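Since tune.grid_search enumerates two exploration types and num_samples=10 repeats each variant ten times, this call launches 20 trials in total.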
One can then visualize the results using raylab dashboard, passing the local_dir used in the experiment. The dashboard lets you quickly filter and group results.
raylab dashboard data/NAF/
You can find the best checkpoint according to a metric (episode_reward_mean by default) using raylab find-best.
raylab find-best data/NAF/
Finally, you can pass a checkpoint to raylab rollout to see the returns collected by the agent and render it if the environment supports a visual render() method. For example, you can use the output of the find-best command to see the best agent in action.
raylab rollout $(raylab find-best data/NAF/) --agent NAF
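Here, $(raylab find-best data/NAF/) uses shell command substitution, so the path of the best checkpoint is passed directly to raylab rollout.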
Algorithms
Paper | Agent Name
---|---
ACKTR | ACKTR
TRPO | TRPO
NAF | NAF
SVG(inf)/SVG(1)/SoftSVG | SVG(inf)/SVG(1)/SoftSVG
SoftAC | SoftAC
Streamlined Off-Policy (DDPG) | SOP
MBPO | MBPO
MAGE | MAGE
Command-line interface
For a high-level description of the available utilities, run raylab --help
Usage: raylab [OPTIONS] COMMAND [ARGS]...
RayLab: Reinforcement learning algorithms in RLlib.
Options:
--help Show this message and exit.
Commands:
dashboard Launch the experiment dashboard to monitor training progress.
episodes Launch the episode dashboard to monitor state and action...
experiment Launch a Tune experiment from a config file.
find-best Find the best experiment checkpoint as measured by a metric.
info View information about an agent's config parameters.
rollout Wrap `rllib rollout` with customized options.
test-module Launch dashboard to test generative models from a checkpoint.
Packages
The project is structured as follows:

raylab
|-- agents       # Trainer and Policy classes
|-- cli          # Command line utilities
|-- envs         # Gym environment registry and utilities
|-- logger       # Tune loggers
|-- policy       # Extensions and customizations of RLlib's policy API
|   |-- losses   # RL loss functions
|   |-- modules  # PyTorch neural network modules for TorchPolicy
|-- pytorch      # PyTorch extensions
|-- utils        # Miscellaneous utilities
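As a quick orientation, a hedged sketch of how these packages surface in user code (the names below all come from the quickstart example above):

# Illustrative imports following the package layout above.
import raylab
from raylab.utils.exploration import GaussianNoise  # exploration strategy from the quickstart

raylab.register_all_agents()        # register trainers with Tune
raylab.register_all_environments()  # register environments from raylab.envs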