
Utilities for Ray RLlib and related workflows. Includes experiment setup, callbacks, and JAX model utilities.

Project description


Ray Utilities

Quickstart

Train a PPO agent on CartPole-v1 with default settings and log to WandB after the experiment has finished:

python experiments/default_training.py --env CartPole-v1 --wandb offline+upload

Features

Many features are stand-alone and can be used independently. The main features include:

  • JAX PPO for RLlib: A JAX-based implementation of the Proximal Policy Optimization (PPO) algorithm compatible with RLlib's Algorithm.
  • Ray + Optuna Grid Search + Optuna Pruners: Extends Ray's OptunaSearch to be compatible with RLlib and to support advanced pruners.
  • Experiment Framework: A base class for setting up experiments with dynamic parameters and parameter spaces, easily run via the CLI and ray.tune.Tuner.
  • Reproducible Environments: Reproducible environments for ray.tune experiments via a more sophisticated seeding mechanism.
  • Dynamic Parameter Tuning (WIP): Support for dynamically tuning parameters during experiments. ray.tune.grid_search and Optuna pruners can act as a Stopper.
  • Trial Forking and Experiment Key Management: Enhanced support for trial forking and experiment key management, including parsing and restoring from forked checkpoints. This is designed especially for Population Based Training (PBT) and similar use cases, and integrates with WandB's support for fork logging.
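The seeding idea behind reproducible environments can be illustrated with a small, framework-agnostic sketch. This shows one common approach to deterministic per-trial seeding, not necessarily the library's exact mechanism; `trial_seed` is a hypothetical helper:

```python
import hashlib
import random

def trial_seed(base_seed: int, trial_index: int) -> int:
    """Derive a distinct but reproducible seed for each trial.

    Hypothetical helper: hashing (base_seed, trial_index) gives every
    trial its own random stream while keeping the whole experiment
    repeatable from a single base seed.
    """
    digest = hashlib.sha256(f"{base_seed}:{trial_index}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

# The same (base_seed, trial_index) pair always yields the same stream.
rng_a = random.Random(trial_seed(42, 0))
rng_b = random.Random(trial_seed(42, 0))
assert rng_a.random() == rng_b.random()

# Different trials get different seeds from the same base seed.
assert trial_seed(42, 0) != trial_seed(42, 1)
```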

What's New in v0.5.0

Highlights

  • Config File Integration: Config files can now be used seamlessly with the argument parser. Specify config files via -cfg or --config_files and all arguments in the file will be parsed as if passed on the command line.

  • Flexible Tagging via CLI: Add tags directly from the command line using --tag:tag_name (for a tag without a value) or --tag:tag_name=value (for a tag with a value). Tags are automatically logged to experiment tracking tools (WandB, Comet, etc.) and help organize and filter your results.

  • Population Based Training (PBT) and Forking: The new TopPBTTrialScheduler enables advanced population-based training with quantile-based exploitation and flexible mutation strategies. Forked trials are automatically tracked and restored, and experiment lineage is visible in WandB/Comet dashboards.

  • Advanced Comet and WandB Callbacks: Improved handling for online/offline experiment tracking, including robust upload and sync logic for offline runs.

  • Improved Log Formatting: All experiment outputs are now more human-readable and less nested. Training and evaluation results are flattened and easier to interpret.

  • Helper Utilities: New and improved helpers for experiment key generation, argument patching, and test utilities.


Other Features

  • Exact Environment Step Sampling: Ensures accurate step counts in RLlib.
  • Improved Logger Callbacks: Cleaner logs and better video handling for CSV, Tensorboard, WandB, and Comet.
  • PPO Torch Learner with Gradient Accumulation: Efficient training with large batches.
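Gradient accumulation itself is simple to sketch in a framework-agnostic way. The snippet below illustrates only the technique (averaging micro-batch gradients before applying a single update), not the library's PPO Torch Learner; all names and the toy least-squares problem are made up for illustration:

```python
# Toy 1-D least-squares problem: loss_i(w) = (w * x_i - y_i) ** 2,
# with gradient 2 * x_i * (w * x_i - y_i).

def grad(w, batch):
    # Mean gradient over one micro-batch.
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

def accumulated_step(w, micro_batches, lr=0.1):
    # Accumulate gradients over several micro-batches, then apply ONE
    # update, as if the full large batch had been processed at once.
    total = sum(grad(w, mb) for mb in micro_batches) / len(micro_batches)
    return w - lr * total

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
micro_batches = [data[:2], data[2:]]

w = 0.0
for _ in range(50):
    w = accumulated_step(w, micro_batches)
# w converges toward the true slope 2.0
```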

Installation

Install from PyPI

pip install ray_utilities

Install latest version

Clone the repository and install the package using pip:

git clone https://github.com/Daraan/ray_utilities.git
cd ray_utilities
pip install .

Documentation (Work in Progress)

Visit https://ray-utilities.readthedocs.io/

Experiments

Pick What You Need - Customize Your Experiments

ExperimentSetupBase classes provide a modular way to parse your configuration and to set up trainables, their parameters, and a Tuner, which is then executed by run_tune.

Simple entry point:

# File: run_experiment.py
from ray_utilities import run_tune
from ray_utilities.setup import PPOSetup

if __name__ == "__main__":
    # Take a default setup or adjust to your needs
    with PPOSetup() as setup:
        # The setup takes care of many settings passed in via the CLI
        # but the config (an rllib.AlgorithmConfig) can be adjusted
        # inside the code as well.
        # Changes made in this with block are tracked for checkpoint reloads
        setup.config.training(num_epochs=10)
    results = run_tune(setup)

Using Config Files and Tags

You can specify experiment parameters in a config file and combine them with CLI arguments. This makes it easy to share and reproduce experiments.

python run_experiment.py -cfg experiments/models/mlp/default.cfg --tag:baseline --tag:lr=0.001

Tags are automatically logged to experiment tracking tools (WandB, Comet, etc.) and help organize and filter your results.
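The config-file syntax itself is not shown here; assuming an argparse-style "one argument per line" format, a file like the following could stand in for experiments/models/mlp/default.cfg (contents and the --num_epochs flag are hypothetical):

```
--env
CartPole-v1
--num_epochs
10
```

Arguments from the file are then parsed as if they had been passed on the command line, with any additional CLI arguments applied on top.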

Population Based Training (PBT) and Forking

The new TopPBTTrialScheduler enables advanced population-based training with quantile-based exploitation and flexible mutation strategies. Forked trials are automatically tracked and restored, and experiment lineage is visible in WandB/Comet dashboards.

See the documentation for a full example and advanced usage: Read the Docs
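Quantile-based exploitation in PBT generally means that trials in the bottom quantile clone the state of trials in the top quantile. The following is a generic illustration of that idea, not TopPBTTrialScheduler's actual implementation; `exploit_targets` and the trial ids are made up:

```python
import random

def exploit_targets(scores, quantile=0.25):
    """Map each bottom-quantile trial to a top-quantile trial to clone.

    `scores` maps trial id -> latest metric (higher is better). Returns
    {loser_id: winner_id}; all other trials keep training unchanged.
    Generic PBT sketch, not the library's scheduler.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, int(len(ranked) * quantile))
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    return {loser: random.choice(top) for loser in bottom}

scores = {"t0": 0.9, "t1": 0.1, "t2": 0.7, "t3": 0.2}
targets = exploit_targets(scores, quantile=0.25)
# With 4 trials and quantile=0.25, the single worst trial ("t1")
# clones the single best trial ("t0").
```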

[!NOTE] It is recommended to subclass AlgorithmSetup or ExperimentSetupBase to define your own setup. Extend DefaultArgumentParser to add custom CLI arguments. The PPOSetup shown above is a minimal example.
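As a sketch of that extension pattern, assuming DefaultArgumentParser follows standard argparse conventions (a plain argparse.ArgumentParser stands in for it below, and --my-option is a made-up argument):

```python
import argparse

class MyArgumentParser(argparse.ArgumentParser):
    """Hypothetical stand-in for a subclass of DefaultArgumentParser."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Register a custom experiment argument on top of the defaults.
        self.add_argument("--my-option", type=float, default=0.5,
                          help="Made-up custom experiment parameter.")

parser = MyArgumentParser()
args = parser.parse_args(["--my-option", "0.9"])
assert args.my_option == 0.9
```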

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ray_utilities-0.5.0.tar.gz (261.3 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

ray_utilities-0.5.0-py3-none-any.whl (324.3 kB view details)

Uploaded Python 3

File details

Details for the file ray_utilities-0.5.0.tar.gz.

File metadata

  • Download URL: ray_utilities-0.5.0.tar.gz
  • Upload date:
  • Size: 261.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ray_utilities-0.5.0.tar.gz
Algorithm Hash digest
SHA256 e087921f67c007845c9f3545bb93930b80015e5976b557c5209999e417ef0300
MD5 b6bf80ada1c657d37e27328d75c19a48
BLAKE2b-256 bafebfd8e0477227266e450645da192b9dbb1c37f4f5ae8dbcee0535e2eab236

See more details on using hashes here.

Provenance

The following attestation bundles were made for ray_utilities-0.5.0.tar.gz:

Publisher: publish.yml on Daraan/ray_utilities

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ray_utilities-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: ray_utilities-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 324.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ray_utilities-0.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 da321610995fda47fc115718951218115b5a7881a9b52d0d8c0e7043470d5fc4
MD5 ad8e14b964b57dc9977aa0b622a1f9ec
BLAKE2b-256 531507d58ee302a4a0cae0b55d97bc69c354ac43e5d8634c9e49cb96d84f786b

See more details on using hashes here.

Provenance

The following attestation bundles were made for ray_utilities-0.5.0-py3-none-any.whl:

Publisher: publish.yml on Daraan/ray_utilities

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
