
Utilities for Ray RLlib and related workflows. Includes experiment setup, callbacks, and JAX model utilities.


Ray Utilities

Quickstart

Train a PPO agent on CartPole-v1 with default settings and log to WandB after the experiment has finished:

python experiments/default_training.py --env CartPole-v1 --wandb offline+upload

Features

Many features are stand-alone and can be used independently. The main features include:

  • JAX PPO for RLlib: A JAX-based implementation of the Proximal Policy Optimization (PPO) algorithm, compatible with RLlib's Algorithm class.
  • Ray + Optuna Grid Search + Optuna Pruners: Extends Ray's OptunaSearch to be compatible with RLlib and to support advanced pruners.
  • Experiment Framework: A base class for setting up experiments with dynamic parameters and parameter spaces, easily run via CLI and ray.tune.Tuner.
  • Reproducible Environments: Deterministic, reproducible environments across ray.tune experiments, backed by a more sophisticated seeding mechanism.
  • Dynamic Parameter Tuning (WIP): Support for dynamic tuning of parameters during experiments. ray.tune.grid_search and Optuna pruners can work as a Stopper.
  • Trial Forking and Experiment Key Management: Enhanced support for trial forking and experiment key management, including parsing and restoring from forked checkpoints. This is designed especially for Population Based Training (PBT) and similar use cases, and integrates with WandB's fork-logging support.
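
The seeding idea behind reproducible environments can be sketched generically. This is an illustrative mechanism only, not the library's actual implementation; `derive_trial_seed` is a hypothetical helper:

```python
import hashlib

def derive_trial_seed(base_seed: int, trial_index: int) -> int:
    """Derive a deterministic, well-separated per-trial seed.

    Hypothetical helper: instead of reusing one seed (or seeding from
    wall-clock time), each trial gets a seed that is a stable function
    of the experiment's base seed and the trial's index.
    """
    digest = hashlib.sha256(f"{base_seed}:{trial_index}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

# Same inputs always give the same seeds, so re-running the experiment
# reproduces every trial's environment construction exactly.
seeds = [derive_trial_seed(42, i) for i in range(4)]
assert seeds == [derive_trial_seed(42, i) for i in range(4)]
assert len(set(seeds)) == 4  # trials do not share a seed
```

Hashing the pair (rather than, say, `base_seed + trial_index`) avoids correlated seeds between neighboring trials.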

What's New in v0.5.0

Highlights

  • Config File Integration: Config files can now be used seamlessly with the argument parser. Specify config files via -cfg or --config_files and all arguments in the file will be parsed as if passed on the command line.

  • Flexible Tagging via CLI: Add tags directly from the command line using --tag:tag_name (for a tag without value) or --tag:tag_name=value (for a tag with a value). Tags are automatically logged to experiment tracking tools (WandB, Comet, etc) and help organize and filter your results.

  • Population Based Training (PBT) and Forking: The new TopPBTTrialScheduler enables advanced population-based training with quantile-based exploitation and flexible mutation strategies. Forked trials are automatically tracked and restored, and experiment lineage is visible in WandB/Comet dashboards.

  • Advanced Comet and WandB Callbacks: Improved handling for online/offline experiment tracking, including robust upload and sync logic for offline runs.

  • Improved Log Formatting: All experiment outputs are now more human-readable and less nested. Training and evaluation results are flattened and easier to interpret.

  • Helper Utilities: New and improved helpers for experiment key generation, argument patching, and test utilities.


Other Features

  • Exact Environment Step Sampling: Ensures accurate step counts in RLlib.
  • Improved Logger Callbacks: Cleaner logs and better video handling for CSV, Tensorboard, WandB, and Comet.
  • PPO Torch Learner with Gradient Accumulation: Efficient training with large batches.
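
Gradient accumulation itself is a generic technique: average gradients over several minibatches, then apply a single optimizer step, so large effective batches fit in memory. A minimal framework-free sketch (the actual PPO Torch learner's internals may differ):

```python
def train_with_accumulation(grads_per_minibatch, accumulate_steps):
    """Average gradients over minibatches, then apply one update.

    Framework-free sketch: `grads_per_minibatch` stands in for the
    per-minibatch gradients a learner would compute; a real learner
    would call optimizer.step() only every `accumulate_steps` batches.
    """
    updates = []
    buffer = 0.0
    for i, grad in enumerate(grads_per_minibatch, start=1):
        buffer += grad
        if i % accumulate_steps == 0:
            updates.append(buffer / accumulate_steps)  # one optimizer step
            buffer = 0.0
    return updates

# Four minibatches accumulated in pairs -> two effective updates,
# each the mean gradient of its pair.
assert train_with_accumulation([1.0, 3.0, 2.0, 4.0], 2) == [2.0, 3.0]
```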

Installation

Install from PyPI

pip install ray_utilities

Install latest version

Clone the repository and install the package using pip:

git clone https://github.com/Daraan/ray_utilities.git
cd ray_utilities
pip install .

Documentation (Work in Progress)

Visit https://ray-utilities.readthedocs.io/

Experiments

Pick What You Need - Customize Your Experiments

ExperimentSetupBase classes provide a modular way to parse your configuration and to set up trainables, their parameter spaces, and a Tuner, which is then executed by run_tune.

Simple entry point:

# File: run_experiment.py
from ray_utilities import run_tune
from ray_utilities.setup import PPOSetup

if __name__ == "__main__":
    # Take a default setup or adjust to your needs
    with PPOSetup() as setup:
        # The setup takes care of many settings passed in via the CLI
        # but the config (an rllib.AlgorithmConfig) can be adjusted
        # inside the code as well.
        # Changes made in this with block are tracked for checkpoint reloads
        setup.config.training(num_epochs=10)
    results = run_tune(setup)

Using Config Files and Tags

You can specify experiment parameters in a config file and combine them with CLI arguments. This makes it easy to share and reproduce experiments.

python run_experiment.py -cfg experiments/models/mlp/default.cfg --tag:baseline --tag:lr=0.001

As with tags passed anywhere on the command line, these are logged to the configured experiment trackers (WandB, Comet, etc.) and make results easy to organize and filter.
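
Conceptually, arguments from a config file are expanded into the command line before parsing, as in this stand-alone sketch. The library's actual `-cfg`/`--config_files` handling may differ; `read_file` is injected here so the example needs no real files:

```python
def expand_config_args(argv, read_file):
    """Replace '-cfg FILE' with the file's whitespace-separated arguments.

    Sketch only: `read_file` maps a file name to its contents, standing
    in for reading the config file from disk.
    """
    out = []
    it = iter(argv)
    for arg in it:
        if arg in ("-cfg", "--config_files"):
            out.extend(read_file(next(it)).split())
        else:
            out.append(arg)
    return out

fake_files = {"default.cfg": "--env CartPole-v1 --tag:baseline"}
argv = expand_config_args(
    ["-cfg", "default.cfg", "--tag:lr=0.001"], fake_files.__getitem__
)
# File arguments and CLI arguments end up in one flat argv, so flags
# given later on the command line can override values from the file.
assert argv == ["--env", "CartPole-v1", "--tag:baseline", "--tag:lr=0.001"]
```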

Population Based Training (PBT) and Forking

TopPBTTrialScheduler performs quantile-based exploitation with flexible mutation strategies. Forked trials are tracked and restored automatically, and experiment lineage appears in WandB/Comet dashboards.

See the documentation for a full example and advanced usage: Read the Docs
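
The quantile-based exploitation step at the heart of PBT can be sketched generically. This is illustrative only; TopPBTTrialScheduler's actual selection logic and API live in the library:

```python
def exploit_quantiles(scores, quantile=0.25):
    """Map each bottom-quantile trial to a top-quantile trial to clone.

    Generic PBT-style exploitation sketch: trials whose score falls in
    the bottom `quantile` copy the weights/hyperparameters of a trial
    from the top `quantile` (here assigned round-robin); all other
    trials simply keep training.
    """
    n = len(scores)
    k = max(1, int(n * quantile))
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    top, bottom = ranked[:k], ranked[-k:]
    return {loser: top[j % len(top)] for j, loser in enumerate(bottom)}

# 4 trials at the default 25% quantile -> the single worst trial
# (index 0) clones the best one (index 1).
assert exploit_quantiles([0.1, 0.9, 0.5, 0.3]) == {0: 1}
```

After cloning, a real scheduler would also mutate the copied hyperparameters and record the fork point for lineage tracking.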

[!NOTE] It is recommended to subclass AlgorithmSetup or ExperimentSetupBase to define your own setup, and to extend DefaultArgumentParser to add custom CLI arguments. The PPOSetup above is a minimal example.
