skyrl-train

Project description

SkyRL-Train: A modular, performant RL framework for post-training LLMs

🌐 NovaSky Github Twitter Hugging Face Collection Discord Documentation

Overview

With a focus on modularity, skyrl-train makes it easy to prototype new training algorithms, environments, and execution plans—without compromising usability or speed.

skyrl-train is for users who want to modify anything:

  • Quickly develop new environments without modifying or understanding the training code.
  • Modify the training execution plan such as model placement, colocation or disaggregation of training and generation, and async RL.
  • Implement custom trajectory generation specific to your use-case, such as custom sampling methods, tree search, etc.
  • … make any other flexible modifications to the RL workflow!

Key Features

The skyrl-train package supports:

  • PPO and GRPO
  • Training Backends: FSDP, FSDP2, and DeepSpeed
  • Inference backends: vLLM, SGLang, and any custom OpenAI-API-compatible endpoint that exposes a method to perform weight sync
  • Ulysses sequence parallelism for long-context training
  • Colocated or disaggregated training and generation (including on heterogeneous hardware)
  • Synchronous RL or async one-off pipelining
  • Simple batched rollouts or asynchronous rollouts for multi-turn conversations
  • Weight sync via NCCL, gloo, or checkpoint-and-load
  • Integration with skyrl-gym to run any environment in the gymnasium
  • Sequence packing and Flash Attention 2

Documentation

Find skyrl-train documentation at: skyrl.readthedocs.io/en/latest/

Quick Start

A quick start guide for installation and your first training run is provided below.

Requirements

The only requirements are:

  • CUDA version 12.8
  • uv

If you're running on an existing Ray cluster, make sure to use Ray 2.48.0 and Python 3.12. If not, proceed with the installation instructions below.

First, clone the repository:

git clone --recurse-submodules https://github.com/NovaSky-AI/SkyRL
cd SkyRL/skyrl-train

Then, create a new virtual environment and install the dependencies:

# creates a venv at .venv/
uv sync --extra vllm 
source .venv/bin/activate

Then, prepare the dataset:

uv run -- python examples/gsm8k/gsm8k_dataset.py
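The prep script converts GSM8K into the dataset format consumed by the trainer. As a rough illustration of what one training record looks like, here is a hedged sketch; the field names below are assumptions based on common RL post-training dataset layouts, and the authoritative schema is whatever examples/gsm8k/gsm8k_dataset.py actually emits:

```python
# Illustrative only: one record in a GSM8K-style RL training dataset.
# Treat these field names as assumptions, not the real skyrl-train schema.
record = {
    # where the example came from
    "data_source": "openai/gsm8k",
    # chat-style prompt passed to the policy model
    "prompt": [{"role": "user", "content": "Natalia sold 48 clips in April ..."}],
    # environment used to score the rollout
    "env_class": "gsm8k",
    # ground truth a rule-based reward compares the model's final answer against
    "reward_spec": {"method": "rule", "ground_truth": "72"},
}
```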

Finally, before training, make sure to configure Ray to use uv:

export RAY_RUNTIME_ENV_HOOK=ray._private.runtime_env.uv_runtime_env_hook.hook
# or add to your .bashrc
# echo 'export RAY_RUNTIME_ENV_HOOK=ray._private.runtime_env.uv_runtime_env_hook.hook' >> ~/.bashrc

You should now be able to run our example script (assumes at least 4 GPUs):

export WANDB_API_KEY=<your wandb api key>
bash examples/gsm8k/run_gsm8k.sh

For detailed installation instructions, as well as more examples, please refer to our documentation.

Training on a new task or environment

To implement a new task or environment using the SkyRL-Gym interface, please see our Walkthrough Docs.

If you don't want to use the SkyRL-Gym interface, or you already have a task or agentic pipeline implementation and just want to train it on top of SkyRL, we recommend creating a simple custom Generator, which requires implementing a single method, generate(). One example of a custom Generator is SkyRLGymGenerator, which executes environments written in the SkyRL-Gym interface. We are working to provide more example integrations of agent harnesses -- please reach out if you'd like yours to be one of them!
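To make the "single method" idea concrete, here is a minimal sketch of what such a custom generator could look like. The class and field names (GeneratorBase, Rollout) are illustrative stand-ins, NOT the actual skyrl-train API; see SkyRLGymGenerator in the repository for the real interface:

```python
# Hedged sketch: a toy custom generator. Assumes the interface boils down
# to one method, generate(), mapping a batch of prompts to scored rollouts.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Rollout:
    prompt: str
    response: str
    reward: float


class GeneratorBase(ABC):
    """Illustrative base class; not the real skyrl-train Generator."""

    @abstractmethod
    def generate(self, prompts: list[str]) -> list[Rollout]:
        ...


class EchoGenerator(GeneratorBase):
    """Toy generator: 'generates' by uppercasing the prompt and scoring
    exact matches against known targets (a stand-in for an LLM call plus
    a rule-based reward)."""

    def __init__(self, targets: dict[str, str]):
        self.targets = targets

    def generate(self, prompts: list[str]) -> list[Rollout]:
        rollouts = []
        for p in prompts:
            response = p.upper()  # stand-in for actual model inference
            reward = 1.0 if self.targets.get(p) == response else 0.0
            rollouts.append(Rollout(p, response, reward))
        return rollouts
```

The point of the pattern is that all environment interaction and reward computation stays inside generate(), so the trainer only ever sees finished, scored rollouts.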

Reproducing SkyRL-SQL

We also test SkyRL by reproducing our prior release SkyRL-SQL, which enabled efficient multi-turn RL for Text2SQL. You can find a link to the wandb report here, and a detailed walkthrough of the reproduction in our documentation.

Acknowledgement

This work is done at Berkeley Sky Computing Lab in collaboration with Anyscale, with generous compute support from Anyscale, Databricks, NVIDIA, Lambda Labs, and AMD.

We adopt many lessons and code from several great projects such as veRL, OpenRLHF, Search-R1, OpenReasonerZero, and NeMo-RL. We appreciate each of these teams and their contributions to open-source research!

Citation

If you find the work in skyrl-train helpful, please consider citing:

@misc{griggs2025skyrlv01,
      title={Evolving SkyRL into a Highly-Modular RL Framework},
      author={Tyler Griggs and Sumanth Hegde and Eric Tang and Shu Liu and Shiyi Cao and Dacheng Li and Charlie Ruan and Philipp Moritz and Kourosh Hakhamaneshi and Richard Liaw and Akshay Malik and Matei Zaharia and Joseph E. Gonzalez and Ion Stoica},
      year={2025},
      note={Notion Blog}
}

