
UnstableBaselines

An Async, Online, Multi-Turn, Multi-Agent RL library for training reasoning models on TextArena games.

Documentation



Work in progress — interfaces will change.

Updates

  • 23/06/2025: Early release of the pip package (pip install unstable-rl)
  • 22/06/2025: Early release of the code base

Introduction

UnstableBaselines is an async, online, multi-agent RL library focused on simplicity and hackability. Several recent papers have shown that LoRA is sufficient for reasoning fine-tuning, and opponent sampling for self-play strategies beyond mirror self-play works best with LoRA weights (since vLLM allows hot-swapping them), so we built UnstableBaselines as a LoRA-first RL library. We tried to keep the code as straightforward as possible; it is currently around 1.2K lines long and semi-readable. The main focus of UnstableBaselines is to enable fast prototyping and research. For something a bit more production-ready, we recommend oat or verifiers.
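For intuition on why a LoRA-first design makes opponent sampling cheap, here is a minimal sketch using vLLM's public LoRA API: the base model stays resident on the GPU, and each request simply names the adapter to apply. The adapter names and checkpoint paths below are illustrative placeholders, not files this library produces.

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# One resident base model; adapters are small and swapped per request.
llm = LLM(model="Qwen/Qwen3-1.7B-Base", enable_lora=True,
          max_loras=5, max_lora_rank=32)
params = SamplingParams(max_tokens=4096)

# Hypothetical adapters: the current policy and an older opponent checkpoint.
current_policy = LoRARequest("ckpt-12", 1, "/checkpoints/iteration_12")
past_opponent = LoRARequest("ckpt-7", 2, "/checkpoints/iteration_7")

# Sampling a different opponent is just a different lora_request -- no reload.
out_p0 = llm.generate(["<prompt for player 0>"], params, lora_request=current_policy)
out_p1 = llm.generate(["<prompt for player 1>"], params, lora_request=past_opponent)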

Key Features

  • Asynchronous collection & learning – actors generate data while learners train.
  • Multi‑agent, multi‑turn focus with self‑play or fixed opponents.
  • LoRA‑first fine‑tuning workflow for fast, lightweight updates.
  • Composable reward transforms at step, game, and sampling stages (see the sketch below).
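Each transform stage composes like a small function pipeline: an ordered list of callables applied to a reward in turn. The sketch below shows that general pattern with our own simplified types; it mirrors how ComposeStepRewardTransforms is used in the example script later on, but it is not the library's actual implementation.

from typing import Callable, Dict, List

RewardFn = Callable[[float, Dict], float]  # (reward, step context) -> reward

def compose(transforms: List[RewardFn]) -> RewardFn:
    # Apply each transform in order, feeding the result to the next one.
    def apply(reward: float, ctx: Dict) -> float:
        for t in transforms:
            reward = t(reward, ctx)
        return reward
    return apply

# Example step-level stage: a format bonus and an invalid-move penalty,
# loosely mirroring RewardForFormat / PenaltyForInvalidMove.
step_stage = compose([
    lambda r, ctx: r + 1.5 if ctx.get("well_formatted") else r,
    lambda r, ctx: r - 1.0 if ctx.get("invalid_move") else r,
])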

Structure

                                                ┌───────────────┐
                                                │               │
                                                │   Algorithm   │
                                                │               │
                                                └───────────────┘
                                                        ▲        
                                                        │ Get Loss &
                                                        │ update weights
                                                        ▼
    ┌───────────────┐                           ┌───────────────┐
    │               │    Register new lora      │               │
    │   Model Pool  │◀──────────────────────────│    Learner    │
    │               │       checkpoint          │               │
    └───────────────┘                           └───────────────┘
           ▲ │                                         ▲ │ 
           │ │ Sample                        If enough │ │ Check if enough
    Update │ │ Opponent                     data, pull │ │ data for training
 Trueskill │ │                          the next batch │ │ is available
           │ ▼                                         │ ▼
    ┌───────────────┐                           ┌───────────────┐
    │               │     Process and store     │               │
    │   Collector   │──────────────────────────▶│   StepBuffer  │
    │               │  collected Trajectories   │               │
    └───────────────┘                           └───────────────┘
           ▲ │
           │ │ Maintain
    return │ │ Pool of 
Trajectory │ │ n parallel
           │ │ workers
           │ ▼
     ┌─────────────┐
     │  run_game() │
      │  train/eval │
     └─────────────┘

Installation

Install UnstableBaselines:

pip3 install unstable-rl

Example

To get you started, this short example walks through training Qwen3-1.7B-Base via mirror self-play on SimpleTak and evaluating it against google/gemini-2.0-flash-lite-001 on SimpleTak and KuhnPoker. We will be running the experiments on 3× RTX 6000 Ada GPUs. If you are limited to 24 GB of VRAM, you can reduce MAX_TRAIN_SEQ_LEN to around 2500; the model will then only be trained on the first 2500 prompt+answer tokens, but can still generate answers that are longer than that. Since (in our experience) models tend to shorten their reasoning throughout training, this works very well.
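The MAX_TRAIN_SEQ_LEN trick amounts to cutting the loss off after the first N tokens while leaving generation untouched. A rough sketch of that idea, with our own variable names rather than the library's internals:

import torch
import torch.nn.functional as F

def truncated_nll(logits: torch.Tensor, token_ids: torch.Tensor,
                  max_train_seq_len: int = 2500) -> torch.Tensor:
    # logits: (batch, seq, vocab); token_ids: (batch, seq).
    # Only the first `max_train_seq_len` tokens receive gradient; at
    # collection time the model may still generate longer sequences.
    n = min(token_ids.size(1), max_train_seq_len)
    preds = logits[:, : n - 1]     # position t predicts token t + 1
    targets = token_ids[:, 1:n]
    return F.cross_entropy(preds.reshape(-1, preds.size(-1)),
                           targets.reshape(-1))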

Training script

import ray, unstable
import unstable.reward_transformations as retra

ray.init(namespace="unstable")

tracker = unstable.Tracker.options(name="Tracker").remote(run_name="demo", wandb_project="UB")

step_buffer = unstable.StepBuffer.options(name="StepBuffer").remote(
    max_buffer_size=768, 
    tracker=tracker,
    final_reward_transformation=retra.ComposeFinalRewardTransforms([retra.RoleAdvantageByEnvFormatter()]),
    step_reward_transformation=retra.ComposeStepRewardTransforms([retra.RewardForFormat(1.5), retra.PenaltyForInvalidMove(1.0, -1.0)]),
    sampling_reward_transformation=retra.ComposeSamplingRewardTransforms([retra.NormalizeRewardsByEnv(True)]),
)

model_pool = unstable.ModelPool.options(name="ModelPool").remote(sample_mode="mirror", max_active_lora=3, tracker=tracker)
ray.get(model_pool.add_checkpoint.remote(path=None, iteration=-1)) # set initial checkpoint as no LoRA

lora_cfg = {
    "lora_rank": 32, "lora_alpha": 32, "lora_dropout": 0.0,
    "target_modules": ["q_proj","k_proj","v_proj","o_proj","gate_proj", "up_proj","down_proj"]
}
collector = unstable.Collector.options(name="Collector").remote(
    num_actors=2, 
    step_buffer=step_buffer, 
    model_pool=model_pool, 
    tracker=tracker,
    vllm_config={
        "model_name": "Qwen/Qwen3-1.7B-base", 
        "max_parallel_seq": 128,
        "max_tokens": 4096, 
        "max_loras": 5, 
        "lora_config": lora_cfg, 
        "max_model_len": 8192
    },
    training_envs=[("SimpleTak-v0-train", 2, "qwen3-zs")], # (env-id, num players, prompt template)
    evaluation_envs=[("SimpleTak-v0-train", 2, "qwen3-zs"), ("KuhnPoker-v0-train", 2, "qwen3-zs")],
    evaluation_opponent="google/gemini-2.0-flash-lite-001",
)

learner = unstable.StandardLearner.options(num_gpus=1, name="Learner").remote(
    model_name="Qwen/Qwen3-1.7B-base", 
    step_buffer=step_buffer,
    model_pool=model_pool,
    tracker=tracker,
    algorithm=unstable.algorithms.Reinforce(),
    batch_size=384,
    mini_batch_size=1,
    learning_rate=1e-5,
    grad_clip=0.2,
    lora_cfg=lora_cfg,
    activation_checkpointing=False,
    gradient_checkpointing=False,
    max_train_len=None, # always train on the full sequence
    max_generation_len=4096, # important for Dr. GRPO
)

# start the collection and training loops
collector.collect.remote(num_workers=384, num_eval_workers=16)  
ray.get(learner.train.remote(200)) # total update steps

In a nutshell, the Collector maintains 384 collection games and 16 evaluation games running in parallel. Whenever a game finishes, the trajectory is passed to the StepBuffer and a new game is started. The StepBuffer splits each trajectory into steps and applies the specified reward transformations (at the game and step level first, and at the batch level once the Learner pulls the next batch).
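Conceptually, the split looks roughly like this (simplified data structures of our own; the real StepBuffer tracks more metadata per step):

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Step:
    observation: str  # game state shown to the model
    action: str       # the model's full response (reasoning + move)
    reward: float     # reward after game- and step-level transforms

def split_trajectory(turns: List[Tuple[str, str, Dict]], final_reward: float,
                     final_transform: Callable, step_transform: Callable) -> List[Step]:
    # Illustrative only: transform the terminal reward once at the game level,
    # then broadcast it onto every turn with step-level shaping applied.
    r = final_transform(final_reward)
    return [Step(obs, act, step_transform(r, ctx)) for obs, act, ctx in turns]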

The Learner periodically (once every 0.2 seconds) checks whether the StepBuffer has accumulated enough data for training. If so, it requests a full training batch from the StepBuffer, trains on the data, and pushes the new set of LoRA weights to the ModelPool.
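In pseudocode, that loop looks roughly as follows; step_buffer, model_pool, and train_one_step are stand-ins for the actual Ray actor handles and methods, not the library's real signatures.

import time

def learner_loop(step_buffer, model_pool, train_one_step,
                 total_steps: int = 200, batch_size: int = 384):
    for iteration in range(total_steps):
        # Poll every 0.2 seconds until enough steps have accumulated.
        while step_buffer.size() < batch_size:
            time.sleep(0.2)
        batch = step_buffer.pull(batch_size)  # sampling-stage transforms run here
        lora_state = train_one_step(batch)    # one optimizer update on the batch
        model_pool.add_checkpoint(lora_state, iteration=iteration)  # publish LoRA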

The Collector will keep collecting episodes until the Learner tells it to stop (in this case, after 200 update steps).

Monitoring Progress

If you want to monitor key metrics during training (in addition to logging them via W&B), you can run the following command in a separate terminal:

unstable-terminal

The rendered interface currently looks something like this (note that it may change in the future, as UnstableBaselines is very much still under development). The .gif doesn't do it justice; it looks much nicer when you run it yourself.

Results

Since we set num_eval_workers=16, there are always 16 eval games running in parallel throughout training (using the most recent LoRA checkpoint). Running 200 learner steps took a total of ~12 h on the 3× RTX 6000 Ada setup we used.

(Results plots: win rate over training, light and dark variants.)

As can be seen in the plots, the win rate against a fixed opponent (in this case google/gemini-2.0-flash-lite-001) improves significantly in both the training and evaluation environments, showing that at least some of the learned reasoning patterns generalize to other tasks and problems.

Collaboration

Developed in partnership with PlasticLabs.

Papers

We built this code base as part of our research on self-play for reasoning models on text-based games. We hope to finish and release both papers (one focused on the paradigm, and one focused on the "scaling laws" and analysis thereof) within the next couple of weeks!

Citation

If you use UnstableBaselines in your research, please cite:

@software{guertler_leon_2025_15719271,
  author={Guertler, Leon and Grams, Tim and Liu, Zichen and Cheng, Bobby},
  title={{UnstableBaselines}},
  month=jun,
  year=2025,
  publisher={Zenodo},
  version={0.1.0},
  doi={10.5281/zenodo.15719271},
  url={https://doi.org/10.5281/zenodo.15719271}
}
