
A package for sampling from intractable distributions with LLMs.


Sample, Don't Search: Rethinking Test-Time Alignment for Language Models

Gonçalo Faria, Noah A. Smith

Paper: https://arxiv.org/abs/2504.03790

TL;DR: QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods.

Abstract:

Increasing test-time computation has emerged as a promising direction for improving language model performance, particularly in scenarios where model finetuning is impractical or impossible due to computational constraints or private model weights. However, existing test-time search methods using a reward model (RM) often degrade in quality as compute scales, due to the over-optimization of what are inherently imperfect reward proxies. We introduce QAlign, a new test-time alignment approach. As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt. By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access. We demonstrate the effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and GSM-Symbolic) using a task-specific RM, showing consistent improvements over existing test-time compute methods like best-of-n and majority voting. Furthermore, when applied with more realistic RMs trained on the Tulu 3 preference dataset, QAlign outperforms direct preference optimization (DPO), best-of-n, majority voting, and weighted majority voting on a diverse range of datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical solution to aligning language models at test time using additional computation without degradation, our approach expands the limits of the capability that can be obtained from off-the-shelf language models without further training.
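
At its core, QAlign targets the reward-tilted distribution pi(y|x) proportional to p(y|x) * exp(r(x, y) / beta), where p is the base model and r is the reward model. Below is a minimal, illustrative sketch of the Metropolis-Hastings accept/reject rule for that target, assuming a simple independence proposal drawn from the base model (QAlign's actual proposal resamples suffixes; see the paper for details). sample_base_model and reward are hypothetical helpers, not part of this package:

import math
import random

def mh_accept(reward_current: float, reward_proposal: float, beta: float = 1.0) -> bool:
    """Accept/reject step targeting p(y|x) * exp(r(x, y) / beta).

    With an independence proposal y' ~ p(.|x), the base-model terms
    cancel in the Metropolis-Hastings ratio, leaving
    min(1, exp((r(x, y') - r(x, y)) / beta)).
    """
    log_ratio = (reward_proposal - reward_current) / beta
    if log_ratio >= 0:
        return True  # proposal scores at least as well: always accept
    return random.random() < math.exp(log_ratio)

# Sketch of the chain (sample_base_model and reward are hypothetical):
# y = sample_base_model(x)
# for _ in range(steps):
#     y_new = sample_base_model(x)
#     if mh_accept(reward(x, y), reward(x, y_new), beta=1.0):
#         y = y_new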

General Alignment Experiments

Average error rate across five evaluation datasets (GSM8K, MATH500, MMLU-Redux, TruthfulQA, and IFEval) as a function of floating point operations (FLOPs), on a log scale. We compare QAlign applied to Tülu3-8B-SFT against four baselines: majority vote (MV) with Tülu3-8B-DPO, and best-of-n (BoN), MV, and weighted MV (WMV) applied to Tülu3-8B-SFT. All experiments use temperature 1.0 with reasoning included in model outputs. Note that the Tülu3-8B-DPO model is the result of preference finetuning Tülu3-8B-SFT on 271k preference pairs; the costs of that process are not accounted for in this plot.


Dependencies

This project depends on the following external libraries:

For saving experiment data, we use the expkit-core package:

pip install expkit-core  # required only for the experiments

Install the required packages:

pip install -r requirements.txt

Reproducing the work

Follow the steps below to replicate the experiments from the paper.

Experiment Setup

  1. Create Configuration Files
    # Create configs for general experiments
    scripts/create_all_general_experiments.sh
    
    # Create configs for task-specific experiments
    scripts/create_all_task_experiments.sh
    

Running Experiments

  1. Execute Experiments
    # Run experiments locally
    scripts/run_local_experiments.sh
    
    # Run experiments on remote server
    scripts/run_remote_experiments.sh
    

Evaluation & Analysis

  1. Evaluate Results

    # Compare responses against ground truth answers
    scripts/run_eval_experiment.sh
    
    # Evaluate reward model for ancestral predictions (remote by default)
    scripts/run_rm_eval.sh
    
  2. Generate Final Predictions

# Run WMV, BoN, and MV final prediction methods (sketched below)
    scripts/run_pred.sh
    
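For reference, here is a minimal, illustrative sketch of how the three final-prediction rules differ. The actual implementations live in scripts/run_pred.sh and the package itself; in particular, the exp-weighting in WMV below is one common choice, not necessarily the exact one used here:

import math
from collections import Counter, defaultdict

def best_of_n(answers, rewards):
    # BoN: keep the single highest-reward answer.
    return max(zip(answers, rewards), key=lambda pair: pair[1])[0]

def majority_vote(answers):
    # MV: keep the most frequent answer, ignoring rewards.
    return Counter(answers).most_common(1)[0][0]

def weighted_majority_vote(answers, rewards):
    # WMV: weight each vote by exp(reward) before counting
    # (illustrative weighting; see the scripts for the exact rule).
    weights = defaultdict(float)
    for answer, reward in zip(answers, rewards):
        weights[answer] += math.exp(reward)
    return max(weights, key=weights.get)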

Quick Start

This guide will help you get started running QAlign.

Basic Usage

To quickly try out QAlign, you'll need two servers running compatible language models:

  • A generation model (for sampling text responses)
  • A reward model (for evaluating responses)

You can use vllm for both, or swap in sglang as appropriate. Below are example commands (adjust model paths and ports as needed):

1. Start the Reward Model Server (on port 8001):

vllm serve Skywork/Skywork-Reward-Llama-3.1-8B-v0.2 --task classify --port 8001

2. Start the Generation Model Server (on port 8000):

vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
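
Before running the client code, you can sanity-check that both servers are up by querying vllm's OpenAI-compatible /v1/models endpoint (standard library only; adjust hosts and ports if you changed them):

import json
from urllib.request import urlopen

# Both vllm servers expose an OpenAI-compatible /v1/models endpoint
# once the weights have finished loading.
for port in (8000, 8001):
    with urlopen(f"http://localhost:{port}/v1/models") as response:
        served = json.load(response)
    print(port, [m["id"] for m in served["data"]])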

Once the servers are running, you can use the following Python script to sample and align responses via QAlign:

from qalign.reward import RemoteReward
from qalign.model import RemoteVLLM
from qalign.base import QAlign

# Define your generation model (connected to localhost:8000)
model = RemoteVLLM(
    server_url="http://localhost:8000",
    model_path="meta-llama/Llama-3.1-8B-Instruct",
    max_prompt_length=100,
    max_new_tokens=50,
)

# Define the remote reward model (connected to localhost:8001)
reward = RemoteReward(
    server_url="http://localhost:8001",
    model_path="Skywork/Skywork-Reward-Llama-3.1-8B-v0.2",
    server_format="vllm",
)

# Set up the QAlign chain
chain = QAlign(
    model=model,
    reward=reward,
    beta=1.0,  # reward temperature: controls how strongly the reward tilts the base model
)

# Example conversation/question
question = (
    "Joana has 10 apples. She gives it to The Lord of Fire which multiplies them by 2 every 10 seconds. "
    "One in five of the apples are poisoned and will kill anyone who eats them. "
    "All of the apples will be eaten by a hungry crowd. How many people die after 50 seconds?"
)

# Run QAlign for 8 steps
results = chain.run(
    conversations=[
        [{"role": "user", "content": question}]
    ],
    steps=8,
    use_tqdm=True,
)

# results.state_path contains the stepwise generations, without accept/reject filtering
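
The exact structure of the results object may vary across versions; assuming state_path is iterable over the chain's steps, a simple inspection loop might look like the following (treat it as a sketch, and check the package documentation):

# Hypothetical inspection loop; the structure of state_path may differ.
for step, state in enumerate(results.state_path):
    print(f"step {step}: {state}")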

Tips:

  • You can point server_url to a remote server, e.g., "http://remotehost:8001", as long as your reward/generation servers are accessible.
  • max_prompt_length and max_new_tokens can be tuned based on your hardware/needs.
  • For other supported server formats or reward models, see the documentation.

Contact

For bugs and feature requests, please visit GitHub Issues. For business inquiries or professional support requests, please send an e-mail.


Citation

@misc{faria2025sampledontsearchrethinking,
      title={Sample, Don't Search: Rethinking Test-Time Alignment for Language Models}, 
      author={Gonçalo Faria and Noah A. Smith},
      year={2025},
      eprint={2504.03790},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.03790}, 
}
