
IFBench: A challenging benchmark for precise instruction following


Generalizing Verifiable Instruction Following

NVIDIA NeMo Evaluator

IFBench provides evaluation clients specifically built to evaluate model endpoints using our Standard API.

Launching an evaluation for an LLM

Install the package

pip install nvidia-ifbench

(Optional) Set a token for your API endpoint if it is protected

export MY_API_KEY="your_api_key_here"
export HF_TOKEN="your_huggingface_token_here"

List the available evaluations

nemo-evaluator ls

Available tasks:

  • ifbench

Run the evaluation of your choice

nemo-evaluator run_eval \
    --eval_type ifbench \
    --model_id meta/llama-3.1-8b-instruct \
    --model_url https://integrate.api.nvidia.com/v1/chat/completions \
    --model_type chat \
    --api_key_name MY_API_KEY \
    --output_dir /workspace/results

Gather the results

cat /workspace/results/results.yml

Command-Line Tool

Each package comes pre-installed with a set of command-line tools designed to simplify the execution of evaluation tasks. Below are the available commands and their usage for the ifbench evaluations:

Commands

1. List Evaluation Types
nemo-evaluator ls

Displays the evaluation types available within the harness.

2. Run an evaluation

The nemo-evaluator run_eval command executes the evaluation process. Below are the flags and their descriptions:

Required flags:

  • --eval_type <string>: The type of evaluation to perform (e.g., ifbench).
  • --model_id <string>: The name or identifier of the model to evaluate.
  • --model_url <url>: The API endpoint where the model is accessible.
  • --model_type <string>: The type of the model to evaluate, currently one of "chat", "completions", or "vlm".
  • --output_dir <directory>: The directory to use as the working directory for the evaluation. The results, including the results.yml output file, will be saved here.

Optional flags:

  • --api_key_name <string>: The name of the environment variable that stores the Bearer token for the API, if authentication is required.
  • --run_config <path>: Specifies the path to a YAML file containing the evaluation definition.
  • --overrides <string>: Override configuration parameters (e.g., 'config.params.limit_samples=10').

Example

nemo-evaluator run_eval \
    --eval_type ifbench \
    --model_id meta/llama-3.1-8b-instruct \
    --model_type chat \
    --model_url https://integrate.api.nvidia.com/v1/chat/completions \
    --output_dir ./evaluation_results

If the model API requires authentication, set the API key in an environment variable and reference it using the --api_key_name flag:

export MY_API_KEY="your_api_key_here"

nemo-evaluator run_eval \
    --eval_type ifbench \
    --model_id meta/llama-3.1-8b-instruct \
    --model_type chat \
    --model_url https://integrate.api.nvidia.com/v1/chat/completions \
    --api_key_name MY_API_KEY \
    --output_dir ./evaluation_results

Configuring evaluations via YAML

Evaluations in IFBench are configured using YAML files that define the parameters and settings required for the evaluation process. These configuration files follow a standard API, which ensures consistency across evaluations.

Example of a YAML config:

config:
  type: ifbench
  params:
    parallelism: 50
    limit_samples: 20
target:
  api_endpoint:
    model_id: meta/llama-3.1-8b-instruct
    type: chat
    url: https://integrate.api.nvidia.com/v1/chat/completions
    api_key: NVIDIA_API_KEY
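A config like the one above can also be built programmatically and written out for use with --run_config. A minimal sketch using PyYAML; the field names simply mirror the example config above:

```python
import yaml

def make_run_config(model_id, url, eval_type="ifbench",
                    parallelism=50, limit_samples=20,
                    api_key_env="NVIDIA_API_KEY"):
    """Build a run-config dict matching the YAML layout shown above."""
    return {
        "config": {
            "type": eval_type,
            "params": {
                "parallelism": parallelism,
                "limit_samples": limit_samples,
            },
        },
        "target": {
            "api_endpoint": {
                "model_id": model_id,
                "type": "chat",
                "url": url,
                "api_key": api_key_env,
            },
        },
    }

cfg = make_run_config(
    "meta/llama-3.1-8b-instruct",
    "https://integrate.api.nvidia.com/v1/chat/completions",
)
with open("run_config.yml", "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```

The resulting run_config.yml can then be passed via `--run_config run_config.yml`.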

The priority of overrides is as follows:

  1. command line arguments
  2. user config (as seen above)
  3. task defaults (defined per task type)
  4. framework defaults
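Conceptually, these layers amount to a deep merge in which higher-priority mappings win key-by-key while untouched keys fall through to lower layers. A toy sketch of that behavior (not the harness's actual implementation):

```python
def deep_merge(base, override):
    """Return base updated with override, recursing into nested dicts.

    Keys present in override win; keys it omits keep base's values.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical layer contents, lowest priority first:
framework_defaults = {"config": {"params": {"parallelism": 10, "limit_samples": None}}}
task_defaults = {"config": {"params": {"parallelism": 50}}}
user_config = {"config": {"params": {"limit_samples": 20}}}
cli_overrides = {"config": {"params": {"limit_samples": 10}}}

final = framework_defaults
for layer in (task_defaults, user_config, cli_overrides):
    final = deep_merge(final, layer)
# final["config"]["params"] == {"parallelism": 50, "limit_samples": 10}
```

Here the command-line override wins for limit_samples, while parallelism falls through to the task default.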

The --dry_run option allows you to print the final run configuration and command without executing the evaluation.

Example:

nemo-evaluator run_eval \
    --eval_type ifbench \
    --model_id meta/llama-3.1-8b-instruct \
    --model_type chat \
    --model_url https://integrate.api.nvidia.com/v1/chat/completions \
    --output_dir ./evaluation_results \
    --dry_run

This repository contains IFBench, a new, challenging benchmark for precise instruction following.

IFBench

IFBench consists of the following components:

  • OOD Constraints: 58 new and challenging constraints, with corresponding verification functions. The constraint templates are combined with prompts from a held-out set of WildChat (Zhao et al. 2024).

  • (optionally) Multiturn Constraint Isolation in 2 turns: The prompt and the constraint are separated over two turns, i.e. the first turn contains the user prompt and the model's response to it, and the second turn adds the constraint that modifies the initial prompt.

  • New IF-RLVR training constraints: 29 new and challenging constraints, with corresponding verification functions.
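Each constraint pairs a natural-language instruction with a verification function that checks a response programmatically, which is what makes the benchmark verifiable. A toy illustration of the pattern; these specific constraints are hypothetical, not ones from the released set:

```python
import re

def verify_exact_word_count(response: str, n: int) -> bool:
    """Check that the response contains exactly n whitespace-separated words."""
    return len(response.split()) == n

def verify_no_digits(response: str) -> bool:
    """Check that the response contains no numeric digits."""
    return re.search(r"\d", response) is None
```

Because verification is a pure function of the response text, the same checks can serve both as evaluation metrics and as reward signals for RLVR training.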

How to run the evaluation

Install the requirements via the requirements.txt file. You need two JSONL files: IFBench_test.jsonl (in the data folder) and a file containing your eval prompts and completions (see sample_output.jsonl for an example). Then run:

python3 -m run_eval --input_data=IFBench_test.jsonl --input_response_data=sample_output.jsonl --output_dir=eval

Released Datasets

You can find our released datasets in this collection, which contains the test data, the multi-turn test data and the IF-RLVR training data.

RLVR for Precise Instruction Following

We also release our IF-RLVR code as part of open-instruct. You can run the GRPO script using our training data; an example command is provided there.

The new training constraints and verification functions are here: https://github.com/allenai/open-instruct/tree/main/open_instruct/IFEvalG

Attribution

This repository is a fork of the original IFBench project. For complete attribution details, see ATTRIBUTION.md.

Original Project

Original Authors

The original IFBench project was developed by researchers at Allen Institute for AI: Valentina Pyatkin, Saumya Malik, Victoria Graf, Hamish Ivison, Shengyi Huang, Pradeep Dasigi, Nathan Lambert, and Hannaneh Hajishirzi.

Licensing

The data is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines. The dataset includes output data generated from third party models that are subject to separate terms governing their use.

Acknowledgements

Parts of IFBench are built upon and extend IFEval (Zhou et al. 2023) and we would like to thank them for their great work!

Citation

If you use this repository or our models, please cite our work:

@article{pyatkin2025generalizing,
  title={Generalizing Verifiable Instruction Following},
  author={Valentina Pyatkin and Saumya Malik and Victoria Graf and Hamish Ivison and Shengyi Huang and Pradeep Dasigi and Nathan Lambert and Hannaneh Hajishirzi},
  journal={Advances in Neural Information Processing Systems},
  volume={38},
  year={2025}
}
