Lightweight evaluation library for IFBench and IFEval instruction-following benchmarks

if-verifiable

Lightweight Python library for evaluating LLM outputs against instruction-following benchmarks.

Supports:

  • IFEval (google/IFEval) - Google's Instruction Following Eval
  • IFBench (allenai/IFBench_test) - Allen AI's instruction-following benchmark

Installation

pip install if-verifiable

Usage

from if_verifiable import get_eval_data, evaluate_output_for_sample

# Load samples from a benchmark
for sample in get_eval_data("ifeval"):
    print(f"Prompt: {sample.prompt[:100]}...")
    print(f"Instructions: {sample.instruction_id_list}")
    break

# Evaluate a model's response
sample = next(get_eval_data("ifeval"))
response = "Your model's response here..."

results, scores = evaluate_output_for_sample("ifeval", sample, response)

# Access scores (4 metrics available)
print(f"Partial strict: {scores.partial_strict:.2%}")
print(f"Partial loose: {scores.partial_loose:.2%}")
print(f"Binary strict (all passed): {scores.binary_strict}")
print(f"Binary loose (all passed): {scores.binary_loose}")

# Check individual instruction results
for result in results:
    print(f"  {result.instruction_id}: strict={result.strict_pass}, loose={result.loose_pass}")

Batch Evaluation

from if_verifiable import run_eval, run_eval_async, get_eval_data

# Sync batch evaluation with multiprocessing
model_responses = ["response1", "response2", ...]  # One per sample
results = run_eval("ifeval", model_responses, max_workers=8)

for sample, response, instruction_results, scores in results:
    print(f"{sample.key}: {scores.partial_strict:.2%}")

Async Evaluation

import asyncio
from if_verifiable import run_eval_async, get_eval_data

async def get_model_response(prompt: str) -> dict:
    # Your async API call here
    return {"content": "model response", "usage": {...}}

async def main():
    samples = list(get_eval_data("ifeval"))
    coroutines = [get_model_response(s.prompt) for s in samples]

    # Evaluate concurrently with a map function to extract the response string
    return await run_eval_async(
        "ifeval",
        coroutines,
        map_fn=lambda r: r["content"],
    )

results = asyncio.run(main())

API

get_eval_data(benchmark: str) -> Iterator[BenchmarkSample]

Load evaluation samples from a benchmark dataset.

  • benchmark: Either "ifeval" or "ifbench"
  • Returns: Iterator of IFEvalSample or IFBenchSample dataclasses

evaluate_output_for_sample(benchmark, sample, response) -> tuple[list[InstructionResult], EvaluationScores]

Evaluate a model response against a benchmark sample.

  • benchmark: Either "ifeval" or "ifbench"
  • sample: A sample from get_eval_data()
  • response: The model's text response

Returns:

  • list[InstructionResult]: Per-instruction pass/fail results
  • EvaluationScores: Aggregated scores dataclass with 4 metrics:
    • partial_strict: Fraction of instructions passed (strict evaluation)
    • partial_loose: Fraction of instructions passed (loose - allows formatting variations)
    • binary_strict: 1.0 if ALL instructions passed strict, else 0.0
    • binary_loose: 1.0 if ALL instructions passed loose, else 0.0
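How the four metrics relate to the per-instruction pass flags can be illustrated in plain Python. This is a hypothetical helper (the `score` function and `Scores` dataclass below are not part of the library), shown only to make the partial-vs-binary distinction concrete:

```python
from dataclasses import dataclass

@dataclass
class Scores:
    partial_strict: float
    partial_loose: float
    binary_strict: float
    binary_loose: float

def score(strict: list[bool], loose: list[bool]) -> Scores:
    # partial_* = fraction of instructions passed;
    # binary_* = 1.0 only if every instruction passed
    return Scores(
        partial_strict=sum(strict) / len(strict),
        partial_loose=sum(loose) / len(loose),
        binary_strict=1.0 if all(strict) else 0.0,
        binary_loose=1.0 if all(loose) else 0.0,
    )

s = score(strict=[True, False, True], loose=[True, True, True])
# s.partial_strict == 2/3, s.binary_strict == 0.0, s.binary_loose == 1.0
```

One failed instruction drops the binary score to 0.0 while the partial score still credits the passes.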

run_eval(benchmark, model_responses, max_workers=None) -> list[EvalResult]

Batch evaluate all responses with multiprocessing.

  • benchmark: Either "ifeval" or "ifbench"
  • model_responses: List of response strings, one per sample in dataset
  • max_workers: Number of parallel workers (None = auto)

Returns list of (sample, response, instruction_results, scores) tuples.
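Benchmark-level numbers are typically the mean of the per-sample scores. A minimal aggregation sketch over score values (the tuples below are hypothetical stand-ins for the `scores` field of each result):

```python
from statistics import mean

# Hypothetical (partial_strict, binary_strict) pairs, one per sample,
# standing in for scores extracted from run_eval results.
per_sample = [(1.0, 1.0), (0.5, 0.0), (1.0, 1.0)]

avg_partial_strict = mean(s[0] for s in per_sample)
avg_binary_strict = mean(s[1] for s in per_sample)
print(f"partial_strict={avg_partial_strict:.2%}  binary_strict={avg_binary_strict:.2%}")
```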

run_eval_async(benchmark, coroutines, map_fn=str) -> list[EvalResult]

Evaluate responses from async coroutines concurrently. This function is itself a coroutine and must be awaited.

  • benchmark: Either "ifeval" or "ifbench"
  • coroutines: List of awaitables, one per sample
  • map_fn: Function to extract response string from coroutine result

Returns list of (sample, response, instruction_results, scores) tuples in input order.
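Because all coroutines run concurrently, you may want to cap in-flight requests against your model API before handing them to `run_eval_async`. A standard `asyncio.Semaphore` wrapper works; this sketch is independent of the library (`fetch` and `limited` are hypothetical names):

```python
import asyncio

async def fetch(prompt: str) -> str:
    # Stand-in for a real async model API call.
    await asyncio.sleep(0.01)
    return f"response to {prompt}"

def limited(coros, max_concurrency: int = 8):
    # Wrap each coroutine so at most max_concurrency run at once.
    sem = asyncio.Semaphore(max_concurrency)
    async def run(coro):
        async with sem:
            return await coro
    return [run(c) for c in coros]

async def main():
    coros = limited([fetch(f"p{i}") for i in range(20)], max_concurrency=4)
    # asyncio.gather preserves input order, matching run_eval_async's contract.
    return await asyncio.gather(*coros)

responses = asyncio.run(main())
print(len(responses))  # 20
```

The wrapped list can be passed wherever a list of awaitables is expected, one per sample.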

Types

@dataclass
class IFEvalSample:
    key: int
    prompt: str
    instruction_id_list: list[str]
    kwargs: list[dict[str, Any]]

@dataclass  
class IFBenchSample:
    key: str
    prompt: str
    instruction_id_list: list[str]
    kwargs: list[dict[str, Any]]

@dataclass
class EvaluationScores:
    partial_strict: float  # Fraction of instructions passed (strict)
    partial_loose: float   # Fraction of instructions passed (loose)
    binary_strict: float   # 1.0 if all passed strict, else 0.0
    binary_loose: float    # 1.0 if all passed loose, else 0.0

@dataclass
class InstructionResult:
    instruction_id: str
    strict_pass: bool
    loose_pass: bool

# Type alias for batch evaluation results
EvalResult = tuple[BenchmarkSample, str, list[InstructionResult], EvaluationScores]

License

Apache 2.0
