A library for multi-agent systems evaluation and benchmarking.

LLM-based Multi-Agent Evaluation & Benchmark Framework

MASEval is an evaluation library that provides a unified interface for benchmarking (multi-)agent systems. It offers standardized abstractions for running any agent implementation—whether built with AutoGen, LangChain, custom frameworks, or direct API calls—against established benchmarks like GAIA and AgentBench, or your own custom evaluation tasks.

Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuses exclusively on evaluation infrastructure. It does not implement agents, define multi-agent communication protocols, or turn LLMs into agents. Instead, it wraps existing agent systems via simple adapters, orchestrates the evaluation lifecycle (setup, execution, measurement, teardown), and provides lifecycle hooks for tracing, logging, and metrics collection. This separation allows researchers to compare different agent architectures apples-to-apples across frameworks, while maintaining full control over their agent implementations.
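The adapter idea above can be illustrated with a small self-contained sketch. Note that all names here (AgentAdapter, CallableAgentAdapter, evaluate) are hypothetical and not MASEval's actual API: the point is only that any agent implementation can sit behind one minimal interface, so the evaluation loop never touches framework-specific code.

```python
from typing import Protocol


class AgentAdapter(Protocol):
    """Minimal interface an evaluation loop could depend on (hypothetical)."""

    def run(self, task_input: str) -> str: ...


class CallableAgentAdapter:
    """Wraps any plain callable (e.g. a direct LLM call) behind the interface."""

    def __init__(self, fn):
        self.fn = fn

    def run(self, task_input: str) -> str:
        return self.fn(task_input)


def evaluate(agent: AgentAdapter, tasks: dict[str, str]) -> float:
    """Fraction of tasks whose output exactly matches the expected answer."""
    correct = sum(agent.run(q) == expected for q, expected in tasks.items())
    return correct / len(tasks)


# A trivial stand-in "agent": uppercases its input.
agent = CallableAgentAdapter(lambda q: q.upper())
score = evaluate(agent, {"hi": "HI", "ok": "no"})  # matches 1 of 2 tasks
```

Because the loop only sees the adapter interface, an AutoGen team, a LangChain chain, or a bare API call are interchangeable from the evaluator's point of view.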

Why MASEval?

Compare multi-agent evaluation frameworks across key capabilities.

Library Multi-Agent System Eval Agent-Agnostic Benchmarks Flexible Interaction BYO Trace-First Mature
MASEval
AnyAgent 🟡 🟡 🟡
MLflow GenAI 🟡 🟡 🟡
HAL Harness 🟡 🟡 🟡 🟡 🟡
Inspect-AI 🟡 🟡 🟡 🟡 🟡
OpenCompass 🟡 🟡 🟡 🟡
AgentGym 🟡 🟡 🟡
Arize Phoenix 🟡 🟡 🟡
TruLens 🟡 🟡 🟡
MARBLE 🟡 🟡
DeepEval 🟡 🟡 🟡 🟡 🟡
MCPEval 🟡 🟡 🟡

Full/Native · 🟡 Partial/Limited · Not supported

Column explanations:

  • Multi-Agent (Multi-Agent Native): Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns.
  • System Eval (System-Level Comparison): Compare different framework implementations on the same benchmark (not just swapping LLMs).
  • Agent-Agnostic (Agent Framework Agnostic): Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation.
  • Benchmarks (Pre-Implemented Benchmarks): Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates).
  • Flexible Interaction (Flexible Agent-Environment-User): First-class user simulation with personas and tool access for realistic multi-turn conversations.
  • BYO (BYO Philosophy): Bring your own logging, agents, environments, and tools. Open-source, works offline, no mandatory cloud services.
  • Trace-First (Trace-First Evaluation): Evaluate intermediate steps across environment and agents via first-class traces, not post-hoc fixes.
  • Mature (Professional Tooling): Published on PyPI, CI/CD, good test coverage, active maintenance.

Core Principles:

  • Evaluation, Not Implementation: MASEval provides the evaluation infrastructure—you bring your agent implementation. Whether you've built agents with AutoGen, LangChain, custom code, or direct LLM calls, MASEval wraps them via simple adapters and runs them through standardized benchmarks.

  • System-Level Benchmarking: The fundamental unit of evaluation is the complete system—the full configuration of agents, prompts, tools, and their interaction patterns. This allows meaningful comparison between entirely different architectural approaches.

  • Task-Specific Configurations: Each benchmark task is a self-contained evaluation unit with its own instructions, environment state, success criteria, and custom evaluation logic. One task might measure success by environment state changes, another by programmatic output validation.

  • Framework Agnostic by Design: MASEval is intentionally unopinionated about agent frameworks, model providers, and system architectures. Simple, standardized interfaces and adapters enable any agent system to be evaluated without modification to the core library.

  • Lifecycle Hooks via Callbacks: Inject custom logic at any point in the evaluation lifecycle (e.g., on_run_start, on_task_start, on_agent_step_end) through a callback system. This enables extensibility without modifying core evaluation logic.

  • Pluggable Backends: Tracing, logging, metrics, and data storage are implemented as callbacks. Easily add new backends or combine existing ones—log to WandB and Langfuse simultaneously, or implement custom metrics collectors.

  • Extensible Benchmark Suite: Researchers can implement new benchmarks by inheriting from base classes and focusing on task construction and evaluation logic, while leveraging built-in evaluation infrastructure.

  • Abstract Base Classes: The library provides abstract base classes for core components (Task, Benchmark, Environment, Evaluator) with optional default implementations, giving users flexibility to customize while maintaining interface consistency.
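The callback-based lifecycle described above can be sketched in isolation. The class and hook names below are illustrative (only the hook names on_run_start, on_task_start, and on_agent_step_end appear in the source; the rest is an assumed shape, not MASEval's actual API): a runner fans each lifecycle event out to every registered callback, so tracing or metrics backends plug in without modifying the core loop.

```python
class Callback:
    """Base class with no-op lifecycle hooks (illustrative, not MASEval's API)."""

    def on_task_start(self, task_id): ...
    def on_agent_step_end(self, task_id, step): ...
    def on_task_end(self, task_id, result): ...


class StepCounter(Callback):
    """Example backend: counts agent steps per task."""

    def __init__(self):
        self.counts = {}

    def on_agent_step_end(self, task_id, step):
        self.counts[task_id] = self.counts.get(task_id, 0) + 1


class Runner:
    """Drives one task and broadcasts each lifecycle event to all callbacks."""

    def __init__(self, callbacks):
        self.callbacks = callbacks

    def _emit(self, hook, *args):
        for cb in self.callbacks:
            getattr(cb, hook)(*args)

    def run_task(self, task_id, steps):
        self._emit("on_task_start", task_id)
        for step in steps:
            self._emit("on_agent_step_end", task_id, step)
        self._emit("on_task_end", task_id, "done")


counter = StepCounter()
Runner([counter]).run_task("t1", ["plan", "act", "answer"])  # counts 3 steps
```

Registering several callbacks at once is how one backend (say, a WandB logger) can run alongside another (a Langfuse tracer) without either knowing about the other.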

Install

The package is published on PyPI as maseval. To install the stable release for general use, run:

pip install maseval

If you want the optional integrations used by the examples (smolagents, langgraph, llamaindex, etc.), install the examples extras:

pip install "maseval[examples]"

Or install specific framework integrations:

# Smolagents
pip install "maseval[smolagents]"

# LangGraph
pip install "maseval[langgraph]"

# LlamaIndex
pip install "maseval[llamaindex]"

Or install benchmark-specific dependencies:

# MMLU (HuggingFace models)
pip install "maseval[mmlu]"

Example

Examples are available in the Documentation.
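As a flavor of the task/benchmark abstractions described under Core Principles, here is a minimal self-contained sketch. The class names and method signatures are hypothetical stand-ins, not MASEval's actual base classes: a task bundles its own instructions and success criterion, and a benchmark run is just the success rate over tasks.

```python
from abc import ABC, abstractmethod


class Task(ABC):
    """A self-contained evaluation unit (hypothetical shape, not MASEval's API)."""

    @abstractmethod
    def instructions(self) -> str: ...

    @abstractmethod
    def evaluate(self, output: str) -> bool: ...


class ArithmeticTask(Task):
    """Toy task whose success criterion is programmatic output validation."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def instructions(self):
        return f"Compute {self.a} + {self.b}."

    def evaluate(self, output):
        return output.strip() == str(self.a + self.b)


def run_benchmark(agent, tasks):
    """Run each task through the agent and report the success rate."""
    results = [t.evaluate(agent(t.instructions())) for t in tasks]
    return sum(results) / len(results)


def toy_agent(prompt):
    # A toy "agent": pulls the integers out of the prompt and adds them.
    nums = [int(s) for s in prompt.replace(".", "").split() if s.lstrip("-").isdigit()]
    return str(sum(nums))


accuracy = run_benchmark(toy_agent, [ArithmeticTask(2, 3), ArithmeticTask(10, 5)])
```

A real benchmark would also carry environment state and teardown logic, but the division of labor is the same: tasks own their success criteria, and the runner stays generic.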

Contribute

We welcome contributions. Please read CONTRIBUTING.md to learn more!

Benchmarks

This library includes implementations for several benchmarks to evaluate a variety of multi-agent scenarios. Each benchmark is designed to test specific collaboration and problem-solving skills.

➡️ See the documentation for a full list and description of all available benchmarks, including licenses.

Citation

If you use MASEval, please consider citing it:

@misc{emde2026maseval,
      title={MASEval: Extending Multi-Agent Evaluation from Models to Systems},
      author={Cornelius Emde and Alexander Rubinstein and Anmol Goel and Ahmed Heakl and Sangdoo Yun and Seong Joon Oh and Martin Gubri},
      year={2026},
      eprint={2603.08835},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2603.08835},
      note={Alexander Rubinstein, Anmol Goel, and Ahmed Heakl contributed equally.},
}

