# vla-evaluation-harness

*Evaluation harness for Vision-Language-Action models.*

One framework to evaluate any VLA model on any robot simulation benchmark.

Support matrix: Benchmarks · Models (official) · Models (dexbotic) · Models (starVLA).
## Why vla-evaluation-harness?

| Feature | Description |
|---|---|
| Batch Parallel Evaluation | Episode sharding + batched GPU inference → 47× throughput (2,000 LIBERO episodes in 18 min on 1× H100). Details |
| Zero Setup | Benchmarks in Docker, model servers as single-file uv scripts — no dependency conflicts. |
| AI-Assisted Integration | Built-in Claude Code skills for adding benchmarks and model servers — scaffold new integrations in minutes, not hours. |
## Motivation
VLA models are evaluated on LIBERO, CALVIN, SimplerEnv, ManiSkill, and others — but each benchmark has its own dependencies, observation format, and evaluation protocol. In practice, every research team ends up maintaining private eval forks per benchmark. Results diverge. Bug fixes don't propagate. No one tests under real-time conditions where the environment keeps moving during inference.
With vla-evaluation-harness, you integrate each model once and each benchmark once, and the full cross-evaluation matrix fills in automatically.
How: our abstraction layer fully decouples models from benchmarks.
- Benchmarks run inside Docker — no dependency hell, exact reproducibility.
- Model servers are standalone uv scripts with inline dependency declarations — zero manual setup.
See Architecture for how the pieces connect.
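The "standalone uv scripts with inline dependency declarations" pattern refers to PEP 723 inline script metadata, which uv reads to install dependencies on launch. Below is a minimal sketch of what such a single-file model server might look like; the `predict` hook, its signature, and the action shape are illustrative assumptions, not the harness's actual API:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["numpy"]
# ///
# Minimal single-file model server sketch. Running `uv run server.py` reads the
# inline metadata block above and installs the dependencies automatically.
import numpy as np


def predict(observation: np.ndarray) -> np.ndarray:
    """Hypothetical predict hook: map one image observation to a 7-DoF action."""
    return np.zeros(7, dtype=np.float32)  # placeholder policy


if __name__ == "__main__":
    action = predict(np.zeros((224, 224, 3), dtype=np.uint8))
    print(action.shape)
```

The inline `# /// script ... # ///` block is the whole "zero manual setup" trick: the script carries its own dependency list, so no per-model virtualenv has to be maintained.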
## Installation
```shell
pip install vla-eval
```
Or from source:
```shell
git clone https://github.com/allenai/vla-evaluation-harness.git
cd vla-evaluation-harness
uv sync --python 3.11 --all-extras --dev
```
## Quick Start
Two terminals: one for the model server (GPU), one for the benchmark client.
```shell
# Terminal 1 — model server (runs on host with GPU)
vla-eval serve --config configs/model_servers/dexbotic_cogact_libero.yaml

# Terminal 2 — run evaluation (benchmark runs in Docker by default)
vla-eval run --config configs/libero_smoke_test.yaml
```
Results are saved to results/ as JSON. The benchmark runs inside Docker by default — pass --no-docker for local development.
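To aggregate the saved JSON into a single success rate, a few lines of Python suffice. The schema assumed here (each file holding an `"episodes"` list of records with a boolean `"success"` field) is an illustrative guess, not the harness's documented output format — adjust the keys to match the actual files:

```python
import json
from pathlib import Path


def success_rate(results_dir: str) -> float:
    """Average episode success across all JSON result files in a directory.

    Assumes each file looks like {"episodes": [{"success": bool, ...}, ...]};
    the real schema may differ.
    """
    episodes = []
    for path in Path(results_dir).glob("*.json"):
        episodes.extend(json.loads(path.read_text())["episodes"])
    return sum(e["success"] for e in episodes) / len(episodes)
```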
For a full evaluation (10 tasks × 50 episodes):

```shell
vla-eval run --config configs/libero_spatial.yaml
```
See Reproduction Reports for verified scores and per-model details.
Need faster runs? See Batch Parallel Evaluation — 2,000 LIBERO episodes in ~18 min (47× vs sequential).
## Batch Parallel Evaluation
A full evaluation takes hours sequentially. Two layers of parallelism bring this down to minutes:
Episode sharding splits (task, episode) pairs across N independent processes (RFC-0006). Each shard connects to the same model server, where a BatchPredictModelServer batches their inference requests into a single forward pass. The two axes multiply together.
### Episode Sharding (environment parallelism)
```shell
# Option A: use the helper script (launches all shards + auto-merges)
./scripts/run_sharded.sh -c configs/libero_spatial.yaml -n 50

# Option B: manual launch
vla-eval run -c configs/libero_spatial.yaml --shard-id 0 --num-shards 4 &
vla-eval run -c configs/libero_spatial.yaml --shard-id 1 --num-shards 4 &
# ... (each shard is a separate process)
wait
vla-eval merge -c configs/libero_spatial.yaml -o results/libero_spatial.json
```
Each shard gets a deterministic slice via round-robin. Results merge with episode-level deduplication — if a shard fails, re-run only that shard.
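Round-robin slicing is what makes the shards coordination-free: every shard computes the same global ordering and takes every N-th pair. A sketch of the idea (function and argument names are illustrative, not the harness's internals):

```python
from itertools import product


def shard_episodes(num_tasks: int, episodes_per_task: int,
                   shard_id: int, num_shards: int):
    """Deterministic round-robin slice of all (task, episode) pairs.

    Each shard builds the identical global list, then keeps every
    num_shards-th pair starting at its own offset — no coordination needed,
    and a failed shard can be re-run in isolation.
    """
    pairs = list(product(range(num_tasks), range(episodes_per_task)))
    return pairs[shard_id::num_shards]


# 10 tasks × 50 episodes split 4 ways: disjoint 125-pair slices.
shards = [shard_episodes(10, 50, i, 4) for i in range(4)]
```

Because the slices are disjoint and their union is the full set, merging with episode-level deduplication is safe even if a shard is re-run.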
### Batch Model Server (GPU parallelism)
Enable batching in the model server config by setting max_batch_size > 1:
```yaml
args:
  max_batch_size: 16   # max observations per GPU forward pass (>1 enables batching)
  max_wait_time: 0.05  # seconds to wait before dispatching a partial batch
```
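The dispatch policy these two knobs imply — send a full batch immediately, or a partial batch once `max_wait_time` elapses after the first request — can be modeled with a plain queue. This is an illustrative sketch of the behavior, not the harness's actual `BatchPredictModelServer` implementation:

```python
import queue
import time


def collect_batch(requests: "queue.Queue", max_batch_size: int,
                  max_wait_time: float) -> list:
    """Gather up to max_batch_size requests, waiting at most max_wait_time
    after the first request arrives before dispatching a partial batch."""
    batch = [requests.get()]  # block until at least one request exists
    deadline = time.monotonic() + max_wait_time
    while len(batch) < max_batch_size:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # timed out: dispatch whatever we have
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break  # queue drained within the wait window
    return batch
```

With many shards connected, batches fill almost instantly; with few shards, `max_wait_time` bounds the extra latency a lone request pays.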
### Tuning & Combined Effect
We tune parallelism with a demand/supply methodology: demand λ(N) measures environment throughput as a function of the number of shards N, and supply μ(B) measures model throughput as a function of batch size B. The chosen operating point satisfies λ(N) < 0.8 · μ(B) to prevent queue buildup.
Sharding and batching multiply together (DB-CogACT 7B, LIBERO Spatial, 1× H100-80GB):
| | Sequential | Batch Parallel (50 shards, B=16) |
|---|---|---|
| Wall-clock | ~14 h | ~18 min |
| Throughput | ~11 obs/s | ~486 obs/s |
2,000 episodes, 47× faster. The included benchmarking tools (experiments/bench_demand.py, experiments/bench_supply.py) measure λ and μ for any model + benchmark combination. See the Tuning Guide for worked examples and max_wait_time derivation.
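The 47× figure follows directly from the wall-clock numbers in the table above; a quick arithmetic check:

```python
# Wall-clock speedup from the reported numbers: ~14 h sequential vs ~18 min.
sequential_min = 14 * 60   # 840 minutes
parallel_min = 18
speedup = sequential_min / parallel_min  # 840 / 18 ≈ 46.7, reported as 47×
print(round(speedup))
```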
## Docker Images

All benchmark environments are packaged as standalone Docker images built on a common `base` image.
| Image | Size | Benchmark | Python | Base |
|---|---|---|---|---|
| `base` | 3.3 GB | — | 3.10 | nvidia/cuda:12.1.1-runtime-ubuntu22.04 |
| `rlbench` | 4.7 GB | RLBench | 3.8 | `base` |
| `simpler` | 4.9 GB | SimplerEnv | 3.10 | `base` |
| `libero` | 6.0 GB | LIBERO | 3.8 | `base` |
| `libero-pro` | 6.2 GB | LIBERO-Pro | 3.8 | `base` |
| `robocerebra` | 6.3 GB | RoboCerebra | 3.8 | `base` |
| `calvin` | 9.5 GB | CALVIN | 3.8 | `base` |
| `kinetix` | 9.5 GB | Kinetix | 3.11 | `base` |
| `maniskill2` | 9.8 GB | ManiSkill2 | 3.10 | `base` |
| `mikasa-robo` | 10.1 GB | MIKASA-Robo | 3.10 | `base` |
| `libero-mem` | 11.3 GB | LIBERO-Mem | 3.8 | `base` |
| `vlabench` | 17.7 GB | VLABench | 3.10 | `base` |
| `robotwin` | 28.6 GB | RoboTwin 2.0 | 3.10 | `base` |
| `robocasa` | 35.6 GB | RoboCasa | 3.11 | `base` |
Pull (recommended):

```shell
docker pull ghcr.io/allenai/vla-evaluation-harness/libero:latest
```

Build locally (see docker/build.sh):

```shell
docker/build.sh         # build all (base first, then benchmarks)
docker/build.sh libero  # build one
```
## Documentation
| Document | Description |
|---|---|
| Architecture | Component descriptions, protocol, episode flow, configuration |
| Contributing | Dev setup, adding benchmarks/models, PR workflow |
| Reproduction Reports | Per-model evaluation results and reproducibility verdicts |
| RFCs | Design proposals with rationale and status tracking |
| Design Philosophy | Freshness, Convenience, Layered Abstraction, Quality, Reproducibility, Openness |
## Contributing
See CONTRIBUTING.md for dev setup and PR workflow.
PRs for any 🔜 item in the support matrix are welcome.
## Citation
If you find this work useful, please cite:
```bibtex
@article{choi2026vlaeval,
  title={vla-eval: A Unified Evaluation Harness for Vision-Language-Action Models},
  author={Choi, Suhwan and Lee, Yunsung and Park, Yubeen and Kim, Chris Dongjoo and Krishna, Ranjay and Fox, Dieter and Yu, Youngjae},
  journal={arXiv preprint arXiv:2603.13966},
  year={2026}
}
```
## License
Apache 2.0