🍐Pearmut

Platform for Evaluation and Reviewing of Multilingual Tasks: Evaluate model outputs for translation and NLP tasks with support for multimodal data (text, video, audio, images) and multiple annotation protocols (DA, ESA, ESAAI, MQM, and more!).
Table of Contents
- Quick Start
- Campaign Configuration
- Advanced Features
- Campaign Management
- CLI Commands
- Terminology
- Development
- Citation
- Changelog
Quick Start
Install and run locally without cloning:
pip install pearmut
# Download example campaigns
wget https://raw.githubusercontent.com/zouharvi/pearmut/refs/heads/main/examples/esa.json
wget https://raw.githubusercontent.com/zouharvi/pearmut/refs/heads/main/examples/da.json
# Load and start
pearmut add esa.json da.json
pearmut run
Campaign Configuration
Basic Structure
Campaigns are defined in JSON files (see examples/). The simplest configuration uses task-based assignment where each user has pre-defined tasks:
{
"info": {
"assignment": "task-based",
# DA: scores
# ESA: error spans and scores
# MQM: error spans, categories, and scores
"protocol": "ESA",
},
"campaign_id": "wmt25_#_en-cs_CZ",
"data": [
# data for first task/user
[
[
# each evaluation item is a document
{
"instructions": "Evaluate translation from en to cs_CZ", # message to show to users above the first item
"src": "This will be the year that Guinness loses its cool. Cheers to that!",
"tgt": {"modelA": "Nevím přesně, kdy jsem to poprvé zaznamenal. Možná to bylo ve chvíli, ..."},
"item_id": "first item in first document"
},
{
"src": "I'm not sure I can remember exactly when I sensed it. Maybe it was when some...",
"tgt": {"modelA": "Tohle bude rok, kdy Guinness přijde o svůj „cool“ faktor. Na zdraví!"},
"item_id": "second item in first document"
}
...
],
# more documents
...
],
# data for second task/user
[
...
],
# arbitrary number of users (each corresponds to a single URL to be shared)
]
}
Each item must have tgt: a dictionary from model names to output strings, even when evaluating a single model.
Optionally, you can also include src (source string) and/or ref (reference string).
If neither src nor ref is provided, only the model outputs are displayed.
For full Pearmut functionality (e.g. automatic statistical analysis), add item_id as well.
Any other keys you add are simply stored in the logs.
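The nesting above (tasks, then documents, then items) can be easy to get wrong by hand. As an illustrative sketch (not part of Pearmut itself; the helper name and campaign values are made up), here is the same structure built in Python:

```python
import json

# Illustrative sketch: building the campaign structure described above
# in Python instead of writing the JSON by hand.

def make_item(src, outputs, item_id):
    """One evaluation item: tgt is always a dict of model name -> output,
    even when only a single model is evaluated."""
    return {"src": src, "tgt": outputs, "item_id": item_id}

campaign = {
    "campaign_id": "demo_en-cs",
    "info": {"assignment": "task-based", "protocol": "ESA"},
    # data = list of tasks; task = list of documents; document = list of items
    "data": [
        [  # task for the first user (one shareable URL)
            [  # a single document with two items
                make_item("Good morning.", {"modelA": "Dobré ráno."}, "doc1#0"),
                make_item("See you soon.", {"modelA": "Brzy na viděnou."}, "doc1#1"),
            ]
        ]
    ],
}

serialized = json.dumps(campaign, ensure_ascii=False, indent=2)
```

Save serialized to a file and load it with pearmut add as shown below.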
Load campaigns and start the server:
pearmut add my_campaign.json # Use -o/--overwrite to replace existing
pearmut run
Assignment Types
- task-based: Each user has predefined items
- single-stream: All users draw from a shared pool (random assignment)
- dynamic: Items are dynamically assigned based on current model performance (see Dynamic Assignment)
Advanced Features
Shuffling Model Translations
By default, Pearmut randomly shuffles the order in which models are shown for each item to avoid positional bias.
The shuffle parameter in campaign info controls this behavior:
{
"info": {
"assignment": "task-based",
"protocol": "ESA",
"shuffle": true # Default: true. Set to false to disable shuffling.
},
"campaign_id": "my_campaign",
"data": [...]
}
Custom Score Sliders
For multi-dimensional evaluation tasks (e.g., assessing fluency on a Likert scale), you can define custom sliders with specific ranges and steps:
{
"info": {
"assignment": "task-based",
"protocol": "ESA",
"sliders": [
{"name": "Fluency", "min": 0, "max": 5, "step": 1},
{"name": "Adequacy", "min": 0, "max": 100, "step": 1}
]
},
"campaign_id": "my_campaign",
"data": [...]
}
When sliders is specified, only the custom sliders are shown. Each slider must have name, min, max, and step properties. All sliders must be answered before proceeding.
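A quick sanity check on slider definitions can catch configuration mistakes before loading the campaign. The helper below is an assumption for illustration, not a Pearmut API; it verifies the required properties and that step evenly divides the range:

```python
# Illustrative sanity check (not a Pearmut API): verify each custom slider
# has the required keys and that "step" evenly divides the min-max range.

REQUIRED = {"name", "min", "max", "step"}

def check_sliders(sliders):
    for s in sliders:
        missing = REQUIRED - s.keys()
        if missing:
            raise ValueError(f"slider {s.get('name', '?')} missing {sorted(missing)}")
        if s["max"] <= s["min"] or s["step"] <= 0:
            raise ValueError(f"slider {s['name']} has an empty or inverted range")
        if (s["max"] - s["min"]) % s["step"] != 0:
            raise ValueError(f"slider {s['name']}: step does not divide the range")
    return True

check_sliders([
    {"name": "Fluency", "min": 0, "max": 5, "step": 1},
    {"name": "Adequacy", "min": 0, "max": 100, "step": 1},
])
```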
Custom Instructions
Set campaign-level instructions using the instructions field in info (supports HTML).
Instructions default to protocol-specific ones (DA: scoring, ESA: error spans + scoring, MQM: error spans + categories + scoring).
{
"info": {
"protocol": "DA",
"instructions": "Rate translation quality on a 0-100 scale.<br>Pay special attention to document-level phenomena."
}
}
Pre-filled Error Spans (ESAAI)
Include error_spans to pre-fill annotations that users can review, modify, or delete:
{
"src": "The quick brown fox jumps over the lazy dog.",
"tgt": {"modelA": "Rychlá hnědá liška skáče přes líného psa."},
"error_spans": {
"modelA": [
{
"start_i": 0, # character index start (inclusive)
"end_i": 5, # character index end (inclusive)
"severity": "minor", # "minor", "major", "neutral", or null
"category": null # MQM category string or null
},
{
"start_i": 27,
"end_i": 32,
"severity": "major",
"category": null
}
]
}
}
The error_spans field is a 2D array (one per candidate). See examples/esaai_prefilled.json.
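To make the inclusive start_i/end_i indexing concrete, the sketch below (helper name is illustrative, not part of Pearmut) extracts the text covered by each span:

```python
# Sketch of how the inclusive start_i/end_i character indices address
# the target text; the helper name is illustrative, not a Pearmut API.

def extract_spans(text, spans):
    """Return the substring covered by each span (end_i is inclusive)."""
    return [text[s["start_i"]:s["end_i"] + 1] for s in spans]

tgt = "Rychlá hnědá liška skáče přes líného psa."
spans = [{"start_i": 0, "end_i": 5, "severity": "minor", "category": None}]
print(extract_spans(tgt, spans))  # ['Rychlá']
```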
Tutorial and Attention Checks
Add validation rules for tutorials or attention checks:
{
"src": "The quick brown fox jumps.",
"tgt": {"modelA": "Rychlá hnědá liška skáče."},
"validation": {
"modelA": [
{
"warning": "Please set score between 70-80.", # shown on failure (omit for silent logging)
"score": [70, 80], # required score range [min, max]
"error_spans": [{"start_i": [0, 2], "end_i": [4, 8], "severity": "minor"}], # expected spans
"allow_skip": true # show "skip tutorial" button
}
]
}
}
Types:
- Tutorial: Include allow_skip: true and warning to let users skip after feedback
- Loud attention checks: Include warning without allow_skip to force retry
- Silent attention checks: Omit warning to log failures without notification (quality control)
The validation field is an array (one per candidate). The dashboard shows ✅/❌ based on validation_threshold in info (an integer for the maximum failed count, a float in [0,1) for the maximum failed proportion; default 0).
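The following sketch illustrates the semantics described above, assuming a simple interpretation of the score-range rule and the threshold; the function names are assumptions, not Pearmut's internal API:

```python
# Illustrative sketch of the validation semantics described above;
# these function names are assumptions, not Pearmut's internal API.

def score_ok(score, rule):
    """Check a score against a [min, max] range rule."""
    lo, hi = rule["score"]
    return lo <= score <= hi

def passes_threshold(n_failed, n_checks, threshold=0):
    """Integer threshold = max allowed failed count;
    float in [0, 1) = max allowed failed proportion (assumed inclusive)."""
    if isinstance(threshold, float):
        return n_checks == 0 or n_failed / n_checks <= threshold
    return n_failed <= threshold

assert score_ok(75, {"score": [70, 80]})
assert not score_ok(50, {"score": [70, 80]})
assert passes_threshold(0, 5)        # default: no failures allowed
assert not passes_threshold(1, 5)
assert passes_threshold(1, 5, 0.25)  # up to 25% of checks may fail
```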
Score comparison: Use score_greaterthan to ensure one candidate scores higher than another:
{
"src": "AI transforms industries.",
"tgt": {"A": "UI transformuje průmysly.", "B": "Umělá inteligence mění obory."},
"validation": {
"A": [
{"warning": "A has error, score 20-40.", "score": [20, 40]}
],
"B": [
{"warning": "B is correct and must score higher than A.", "score": [70, 90], "score_greaterthan": "A"}
]
}
}
The score_greaterthan field specifies the candidate (by its key in tgt) that must have a lower score than the current candidate.
See examples/tutorial/esa_deen.json for a mock campaign with a fully prepared ESA tutorial.
To use it, simply extract the data attribute and prefix it to each task in your campaign.
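The prefixing step above can be sketched in Python as follows; the inline dicts stand in for the parsed JSON files (load them with json.load in practice), and their contents are placeholders:

```python
# Sketch of the step described above: extract the "data" attribute of
# the tutorial campaign and prefix its documents to each of your tasks.
# The inline dicts stand in for the parsed JSON files.

tutorial = {  # e.g. parsed examples/tutorial/esa_deen.json
    "data": [[[{"src": "tut src", "tgt": {"modelA": "tut out"}}]]]
}
campaign = {
    "campaign_id": "my_campaign",
    "data": [
        [[{"src": "s1", "tgt": {"modelA": "o1"}}]],  # task 1
        [[{"src": "s2", "tgt": {"modelA": "o2"}}]],  # task 2
    ],
}

tutorial_docs = tutorial["data"][0]  # the tutorial file defines a single task
campaign["data"] = [tutorial_docs + task for task in campaign["data"]]
```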
Single-stream Assignment
All annotators draw from a shared pool with random assignment:
{
"campaign_id": "my campaign 6",
"info": {
"assignment": "single-stream",
# DA: scores
# MQM: error spans and categories
# ESA: error spans and scores
"protocol": "ESA",
"users": 50, # number of annotators (can also be a list, see below)
},
"data": [...], # list of all items (shared among all annotators)
}
Dynamic Assignment
The dynamic assignment type intelligently selects items based on current model performance to focus annotation effort on top-performing models using contrastive comparisons.
All items must contain outputs from all models for this assignment type to work properly.
{
"campaign_id": "my dynamic campaign",
"info": {
"assignment": "dynamic",
"protocol": "ESA",
"users": 10, # number of annotators
"dynamic_top": 3, # how many top models to consider (required)
"dynamic_contrastive_models": 2, # how many models to compare per item (optional, default: 1)
"dynamic_first": 5, # annotations per model before dynamic kicks in (optional, default: 5)
"dynamic_backoff": 0.1, # probability of uniform sampling (optional, default: 0)
},
"data": [...], # list of all items (shared among all annotators)
}
How it works:
- Initial phase: Each model gets dynamic_first annotations with fully random contrastive evaluation
- Dynamic phase: After the initial phase, the top dynamic_top models (by average score) are identified
- Contrastive evaluation: From the top models, dynamic_contrastive_models models are randomly selected for each item
- Item prioritization: Items with the fewest annotations for the selected models are prioritized
- Backoff: With probability dynamic_backoff, uniform random selection is used instead to maintain exploration
This approach efficiently focuses annotation resources on distinguishing between the best-performing models while ensuring all models get adequate baseline coverage. The contrastive evaluation allows for direct comparison of multiple models simultaneously. For an example, see examples/dynamic.json.
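The phases above can be sketched as follows. This is assumed logic for illustration only, not Pearmut's actual implementation, and the function and parameter names are invented:

```python
import random

# Illustrative sketch of the dynamic-assignment phases described above
# (assumed logic, not Pearmut's actual implementation).

def pick_models(scores, counts, top, contrastive, first, backoff, rng):
    """scores: model -> list of collected scores; counts: model -> #annotations."""
    models = list(scores)
    # initial phase: while any model is below `first` annotations,
    # fall back to fully random selection (also used for backoff)
    cold = [m for m in models if counts[m] < first]
    if cold or rng.random() < backoff:
        return rng.sample(models, k=min(contrastive, len(models)))
    # dynamic phase: restrict selection to the top models by average score
    ranked = sorted(models, key=lambda m: sum(scores[m]) / len(scores[m]), reverse=True)
    return rng.sample(ranked[:top], k=min(contrastive, top))

rng = random.Random(0)
scores = {"A": [90, 85], "B": [40, 45], "C": [88, 92]}
counts = {m: len(s) for m, s in scores.items()}
chosen = pick_models(scores, counts, top=2, contrastive=2, first=2, backoff=0.0, rng=rng)
# with no backoff, the chosen models come from the top-2 by mean score: A and C
```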
Pre-defined User IDs and Tokens
The users field accepts:
- A number (e.g., 50): generate random user IDs
- A list of strings (e.g., ["alice", "bob"]): use specific user IDs
- A list of dictionaries: specify custom tokens:
{
"info": {
...
"users": [
{"user_id": "alice", "token_pass": "alice_done", "token_fail": "alice_fail"},
{"user_id": "bob", "token_pass": "bob_done"} # missing tokens are auto-generated
],
},
...
}
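If you pre-generate tokens yourself, random ones can be produced with the standard library. The helper below is an assumption for illustration (not a Pearmut API); it builds the list-of-dictionaries form of users:

```python
import secrets

# Illustrative helper (an assumption, not a Pearmut API): generate the
# list-of-dictionaries form of "users" with random completion tokens.

def make_users(user_ids):
    return [
        {
            "user_id": uid,
            "token_pass": secrets.token_hex(8),  # 16 hex characters
            "token_fail": secrets.token_hex(8),
        }
        for uid in user_ids
    ]

users = make_users(["alice", "bob"])
```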
Multimodal Annotations
Support for HTML-compatible elements (YouTube embeds, <video> tags, images). Ensure elements are pre-styled. See examples/multimodal.json.
Hosting Assets
Host local assets (audio, images, videos) using the assets key:
{
"campaign_id": "my_campaign",
"info": {
"assets": {
"source": "videos", # Source directory
"destination": "assets/my_videos" # Mount path (must start with "assets/")
}
},
"data": [ ... ]
}
Files from videos/ become accessible at localhost:8001/assets/my_videos/. Pearmut creates a symlink, so the source directory must exist throughout annotation. Destination paths must be unique across campaigns.
CLI Commands
pearmut add <file(s)>: Add campaign JSON files (supports wildcards)
- -o/--overwrite: Replace existing campaigns with the same ID
- --server <url>: Server URL prefix (default: http://localhost:8001)
pearmut run: Start the server
- --port <port>: Server port (default: 8001)
- --server <url>: Server URL prefix
pearmut purge [campaign]: Remove campaign data
- Without arguments: purge all campaigns
- With a campaign name: purge only that campaign
Campaign Management
The management link (shown when adding campaigns or running the server) provides:
- Annotator progress overview
- Access to annotation links
- Task progress reset (data preserved)
- Download progress and annotations
Completion tokens are shown at the end of annotation for verification (download the correct tokens from the dashboard). If a user fails quality control, the fail token is shown instead.
The dashboard also tries to show model rankings based on the model names in the tgt dictionaries.
Custom Completion Messages
Customize the goodbye message shown to users when they complete all annotations using the instructions_goodbye field in campaign info. Supports arbitrary HTML for styling and formatting with variable replacement: ${TOKEN} (completion token) and ${USER_ID} (user ID). Default: "If someone asks you for a token of completion, show them: ${TOKEN}".
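Pearmut performs this substitution server-side; the sketch below merely illustrates the ${TOKEN} / ${USER_ID} placeholder semantics using Python's string.Template (the message text is an example):

```python
from string import Template

# Illustration of the ${TOKEN} / ${USER_ID} placeholder semantics;
# Pearmut itself performs this substitution server-side.

goodbye = Template(
    "Thank you, ${USER_ID}! If someone asks you for a token of completion, "
    "show them: ${TOKEN}"
)
message = goodbye.substitute(USER_ID="alice", TOKEN="alice_done")
```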
Terminology
- Campaign: An annotation project that contains configuration, data, and user assignments. Each campaign has a unique identifier and is defined in a JSON file.
- Campaign File: A JSON file that defines the campaign configuration, including the campaign ID, assignment type, protocol settings, and annotation data.
- Campaign ID: A unique identifier for a campaign (e.g., "wmt25_#_en-cs_CZ"). Used to reference and manage specific campaigns. Typically a campaign is created for a specific language and domain.
- Task: A unit of work assigned to a user. In task-based assignment, each task consists of a predefined set of items for a specific user.
- Item: A single annotation unit within a task. For translation evaluation, an item typically represents a document (source text and target translation). Items can contain text, images, audio, or video.
- Document: A collection of one or more segments (sentence pairs or text units) that are evaluated together as a single item.
- User / Annotator: A person who performs annotations in a campaign. Each user is identified by a unique user ID and accesses the campaign through a unique URL.
- Attention Check: A validation item with known correct answers used to ensure annotator quality. Can be:
- Loud: Shows warning message and forces retry on failure
- Silent: Logs failures without notifying the user (for quality control analysis)
- Token: A completion code shown to users when they finish their annotations. Tokens verify completion and whether the user passed quality control checks:
  - Pass Token (token_pass): Shown when the user meets validation thresholds
  - Fail Token (token_fail): Shown when the user fails to meet validation requirements
- Tutorial: An instructional validation item that teaches users how to annotate. Includes allow_skip: true to let users skip if they have seen it before.
- Validation: Quality control rules attached to items that check whether annotations match expected criteria (score ranges, error span locations, etc.). Used for tutorials and attention checks.
- Model: The system or model that generated the output being evaluated (e.g., "GPT-4", "Claude"). Used for tracking and ranking model performance.
- Dashboard: The management interface that shows campaign progress, annotator statistics, and access links, and allows downloading annotations. Accessed via a special management URL with token authentication.
- Protocol: The annotation scheme defining what data is collected:
  - Score: Numeric quality rating (0-100)
  - Error Spans: Text highlights marking errors with severity (minor, major)
  - Error Categories: MQM taxonomy labels for errors
- Template: The annotation interface type. The basic template supports comparing multiple outputs simultaneously.
- Assignment: The method for distributing items to users:
  - Task-based: Each user has predefined items
  - Single-stream: Users draw from a shared pool with random assignment
  - Dynamic: Items are intelligently assigned based on model performance to focus on top models
Development
The server responds to data-only requests from the frontend (no template coupling). The frontend is served from the pre-built static/ directory on install.
Local development:
cd pearmut
# Frontend (separate terminal, recompiles on change)
npm install --prefix web/
npm run build --prefix web/
# optionally keep running indefinitely to auto-rebuild
npm run watch --prefix web/
# Install as editable
pip3 install -e .
# Load examples
pearmut add examples/wmt25_#_en-cs_CZ.json examples/wmt25_#_cs-de_DE.json
pearmut run
Creating new protocols:
- Add HTML and TS files to web/src
- Add a build rule to webpack.config.js
- Reference as info->template in the campaign JSON
See web/src/basic.ts for an example.
Deployment
Run Pearmut on a public server, or run it locally and tunnel the local port to a public IP/domain.
Citation
If you use this work in your paper, please cite it as follows.
@misc{zouhar2026pearmut,
author = {Zouhar, Vilém},
title = {Pearmut: Human Evaluation of Translation Made Trivial},
year = {2026}
}
Contributions are welcome! Please reach out to Vilém Zouhar.
Changelog
- v1.0.1
- Support RTL languages
- Add boxes for references
- Add custom score sliders for multi-dimensional evaluation
- Make instructions customizable and protocol-dependent
- Support custom sliders
- Purge/reset whole tasks from dashboard
- Fix resetting individual users in single-stream/dynamic
- Fix notification stacking
- Add campaigns from dashboard
- v0.3.3
- Rename doc_id to item_id
- Add Typst, LaTeX, and PDF export for model ranking tables (hidden by default)
- Add dynamic assignment type with contrastive model comparison
- Add instructions_goodbye field with variable substitution
- Add visual anchors at 33% and 66% on sliders
- Add German→English ESA tutorial with attention checks
- Validate document model consistency before shuffle
- Fix UI block on any interaction
- v0.3.2
- Revert seeding of user IDs
- Set ESA (Error Span Annotation) as default
- Update server IP address configuration
- Show approximate alignment by default
- Unify pointwise and listwise interfaces into basic
- Refactor protocol configuration (breaking change)
- v0.2.11
- Add comment field in settings panel
- Add score_gt validation for listwise comparisons
- Add Content-Disposition headers for proper download filenames
- Add model results display to dashboard with rankings
- Add campaign file structure validation
- Purge command now unlinks assets
- v0.2.6
- Add frozen annotation links feature for view-only mode
- Add word-level annotation mode toggle for error spans
- Add [missing] token support
- Improve frontend speed and cleanup toolboxes on item load
- Host assets via symlinks
- Add validation threshold for success/fail tokens
- Implement reset masking for annotations
- Allow pre-defined user IDs and tokens in campaign data
- v0.1.1
- Set server defaults and add VM launch scripts
- Add warning dialog when navigating away with unsaved work
- Add tutorial validation support for pointwise and listwise
- Add ability to preview existing annotations via progress bar
- Add support for ESAAI pre-filled error_spans
- Rename pairwise to listwise and update layout
- Implement single-stream assignment type
- v0.0.3
- Support multimodal inputs and outputs
- Add dashboard
- Implement ESA (Error Span Annotation) and MQM support