Agent Harness
A local-first, lightweight harness for AI agent evaluations.
Features
- Local-first: No Weave API key required. All logs are JSONL files you can inspect directly.
- Language-agnostic agents: Write agents in Python, Rust, Go, Node, bash - anything that can do stdin/stdout JSON.
- Automatic logging: Raw LLM API requests/responses are captured automatically - no manual logging code needed.
- Parallel execution: Run tasks in parallel with automatic retry for API errors (429, 5xx, timeouts).
- Layered grading: Exact → normalized → numeric → fuzzy matching, with LLM-as-judge support.
- Run metadata: Comprehensive `run.json` with token usage, costs, latencies, and custom agent metrics.
- Agent metrics: Track custom KPIs (steps, tool usage, etc.) that get aggregated across runs.
- Benchmark plugins: Built-in support for Arithmetic, GAIA, Terminal-Bench, AssistantBench, HLE, ARC-AGI, and BrowseComp.
- Unified architecture: Every benchmark declares an execution mode (DIRECT, INTERACTIVE, CODE_SUBMIT, etc.) with a consistent `grade(task, result, context)` interface.
- Container-graded tasks: Terminal-Bench integration with Docker-based task environments and automatic test-suite grading.
- CI pipeline: GitHub Actions runs unit tests on every PR to prevent regressions.
Quick Start
# Install with Poetry
cd agent-harness
poetry install
# Or install with pip
pip install -e .
# Set your API key
export ANTHROPIC_API_KEY="sk-..."
# or
export OPENAI_API_KEY="sk-..."
# Run the simple QA agent on the arithmetic benchmark
# Output goes to ./results/arithmetic/{run_id}/
harness run \
--agent agents/simple_qa_agent.py \
--benchmark arithmetic \
--num-tasks 10
# View results
harness view ./results/arithmetic
Agent Protocol
Agents communicate via stdin/stdout using JSON-RPC:
Harness → Agent (stdin):
{"jsonrpc": "2.0", "method": "run_task", "params": {"task_id": "abc", "task_data": {...}}, "id": 1}
Agent → Harness (stdout):
{"jsonrpc": "2.0", "result": {"task_id": "abc", "submission": "42", "metrics": {"steps": 3}}, "id": 1}
Agents can emit logs that get captured automatically:
{"jsonrpc": "2.0", "method": "log", "params": {"type": "thinking", "content": "..."}}
The optional metrics field in the response is used to report agent-specific KPIs.
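For reference, the full protocol can be handled with nothing more than a stdin/stdout loop. The sketch below shows a bare Python agent without the helper class; the hard-coded answer and the log message are placeholders:

```python
#!/usr/bin/env python3
# Minimal sketch of the raw protocol, without the Agent helper class.
import json
import sys

for line in sys.stdin:
    msg = json.loads(line)
    if msg.get("method") == "run_task":
        params = msg["params"]
        # Optional: emit a log notification that the harness captures automatically.
        print(json.dumps({
            "jsonrpc": "2.0",
            "method": "log",
            "params": {"type": "thinking", "content": "working on " + params["task_id"]},
        }), flush=True)
        # Respond with the submission (and optional metrics).
        print(json.dumps({
            "jsonrpc": "2.0",
            "id": msg["id"],
            "result": {
                "task_id": params["task_id"],
                "submission": "42",
                "metrics": {"steps": 1},
            },
        }), flush=True)
```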
### Python Agent Helper
For Python agents, use the `Agent` base class:
```python
from harness.agent import Agent
from harness.providers.base import Message

class MyAgent(Agent):
    def run_task(self, task_id: str, task_data: dict) -> str:
        self.increment("steps")  # Track metrics
        # LLM calls are automatically logged
        response = self.complete([
            Message(role="user", content=task_data["question"])
        ])
        self.record_tool_use("llm_call")  # Track tool usage
        return response.message.content

if __name__ == "__main__":
    MyAgent().run()
```
Any Language
Write agents in any language - the harness auto-detects how to run them based on file extension or project structure.
Supported Languages
| Language | File Extension | Directory Entry | Project Detection |
|---|---|---|---|
| Python | `.py` | `agent.py`, `__main__.py` | `.venv/` or `venv/` → uses venv python |
| Ruby | `.rb` | `agent.rb` | `Gemfile` → `bundle exec` |
| JavaScript | `.js`, `.mjs` | `agent.js` | `package.json` → `npm start` |
| TypeScript | `.ts` | `agent.ts` | - |
| Bash | `.sh` | `agent.sh` | - |
| Perl | `.pl` | `agent.pl` | - |
| PHP | `.php` | `agent.php` | - |
| Lua | `.lua` | `agent.lua` | - |
| Julia | `.jl` | `agent.jl` | - |
| R | `.r`, `.R` | - | - |
| Go | - | `main.go` | `go.mod` → `go run` |
| Rust | - | - | `Cargo.toml` → `cargo run` |
| Any compiled | - | `agent` (binary) | - |
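Conceptually, detection boils down to mapping a file extension or project marker to a launch command. The sketch below illustrates that idea; the exact mapping and fallbacks are assumptions for illustration, not the harness's actual resolver:

```python
from pathlib import Path

# Illustrative sketch of launcher selection by extension or project marker.
# The real harness also handles manifest.yaml, venv detection, and more.
INTERPRETERS = {
    ".py": ["python"],
    ".rb": ["ruby"],
    ".js": ["node"], ".mjs": ["node"],
    ".ts": ["npx", "ts-node"],  # assumption: one plausible TS runner
    ".sh": ["bash"],
    ".pl": ["perl"],
    ".php": ["php"],
    ".lua": ["lua"],
    ".jl": ["julia"],
    ".r": ["Rscript"],
}

def launch_command(agent_path: str) -> list[str]:
    p = Path(agent_path)
    if p.is_file():
        runner = INTERPRETERS.get(p.suffix.lower())
        return runner + [str(p)] if runner else [str(p)]  # bare binary: run directly
    if (p / "Cargo.toml").exists():
        return ["cargo", "run", "--manifest-path", str(p / "Cargo.toml"), "--"]
    if (p / "go.mod").exists():
        return ["go", "run", str(p)]
    return ["python", str(p / "agent.py")]
```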
Examples
Ruby agent (agent.rb):
#!/usr/bin/env ruby
require 'json'
ARGF.each_line do |line|
  msg = JSON.parse(line)
  if msg["method"] == "run_task"
    task_id = msg["params"]["task_id"]
    question = msg["params"]["task_data"]["question"]
    # Your agent logic here
    answer = "42"
    result = { jsonrpc: "2.0", id: msg["id"], result: { task_id: task_id, submission: answer } }
    puts result.to_json
  end
end
Node.js agent (agent.js):
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (line) => {
  const msg = JSON.parse(line);
  if (msg.method === 'run_task') {
    const { task_id, task_data } = msg.params;
    // Your agent logic here
    const answer = "42";
    console.log(JSON.stringify({
      jsonrpc: "2.0",
      id: msg.id,
      result: { task_id, submission: answer }
    }));
  }
});
Bash agent (agent.sh):
#!/bin/bash
read line
task_id=$(echo "$line" | jq -r '.params.task_id')
msg_id=$(echo "$line" | jq -r '.id')
echo "{\"jsonrpc\": \"2.0\", \"result\": {\"task_id\": \"$task_id\", \"submission\": \"hello\"}, \"id\": $msg_id}"
Rust agent (with Cargo.toml):
# Directory structure:
# my-rust-agent/
# Cargo.toml
# src/main.rs
harness run --agent ./my-rust-agent --benchmark arithmetic
# Runs: cargo run --manifest-path ./my-rust-agent/Cargo.toml --
Custom run command (manifest.yaml):
# my-agent/manifest.yaml
run: python -m my_custom_module
# or: ./my_binary --flag
# or: dotnet run
Then run:
harness run --agent ./my-agent --benchmark gaia-level1
Virtual Environment Auto-Detection
When running a directory-based Python agent, the harness automatically checks for a virtual environment inside the agent directory. If .venv/bin/python or venv/bin/python exists, it will be used instead of the system python.
This is critical for agents that depend on packages not installed in the harness environment (e.g. smolagents, pymupdf, etc.).
# Example: agent with its own venv
agents/hal_generalist/
├── agent.py
├── requirements.txt
└── .venv/               # ← harness will use .venv/bin/python automatically
    └── bin/python
# If the venv lives elsewhere, symlink it:
ln -s ../../.venv-hal agents/hal_generalist/.venv
# Now the harness runs: .venv/bin/python agents/hal_generalist/agent.py
# instead of: python agents/hal_generalist/agent.py
Important: Without this, agents that import packages only available in their venv will fail with `ModuleNotFoundError` at runtime. If your agent has custom dependencies, always ensure a `.venv` exists in the agent directory (even as a symlink).
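The detection itself is a simple existence check. A minimal sketch of the idea (function name assumed for illustration):

```python
from pathlib import Path

def resolve_python(agent_dir: Path) -> str:
    """Illustrative sketch: prefer a venv interpreter bundled with the agent."""
    for name in (".venv", "venv"):
        candidate = agent_dir / name / "bin" / "python"
        if candidate.exists():
            return str(candidate)
    return "python"  # fall back to the interpreter on PATH
```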
Benchmarks
Available Benchmarks
| Benchmark | Mode | Status | Requirements |
|---|---|---|---|
| `arithmetic` | DIRECT | Fully implemented | None |
| `gaia`, `gaia-level1..3` | DIRECT | Fully implemented | `datasets` |
| `terminal-bench`, `terminal-bench-core` | INTERACTIVE | Fully implemented | `terminal-bench` + Docker |
| `assistant-bench` | DIRECT | Stub (registered) | `datasets` |
| `hle` | DIRECT | Stub (registered) | `datasets` |
| `arc-agi`, `arc-agi-1`, `arc-agi-2` | DIRECT | Grading implemented | `datasets` |
| `browsecomp` | DIRECT | Stub (registered) | `datasets` |
# List available benchmarks
harness benchmarks
# Run GAIA Level 1
pip install datasets # If not installed
harness run --agent ./my_agent --benchmark gaia-level1 --output ./results
Terminal-Bench
Terminal-Bench evaluates agents on real-world terminal/DevOps tasks inside Docker containers. Unlike GAIA (where grading compares a string answer), Terminal-Bench grades by running a test suite inside the container after the agent finishes.
Each task provides:
- An instruction (what to accomplish)
- A Docker environment (`docker-compose.yaml`)
- A test suite (`run-tests.sh` + `tests/`) that checks the final container state
Requirements: pip install terminal-bench + a running Docker daemon.
# Install terminal-bench
pip install terminal-bench
# Run 3 easy tasks with the built-in terminal agent
harness run \
--agent agents/terminal_agent.py \
--benchmark terminal-bench \
--dataset-name terminal-bench-core \
--dataset-version 0.1.1 \
--difficulty easy \
--num-tasks 3 \
--model openrouter/deepseek/deepseek-chat-v3-0324 \
--task-timeout 600 \
--parallel 1
# Run from a local dataset directory
harness run \
--agent agents/terminal_agent.py \
--benchmark terminal-bench \
--dataset-path /path/to/local/tasks \
--model openrouter/deepseek/deepseek-chat-v3-0324
# Run the full core dataset
harness run \
--agent agents/terminal_agent.py \
--benchmark terminal-bench-core \
--model openrouter/deepseek/deepseek-chat-v3-0324 \
--task-timeout 3600 \
--parallel 2
Terminal-Bench CLI Options
| Option | Description |
|---|---|
| `--dataset-name` | Dataset name in the TB registry (e.g. `terminal-bench-core`) |
| `--dataset-version` | Dataset version tag (e.g. `0.1.1`) |
| `--dataset-path` | Local path to a dataset directory (overrides name/version) |
| `--difficulty` | Filter tasks by difficulty: `easy`, `medium`, `hard` |
Terminal Agent
The built-in terminal_agent.py drives an LLM-in-the-loop shell interaction:
- Starts a Docker container for each task
- Captures an initial environment snapshot (`pwd`, `ls -la`)
- Runs an LLM loop where the model issues one command at a time
- Runs the task's test suite inside the container
- Returns `PASS`/`FAIL` as the submission
Configurable via environment variables:
| Variable | Default | Description |
|---|---|---|
| `TB_MAX_ITERATIONS` | `30` | Maximum command iterations per task |
| `TB_COMMAND_TIMEOUT` | `120` | Per-command timeout in seconds |
| `TB_TEST_TIMEOUT` | `120` | Test suite timeout in seconds |
How Container Grading Works
Unlike string-comparison benchmarks, Terminal-Bench grading is state-based:
- The agent interacts with a Docker container via shell commands
- When done, the harness copies `run-tests.sh` + `tests/` into the container
- The test suite runs inside the container and checks the final state
- The pytest output is parsed to determine pass/fail
This means the agent's "submission" is the container state itself — the PASS/FAIL string is just a signal for the harness grading pipeline.
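As a rough sketch of that flow using the Docker CLI (the container name, in-container paths, and output check are illustrative assumptions, not the harness internals):

```python
import subprocess

def grade_container(container: str, task_dir: str) -> bool:
    """Illustrative sketch of state-based grading via the Docker CLI."""
    # Copy the task's test suite into the finished container.
    subprocess.run(["docker", "cp", f"{task_dir}/run-tests.sh", f"{container}:/tests/run-tests.sh"], check=True)
    subprocess.run(["docker", "cp", f"{task_dir}/tests", f"{container}:/tests/tests"], check=True)
    # Run the tests against the container's final state.
    proc = subprocess.run(
        ["docker", "exec", container, "bash", "/tests/run-tests.sh"],
        capture_output=True, text=True,
    )
    # Treat a clean exit with no reported failures as PASS (parsing rule assumed).
    return proc.returncode == 0 and "failed" not in proc.stdout.lower()
```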
Adding Benchmarks
Create a class that inherits from Benchmark:
from harness.benchmarks.base import Benchmark, ExecutionMode, ExecutionContext, GradeResult
from harness.protocol import Task

class MyBenchmark(Benchmark):
    name = "my-benchmark"
    description = "My custom benchmark"
    execution_mode = ExecutionMode.DIRECT

    def get_tasks(self) -> list[Task]:
        return [Task(id="t1", data={"question": "What is 2+2?"})]

    def grade(self, task: Task, result: any, context: ExecutionContext) -> GradeResult:
        expected = "4"
        actual = str(result).strip()
        passed = actual == expected
        return GradeResult(
            task_id=task.id,
            passed=passed,
            score=1.0 if passed else 0.0,
            expected=expected,
            actual=actual,
            method="exact" if passed else "none",
        )
Register it in benchmarks/registry.py and it's immediately available via the CLI.
Grading
The harness supports multiple grading modes controlled via the --grader option.
Available Graders
| Grader | Description |
|---|---|
| `exact` | Exact string match after trimming whitespace |
| `normalized` | Match after lowercase + whitespace normalization |
| `numeric` | Match numeric values with tolerance (±0.1%) |
| `contains` | Check if expected answer is contained in submission |
| `fuzzy` | Fuzzy string match (90% similarity threshold) |
| `strict` | Only exact or normalized match |
| `default` | Try all graders: exact → normalized → numeric → contains → fuzzy |
| `llm` | Use LLM-as-judge for semantic evaluation |
| `llm-fallback` | Try deterministic graders first, fall back to LLM |
Examples
# Strict grading - exact or normalized match only
harness run --agent ./agent.py -b gaia-level1 -o results -g strict
# LLM-as-judge for all grading
harness run --agent ./agent.py -b gaia-level1 -o results -g llm --model openrouter/anthropic/claude-sonnet-4-5-20250514
# Deterministic first, then LLM fallback (recommended)
harness run --agent ./agent.py -b gaia-level1 -o results -g llm-fallback --grader-model openrouter/anthropic/claude-sonnet-4-5-20250514
Grading Behavior
The default grading pipeline tries matchers from strictest to most lenient:
- Exact match - "42" == "42"
- Normalized match - "The Answer" == "the answer"
- Numeric match - "2.500" ≈ "2.5" (within 0.1%)
- Contains match - "The answer is 42" contains "42"
- Fuzzy match - "colour" ≈ "color" (90% similar)
For LLM-as-judge (llm or llm-fallback), the harness asks the LLM to evaluate semantic equivalence.
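A minimal sketch of the default pipeline's logic, using the thresholds documented above (illustrative, not the harness's grader code):

```python
import difflib
import re

def layered_grade(expected: str, actual: str) -> tuple[bool, str]:
    """Try matchers from strictest to most lenient; return (passed, method)."""
    norm = lambda s: re.sub(r"\s+", " ", s.strip().lower())
    if actual.strip() == expected.strip():
        return True, "exact"
    if norm(actual) == norm(expected):
        return True, "normalized"
    try:
        e, a = float(expected), float(actual)
        if abs(a - e) <= abs(e) * 0.001:  # within 0.1%
            return True, "numeric"
    except ValueError:
        pass
    if norm(expected) in norm(actual):
        return True, "contains"
    if difflib.SequenceMatcher(None, norm(expected), norm(actual)).ratio() >= 0.9:
        return True, "fuzzy"
    return False, "none"
```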
Logging
All logs are JSONL files with raw API request/response data:
{"timestamp": 1234567890.123, "type": "completion", "provider": "litellm/anthropic", "request": {"model": "claude-sonnet-4-5-20250514", "messages": [...]}, "response": {"id": "msg_...", "content": [...]}, "latency_ms": 1523}
View logs with any JSON tool:
cat ./results/trace_task1.jsonl | jq .
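Because traces are plain JSONL, they are also easy to post-process in a few lines of Python. For example, summing completion latency across a trace (illustrative script; field names taken from the log example above):

```python
import json

total_ms = 0.0
with open("./results/trace_task1.jsonl") as f:
    for line in f:
        event = json.loads(line)
        if event.get("type") == "completion":
            total_ms += event.get("latency_ms", 0)
print(f"total completion latency: {total_ms:.0f} ms")
```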
Run Metadata
Every run produces a run.json with comprehensive metadata for analysis and database storage:
{
"run_id": "fb0df848",
"timestamp": "2026-02-04T22:38:44.683841Z",
"agent": "agents/metrics_agent.py",
"benchmark": "arithmetic",
"model": "openrouter/deepseek/deepseek-chat-v3-0324",
"grader": "default",
"git_commit": "64145b486f17",
"git_branch": "main",
"git_dirty": true,
"run_command": "harness run --agent ... -b arithmetic ...",
"num_tasks_run": 3,
"num_tasks_success": 3,
"num_tasks_failed": 0,
"successful_task_ids": ["arith_000", "arith_001", "arith_002"],
"failed_task_ids": [],
"score": 100.0,
"passed": 3,
"total_graded": 3,
"total_usage": {
"prompt_tokens": 88,
"completion_tokens": 5,
"total_tokens": 93,
"cached_tokens": 0,
"reasoning_tokens": 0
},
"total_cost_usd": 0.00001372,
"total_latency_ms": 10008.0,
"model_stats": { ... },
"task_stats": [ ... ],
"agent_metrics": { ... }
}
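Because every run writes run.json to a predictable location, comparing runs is a matter of globbing and reading JSON. An illustrative script (directory layout and field names taken from the examples in this README):

```python
import json
from pathlib import Path

rows = []
for run_file in Path("./results").glob("*/*/run.json"):
    meta = json.loads(run_file.read_text())
    rows.append((meta["run_id"], meta["benchmark"], meta["score"], meta["total_cost_usd"]))

# Print runs sorted by score, highest first.
for run_id, benchmark, score, cost in sorted(rows, key=lambda r: r[2], reverse=True):
    print(f"{run_id}  {benchmark:<20} score={score:5.1f}  cost=${cost:.5f}")
```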
Token Tracking
The harness captures extended token usage across providers:
| Field | Description |
|---|---|
| `prompt_tokens` | Input tokens |
| `completion_tokens` | Output tokens |
| `cached_tokens` | Cached input tokens (OpenAI, Anthropic) |
| `cache_creation_tokens` | Cache write tokens (Anthropic) |
| `reasoning_tokens` | Reasoning tokens (o1/o3, DeepSeek R1) |
| `audio_tokens` | Audio I/O tokens (OpenAI) |
Agent Metrics
Agents can report custom KPIs that get aggregated across all tasks.
In Your Agent
class MyAgent(Agent):
    def run_task(self, task_id: str, task_data: dict) -> str:
        # Counter metrics
        self.increment("steps")
        self.increment("tokens_used", 150)
        # Arbitrary values
        self.metric("confidence", 0.95)
        self.metric("sources", ["web", "memory"])
        # Tool tracking (counts + sequence)
        self.record_tool_use("web_search", query="weather NYC")
        self.record_tool_use("calculator")
        return answer
Aggregated Output
Metrics are aggregated in run.json:
"agent_metrics": {
"steps_total": 15,
"steps_avg": 5.0,
"steps_count": 3,
"tool_sequence_all": ["web_search", "calculator", "llm_call", ...],
"tool_counts_totals": {"web_search": 3, "calculator": 5, "llm_call": 10}
}
Aggregation Rules
| Type | Aggregation |
|---|---|
| Numeric | {name}_total, {name}_avg, {name}_count |
| List | {name}_all (concatenated) |
| Dict (counters) | {name}_totals (summed per key) |
| Other | {name}_values (unique values) |
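A minimal sketch of what these rules amount to (illustrative only, not the harness's aggregation code):

```python
from collections import Counter

def aggregate(per_task_metrics: list[dict]) -> dict:
    """Aggregate per-task metric dicts according to the rules table above."""
    out: dict = {}
    keys = {k for metrics in per_task_metrics for k in metrics}
    for key in sorted(keys):
        values = [m[key] for m in per_task_metrics if key in m]
        if all(isinstance(v, (int, float)) for v in values):
            out[f"{key}_total"] = sum(values)
            out[f"{key}_avg"] = sum(values) / len(values)
            out[f"{key}_count"] = len(values)
        elif all(isinstance(v, list) for v in values):
            out[f"{key}_all"] = [item for v in values for item in v]
        elif all(isinstance(v, dict) for v in values):
            totals: Counter = Counter()
            for v in values:
                totals.update(v)
            out[f"{key}_totals"] = dict(totals)
        else:
            out[f"{key}_values"] = sorted({str(v) for v in values})
    return out
```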
Output Organization
Results are organized by benchmark and run ID:
results/
├── arithmetic/
│ ├── arithmetic_echo-agent_20260204_225130_5d8519/
│ │ ├── run.json # Run metadata + aggregated stats
│ │ ├── summary.json # Grading summary
│ │ ├── grades.json # Per-task grades
│ │ ├── trace_arith_000.jsonl
│ │ └── trace_arith_001.jsonl
│ └── arithmetic_qa-agent_gpt-4o_20260204_230000_a1b2c3/
│ └── ...
├── gaia-level1/
│ └── gaia-level1_my-agent_claude-sonnet_20260205_140000_d4e5f6/
│ └── ...
└── custom/ # For --tasks-file runs without --benchmark
└── ...
Run ID Format
Auto-generated run IDs include context for easy identification:
{benchmark}_{agent}_{model}_{YYYYMMDD_HHMMSS}_{random6}
Examples:
- `arithmetic_echo-agent_20260204_225130_5d8519` (no model)
- `gaia-level1_qa-agent_deepseek-chat-v3_20260204_230000_a1b2c3` (with model)
This provides:
- Human readable: Know what ran at a glance
- Chronologically sortable: Timestamp-based ordering
- Collision resistant: Second-resolution timestamp plus a 24-bit random suffix makes collisions vanishingly unlikely in practice
Options:
- `--output`: Base directory (default: `./results`)
- `--run-id`: Override with custom run ID
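A sketch of how such an ID could be generated; the slug rules are assumptions for illustration, only the overall format is taken from the docs above:

```python
import secrets
from datetime import datetime, timezone

def make_run_id(benchmark: str, agent: str, model: str | None = None) -> str:
    """Build {benchmark}_{agent}_{model}_{YYYYMMDD_HHMMSS}_{random6}."""
    slug = lambda s: s.lower().replace("/", "-").replace("_", "-")
    parts = [slug(benchmark), slug(agent)]
    if model:
        parts.append(slug(model.split("/")[-1]))  # keep the short model name
    parts.append(datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S"))
    parts.append(secrets.token_hex(3))  # 6 hex chars = 24 random bits
    return "_".join(parts)

# e.g. make_run_id("arithmetic", "echo-agent") -> "arithmetic_echo-agent_20260204_225130_5d8519"
```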
Continuing Runs
The harness continue command re-runs tasks that failed or never completed. It works with:
- Completed runs (has `run.json`) — re-runs errored tasks
- Interrupted runs (has `status.jsonl` / `run_config.json`) — re-runs errored + incomplete tasks
- Old interrupted runs (only trace files) — scans traces for completion status, requires `--agent` and `--benchmark`
# Continue by run ID (exact or partial match)
harness continue 5d8519
# Continue an old interrupted run that has no config files
harness continue b5c291 \
--agent agents/hal_generalist \
--benchmark gaia \
--model openrouter/deepseek/deepseek-chat-v3-0324 \
--parallel 50 \
--task-timeout 1800
# Continue by direct path
harness continue ./results/gaia/gaia_hal-generalist_*_b5c291/
How Recovery Works
The harness recovers run state from whatever files are available, in priority order:
| Source | What it provides |
|---|---|
| `run.json` | Full metadata from a completed run |
| `run_config.json` | Agent, benchmark, model, task IDs (written at run start) |
| `status.jsonl` | Real-time task results (append-only, crash-safe) |
| `trace_*.jsonl` | Scanned for `task_complete` / `task_error` events |
| CLI flags | `--agent`, `--benchmark`, `--model` override or supply missing config |
Crash-Safe Progress Tracking
Every run now writes two recovery files:
- `run_config.json` — Written at the start of the run with full configuration and the list of all task IDs. This ensures the harness knows what was supposed to run even if the process is killed.
- `status.jsonl` — Append-only JSONL file written after each task completes or fails. Each line contains `task_id`, `status`, `submission`/`error`, `attempts`, `duration_ms`, and `timestamp`. Uses `flush()` for crash safety.
These files enable harness continue to pick up exactly where a killed run left off — no work is lost.
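A minimal sketch of the append-and-flush pattern behind status.jsonl (function name and extra fields are illustrative):

```python
import json
import time

def append_status(status_path: str, task_id: str, status: str, **extra) -> None:
    """Append one status record per task; flush so the line survives a kill."""
    record = {"task_id": task_id, "status": status, "timestamp": time.time(), **extra}
    with open(status_path, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()  # hand the line to the OS even if the process dies next
```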
Trace Scanning (Legacy Runs)
For old runs that pre-date status.jsonl (only have trace_*.jsonl files), the harness scans each trace for completion events:
| Trace content | Classification |
|---|---|
| Has `task_complete` event | Completed — submission preserved, won't re-run |
| Has `task_error` event | Errored — will be retried |
| Has `task_start` but no completion | Interrupted — will be retried |
| Empty file | Incomplete — will be retried |
| No trace file at all | Never started — discovered from benchmark, will be run |
When --benchmark is provided but no run_config.json exists, the harness discovers all benchmark tasks and marks any without traces as incomplete.
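A sketch of the classification logic implied by the table above; the event field name ("type") is an assumption:

```python
import json
from pathlib import Path

def classify_trace(trace_path: Path) -> str:
    """Scan a legacy trace file and classify the task's completion status."""
    if not trace_path.exists():
        return "never_started"
    events = set()
    for line in trace_path.read_text().splitlines():
        try:
            events.add(json.loads(line).get("type"))
        except json.JSONDecodeError:
            continue
    if "task_complete" in events:
        return "completed"
    if "task_error" in events:
        return "errored"
    if "task_start" in events:
        return "interrupted"
    return "incomplete"
```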
CLI Reference
# Run single task
harness run-one --agent ./agent.py --task '{"id": "t1", "data": {"question": "2+2?"}}'
# Run benchmark (output defaults to ./results/{benchmark}/{run_id}/)
harness run --agent ./agent.py --benchmark gaia
# Run with custom output location and run ID
harness run \
--agent ./agent.py \
--benchmark gaia-level1 \
--output ./my-results \
--run-id experiment-001
# Run with all options
harness run \
--agent ./agent.py \
--benchmark gaia-level1 \
--output ./results \
--run-id my-run \
--parallel 10 \
--max-retries 3 \
--task-timeout 300 \
--num-tasks 50 \
--model gpt-4o \
--grader llm-fallback
# Run Terminal-Bench easy tasks
harness run \
--agent agents/terminal_agent.py \
--benchmark terminal-bench \
--dataset-name terminal-bench-core \
--dataset-version 0.1.1 \
--difficulty easy \
--model openrouter/deepseek/deepseek-chat-v3-0324 \
--task-timeout 600 \
--parallel 1
# List benchmarks
harness benchmarks
# View results
harness view ./results/gaia-level1/my-run
# Continue a failed or interrupted run
harness continue <run_id>
harness continue <run_id> --agent ./agent --benchmark gaia --parallel 50
Configuration
Set model via environment variable or CLI:
export HARNESS_MODEL="claude-sonnet-4-5-20250514"
# or
harness run --model gpt-4o ...
API keys are read from a .env file (via python-dotenv) or standard environment variables:
- `OPENROUTER_API_KEY` — Use models via OpenRouter (e.g. `openrouter/deepseek/deepseek-chat-v3-0324`)
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GOOGLE_API_KEY`
- `HF_TOKEN` — For downloading datasets from HuggingFace
- etc. (via LiteLLM)
Copy .env.example or create a .env file in the project root:
OPENROUTER_API_KEY=sk-or-...
HF_TOKEN=hf_...
Development
# Install with dev dependencies
poetry install --with dev
# Run tests (218 tests)
poetry run pytest
# Run a quick test
poetry run harness run-one \
--agent agents/echo_agent.py \
--task '{"id": "test", "data": {"x": 1}}'
CI
GitHub Actions runs the full test suite on every push and PR to main. See .github/workflows/benchmark-smoke.yml.
Roadmap
- M1: Agent protocol + single task runner
- M2: JSONL logging with raw API capture
- M3: LiteLLM provider with auto-logging
- M4: Parallel runner with retry logic
- M5: Benchmark system with GAIA
- M6: Run metadata with token/cost tracking
- M7: Agent metrics system
- LLM-as-judge grading layer
Next Up
- Continue run: `harness continue <run_id>` - Re-run errored/interrupted tasks with crash-safe recovery (status.jsonl, trace scanning, CLI overrides)
- HuggingFace integration: Create HF dataset repo to store `run.json` files
- Push to HF: `harness push <run_id>` - Upload run.json to HuggingFace dataset
- HAL Generalist Agent: Port the HAL Generalist Agent GAIA scaffold and run full DeepSeek evaluation
- Terminal-Bench integration: Container-graded terminal tasks with Docker+tmux, LLM-in-the-loop terminal agent
- Full Terminal-Bench run: DeepSeek V3 (`deepseek-chat-v3-0324`) on the full `terminal-bench-core` dataset via OpenRouter
Done (P0)
- Unified benchmark architecture: ExecutionMode enum, `grade(task, result, context)` signature, TaskOrchestrator, GradingPipeline
- New benchmarks registered: AssistantBench, HLE, ARC-AGI, BrowseComp (stubs ready for dataset wiring)
- ARC-AGI grading: Grid-match logic with multi-attempt support
- CI: GitHub Actions unit tests on every PR
Future
- Wire stub benchmark dataset loaders (AssistantBench, HLE, BrowseComp)
- P1-P5 execution modes in orchestrator (CODE_SUBMIT, INTERACTIVE, CONVERSATIONAL, TOOL_USE, GUI_AGENT)
- M8: Sandbox tiers (venv, firejail, docker)
- M9: Better viewer / dashboard
- More benchmarks (SWE-bench, GPQA, etc.)
License
MIT
Citing
If you use Agent Harness in your research, please cite it:
@Misc{agentharness,
title = {Agent Harness: A Local-First, Lightweight Harness for AI Agent Evaluations},
author = {Franck Ndzomga},
howpublished = {\url{https://github.com/fsndzomga/agent-harness}},
doi = {10.5281/zenodo.18568843},
year = {2026}
}