Find hidden cost leaks and blind spots in your agentic AI workflows
🕵️ monk
Find the money your AI agents are silently burning.
monk analyzes trace logs from any AI agent (LangGraph, smolagents, MemGPT, or custom) and surfaces the cost leaks and behavioral failures that dashboards miss.
$ monk run ./traces/
🕵️ monk – Agentic Workflow Blind Spot Detector
Source: ./traces/ | Calls analysed: 4,610
──────────────────────────────────────────────────────────────────────────────
│ 12 blind spots found · ~$118.61/day estimated waste · ~$3,558/month │
──────────────────────────────────────────────────────────────────────────────
🔴 [1] Retry loop: 'calculator_tool' called 5x in a row across 38 sessions
    Fix: Add a result-cache keyed on (tool, args). Eliminate re-computation.
🔴 [2] Error cascade: tool failure ignored → 8 downstream LLM calls wasted
    Fix: Guard every tool call – if status=error, short-circuit before the next LLM call.
🔴 [3] Token spike: single web_search injected 583K tokens (26x session median)
    Fix: Truncate tool outputs to 1,000 tokens before injecting into context.
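The result-cache fix from finding [1] can be sketched in a few lines of Python. The wrapper and the stub dispatcher below are illustrative, not part of monk – adapt them to your agent's own tool-dispatch code:

```python
import json

def make_cached(tool_fn):
    """Wrap a tool dispatcher with a result cache keyed on (tool, canonicalised args)."""
    cache = {}

    def cached(tool, args):
        # Canonicalise args so {"a": 1, "b": 2} and {"b": 2, "a": 1} share a key.
        key = (tool, json.dumps(args, sort_keys=True))
        if key not in cache:
            cache[key] = tool_fn(tool, args)
        return cache[key]

    return cached

# Counting stub standing in for a real tool dispatcher (hypothetical).
calls = {"n": 0}
def dispatch(tool, args):
    calls["n"] += 1
    return str(args["a"] + args["b"])

cached_dispatch = make_cached(dispatch)
print(cached_dispatch("calculator_tool", {"a": 2, "b": 3}))  # "5", computed
print(cached_dispatch("calculator_tool", {"a": 2, "b": 3}))  # "5", served from cache
print(calls["n"])  # 1 -- the repeat call never hit the tool
```

With the wrapper in place, the 5x retry run above would cost one tool invocation instead of five.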
Benchmark results
We evaluated monk on 7 real-world agentic trace datasets, including PatronusAI's TRAIL benchmark, which provides human-labeled error annotations across 20 error categories.
| Dataset | Traces | Findings | Est. waste/day |
|---|---|---|---|
| TRAIL (GAIA + SWE-bench agents) | 879 spans / 33 traces | 137 | $13.48 |
| Finance / 10-K ReAct (LangGraph) | 4,610 calls | 558 | $118.61 |
| GAIA smolagents | 1,253 spans | 296 | $0.74 |
| MemGPT (multi-turn conversations) | 500 calls | 22 | $0.41 |
| Nvidia Nemotron (customer service) | 413 calls | 14 | $0.00 |
| WildClaw (Claude Opus 4.6) | 288 calls | 1 | $0.00 |
| Total | 7,972 | 1,041 | ~$133/day |
~$3,990/month in avoidable agent costs identified across 7 datasets.
WildClaw produced 1 finding – a well-tuned production agent. monk correctly fires rarely on clean traces. That's the signal working as intended.
TRAIL precision / recall
TRAIL is the most rigorous public benchmark for agentic error detection, with human-labeled ground truth across 20 error categories.
| Version | Precision | Recall | F1 | Detectors |
|---|---|---|---|---|
| v0.1 | 84.85% | 84.85% | 84.85% | 5 |
| v0.2 (current) | 100% | 100% | 100% | 13 |
v0.2 catches all 33 error-containing TRAIL traces with zero false positives.
Full methodology and per-detector breakdown: BENCHMARK.md
What monk detects
13 detectors across two levels: trace-level (any format) and span-level (OpenTelemetry).
Trace detectors (work on OpenAI, Anthropic, LangSmith, or raw JSONL):
| Detector | What it finds |
|---|---|
| `retry_loop` | Same tool called 3+ consecutive times – agent stuck |
| `empty_return` | Tool returns null, agent retries anyway |
| `model_overkill` | gpt-4o / claude-opus doing formatting, summarising, or classification |
| `context_bloat` | System prompt >55% of token budget, or unbounded history growth |
| `agent_loop` | Agent cycling A→B→A→B without making progress |
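To make the trace-level signals concrete, here is a minimal illustration of the kind of check `retry_loop` describes – a sketch of the pattern, not monk's actual implementation or thresholds:

```python
def find_retry_loops(calls, threshold=3):
    """Return (tool, run_length) for runs of `threshold`+ consecutive calls to the same tool."""
    findings = []
    run_tool, run_len = None, 0
    for call in calls:
        tool = call.get("tool_name")
        if tool and tool == run_tool:
            run_len += 1
        else:
            # Run ended: record it if it was long enough, then start a new run.
            if run_len >= threshold:
                findings.append((run_tool, run_len))
            run_tool, run_len = tool, 1
    if run_len >= threshold:
        findings.append((run_tool, run_len))
    return findings

calls = [{"tool_name": "calculator_tool"}] * 5 + [{"tool_name": "web_search"}]
print(find_retry_loops(calls))  # [('calculator_tool', 5)]
```

A single linear pass over the call list is enough, which is why this class of detector stays cheap and deterministic.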
Span detectors (require OpenTelemetry traces):
| Detector | What it finds |
|---|---|
| `error_cascade` | Tool fails silently → agent continues making 6–8 more LLM calls on a poisoned context |
| `token_bloat` | Single-call token spikes (worst seen: 583K, 26× session median) |
| `latency_spike` | Outlier call latency vs. session median |
| `cross_turn_memory` | Same tool + args re-fetched across turns (pure cache waste) |
| `tool_dependency` | Cycles and deep chains in the tool call graph |
| `output_format` | Model violates explicit format rules in its own system prompt |
| `plan_execution` | Model writes a plan, then executes none of it |
| `span_consistency` | Model asserts facts with no supporting tool call (hallucinated evidence) |
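The median-comparison behind a spike detector like `token_bloat` can be sketched in a few lines; the threshold factor here is illustrative, not monk's:

```python
from statistics import median

def token_spikes(token_counts, factor=10):
    """Flag calls whose token count exceeds `factor` times the session median."""
    med = median(token_counts)
    return [(i, n) for i, n in enumerate(token_counts) if med and n > factor * med]

session = [1200, 900, 1500, 583_000, 1100]
print(token_spikes(session))  # [(3, 583000)]
```

Comparing against the median rather than the mean matters: one 583K-token outlier barely moves the median, so it stands out cleanly instead of dragging the baseline up.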
All detectors are deterministic: no LLM-as-judge, no API calls, no surprises.
Install
pip install monk-ai
Usage
# Analyse a trace file or folder
monk run agent_traces.jsonl
monk run ./traces/
# Run specific detectors
monk run traces/ --detectors retry_loop,error_cascade,token_bloat
# Export findings as JSON for CI
monk run traces/ --json findings.json
# Only surface high-severity findings
monk run traces/ --min-severity high
CI integration – monk exits with code 1 if high-severity findings exist:
- name: monk trace audit
  run: monk run ./traces/ --min-severity high
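For gating logic beyond the exit code, the `--json` export can be filtered in a short script. The schema assumed here (a top-level list of finding objects, each with a `severity` field) is a guess for illustration – check monk's actual JSON output before relying on it:

```python
import json

def high_severity(findings):
    # Assumed schema: each finding is a dict carrying a "severity" field.
    # Verify against monk's real --json output before using this in CI.
    return [f for f in findings if f.get("severity") == "high"]

findings = json.loads('[{"severity": "high"}, {"severity": "low"}]')
print(len(high_severity(findings)))  # 1
```

This lets a pipeline fail on high-severity findings while still archiving the full report as a build artifact.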
Trace format
monk auto-detects OpenAI, Anthropic, LangSmith, and OpenTelemetry formats.
For custom logging, any JSONL with these fields works:
{"session_id": "abc123", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 80, "tool_name": "web_search", "tool_result": "..."}
For full span-level analysis (recommended), export OpenTelemetry traces; monk parses both OTLP proto-JSON and flat JSONL span formats.
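If your agent logs through custom code, emitting compatible JSONL is one `json.dumps` per call. The field names come from the example record above; the helper itself is illustrative, not part of monk:

```python
import io
import json

def log_call(f, session_id, model, input_tokens, output_tokens,
             tool_name=None, tool_result=None):
    """Append one monk-compatible JSONL record per model/tool call."""
    f.write(json.dumps({
        "session_id": session_id,
        "model": model,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "tool_name": tool_name,
        "tool_result": tool_result,
    }) + "\n")

# In a real agent, open "agent_traces.jsonl" in append mode instead of a buffer.
buf = io.StringIO()
log_call(buf, "abc123", "gpt-4o", 1200, 80,
         tool_name="web_search", tool_result="...")
print(buf.getvalue().strip())
```

One record per line, one line per call, is all the structure monk's auto-detection needs for the trace-level detectors.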
Why we built this
Most observability tools show you what happened. monk finds what's costing you.
The patterns here (retry loops, silent tool failures, token spikes, agents re-fetching the same data turn after turn) came from auditing real production agentic workflows. They don't show up as errors in your logs. They don't trigger alerts. They just quietly multiply your inference bill.
87% of the GAIA and SWE-bench agent runs we analyzed had at least one unhandled tool error that caused downstream LLM calls to be wasted. The worst token spike we saw was 583,787 tokens, 26× the session median, from a single unfiltered web page injected into context.
These are solvable problems. monk finds them.
Benchmark datasets
All evaluation datasets are publicly available:
- PatronusAI/TRAIL – github.com/patronus-ai/trail-benchmark
- monk benchmark fixtures (TRAIL, MemGPT, Nemotron, Finance, WildClaw, GAIA) – huggingface.co/datasets/Blueconomy/monk-benchmarks
Roadmap
- Real-time mode via OpenTelemetry SDK (auto-instrument your agent in 2 lines)
- Prompt compression suggestions
- Slack / PagerDuty alerts on finding threshold breaches
- Web dashboard
Contributing
PRs welcome. See CONTRIBUTING.md.
To add a detector:
- Create `monk/detectors/your_detector.py` extending `BaseDetector`
- Register in `monk/detectors/__init__.py`
- Add tests in `tests/test_detectors.py`
Detectors must be deterministic – same traces → same findings.
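A determinism check for a new detector can be a few lines of test code. The detector interface sketched here (a callable taking traces and returning findings) is an assumption based on the notes above, not monk's actual test suite:

```python
def assert_deterministic(detect, traces, runs=3):
    """Run a detector repeatedly on identical traces; findings must match exactly."""
    baseline = detect(traces)
    for _ in range(runs - 1):
        assert detect(traces) == baseline, "detector is non-deterministic"
    return baseline

# Toy detector: flags sessions that produced zero output tokens.
def zero_output(traces):
    # Sorting makes the output order independent of input order.
    return sorted(t["session_id"] for t in traces if t["output_tokens"] == 0)

traces = [{"session_id": "a", "output_tokens": 0},
          {"session_id": "b", "output_tokens": 80}]
print(assert_deterministic(zero_output, traces))  # ['a']
```

Sorting or otherwise canonicalising findings, as the toy detector does, is the usual way to keep output stable across runs.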
License
MIT
Built by Blueconomy AI (Techstars '25)
If monk saves you money, a ⭐ helps others find it.