Find hidden cost leaks and blind spots in your agentic AI workflows: 14 deterministic detectors, live dashboard, workflow simulator
🕵️ monk
Find the money your AI agents are silently burning.
monk analyzes trace logs from any AI agent (LangGraph, smolagents, MemGPT, custom) and surfaces the cost leaks and behavioral failures that dashboards miss.
$ monk run ./traces/
🕵️ monk – Agentic Workflow Blind Spot Detector
Source: ./traces/ | Calls analysed: 4,610
──────────────────────────────────────────────────────────────────────────────
 12 blind spots found · ~$118.61/day estimated waste · ~$3,558/month
──────────────────────────────────────────────────────────────────────────────
🔴 [1] Retry loop: 'calculator_tool' called 5x in a row across 38 sessions
    Fix: Add a result-cache keyed on (tool, args). Eliminate re-computation.
🔴 [2] Error cascade: tool failure ignored → 8 downstream LLM calls wasted
    Fix: Guard every tool call; if status=error, short-circuit before next LLM call.
🔴 [3] Token spike: single web_search injected 583K tokens (26× session median)
    Fix: Truncate tool outputs to 1,000 tokens before injecting into context.
Install
pip install monk-ai
Quickstart
monk quickstart
Writes 33 built-in demo traces, runs analysis, and opens the live dashboard at http://localhost:9090, all in one command.
Benchmark results
Evaluated on 8 real-world agentic trace datasets, including PatronusAI's TRAIL benchmark with human-labeled ground truth across 20 error categories.
| Dataset | Records | Findings | Est. waste/day |
|---|---|---|---|
| taubench (banking / e-commerce agents) | 17,932 calls | 7,864 | $68.49 |
| Finance / 10-K ReAct (LangGraph) | 4,610 calls | 558 | $118.61 |
| GAIA smolagents | 1,253 spans | 296 | $0.74 |
| TRAIL (GAIA + SWE-bench, ground truth) | 879 spans | 137 | $13.48 |
| MemGPT (multi-turn) | 500 calls | 22 | $0.41 |
| Nvidia Nemotron (customer service) | 413 calls | 14 | – |
| WildClaw (Claude Opus 4.6) | 288 calls | 1 | – |
| Total | 25,875 | 8,892 | ~$201/day |
~$6,000/month in avoidable agent costs identified across 8 datasets.
WildClaw, a well-tuned production Claude agent, produced exactly 1 finding; monk correctly fires rarely on clean traces.
TRAIL precision / recall (ground truth benchmark)
| Version | Precision | Recall | F1 | Detectors |
|---|---|---|---|---|
| v0.1 | 84.85% | 84.85% | 84.85% | 5 |
| v0.4.6 (current) | 100% | 100% | 100% | 14 |
Zero false positives. All 33 error-containing TRAIL traces caught.
Full methodology: BENCHMARK.md
What monk detects
14 detectors. All deterministic: no LLM-as-judge, no external API calls.
Trace detectors (work on OpenAI, Anthropic, LangSmith, or raw JSONL):
| Detector | What it finds |
|---|---|
| `retry_loop` | Same tool called 3+ consecutive times |
| `empty_return` | Tool returns null/empty, agent retries anyway |
| `model_overkill` | Expensive model doing formatting or classification |
| `context_bloat` | System prompt >55% of budget, or unbounded history growth |
| `agent_loop` | Agent cycling A→B→A→B without progress |
| `text_io` | Low output compression, unbounded input growth |
Span detectors (require OpenTelemetry traces):
| Detector | What it finds |
|---|---|
| `error_cascade` | Tool fails silently → downstream LLM calls wasted on poisoned context |
| `token_bloat` | Token spikes (worst seen: 583K, 26× the session median) |
| `latency_spike` | Single-call outlier latency vs. session median |
| `cross_turn_memory` | Same tool + args re-fetched across turns |
| `tool_dependency` | Cycles and deep chains in the tool call graph |
| `output_format` | Model violates its own system prompt's format rules |
| `plan_execution` | Model writes a plan, then never executes it |
| `span_consistency` | Model asserts facts with no supporting tool call |
Usage
# Fastest path: built-in demo, analysis, live dashboard
monk quickstart
# Analyse a trace file or folder
monk run agent_traces.jsonl
monk run ./traces/
# Run specific detectors
monk run traces/ --detectors retry_loop,error_cascade,token_bloat
# Export findings as JSON for CI
monk run traces/ --json findings.json
# Only surface high-severity findings
monk run traces/ --min-severity high
# Download real benchmark datasets and analyze them
monk demo
# Generate synthetic traces with configurable failure patterns
monk simulate # all patterns
monk simulate --pattern retry_loop,agent_loop # specific patterns
monk simulate --sessions 10 --run # generate + analyze immediately
# Start the live dashboard
monk serve ./traces/ --port 9090
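When the built-in exit code is not flexible enough, the `--json` export can feed a custom gate script. A minimal sketch, assuming the export is a JSON array of finding objects that each carry a `severity` field (check your actual `findings.json` for the real schema):

```python
import json
import sys

def count_high_severity(findings):
    """Count findings whose severity is 'high'.

    Assumes each finding is a dict with a 'severity' key; verify the
    field names against your actual findings.json export.
    """
    return sum(1 for f in findings if f.get("severity") == "high")

def gate(path="findings.json"):
    """Exit non-zero when any high-severity finding exists, mirroring
    the behavior of `monk run --min-severity high` in CI."""
    with open(path) as fh:
        findings = json.load(fh)
    high = count_high_severity(findings)
    if high:
        print(f"{high} high-severity finding(s); failing the build")
        sys.exit(1)
    print("no high-severity findings")
```

This lets you apply extra policy (e.g. allow-listing a known finding) before failing the build.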
CI integration: monk exits 1 if high-severity findings exist:
- name: monk trace audit
  run: monk run ./traces/ --min-severity high
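Dropped into a full GitHub Actions workflow, the step above might look like this (the runner image, action versions, and traces path are illustrative, not prescribed by monk):

```yaml
name: trace-audit
on: [pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install monk-ai
      - name: monk trace audit
        run: monk run ./traces/ --min-severity high
```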
Real-time instrumentation – catch issues as they happen, not after:
import monk
monk.instrument() # patches openai + anthropic automatically
# monk prints findings live as your agent runs
Live dashboard
monk serve ./traces/ --port 9090
Opens a web dashboard at http://localhost:9090 with:
- KPI cards: waste/day, projected/month, total findings, calls analyzed
- Severity breakdown with color-coded cards (high / medium / low)
- Waste ranked by detector with gradient bars
- Recent findings feed with fix suggestions
- Dataset downloader (tau-bench, Finance, TRAIL, GAIA, MemGPT, Nemotron)
- Prometheus metrics at `/metrics` for Grafana integration
- Auto-refreshes every 15 seconds
- "⚡ Load sample" button: populates demo data in one click
Simulate workflows
monk simulate --pattern retry_loop,empty_return --sessions 5 --run
Generate synthetic trace data with specific failure patterns. Useful for:
- Testing your detectors before you have real production traces
- Reproducing a specific failure mode in isolation to verify a fix
- Demoing cost leaks to stakeholders with realistic numbers
- Validating that a code change actually eliminated a pattern
Available patterns: retry_loop, empty_return, agent_loop, context_bloat, model_overkill, healthy
The healthy pattern generates clean sessions with no failures, verifying that monk produces zero findings on well-behaved agents.
Trace format
monk auto-detects OpenAI, Anthropic, LangSmith, and OpenTelemetry formats. For custom logs, any JSONL with these fields works:
{"session_id": "abc123", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 80, "tool_name": "web_search", "tool_result": "..."}
For full span-level analysis, export OpenTelemetry traces; monk parses both OTLP proto-JSON and flat JSONL span formats.
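Producing that custom format from your own agent loop takes only a few lines. A sketch (the helper names are illustrative; only the field names come from the schema above):

```python
import json

def trace_record(session_id, model, input_tokens, output_tokens,
                 tool_name=None, tool_result=None):
    """Build one monk-compatible trace record; the tool fields are
    optional and only included when a tool call actually happened."""
    record = {
        "session_id": session_id,
        "model": model,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
    }
    if tool_name is not None:
        record["tool_name"] = tool_name
        record["tool_result"] = tool_result
    return record

def append_jsonl(path, record):
    """Append the record to a JSONL log, one JSON object per line."""
    with open(path, "a") as fh:
        fh.write(json.dumps(record) + "\n")
```

Call `append_jsonl` after every LLM or tool call, then point `monk run` at the resulting file.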
Why we built this
Most observability tools show you what happened. monk finds what's costing you.
The patterns here (retry loops, silent tool failures, token spikes, agents re-fetching the same data) don't show up as errors. They don't trigger alerts. They just quietly multiply your inference bill.
87% of the GAIA and SWE-bench agent runs we analyzed had at least one unhandled tool error that caused downstream LLM calls to be wasted. The worst token spike: 583,787 tokens from a single unfiltered web page, 26× the session median. These are solvable problems. monk finds them.
Datasets
All benchmark fixtures are public:
- PatronusAI/TRAIL: github.com/patronus-ai/trail-benchmark
- monk benchmark fixtures (TRAIL, MemGPT, Nemotron, Finance, WildClaw, GAIA, taubench): huggingface.co/datasets/Blueconomy/monk-benchmarks
Roadmap
- [x] 14 deterministic detectors (trace + span level)
- [x] Live dashboard with dataset downloader
- [x] Real-time instrumentation (`monk.instrument()`)
- [x] `monk simulate`: synthetic workflow sandbox
- [x] Prometheus metrics + Grafana-ready `/metrics`
- [ ] Prompt compression suggestions
- [ ] Slack / PagerDuty alerts
- [ ] Confidence scores per finding
Contributing
To add a detector: create `monk/detectors/your_detector.py` extending `BaseDetector`, register it in `monk/detectors/__init__.py`, and add tests. Detectors must be deterministic: same traces → same findings.
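As an illustration of the shape such a detector might take, here is a sketch of a consecutive-call detector. The `BaseDetector` and `Finding` types below are stand-ins (the real interface lives in `monk/detectors/` and may differ); only the determinism requirement comes from the guidelines above.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class Finding:
    # Stand-in finding type; the real one lives in the monk package.
    detector: str
    severity: str
    message: str

class BaseDetector:
    # Stand-in for monk's BaseDetector; the actual interface may differ.
    name = "base"

    def run(self, calls):
        raise NotImplementedError

class ConsecutiveToolDetector(BaseDetector):
    """Flags any tool called `threshold` or more times in a row.
    Deterministic: the same call sequence always yields the same findings."""
    name = "consecutive_tool"
    threshold = 3

    def run(self, calls):
        findings = []
        # Group the call stream into runs of identical tool names.
        for tool, run in groupby(c.get("tool_name") for c in calls):
            count = len(list(run))
            if tool and count >= self.threshold:
                findings.append(Finding(
                    detector=self.name,
                    severity="high",
                    message=f"'{tool}' called {count}x consecutively",
                ))
        return findings
```

Because the detector is a pure function of the call sequence, its tests can assert exact findings on fixed fixtures.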
See the full guide: CONTRIBUTING.md
License
MIT – github.com/Blueconomy/monk
Built by Blueconomy AI (Techstars '25)
If monk saves you money, a ⭐ helps others find it.