Find hidden cost leaks and blind spots in your agentic AI workflows

🕵️ monk

Find the money your AI agents are silently burning.

monk analyzes trace logs from any AI agent (LangGraph, smolagents, MemGPT, or custom) and surfaces the cost leaks and behavioral failures that dashboards miss.

$ monk run ./traces/

  ๐Ÿ•ต๏ธ  monk โ€” Agentic Workflow Blind Spot Detector
  Source: ./traces/   |   Calls analysed: 4,610

  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
  โ”‚  12 blind spots found  ยท  ~$118.61/day estimated waste  ยท  ~$3,558/month  โ”‚
  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

  🔴 [1] Retry loop: 'calculator_tool' called 5x in a row across 38 sessions
  Fix: Add a result cache keyed on (tool, args). Eliminate re-computation.

  🔴 [2] Error cascade: tool failure ignored → 8 downstream LLM calls wasted
  Fix: Guard every tool call: if status=error, short-circuit before the next LLM call.

  🔴 [3] Token spike: single web_search injected 583K tokens (26x session median)
  Fix: Truncate tool outputs to 1,000 tokens before injecting into context.
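The first fix above, a result cache keyed on (tool, args), can be sketched in a few lines. This is an illustrative standalone snippet, not monk's API; the decorator and tool names are hypothetical:

```python
import json
from typing import Any, Callable

def cached_tool(tool_fn: Callable[..., Any]) -> Callable[..., Any]:
    """Memoize a tool on (tool name, serialized kwargs) so retry loops hit the cache."""
    cache: dict[tuple[str, str], Any] = {}

    def wrapper(**kwargs: Any) -> Any:
        key = (tool_fn.__name__, json.dumps(kwargs, sort_keys=True))
        if key not in cache:
            cache[key] = tool_fn(**kwargs)
        return cache[key]

    return wrapper

real_calls = []

@cached_tool
def calculator_tool(expression: str) -> Any:
    real_calls.append(expression)   # track actual (non-cached) invocations
    return eval(expression)         # placeholder for the real tool body

for _ in range(5):                  # five identical calls from a stuck agent...
    calculator_tool(expression="2 + 2")

print(len(real_calls))              # ...only one real execution
```

Keying on the serialized, sorted kwargs makes the cache insensitive to argument order, which is usually what you want for deterministic tools.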

Benchmark results

We evaluated monk on 7 real-world agentic trace datasets, including PatronusAI's TRAIL benchmark, which provides human-labeled error annotations across 20 error categories.

| Dataset | Traces | Findings | Est. waste/day |
|---|---|---|---|
| TRAIL (GAIA + SWE-bench agents) | 879 spans / 33 traces | 137 | $13.48 |
| Finance / 10-K ReAct (LangGraph) | 4,610 calls | 558 | $118.61 |
| GAIA smolagents | 1,253 spans | 296 | $0.74 |
| MemGPT (multi-turn conversations) | 500 calls | 22 | $0.41 |
| Nvidia Nemotron (customer service) | 413 calls | 14 | $0.00 |
| WildClaw (Claude Opus 4.6) | 288 calls | 1 | $0.00 |
| **Total** | 7,972 | 1,041 | ~$133/day |

~$3,990/month in avoidable agent costs identified across 7 datasets.

WildClaw produced 1 finding: a well-tuned production agent. monk correctly fires rarely on clean traces. That's the signal working as intended.

TRAIL precision / recall

TRAIL is the most rigorous public benchmark for agentic error detection, with human-labeled ground truth across 20 error categories.

| Version | Precision | Recall | F1 | Detectors |
|---|---|---|---|---|
| v0.1 | 84.85% | 84.85% | 84.85% | 5 |
| v0.2 (current) | 100% | 100% | 100% | 13 |

v0.2 catches all 33 error-containing TRAIL traces with zero false positives.
Full methodology and per-detector breakdown: BENCHMARK.md


What monk detects

13 detectors across two levels: trace-level (any format) and span-level (OpenTelemetry).

Trace detectors work on OpenAI, Anthropic, LangSmith, or raw JSONL:

| Detector | What it finds |
|---|---|
| retry_loop | Same tool called 3+ consecutive times (agent stuck) |
| empty_return | Tool returns null; agent retries anyway |
| model_overkill | gpt-4o / claude-opus doing formatting, summarising, or classification |
| context_bloat | System prompt >55% of token budget, or unbounded history growth |
| agent_loop | Agent cycling A→B→A→B without making progress |
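As a sketch of the kind of check retry_loop performs: the real detector lives in monk/detectors/, but the core idea (flag runs of 3+ identical consecutive tool calls) fits in a few lines. The flat-list session format here is an assumption for illustration:

```python
from itertools import groupby

def find_retry_loops(tool_calls: list[str], threshold: int = 3) -> list[tuple[str, int]]:
    """Return (tool, run_length) for every run of `threshold`+ consecutive identical calls."""
    runs = [(tool, sum(1 for _ in group)) for tool, group in groupby(tool_calls)]
    return [(tool, n) for tool, n in runs if n >= threshold]

# A session where the agent gets stuck on the calculator five times in a row
session = ["web_search", "calculator_tool", "calculator_tool",
           "calculator_tool", "calculator_tool", "calculator_tool", "web_search"]
print(find_retry_loops(session))  # [('calculator_tool', 5)]
```

`itertools.groupby` groups only adjacent equal elements, which is exactly the "in a row" semantics a retry-loop check needs.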

Span detectors require OpenTelemetry traces:

| Detector | What it finds |
|---|---|
| error_cascade | Tool fails silently → agent continues making 6–8 more LLM calls on a poisoned context |
| token_bloat | Single-call token spikes (worst seen: 583K, 26× session median) |
| latency_spike | Outlier call latency vs. session median |
| cross_turn_memory | Same tool + args re-fetched across turns (pure cache waste) |
| tool_dependency | Cycles and deep chains in the tool call graph |
| output_format | Model violates explicit format rules in its own system prompt |
| plan_execution | Model writes a plan, then executes none of it |
| span_consistency | Model asserts facts with no supporting tool call (hallucinated evidence) |
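A median-relative spike check, as token_bloat and latency_spike describe, can be sketched like this. The threshold factor is an assumption for illustration; monk's actual cutoffs are not documented here:

```python
from statistics import median

def token_spikes(token_counts: list[int], factor: float = 10.0) -> list[int]:
    """Return indices of calls whose token count exceeds `factor` x the session median."""
    if not token_counts:
        return []
    m = median(token_counts)
    return [i for i, t in enumerate(token_counts) if m > 0 and t > factor * m]

# One unfiltered web page dwarfs an otherwise ~2K-token session
session = [1_800, 2_200, 2_050, 583_000, 1_900]
print(token_spikes(session))  # [3]
```

Using the median rather than the mean matters: a single 583K-token outlier would drag the mean up enough to hide itself, but leaves the median untouched.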

All detectors are deterministic: no LLM-as-judge, no API calls, no surprises.


Install

pip install monk-ai

Usage

# Analyse a trace file or folder
monk run agent_traces.jsonl
monk run ./traces/

# Run specific detectors
monk run traces/ --detectors retry_loop,error_cascade,token_bloat

# Export findings as JSON for CI
monk run traces/ --json findings.json

# Only surface high-severity findings
monk run traces/ --min-severity high

CI integration: monk exits with code 1 if high-severity findings exist:

- name: monk trace audit
  run: monk run ./traces/ --min-severity high

Trace format

monk auto-detects OpenAI, Anthropic, LangSmith, and OpenTelemetry formats.

For custom logging, any JSONL with these fields works:

{"session_id": "abc123", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 80, "tool_name": "web_search", "tool_result": "..."}

For full span-level analysis (recommended), export OpenTelemetry traces โ€” monk parses both OTLP proto-JSON and flat JSONL span formats.
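Loading that custom JSONL format is straightforward; a minimal reader (not monk's internal parser, just a sketch of the record shape) might look like:

```python
import io
import json

def load_traces(stream) -> list[dict]:
    """Parse line-delimited JSON trace records, skipping blank lines."""
    return [json.loads(line) for line in stream if line.strip()]

# Simulate a JSONL trace file with the minimal custom schema
raw = io.StringIO(
    '{"session_id": "abc123", "model": "gpt-4o", "input_tokens": 1200, '
    '"output_tokens": 80, "tool_name": "web_search", "tool_result": "..."}\n'
)
records = load_traces(raw)
print(records[0]["model"])  # gpt-4o
```

Because the format is one JSON object per line, the same reader works for a file handle, an HTTP response body, or a log tail.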


Why we built this

Most observability tools show you what happened. monk finds what's costing you.

The patterns here (retry loops, silent tool failures, token spikes, agents re-fetching the same data turn after turn) came from auditing real production agentic workflows. They don't show up as errors in your logs. They don't trigger alerts. They just quietly multiply your inference bill.

87% of the GAIA and SWE-bench agent runs we analyzed had at least one unhandled tool error that caused downstream LLM calls to be wasted. The worst token spike we saw was 583,787 tokens, 26× the session median, from a single unfiltered web page injected into context.

These are solvable problems. monk finds them.


Benchmark datasets

All evaluation datasets are publicly available.


Roadmap

  • Real-time mode via OpenTelemetry SDK (auto-instrument your agent in 2 lines)
  • Prompt compression suggestions
  • Slack / PagerDuty alerts on finding threshold breaches
  • Web dashboard

Contributing

PRs welcome. See CONTRIBUTING.md.

To add a detector:

  1. Create monk/detectors/your_detector.py extending BaseDetector
  2. Register in monk/detectors/__init__.py
  3. Add tests in tests/test_detectors.py

Detectors must be deterministic: same traces → same findings.
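That determinism contract can be illustrated with a self-contained sketch. The `Finding` dataclass and detector class below are hypothetical stand-ins, not monk's actual BaseDetector API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    detector: str
    session_id: str
    message: str

class EmptyReturnDetector:
    """Illustrative detector: flags tool calls that returned null."""
    name = "empty_return_demo"

    def run(self, traces: list[dict]) -> list[Finding]:
        # Pure function of the input: same traces always yield the same findings.
        return [
            Finding(self.name, t["session_id"], f"{t['tool_name']} returned null")
            for t in traces
            if t.get("tool_name") and t.get("tool_result") is None
        ]

traces = [
    {"session_id": "s1", "tool_name": "web_search", "tool_result": None},
    {"session_id": "s1", "tool_name": "calculator_tool", "tool_result": "4"},
]
det = EmptyReturnDetector()
assert det.run(traces) == det.run(traces)  # deterministic: repeat runs agree
print(len(det.run(traces)))  # 1
```

Keeping `run` free of randomness, wall-clock reads, and network calls is what makes findings reproducible, and is why `assert det.run(traces) == det.run(traces)` is a meaningful test to include.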


License

MIT


Built by Blueconomy AI (Techstars '25).
If monk saves you money, a ⭐ helps others find it.
