Find hidden cost leaks and blind spots in your agentic AI workflows — 15 deterministic detectors, LangGraph support, team usage CSV analyzer, live dashboard, workflow simulator

๐Ÿ•ต๏ธ monk

Find the money your AI agents are silently burning.

monk analyzes trace logs from any AI agent — LangGraph, smolagents, MemGPT, custom — and surfaces the cost leaks and behavioral failures that dashboards miss.

$ monk run ./traces/

  ๐Ÿ•ต๏ธ  monk โ€” Agentic Workflow Blind Spot Detector
  Source: ./traces/   |   Calls analysed: 4,610

  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
  โ”‚  12 blind spots found  ยท  ~$118.61/day estimated waste  ยท  ~$3,558/month  โ”‚
  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

  🔴 [1] Retry loop: 'calculator_tool' called 5x in a row across 38 sessions
  Fix: Add a result-cache keyed on (tool, args). Eliminate re-computation.

  🔴 [2] Error cascade: tool failure ignored → 8 downstream LLM calls wasted
  Fix: Guard every tool call — if status=error, short-circuit before next LLM call.

  🔴 [3] Token spike: single web_search injected 583K tokens (26× session median)
  Fix: Truncate tool outputs to 1,000 tokens before injecting into context.
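
The fixes are meant to be directly actionable. For finding [1], for example, a result cache keyed on (tool, args) fits in a few lines. A minimal sketch, where call_fn stands in for however your agent dispatches tools (it is not a monk API):

import json

_cache = {}

def cached_tool_call(tool, args, call_fn):
    # Canonicalize args so logically identical calls share one cache entry
    key = (tool, json.dumps(args, sort_keys=True))
    if key not in _cache:
        _cache[key] = call_fn(tool, args)
    return _cache[key]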

Install

pip install monk-ai

Quickstart

monk quickstart

Writes 33 built-in demo traces, runs analysis, and opens the live dashboard at http://localhost:9090 — all in one command.


Benchmark results

Evaluated on 8 real-world agentic trace datasets — including PatronusAI's TRAIL benchmark with human-labeled ground truth across 20 error categories.

Dataset                                    Records        Findings   Est. waste/day
tau-bench (banking / e-commerce agents)    17,932 calls   7,864      $68.49
Finance / 10-K ReAct (LangGraph)           4,610 calls    558        $118.61
GAIA smolagents                            1,253 spans    296        $0.74
TRAIL — GAIA + SWE-bench (ground truth)    879 spans      137        $13.48
MemGPT (multi-turn)                        500 calls      22         $0.41
Nvidia Nemotron (customer service)         413 calls      14         —
WildClaw (Claude Opus 4.6)                 288 calls      1          —
Total                                      25,875         8,892      ~$201/day

~$6,000/month in avoidable agent costs identified across 8 datasets.

WildClaw — a well-tuned production Claude agent — produced exactly 1 finding. monk fires rarely on clean traces, as it should.

TRAIL precision / recall (ground truth benchmark)

Version            Precision   Recall    F1        Detectors
v0.1               84.85%      84.85%    84.85%    5
v0.4.8 (current)   100%        100%      100%      15

Zero false positives. All 33 error-containing TRAIL traces caught.
Full methodology: BENCHMARK.md


What monk detects

15 detectors. All deterministic — no LLM-as-judge, no external API calls.

Trace detectors — work on OpenAI, Anthropic, LangGraph, LangSmith, or raw JSONL:

Detector         What it finds
retry_loop       Same tool called 3+ consecutive times
empty_return     Tool returns null/empty, agent retries anyway
model_overkill   Expensive model doing formatting or classification
context_bloat    System prompt >55% of budget, or unbounded history growth
agent_loop       Agent cycling A→B→A→B without progress
handoff_loop     Multi-agent transfer cycling (Supervisor/Swarm A↔B bouncing)
text_io          Low output compression, unbounded input growth

Span detectors — require OpenTelemetry traces:

Detector            What it finds
error_cascade       Tool fails silently → downstream LLM calls wasted on poisoned context
token_bloat         Token spikes (worst seen: 583K — 26× the session median)
latency_spike       Single-call outlier latency vs. session median
cross_turn_memory   Same tool + args re-fetched across turns
tool_dependency     Cycles and deep chains in the tool call graph
output_format       Model violates its own system prompt's format rules
plan_execution      Model writes a plan, then never executes it
span_consistency    Model asserts facts with no supporting tool call

Usage

# Fastest path — built-in demo, analysis, live dashboard
monk quickstart

# Analyse a trace file or folder
monk run agent_traces.jsonl
monk run ./traces/

# Run specific detectors
monk run traces/ --detectors retry_loop,error_cascade,token_bloat

# Export findings as JSON for CI
monk run traces/ --json findings.json

# Only surface high-severity findings
monk run traces/ --min-severity high

# Download real benchmark datasets and analyze them
monk demo

# Generate synthetic traces with configurable failure patterns
monk simulate                                  # all patterns
monk simulate --pattern retry_loop,agent_loop  # specific patterns
monk simulate --sessions 10 --run              # generate + analyze immediately

# Start the live dashboard
monk serve ./traces/ --port 9090

CI integration — monk exits 1 if high-severity findings exist:

- name: monk trace audit
  run: monk run ./traces/ --min-severity high
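
If you prefer to post-process the JSON export from monk run traces/ --json findings.json yourself, here is a minimal sketch. Note that the field names ("severity", "detector", "summary") are assumptions about the export schema; check your version's actual output:

import json
import sys

with open("findings.json") as fh:
    findings = json.load(fh)

# Field names below are assumed; verify against your monk version's output
high = [item for item in findings if item.get("severity") == "high"]
for item in high:
    print(f"[{item.get('detector')}] {item.get('summary')}")
sys.exit(1 if high else 0)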

Real-time instrumentation — catch issues as they happen, not after:

import monk
monk.instrument()  # patches openai + anthropic automatically

# monk prints findings live as your agent runs
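
A minimal instrumented session might look like the following. The OpenAI client usage is standard SDK code; exactly what monk prints, and when, depends on the installed version:

import monk
from openai import OpenAI

monk.instrument()  # patch before creating clients or making calls

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize today's tickets."}],
)
# monk prints findings live as calls like this one stream through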

Live dashboard

monk serve ./traces/ --port 9090

Opens a web dashboard at http://localhost:9090 with:

  • KPI cards: waste/day, projected/month, total findings, calls analyzed
  • Severity breakdown with color-coded cards (high / medium / low)
  • Waste ranked by detector with gradient bars
  • Recent findings feed with fix suggestions
  • Dataset downloader (tau-bench, Finance, TRAIL, GAIA, MemGPT, Nemotron)
  • Prometheus metrics at /metrics for Grafana integration
  • Auto-refreshes every 15 seconds
  • "โšก Load sample" button โ€” populates demo data in one click

Simulate workflows

monk simulate --pattern retry_loop,empty_return --sessions 5 --run

Generate synthetic trace data with specific failure patterns. Useful for:

  • Testing your detectors before you have real production traces
  • Reproducing a specific failure mode in isolation to verify a fix
  • Demoing cost leaks to stakeholders with realistic numbers
  • Validating that a code change actually eliminated a pattern

Available patterns: retry_loop, empty_return, agent_loop, context_bloat, model_overkill, healthy, supervisor, swarm

The supervisor preset models a LangGraph Supervisor that uses expensive gpt-4o to route work to a gpt-4o-mini specialist — surfacing model_overkill. The swarm preset models peer agents bouncing transfer_to_* without resolving — surfacing handoff_loop.

The healthy pattern generates clean sessions with no failures — useful for verifying that monk produces zero findings on well-behaved agents.


Trace format

monk auto-detects OpenAI, Anthropic, LangGraph, LangSmith, and OpenTelemetry formats. For custom logs, any JSONL with these fields works:

{"session_id": "abc123", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 80, "tool_name": "web_search", "tool_result": "..."}

LangGraph — save your app.invoke() response directly to JSONL. monk extracts one TraceCall per AIMessage with usage_metadata, preserving agent names and transfer_to_* handoff calls:

import json
result = app.invoke({"messages": [...]})
with open("traces/session.jsonl", "a") as f:
    f.write(json.dumps(result) + "\n")

For full span-level analysis, export OpenTelemetry traces — monk parses both OTLP proto-JSON and flat JSONL span formats.
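
If you instrument with the OpenTelemetry Python SDK and want flat JSONL files on disk, one option is a small custom exporter. A sketch, assuming opentelemetry-sdk is installed: the SpanExporter interface is the SDK's own, but whether monk's flat-JSONL parser accepts this exact per-span rendering is an assumption worth verifying on a sample file.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
    SpanExporter,
    SpanExportResult,
)

class JsonlSpanExporter(SpanExporter):
    """Append each finished span to a file as one JSON line."""

    def __init__(self, path):
        self.path = path

    def export(self, spans):
        with open(self.path, "a") as fh:
            for span in spans:
                fh.write(span.to_json(indent=None) + "\n")
        return SpanExportResult.SUCCESS

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(JsonlSpanExporter("traces/spans.jsonl")))
trace.set_tracer_provider(provider)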


Why we built this

Most observability tools show you what happened. monk finds what's costing you.

The patterns here — retry loops, silent tool failures, token spikes, agents re-fetching the same data — don't show up as errors. They don't trigger alerts. They just quietly multiply your inference bill.

87% of the GAIA and SWE-bench agent runs we analyzed had at least one unhandled tool error that caused downstream LLM calls to be wasted. The worst token spike: 583,787 tokens from a single unfiltered web page, 26× the session median. These are solvable problems. monk finds them.


Datasets

All benchmark fixtures are public:


Roadmap

  • 15 deterministic detectors (trace + span level)
  • Live dashboard with dataset downloader
  • Real-time instrumentation (monk.instrument())
  • monk simulate โ€” synthetic workflow sandbox
  • Prometheus metrics + Grafana-ready /metrics
  • Prompt compression suggestions
  • Slack / PagerDuty alerts
  • Confidence scores per finding

Contributing

To add a detector: create monk/detectors/your_detector.py extending BaseDetector, register it in monk/detectors/__init__.py, and add tests. Detectors must be deterministic — same traces → same findings.
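
For orientation, a skeleton might look like this. BaseDetector is real per the paragraph above, but the import path, method name, and return shape are assumptions; mirror an existing detector in monk/detectors/ for the actual interface:

from monk.detectors import BaseDetector  # import path assumed

class YourDetector(BaseDetector):
    name = "your_detector"

    def run(self, calls):
        """A pure function of the traces: same calls in, same findings out."""
        findings = []
        for call in calls:
            # deterministic rule here: no LLM calls, no network, no randomness
            pass
        return findings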

See the full guide: CONTRIBUTING.md


License

MIT — github.com/Blueconomy/monk


Built by Blueconomy AI — Techstars '25
If monk saves you money, a ⭐ helps others find it.
