
apimon

A CLI tool for API monitoring, analytics, and AI-powered improvement insights. Designed for both humans and AI agents.

Features

  • Proxy-based Monitoring: Sits between clients and your API server to capture all traffic
  • Rich Analytics: Hit counts, response times, error rates, percentiles (p50/p90/p95/p99)
  • Pattern Detection: Automatically identifies slow routes, chatty APIs, caching opportunities, and security anomalies
  • LLM-Powered Insights: Generate AI-driven analysis using OpenAI, Google Gemini, or Anthropic Claude
  • Agent-Friendly: Full --json support and --ai mode for non-interactive automation
  • Interactive TUI: Textual-based terminal UI for live monitoring

Installation

pip install apimon

For LLM integration (optional):

pip install apimon[llm]

Quick Start

1. Start the proxy server

apimon proxy --target-port 3000

2. Make requests through the proxy

curl http://localhost:8080/api/users

3. View analytics

# Human-friendly dashboard
apimon dashboard

# Or interactive TUI
apimon ui

4. Generate AI insights

export OPENAI_API_KEY="sk-..."
apimon insights --provider openai
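To try the quick start end-to-end, something has to be listening on the target port. A minimal stand-in API server, standard library only (the handler and route payload here are illustrative, not part of apimon):

```python
# Minimal stub API server so the proxy has something to forward to
# on the default --target-port of 3000. Illustrative only.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StubAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the requested path back in a small JSON body.
        body = json.dumps({"path": self.path, "users": ["alice", "bob"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port: int = 3000) -> None:
    ThreadingHTTPServer(("localhost", port), StubAPIHandler).serve_forever()
```

Run serve() in one terminal, apimon proxy --target-port 3000 in another, then curl http://localhost:8080/api/users to generate traffic.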

Commands Reference

Command               Description
apimon proxy          Start the proxy server
apimon ui             Interactive TUI dashboard
apimon ui --ai        Machine-readable JSON snapshot
apimon dashboard      Terminal analytics dashboard
apimon stats          Route statistics
apimon requests       Recent requests
apimon request <id>   Detailed view of a request
apimon suggestions    Rule-based improvement suggestions
apimon insights       LLM-powered AI analysis
apimon graph          ASCII graphs of activity
apimon export <file>  Export data to JSON file
apimon clear          Clear all stored data

AI Agent Integration

apimon is designed to be used by AI agents. Every command supports --json output, and apimon ui --ai provides a complete snapshot for automated analysis.

Non-Interactive Mode (--ai)

# Get full analytics snapshot as JSON (no TUI)
apimon ui --ai

# Include LLM analysis
apimon ui --ai --provider openai

# Pipe to jq for specific fields
apimon ui --ai | jq .analytics_summary
apimon ui --ai | jq .cache_candidates
apimon ui --ai | jq .unique_error_messages

JSON Output Fields

The --ai mode returns a comprehensive JSON object:

{
  "apimon_version": "0.1.0",
  "db_path": "apimon.db",
  "hours_analyzed": 24,
  "analytics_summary": {
    "total_requests": 8158,
    "error_rate": 48.1,
    "avg_response_time_ms": 26.15,
    "unique_routes": 7
  },
  "response_time_percentiles": {
    "p50": 1.16,
    "p90": 96.52,
    "p95": 123.33,
    "p99": 146.41
  },
  "route_stats": [...],
  "top_routes_by_traffic": [...],
  "route_percentiles": [...],
  "status_code_distribution": [...],
  "method_distribution": [...],
  "error_summary": [...],
  "unique_error_messages": [...],
  "slowest_routes": [...],
  "cache_candidates": [...],
  "hourly_summary": [...],
  "error_rate_trend": [...],
  "suggestions": [...],
  "llm_prompt": "...",
  "llm_provider": "openai",
  "llm_insights": "...",
  "llm_error": null
}
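The percentile fields are order statistics over the recorded response times. apimon's exact interpolation method isn't documented here, but a nearest-rank sketch over a list of latencies looks like this:

```python
# Nearest-rank percentile over response times in ms.
# Sketch only: apimon's actual interpolation method may differ.
def percentile(samples: list[float], p: int) -> float:
    ordered = sorted(samples)
    # 1-based rank, rounded up (ceil via floor-division trick).
    rank = max(1, -(-len(ordered) * p // 100))
    return ordered[rank - 1]

latencies = [1.2, 0.9, 3.4, 96.5, 120.0, 2.1, 1.1, 150.3]
summary = {f"p{p}": percentile(latencies, p) for p in (50, 90, 95, 99)}
```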

Key Fields for Agents

Field                      Description
analytics_summary          Overall stats: total requests, error rate, avg response time
response_time_percentiles  Global p50, p90, p95, p99 latencies
top_routes_by_traffic      Routes ranked by hit count with traffic share %
route_percentiles          Per-route latency percentiles
unique_error_messages      Actual error response bodies grouped by route/status
cache_candidates           GET routes suitable for caching, with benefit scores
error_rate_trend           Hourly error rates to detect spikes
suggestions                Rule-based improvement suggestions
llm_prompt                 The exact prompt sent to the LLM (for debugging/reuse)
llm_insights               LLM response (if --provider was specified)
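An agent can turn error_rate_trend into an alert with a few lines. The entry shape used below ({"hour": ..., "error_rate": ...}) is an assumption for illustration; check the actual snapshot for the exact field names:

```python
# Flag hours whose error rate spikes well above the mean.
# Assumes error_rate_trend entries look like {"hour": ..., "error_rate": ...};
# the real field names in apimon's output may differ.
def find_spikes(trend: list[dict], factor: float = 2.0) -> list[dict]:
    rates = [e["error_rate"] for e in trend]
    if not rates:
        return []
    mean = sum(rates) / len(rates)
    return [e for e in trend if e["error_rate"] > factor * mean]

trend = [
    {"hour": "2024-01-01T10:00", "error_rate": 2.0},
    {"hour": "2024-01-01T11:00", "error_rate": 1.5},
    {"hour": "2024-01-01T12:00", "error_rate": 9.0},
]
spikes = find_spikes(trend)  # only the 12:00 hour exceeds 2x the mean
```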

JSON Mode for Individual Commands

# Route statistics
apimon stats --json
apimon stats --json | jq '.[] | select(.error_rate > 10)'

# Recent requests
apimon requests --json --limit 100
apimon requests --json --method POST | jq '.[] | select(.response_status >= 400)'

# Single request detail
apimon request 42 --json

# Suggestions
apimon suggestions --json
apimon suggestions --json | jq '.[] | select(.severity == "high")'

# LLM insights
apimon insights --json --provider openai
apimon insights --json --provider openai | jq -r .insights

# Graph data
apimon graph --json
apimon graph --json | jq .time_series
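The time_series data can also be rendered outside apimon. A tiny ASCII bar chart sketch, assuming the series reduces to (label, count) pairs (the actual field names in `apimon graph --json` may differ):

```python
# Tiny ASCII bar chart from hourly request counts.
# (label, count) pairs are an assumed simplification of time_series.
def ascii_bars(series: list[tuple[str, int]], width: int = 40) -> str:
    peak = max(count for _, count in series) or 1  # avoid division by zero
    lines = []
    for label, count in series:
        bar = "#" * round(count / peak * width)
        lines.append(f"{label:>5} | {bar} {count}")
    return "\n".join(lines)

print(ascii_bars([("10:00", 120), ("11:00", 480), ("12:00", 240)]))
```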

Example: Agent Workflow

# 1. Get full snapshot
DATA=$(apimon ui --ai --provider openai)

# 2. Check for critical issues
echo "$DATA" | jq '.suggestions[] | select(.severity == "high")'

# 3. Get caching recommendations
echo "$DATA" | jq '.cache_candidates'

# 4. Read LLM analysis
echo "$DATA" | jq -r '.llm_insights'

# 5. Get the prompt for custom LLM calls
echo "$DATA" | jq -r '.llm_prompt' > prompt.txt
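The same workflow translates directly to Python once the snapshot is saved (e.g. `apimon ui --ai > snapshot.json`). Field names follow the JSON shown earlier; the "severity" key on suggestions is assumed from the jq examples:

```python
# Python equivalent of the jq workflow: triage a saved snapshot.
# Field names follow the snapshot JSON above; "severity" is assumed
# from the jq examples, so verify against real output.
def triage(snapshot: dict) -> dict:
    return {
        "critical": [s for s in snapshot.get("suggestions", [])
                     if s.get("severity") == "high"],
        "cache_candidates": snapshot.get("cache_candidates", []),
        "llm_insights": snapshot.get("llm_insights"),
    }

# In practice: triage(json.load(open("snapshot.json")))
snapshot = {
    "suggestions": [{"severity": "high", "title": "Add caching"},
                    {"severity": "low", "title": "Minor cleanup"}],
    "cache_candidates": [],
    "llm_insights": None,
}
report = triage(snapshot)
```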

LLM Providers

Environment Variables

Provider   Environment Variable
OpenAI     OPENAI_API_KEY
Gemini     GEMINI_API_KEY
Anthropic  ANTHROPIC_API_KEY

Usage

# OpenAI (default)
export OPENAI_API_KEY="sk-..."
apimon insights --provider openai
apimon ui --ai --provider openai

# Google Gemini
export GEMINI_API_KEY="..."
apimon insights --provider gemini

# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-..."
apimon insights --provider anthropic

# Pass key directly (not recommended for scripts)
apimon insights --provider openai --api-key sk-...

LLM Prompt Contents

The LLM receives comprehensive data including:

  • Overall analytics summary
  • Response time percentiles (global and per-route)
  • Traffic distribution by route
  • Unique error messages with response bodies
  • Caching candidates with benefit scores
  • Error rate trends by hour
  • Full route statistics

The prompt asks for:

  1. Critical issues requiring immediate attention
  2. Performance bottlenecks analysis
  3. Caching strategy with TTL recommendations
  4. Error pattern analysis
  5. Architecture recommendations
  6. Prioritized action items

Interactive TUI

apimon ui

On startup, you'll be prompted to configure an LLM provider (optional). Press Skip to use the TUI without LLM features.

Keyboard Shortcuts

Key  Action
1    Routes tab
2    Requests tab
3    Analytics tab
r    Refresh data
d    Toggle dark mode
q    Quit

The Analytics tab includes a "Get LLM Insights" button that calls the configured LLM and displays results inline.


Proxy Options

apimon proxy --target-host localhost --target-port 3000 --port 8080
Option         Default    Description
--target-host  localhost  Your API server host
--target-port  3000       Your API server port
--port         8080       Proxy listen port
--db-path      apimon.db  SQLite database path

Data Storage

All data is stored in a local SQLite database (apimon.db by default).

Route Pattern Normalization

The proxy automatically normalizes parameterized routes:

  • /users/123 → /users/{id}
  • /posts/abc-def-123 → /posts/{id}
  • /api/v2/items → /api/{version}/items

This enables meaningful aggregation of statistics.
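The mapping above can be approximated with a segment-by-segment rewrite. This is a sketch of the idea, not apimon's actual implementation, and its heuristics (version prefix, digit-bearing segments) are assumptions:

```python
# Plausible route normalization pass; apimon's real rules may differ.
import re

def normalize_route(path: str) -> str:
    parts = []
    for seg in path.split("/"):
        if re.fullmatch(r"v\d+", seg):
            parts.append("{version}")  # /api/v2/... -> /api/{version}/...
        elif re.fullmatch(r"[\w-]*\d[\w-]*", seg):
            parts.append("{id}")       # numeric or digit-bearing id segments
        else:
            parts.append(seg)          # plain segments pass through
    return "/".join(parts)
```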

Clear Data

apimon clear --yes  # Non-interactive (for scripts/agents)
apimon clear        # Prompts for confirmation

License

MIT
