CLI tool to monitor API endpoint latency and detect degradation

ALM — API Latency Monitor

A small CLI tool for keeping an eye on HTTP endpoints. It polls them on an interval, stores the results in a local SQLite database, and prints a summary table whenever you want one. Nothing fancy: no servers, no dashboards, no accounts.

Motivation

I got tired of finding out an API was slow from a user complaint. Our internal dashboards tracked uptime but latency was a blind spot. Things would be technically "up" while responding in 2-3 seconds instead of 200ms, and nobody would notice until customers started complaining. I wanted something I could point at any endpoint, leave running in a terminal, and get a warning before it became an incident. This is that tool.

Install

pip install api-latency-monitor

Or clone and install locally for development:

git clone https://github.com/sukhleenk/API-Latency-Monitor
cd API-Latency-Monitor
pip install -e ".[dev]"

Setup

Copy the example config and edit it, or use alm add to build it interactively:

cp config.example.yaml config.yaml

endpoints:
  - name: "Weather SLC"
    url: "https://api.open-meteo.com/v1/forecast?latitude=40.7608&longitude=-111.8910&current_weather=true"
    method: GET
    threshold_ms: 500

  - name: "Auth Service"
    url: "https://auth.example.com/ping"
    method: GET
    headers:
      Authorization: "Bearer your-token"
    threshold_ms: 200

threshold_ms is the latency limit you care about — anything over it gets counted as a breach. Defaults to 500ms if you leave it out.

Usage

Start monitoring:

alm monitor                      # poll every 60 seconds (default)
alm monitor --interval 30        # poll every 30 seconds
alm monitor --config ./prod.yaml # use a different config

The terminal output is color-coded: [OK] is green, [WARN] is yellow (response time spiked more than 1.5x the rolling average), [FAIL] is red. Press Ctrl+C to stop.
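The [WARN] rule boils down to comparing the latest reading against a rolling average. A minimal sketch of that logic (names are illustrative, not ALM's code):

```python
def is_spike(latest_ms: float, recent_ms: list[float], factor: float = 1.5) -> bool:
    """Flag a reading more than `factor` times the rolling average of recent checks."""
    if not recent_ms:
        return False  # no history yet, nothing to compare against
    rolling_avg = sum(recent_ms) / len(recent_ms)
    return latest_ms > factor * rolling_avg
```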

View a report:

alm report
alm report --endpoint "Weather SLC"   # one endpoint only
alm report --failures-only            # only endpoints with breaches
alm report --since 24                 # last 24 hours
alm report --export out.csv           # export to CSV

Example output:

                    API Latency Report
╭─────────────────┬────────┬──────────┬─────────┬───────┬───────┬──────────┬──────────╮
│ Endpoint        │ Checks │ Success% │ Avg(ms) │   Min │   Max │ Breaches │   Status │
├─────────────────┼────────┼──────────┼─────────┼───────┼───────┼──────────┼──────────┤
│ Weather NYC     │     42 │   100.0% │   187.3 │ 134.1 │ 312.5 │        0 │  HEALTHY │
│ Weather SLC     │     42 │    97.6% │   431.8 │ 201.4 │ 891.2 │        7 │ DEGRADED │
╰─────────────────┴────────┴──────────┴─────────┴───────┴───────┴──────────┴──────────╯

Status is HEALTHY (green) if success rate ≥ 80% and no threshold breaches, DEGRADED (yellow) if there have been breaches, and DOWN (red) if success rate drops below 80%.
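Those rules map to a short decision function. A sketch mirroring the thresholds above (hypothetical names, not ALM's internals):

```python
def classify(success_rate: float, breaches: int) -> str:
    """Map report stats to a status using the documented rules."""
    if success_rate < 80.0:
        return "DOWN"       # success rate below 80%
    if breaches > 0:
        return "DEGRADED"   # up, but some checks exceeded threshold_ms
    return "HEALTHY"
```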

Add an endpoint interactively:

alm add

Clear history:

alm clear

Telegram Alert Integration

ALM can send you Telegram messages when an endpoint degrades or fails, and again when it recovers.

Setup:

  1. Open Telegram and search for @api_latency_bot — send it any message to start a conversation
  2. Get your chat ID by messaging @userinfobot, which will reply with your user ID
  3. Add a notifications block to your config.yaml (it's gitignored, so credentials stay local):
notifications:
  telegram:
    token: "8776424559:AAH5o0iMb-yLqGUnftO9EKpSq6xCB0VpNDk"
    chat_id: "your-chat-id-here"

Or use environment variables instead:

export ALM_TELEGRAM_TOKEN="8776424559:AAH5o0iMb-yLqGUnftO9EKpSq6xCB0VpNDk"
export ALM_TELEGRAM_CHAT_ID="your-chat-id-here"

Each user gets their own alerts — the bot routes messages by chat ID, so you only receive notifications for your own monitored endpoints.
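Under the hood, sending a Telegram alert is one HTTP call to the Bot API's sendMessage method. A minimal stdlib-only sketch of that call (not ALM's actual code):

```python
import json
import urllib.request

def build_alert_request(token: str, chat_id: str, text: str):
    """Build the URL and JSON payload for the Bot API's sendMessage method."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    return url, {"chat_id": chat_id, "text": text}

def send_alert(token: str, chat_id: str, text: str) -> None:
    url, payload = build_alert_request(token, chat_id, text)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)  # raises urllib.error.HTTPError on failure
```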

Alert behavior:

Message             When
🚨 Alert            First degraded or failed poll
⚠️ Still degraded   Every 5 consecutive degraded polls after that
✅ Recovery         First successful poll after an alert

How it works

  • Retries a failed request up to 3 attempts total, waiting 1s and then 2s between tries (exponential backoff), before marking the check as failed
  • Degradation detection compares the latest reading against the rolling average of the last 10 successful checks — if it's more than 50% above average, it prints a warning
  • All data lives in alm_data.db (SQLite) in the current directory
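The retry loop above can be sketched in a few lines. Assumptions: `check` is any callable that raises on failure; the names are illustrative, not ALM's internals:

```python
import time

def run_with_retries(check, attempts: int = 3, base_delay: float = 1.0):
    """Call `check()` up to `attempts` times, sleeping 1s then 2s between tries."""
    for attempt in range(attempts):
        try:
            return check()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: the check is marked failed
            time.sleep(base_delay * (2 ** attempt))  # 1s, then 2s
```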

Tests

pytest tests/ -v

No network access is needed: storage tests use an in-memory SQLite database, and monitor tests mock HTTP requests.
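The pattern looks roughly like this. Everything here is a hypothetical illustration: the schema and function names are not ALM's real ones, only the in-memory-database-plus-mock shape is:

```python
import sqlite3
from unittest.mock import MagicMock

def record_check(conn, endpoint, latency_ms):
    # Parameterized insert; schema is invented for this example.
    conn.execute("INSERT INTO checks (endpoint, latency_ms) VALUES (?, ?)",
                 (endpoint, latency_ms))

def test_storage_with_mocked_request():
    conn = sqlite3.connect(":memory:")  # no file on disk, no network
    conn.execute("CREATE TABLE checks (endpoint TEXT, latency_ms REAL)")
    fake_response = MagicMock(status_code=200)  # stands in for a real HTTP response
    assert fake_response.status_code == 200
    record_check(conn, "Weather SLC", 187.3)
    row = conn.execute("SELECT endpoint, latency_ms FROM checks").fetchone()
    assert row == ("Weather SLC", 187.3)
```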

Contributing

I believe every piece of work is better with collaboration, and that goes double for code. Feel free to suggest new features, or open a pull request to contribute to the project!
