# Prometheus Distributed Client

A Prometheus metrics client with persistent storage backends (Redis or SQLite) for short-lived processes and distributed systems.
## Why Use This?

The official `prometheus_client` stores metrics in memory, which doesn't work well for:

- Short-lived processes that exit before Prometheus can scrape them, or that cannot expose a `/metrics` endpoint
- Multiprocess applications (web servers with multiple workers, parallel jobs, task queues)
- Distributed systems where multiple instances need to share metrics
- Serverless functions that need metrics to persist across invocations
This library solves these problems by storing metrics in Redis or SQLite, so that:

- ✅ Short-lived processes can write metrics without running an HTTP server
- ✅ Multiprocess applications can aggregate metrics efficiently in one place
- ✅ Metrics persist across process boundaries and restarts
- ✅ Multiple processes can update the same metrics atomically
- ✅ A separate HTTP server can serve metrics from storage
- ✅ Automatic TTL/expiration prevents stale data
- ✅ Output is fully compatible with the Prometheus exposition format
## Installation

```shell
pip install prometheus-distributed-client
```

For the Redis backend:

```shell
pip install prometheus-distributed-client redis
```

The SQLite backend needs no extra dependency (`sqlite3` is included in the Python standard library).
## Quick Start

### Redis Backend

```python
from redis import Redis
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_distributed_client import setup
from prometheus_distributed_client.redis import Counter, Gauge, Histogram

# Set up the Redis backend
setup(
    redis=Redis(host='localhost', port=6379),
    redis_prefix='myapp',
    redis_expire=3600,  # TTL in seconds
)

# Create a registry and metrics
REGISTRY = CollectorRegistry()
requests = Counter(
    'http_requests_total',
    'Total HTTP requests',
    ['method', 'endpoint'],
    registry=REGISTRY,
)

# Use metrics in your application
requests.labels(method='GET', endpoint='/api/users').inc()

# Serve metrics (from a separate process or the same one)
print(generate_latest(REGISTRY).decode('utf8'))
```
### SQLite Backend

```python
import sqlite3
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_distributed_client import setup
from prometheus_distributed_client.sqlite import Counter, Gauge, Histogram

# Set up the SQLite backend (no TTL or prefix needed)
setup(sqlite='metrics.db')  # or setup(sqlite=sqlite3.connect(':memory:'))

# Use exactly like the Redis backend
REGISTRY = CollectorRegistry()
requests = Counter('http_requests_total', 'Total requests', registry=REGISTRY)
requests.inc()
```
## Key Difference: TTL Behavior

- **Redis**: uses a TTL so stale metrics don't accumulate in the shared central database
- **SQLite**: no TTL needed; file-based storage is cleaned up when the file is deleted (e.g., on container restart)
## Supported Metric Types

All standard Prometheus metric types are supported:

- **Counter**: monotonically increasing values
- **Gauge**: values that can go up or down
- **Summary**: observations with a count and a sum
- **Histogram**: observations in configurable buckets
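The bucketing behind a Histogram can be sketched with the standard library alone. This is an illustration of the semantics, not the library's code; the bucket bounds and sample values below are made up:

```python
import bisect
import math

# Illustrative bucket upper bounds; Prometheus always adds a +Inf bucket
buckets = [0.1, 0.5, 1.0, math.inf]
counts = [0] * len(buckets)
total, count = 0.0, 0

for value in (0.05, 0.3, 0.7, 2.0):
    # Each observation lands in the first bucket whose bound is >= value
    counts[bisect.bisect_left(buckets, value)] += 1
    total += value
    count += 1

# Prometheus exposes buckets cumulatively: bucket{le=x} counts all obs <= x
cumulative = []
running = 0
for c in counts:
    running += c
    cumulative.append(running)
# cumulative == [1, 2, 3, 4], count == 4, total == 3.05
```

A Summary, by contrast, keeps only `count` and `total` without the per-bucket breakdown.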
## Architecture

### Storage Format

Both backends use a unified storage format that prevents component desynchronization.

Redis:

```
Key: prometheus_myapp_requests
Fields:
    _total:{"method":"GET","endpoint":"/api"}   → 42
    _created:{"method":"GET","endpoint":"/api"} → 1234567890.0
```

SQLite:

```
metric_key | subkey                                      | value
-----------+---------------------------------------------+--------------
requests   | _total:{"method":"GET","endpoint":"/api"}   | 42
requests   | _created:{"method":"GET","endpoint":"/api"} | 1234567890.0
```

This design ensures:

- All metric components share the same key/row
- TTL applies atomically to the entire metric (Redis only)
- No desynchronization between `_total`, `_created`, etc.
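As a rough illustration of that subkey layout, the `<suffix>:<json labels>` string can be split back into a metric component and its label set with the standard library (the `parse_subkey` helper here is illustrative, not part of the library's API):

```python
import json

def parse_subkey(subkey: str):
    """Split a storage subkey into (component suffix, label dict)."""
    # Partition on the first ':' so colons inside the JSON are untouched
    suffix, _, labels_json = subkey.partition(':')
    return suffix, json.loads(labels_json)

suffix, labels = parse_subkey('_total:{"method":"GET","endpoint":"/api"}')
# suffix == '_total', labels == {'method': 'GET', 'endpoint': '/api'}
```

Because every component of a metric ends up under the same Redis key (or SQLite `metric_key`), expiring or deleting that one key removes the whole metric at once.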
## Comparison with Alternatives

### vs Pushgateway

The Prometheus Pushgateway is another solution for short-lived processes, but it has limitations:

| Feature | prometheus-distributed-client | Pushgateway |
|---|---|---|
| Multiple processes updating the same metric | ✅ Atomic updates | ❌ Last write wins |
| Automatic metric expiration | ✅ Configurable TTL | ❌ Manual deletion |
| Label-based updates | ✅ Supports all labels | ⚠️ Can overwrite groups |
| Storage backend | Redis or SQLite | In-memory |
| Deployment complexity | Library (no extra service) | Requires a separate service |
### vs Multiprocess Mode

For multiprocess applications (e.g., Gunicorn with multiple workers), prometheus-distributed-client provides:

- ✅ **Centralized storage**: all metrics in one place (Redis or SQLite)
- ✅ **Simple collection**: a single `/metrics` endpoint to scrape
- ✅ **Atomic updates**: race-free increments across processes
- ✅ **Easy cleanup**: automatic TTL-based expiration
- ✅ **Better observability**: query metrics directly from storage for debugging
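The atomic-update idea can be sketched with the standard library's `sqlite3`: a single UPSERT statement bumps the stored value in place, so concurrent writers never lose increments to a read-modify-write race. The table and column names mirror the storage-format sketch above but are illustrative, not the library's actual schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE metrics (
    metric_key TEXT, subkey TEXT, value REAL,
    PRIMARY KEY (metric_key, subkey))""")

def inc(metric_key, subkey, amount=1.0):
    # One statement inserts the row or adds to it; no separate
    # SELECT-then-UPDATE window for another process to race into
    conn.execute(
        """INSERT INTO metrics VALUES (?, ?, ?)
           ON CONFLICT(metric_key, subkey)
           DO UPDATE SET value = value + excluded.value""",
        (metric_key, subkey, amount))
    conn.commit()

for _ in range(3):
    inc('requests', '_total:{"method":"GET"}')

row = conn.execute("SELECT value FROM metrics").fetchone()
# row[0] == 3.0
```

The Redis backend gets the same property from Redis's single-threaded command execution (e.g., hash-increment commands are atomic per key).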
## Advanced Usage

### Custom TTL (Redis Only)

```python
# Short TTL for transient metrics
setup(redis=redis, redis_expire=60)  # 1 minute

# Long TTL for important metrics
setup(redis=redis, redis_expire=86400)  # 24 hours

# Note: the SQLite backend doesn't use a TTL
```
### Multiple Applications Sharing a Backend

```python
# Application 1
setup(redis=redis, redis_prefix='app1')

# Application 2
setup(redis=redis, redis_prefix='app2')

# Metrics are isolated by prefix
```
### Flask Integration

```python
from flask import Flask
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest

app = Flask(__name__)

# REGISTRY is the CollectorRegistry created in the Quick Start
@app.route('/metrics')
def metrics():
    return generate_latest(REGISTRY), 200, {'Content-Type': CONTENT_TYPE_LATEST}
```
### Manual Cleanup (SQLite)

The SQLite backend doesn't use a TTL. To clean up metrics manually:

```python
import os
import sqlite3

# Clear all metrics from the table
conn = sqlite3.connect('metrics.db')
conn.execute("DELETE FROM metrics")
conn.commit()
conn.close()

# Or delete the database file entirely
os.remove('metrics.db')
```
## Testing

The library includes comprehensive test suites for both backends:

```shell
# Install dependencies
poetry install

# Run all tests
make test

# Run a specific backend's tests
poetry run pytest tests/redis_test.py -v
poetry run pytest tests/sqlite_test.py -v
```

For the Redis tests, create a `.redis.json`:

```json
{
    "host": "localhost",
    "port": 6379,
    "db": 0
}
```
## Performance Considerations

### Redis

- **Pros**: atomic operations, high performance, distributed, shared across applications
- **Cons**: requires a Redis server, network latency, needs a TTL to prevent pollution
- **Best for**: distributed systems, high concurrency, shared metrics collection
- **TTL**: required (default 3600 s) to keep stale metrics out of the shared database

### SQLite

- **Pros**: no external dependencies, simple deployment, no TTL complexity
- **Cons**: file locking, lower concurrent performance, not shared across hosts
- **Best for**: single-server applications, embedded systems, Docker containers
- **TTL**: not needed; cleanup happens on container restart or file deletion
## Development

```shell
# Install dependencies
poetry install

# Run linters
make lint

# Build the package
make build

# Publish to PyPI
make publish
```
## License

GPLv3 - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a pull request.

## Credits

Built by François Schmidts and contributors.

Based on the official `prometheus_client`.