rate-sync

Python 3.12+ · MIT License

Distributed rate limiting for Python with Redis, PostgreSQL, or in-memory backends.


What It Solves

Brute force and credential stuffing on your auth endpoints

Attackers try thousands of password combinations. A sliding window limiter on your login endpoint stops them — and composite limiting blocks both single-IP attacks and distributed botnets hitting the same account.

[limiters.auth_credential]
store = "redis"
algorithm = "sliding_window"
limit = 5                # 5 attempts
window_seconds = 300     # per 5 minutes
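
The composite pattern mentioned above pairs a per-IP limiter with a per-account limiter so neither attack shape slips through. A sketch of what those two limiters could look like (the names and thresholds here are illustrative, not prescribed by rate-sync):

```toml
# Hypothetical composite setup: both limiters are checked on each login attempt.
[limiters.auth_ip]            # catches single-IP brute force
store = "redis"
algorithm = "sliding_window"
limit = 20                    # 20 attempts per source IP
window_seconds = 300

[limiters.auth_account]       # catches distributed attacks on one account
store = "redis"
algorithm = "sliding_window"
limit = 5                     # 5 attempts per account, across all IPs
window_seconds = 300
```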

Deep dive: Authentication Protection · Abuse Prevention

Different API quotas for free, pro, and enterprise customers

Your free tier gets 100 requests/hour. Pro gets 1,000. Enterprise gets 10,000. Define each tier as a limiter and clone it per user — rate-sync handles the rest.

limiter = await get_or_clone_limiter(f"api_{tier}", api_key)
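
The tier limiters cloned above would each be defined once in configuration; the exact quota values below are illustrative:

```toml
[limiters.api_free]
store = "redis"
algorithm = "sliding_window"
limit = 100              # 100 requests
window_seconds = 3600    # per hour

[limiters.api_pro]
store = "redis"
algorithm = "sliding_window"
limit = 1000
window_seconds = 3600

[limiters.api_enterprise]
store = "redis"
algorithm = "sliding_window"
limit = 10000
window_seconds = 3600
```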

Deep dive: API Tiering

One tenant consuming all your platform's resources

In multi-tenant systems, a single noisy tenant can starve everyone else. Per-tenant rate limiting enforces fair usage — each tenant gets their share, and no one can monopolize your infrastructure.

# Each tenant gets their own enforced limit
limiter = await get_or_clone_limiter("platform_api", f"tenant:{tenant_id}")
async with limiter.acquire_context():
    await handle_request(tenant_id)  # your request handler

Deep dive: Multi-Tenant Fairness

Webhook floods taking down your customers' endpoints

Your platform sends webhooks to customer URLs. A bulk import triggers 10,000 events, and suddenly you're DDoS-ing your own customers. Per-endpoint rate limiting with automatic retries keeps delivery smooth without overwhelming anyone.

limiter = await get_or_clone_limiter("webhook_endpoint", endpoint_url)
async with limiter.acquire_context(timeout=30.0):
    await http_client.post(endpoint_url, json=payload)

Deep dive: Webhook Delivery

File uploads and heavy operations eating all your resources

Five concurrent 1GB uploads can exhaust server memory. PDF generation can peg every CPU core. Concurrency limiting caps how many heavy operations run simultaneously — completely separate from request rate.

[limiters.upload]
store = "redis"
max_concurrent = 10          # max 10 uploads at once
timeout = 300.0
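
Within a single process, concurrency limiting behaves like an asyncio.Semaphore: at most max_concurrent tasks hold a slot at any moment (rate-sync extends the same guarantee across processes through the shared store). A minimal local sketch of that behavior:

```python
import asyncio

MAX_CONCURRENT = 3  # analogous to max_concurrent in the config above

async def upload(sem: asyncio.Semaphore, active: list[int], peaks: list[int]) -> None:
    async with sem:                  # wait for a free slot
        active[0] += 1
        peaks.append(active[0])      # record how many run together
        await asyncio.sleep(0.01)    # simulate the heavy work
        active[0] -= 1

async def main() -> int:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    active, peaks = [0], []
    # Launch ten "uploads" at once; the semaphore admits three at a time.
    await asyncio.gather(*(upload(sem, active, peaks) for _ in range(10)))
    return max(peaks)

peak = asyncio.run(main())
print(peak)  # never exceeds MAX_CONCURRENT
```

Ten simulated uploads start together, but the recorded peak of concurrent workers stays at three.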

Deep dive: File Uploads & Heavy Resources

Background workers overwhelming third-party APIs

You have 20 Celery workers calling the Stripe API, which allows 100 req/s. Without coordination, your workers exceed the limit and get throttled. rate-sync coordinates across all workers through a shared Redis backend.

[limiters.stripe_api]
store = "redis"
rate_per_second = 90.0   # stay under Stripe's 100/s limit
max_concurrent = 10      # max 10 in-flight calls

Deep dive: Background Jobs

Rate limits that actually work across multiple servers

In-memory counters reset when a process restarts and can't coordinate across instances. rate-sync uses Redis or PostgreSQL as a shared backend, so limits are enforced consistently across your entire fleet.

[stores.redis]
engine = "redis"
url = "${REDIS_URL}"

[limiters.api]
store = "redis"          # all instances share this
rate_per_second = 100.0

Deep dive: Production Deployment

Knowing what's being blocked and why

Rate limiting without observability is flying blind. rate-sync exposes built-in metrics (acquisitions, wait times, timeouts) and integrates with Prometheus for dashboards and alerting.

state = await limiter.get_state()
# → LimiterState(allowed=True, remaining=42, reset_at=1706367600)

Deep dive: Observability · Monitoring Patterns


Features

  • Declarative configuration - Define limits in TOML, use anywhere
  • Multiple backends - Redis (recommended), PostgreSQL, or memory
  • Dual limiting - Rate limiting (req/sec) + concurrency limiting (max parallel)
  • Two algorithms - Token bucket for throughput, sliding window for quotas
  • FastAPI integration - Dependencies, middleware, exception handlers
  • Async-first - Built on asyncio with full type hints

Installation

pip install rate-sync             # Memory backend only
pip install rate-sync[redis]      # + Redis support
pip install rate-sync[postgres]   # + PostgreSQL support
pip install rate-sync[fastapi]    # + FastAPI integration
pip install rate-sync[all]        # All backends + integrations

Quick Start

1. Create rate-sync.toml:

[stores.main]
engine = "memory"  # or "redis", "postgres"

[limiters.api]
store = "main"
rate_per_second = 10.0

2. Use it:

from ratesync import acquire

await acquire("api")  # Blocks until rate limit allows

That's it. Configuration auto-loads on import.

Usage Patterns

Context Manager (recommended for concurrency limits)

async with acquire("api"):
    response = await client.get(url)

Decorator

from ratesync import rate_limited

@rate_limited("api")
async def fetch_data():
    return await client.get(url)

Per-User/Tenant Limits (Recommended)

Use template strings — placeholders are resolved at call time:

@rate_limited("api:{user_id}")
async def fetch_user_data(user_id: str):
    return await client.get(url)

@rate_limited("api:{tenant_id}:{user_id}")
async def multi_tenant_api(tenant_id: str, user_id: str):
    return await client.post(url)

Per-User Limits (Manual)

For advanced cases where you need direct limiter control:

from ratesync import get_or_clone_limiter

limiter = await get_or_clone_limiter("api", user_id)
async with limiter.acquire_context():
    response = await client.get(url)

Backends

Memory (development)

[stores.local]
engine = "memory"

Redis (production)

[stores.redis]
engine = "redis"
url = "redis://localhost:6379/0"

PostgreSQL

[stores.db]
engine = "postgres"
url = "postgresql://user:pass@localhost/mydb"

Algorithms

Token Bucket (default)

Controls request throughput with optional concurrency limits:

[limiters.external_api]
store = "redis"
rate_per_second = 100.0  # Max 100 req/sec
max_concurrent = 10      # Max 10 in-flight requests
timeout = 30.0           # Wait up to 30s for a slot
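
To make the algorithm concrete (this is an illustration, not rate-sync's implementation), a token bucket refills continuously at rate_per_second and spends one token per request:

```python
class TokenBucket:
    """Minimal single-process token bucket, for illustration only."""

    def __init__(self, rate_per_second: float, capacity: float) -> None:
        self.rate = rate_per_second
        self.capacity = capacity
        self.tokens = capacity       # bucket starts full
        self.updated_at = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.updated_at
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.updated_at = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_second=2.0, capacity=2.0)
print(bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0))  # → True True False
print(bucket.allow(0.5))  # one token refilled after 0.5s at 2 tokens/s → True
```

Bursts up to `capacity` pass immediately; sustained traffic is paced to the refill rate.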

Sliding Window

Counts requests in a time window. Ideal for login protection:

[limiters.login]
store = "redis"
algorithm = "sliding_window"
limit = 5              # Max 5 attempts
window_seconds = 300   # Per 5 minutes
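
The sliding window itself is easy to picture (illustrative sketch only; rate-sync keeps these counts in the configured backend): remember recent request timestamps, evict any older than the window, and admit a request only while the count is under the limit:

```python
from collections import deque

class SlidingWindowLimiter:
    """Minimal single-process sliding window log, for illustration only."""

    def __init__(self, limit: int, window_seconds: float) -> None:
        self.limit = limit
        self.window = window_seconds
        self.hits: deque[float] = deque()

    def allow(self, now: float) -> bool:
        # Evict timestamps that have left the window.
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False

login = SlidingWindowLimiter(limit=5, window_seconds=300)
print([login.allow(t) for t in range(5)])   # first 5 attempts pass
print(login.allow(10))                      # 6th within the window → False
print(login.allow(301))                     # oldest attempt has expired → True
```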

FastAPI Integration

Requires: pip install rate-sync[fastapi]

from fastapi import Depends, FastAPI
from ratesync.contrib.fastapi import (
    RateLimitDependency,
    RateLimitExceededError,
    rate_limit_exception_handler,
)

app = FastAPI()
app.add_exception_handler(RateLimitExceededError, rate_limit_exception_handler)

@app.get("/api/data")
async def get_data(_: None = Depends(RateLimitDependency("api"))):
    return {"status": "ok"}

Programmatic Configuration

Skip the TOML file if you prefer code:

from ratesync import configure_store, configure_limiter, acquire

configure_store("main", strategy="redis", url="redis://localhost:6379/0")
configure_limiter("api", store_id="main", rate_per_second=100.0)

await acquire("api")

Documentation

  • Configuration Reference: docs/configuration.md
  • API Reference: docs/api-reference.md
  • FastAPI Integration: docs/fastapi-integration.md
  • Redis Setup: docs/setup/redis-setup.md
  • PostgreSQL Setup: docs/setup/postgres-setup.md
  • Observability: docs/observability.md

Contributing

git clone https://github.com/rate-sync/python.git
cd python
poetry install
poetry run pytest

See CONTRIBUTING.md for guidelines.

License

MIT
