

smartratelimit


A Python library that automatically manages API rate limits, preventing 429 errors and optimizing API usage without requiring developers to manually track or implement rate limiting logic.

Features

  • 🚀 Automatic Detection: Automatically detects rate limits from HTTP response headers
  • 🔄 Zero Configuration: Works out of the box with most APIs
  • 💾 Persistent State: Supports in-memory, SQLite, and Redis storage
  • 🔀 Multi-Process Safe: Share rate limits across multiple processes with Redis
  • 🎯 Smart Waiting: Automatically waits when limits are reached
  • 📊 Status Monitoring: Check current rate limit status anytime
  • 🔌 Easy Integration: Works with requests, httpx, and aiohttp
  • 🔄 Advanced Retry: Configurable retry strategies with exponential backoff
  • 📊 Metrics: Built-in metrics collection and Prometheus export
  • 🛠️ CLI Tools: Command-line interface for monitoring and management

Installation

pip install smartratelimit

Optional extras:

pip install smartratelimit[httpx]  # For httpx support
pip install smartratelimit[aiohttp]  # For aiohttp support
pip install smartratelimit[all]  # For all optional dependencies

Quick Start

Basic Usage

from smartratelimit import RateLimiter

# Create a rate limiter (auto-detects limits from headers)
limiter = RateLimiter()

# Make requests - rate limiting is automatic!
response = limiter.request('GET', 'https://api.github.com/users/octocat')
print(response.json())

With SQLite Persistence

# Persist rate limits across application restarts
limiter = RateLimiter(storage='sqlite:///rate_limits.db')

response = limiter.request('GET', 'https://api.github.com/users')
# Rate limit state is saved to database

With Redis (Multi-Process)

# Share rate limits across multiple processes/workers
limiter = RateLimiter(storage='redis://localhost:6379/0')

# Works with Gunicorn, Celery, etc.
response = limiter.request('GET', 'https://api.github.com/users')

With Default Limits

# Set default limits for APIs that don't provide headers
limiter = RateLimiter(
    default_limits={'requests_per_minute': 60}
)

for user in users:
    response = limiter.request('POST', 'https://api.example.com/notify', json={'user': user})

Wrap Existing Session

import requests
from smartratelimit import RateLimiter

session = requests.Session()
session.headers.update({'Authorization': 'Bearer token'})

limiter = RateLimiter()
limiter.wrap_session(session)

# Now all session requests are rate-limited
response = session.get('https://api.example.com/data')

Check Rate Limit Status

limiter = RateLimiter()

# Make some requests
limiter.request('GET', 'https://api.github.com/users')

# Check status
status = limiter.get_status('api.github.com')
if status:
    print(f"Remaining: {status.remaining}/{status.limit}")
    print(f"Resets in: {status.reset_in} seconds")
    print(f"Utilization: {status.utilization * 100:.1f}%")

Manual Rate Limit Configuration

limiter = RateLimiter()

# Manually set rate limits
limiter.set_limit('api.example.com', limit=100, window='1h')
limiter.set_limit('api.another.com', limit=60, window='1m')

# Window formats: '1h', '30m', '60s', '1d'
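
The window strings combine a number with a unit suffix. A minimal parser consistent with the documented formats might look like this (a hypothetical helper for illustration, not part of the library's public API):

```python
def window_to_seconds(window: str) -> int:
    """Convert a window string like '1h' or '30s' into seconds."""
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    value, unit = int(window[:-1]), window[-1]
    return value * units[unit]

print(window_to_seconds('1h'))   # 3600
print(window_to_seconds('30s'))  # 30
```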

Custom Header Mapping

limiter = RateLimiter(
    headers_map={
        'limit': 'X-My-API-Limit',
        'remaining': 'X-My-API-Remaining',
        'reset': 'X-My-API-Reset'
    }
)

Raise Exception Instead of Waiting

from smartratelimit import RateLimiter, RateLimitExceeded

limiter = RateLimiter(raise_on_limit=True)

try:
    response = limiter.request('GET', 'https://api.example.com/data')
except RateLimitExceeded as e:
    print(f"Rate limit exceeded: {e}")

Async Support with httpx

import httpx
from smartratelimit import AsyncRateLimiter

async with AsyncRateLimiter() as limiter:
    async with httpx.AsyncClient() as client:
        response = await limiter.arequest_httpx(
            client, 'GET', 'https://api.github.com/users'
        )
        print(response.json())

Async Support with aiohttp

import aiohttp
from smartratelimit import AsyncRateLimiter

async with AsyncRateLimiter() as limiter:
    async with aiohttp.ClientSession() as session:
        response = await limiter.arequest_aiohttp(
            session, 'GET', 'https://api.github.com/users'
        )
        data = await response.json()
        print(data)

Advanced Retry Logic

from smartratelimit import RateLimiter
from smartratelimit.retry import RetryConfig, RetryHandler, RetryStrategy

# Configure retry with exponential backoff
retry_config = RetryConfig(
    max_retries=3,
    strategy=RetryStrategy.EXPONENTIAL,
    base_delay=1.0,
    backoff_factor=2.0,
)

retry_handler = RetryHandler(retry_config)
limiter = RateLimiter()

def make_request():
    return limiter.request('GET', 'https://api.example.com/data')

# Automatically retry on 429, 503, 504
response = retry_handler.retry_sync(make_request)
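
With the settings above (base_delay=1.0, backoff_factor=2.0, max_retries=3), an exponential schedule produces delays of 1 s, 2 s, and 4 s before successive retries. The arithmetic is simply:

```python
base_delay = 1.0
backoff_factor = 2.0
max_retries = 3

# Delay before retry attempt n (0-indexed): base_delay * backoff_factor ** n
delays = [base_delay * backoff_factor ** n for n in range(max_retries)]
print(delays)  # [1.0, 2.0, 4.0]
```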

Metrics Collection

from smartratelimit import RateLimiter
from smartratelimit.metrics import MetricsCollector

limiter = RateLimiter()
metrics = MetricsCollector()

response = limiter.request('GET', 'https://api.github.com/users')
status = limiter.get_status('api.github.com')
metrics.record_request('api.github.com', response.status_code, status)

# Export Prometheus metrics
prometheus_metrics = metrics.export_prometheus()
print(prometheus_metrics)

CLI Tools

# Check rate limit status
smartratelimit status --endpoint api.github.com

# Probe endpoint for rate limits
smartratelimit probe https://api.github.com/users

# Clear stored rate limits
smartratelimit clear --endpoint api.github.com

# Clear all rate limits
smartratelimit clear

Supported APIs

The library automatically detects rate limits from headers for:

  • ✅ GitHub API
  • ✅ Stripe API
  • ✅ Twitter API
  • ✅ OpenAI API
  • ✅ Any API using standard X-RateLimit-* headers
  • ✅ APIs with Retry-After headers (429 responses)
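
The detection can be pictured as a header lookup. The following standalone sketch (an illustration of the idea, not the library's actual internals) shows how the standard X-RateLimit-* convention and a Retry-After header map to a limit state:

```python
import time

def parse_rate_limit_headers(headers):
    """Extract rate-limit info from common header conventions.

    Standalone illustration; real-world parsing is more thorough
    (case-insensitive names, HTTP-date Retry-After values, etc.).
    """
    # Standard X-RateLimit-* convention (used by GitHub and many others)
    if 'X-RateLimit-Limit' in headers:
        return {
            'limit': int(headers['X-RateLimit-Limit']),
            'remaining': int(headers['X-RateLimit-Remaining']),
            'reset': float(headers['X-RateLimit-Reset']),  # Unix timestamp
        }
    # Retry-After on a 429 response: seconds to wait before retrying
    if 'Retry-After' in headers:
        return {
            'limit': None,
            'remaining': 0,
            'reset': time.time() + float(headers['Retry-After']),
        }
    return None  # No recognizable rate-limit headers

info = parse_rate_limit_headers({
    'X-RateLimit-Limit': '5000',
    'X-RateLimit-Remaining': '4999',
    'X-RateLimit-Reset': '1700000000',
})
print(info['remaining'])  # 4999
```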

API Reference

RateLimiter

__init__(storage='memory', default_limits=None, headers_map=None, raise_on_limit=False)

Create a new rate limiter.

Parameters:

  • storage (str): Storage backend. Options:
    • 'memory' (default): In-memory storage
    • 'sqlite:///path': SQLite storage (persistent, single-machine)
    • 'redis://host:port': Redis storage (distributed, multi-process)
  • default_limits (dict): Default limits when headers aren't available. Example: {'requests_per_minute': 60}
  • headers_map (dict): Custom header name mapping
  • raise_on_limit (bool): If True, raise RateLimitExceeded instead of waiting

request(method, url, **kwargs) -> requests.Response

Make a rate-limited HTTP request.

Parameters:

  • method (str): HTTP method (GET, POST, PUT, DELETE, PATCH)
  • url (str): Request URL
  • **kwargs: Additional arguments passed to requests.request()

Returns: requests.Response object

wrap_session(session: requests.Session) -> None

Wrap an existing requests.Session with rate limiting.

get_status(endpoint: str) -> RateLimitStatus | None

Get current rate limit status for an endpoint.

Returns: RateLimitStatus object or None if no info available

set_limit(endpoint: str, limit: int, window: str = '1h') -> None

Manually set rate limit for an endpoint.

Parameters:

  • endpoint: Endpoint URL or domain
  • limit: Maximum number of requests
  • window: Time window ('1h', '1m', '30s', '1d')

clear(endpoint: str | None = None) -> None

Clear stored rate limit data.

Parameters:

  • endpoint: Specific endpoint to clear, or None to clear all

RateLimitStatus

Status information about current rate limits.

Properties:

  • endpoint (str): Endpoint URL
  • limit (int): Total rate limit
  • remaining (int): Remaining requests
  • reset_time (datetime): When the limit resets
  • window (timedelta): Time window for the limit
  • reset_in (float): Seconds until reset (property)
  • is_exceeded (bool): Whether limit is exceeded (property)
  • utilization (float): Utilization percentage 0.0-1.0 (property)
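
The derived properties follow directly from the stored fields. As an illustration of the assumed semantics (a stand-in class, not the library's source), reset_in, is_exceeded, and utilization can be computed like this:

```python
from datetime import datetime, timedelta, timezone

class StatusExample:
    """Hypothetical stand-in for RateLimitStatus, showing how the
    derived properties relate to the stored fields."""

    def __init__(self, limit, remaining, reset_time):
        self.limit = limit
        self.remaining = remaining
        self.reset_time = reset_time

    @property
    def reset_in(self):
        # Seconds until the limit window resets (never negative)
        delta = self.reset_time - datetime.now(timezone.utc)
        return max(0.0, delta.total_seconds())

    @property
    def is_exceeded(self):
        return self.remaining <= 0

    @property
    def utilization(self):
        # Fraction of the limit already consumed, 0.0-1.0
        return (self.limit - self.remaining) / self.limit

status = StatusExample(
    limit=100,
    remaining=25,
    reset_time=datetime.now(timezone.utc) + timedelta(minutes=5),
)
print(f"{status.utilization:.2f}")  # 0.75
```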

Examples

Web Scraper

from smartratelimit import RateLimiter

limiter = RateLimiter()

for url in urls:
    response = limiter.request('GET', url)
    html = response.text
    # Process HTML...

API Integration in FastAPI

from fastapi import FastAPI
from smartratelimit import RateLimiter

app = FastAPI()
limiter = RateLimiter()

@app.get("/notify")
def notify_user(user_id: str):
    response = limiter.request(
        'POST',
        'https://api.sendgrid.com/v3/mail/send',
        json={'to': user_id, 'message': 'Hello!'}
    )
    return {"status": "sent"}

Batch Processing

from smartratelimit import RateLimiter

limiter = RateLimiter(default_limits={'requests_per_minute': 60})

results = []
for item in items:
    response = limiter.request('POST', 'https://api.example.com/process', json=item)
    results.append(response.json())

Roadmap

v0.1.0 - MVP

  • ✅ Basic rate limiting with token bucket algorithm
  • ✅ Automatic header detection
  • ✅ In-memory storage
  • ✅ requests library integration
  • ✅ Status monitoring

v0.2.0 - Production Ready

  • ✅ SQLite persistence
  • ✅ Redis backend for distributed applications
  • ✅ Multi-process support
  • ✅ Performance benchmarks
  • ✅ Comprehensive test coverage

v0.3.0 (Current) - Advanced Features

  • ✅ httpx and aiohttp async support
  • ✅ Advanced retry logic with configurable strategies
  • ✅ CLI tools (status, clear, probe commands)
  • ✅ Monitoring/metrics export (Prometheus format)

Contributing

Contributions are welcome! Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the MIT License.

See the LICENSE file for the full license text.

Documentation

Comprehensive documentation is available in the docs/ directory.

Support

Acknowledgments

Inspired by the need for a simple, automatic rate limiting solution that works with any API without configuration.
