# smartratelimit

A Python library that automatically manages API rate limits, preventing 429 errors and optimizing API usage without requiring developers to manually track or implement rate limiting logic.
## Features

- **Automatic Detection**: Detects rate limits from HTTP response headers
- **Zero Configuration**: Works out of the box with most APIs
- **Persistent State**: Supports in-memory, SQLite, and Redis storage
- **Multi-Process Safe**: Share rate limits across multiple processes with Redis
- **Smart Waiting**: Automatically waits when limits are reached
- **Status Monitoring**: Check current rate limit status anytime
- **Easy Integration**: Works with `requests`, `httpx`, and `aiohttp`
- **Advanced Retry**: Configurable retry strategies with exponential backoff
- **Metrics**: Built-in metrics collection and Prometheus export
- **CLI Tools**: Command-line interface for monitoring and management
## Installation

```bash
pip install smartratelimit
```

For async support:

```bash
pip install smartratelimit[httpx]    # For httpx support
pip install smartratelimit[aiohttp]  # For aiohttp support
pip install smartratelimit[all]      # For all optional dependencies
```
## Quick Start

### Basic Usage

```python
from smartratelimit import RateLimiter

# Create a rate limiter (auto-detects limits from headers)
limiter = RateLimiter()

# Make requests - rate limiting is automatic!
response = limiter.request('GET', 'https://api.github.com/users/octocat')
print(response.json())
```
### With SQLite Persistence

```python
# Persist rate limits across application restarts
limiter = RateLimiter(storage='sqlite:///rate_limits.db')

response = limiter.request('GET', 'https://api.github.com/users')
# Rate limit state is saved to the database
```
### With Redis (Multi-Process)

```python
# Share rate limits across multiple processes/workers
limiter = RateLimiter(storage='redis://localhost:6379/0')

# Works with Gunicorn, Celery, etc.
response = limiter.request('GET', 'https://api.github.com/users')
```
### With Default Limits

```python
# Set default limits for APIs that don't provide headers
limiter = RateLimiter(
    default_limits={'requests_per_minute': 60}
)

for user in users:
    response = limiter.request('POST', 'https://api.example.com/notify', json={'user': user})
```
### Wrap Existing Session

```python
import requests
from smartratelimit import RateLimiter

session = requests.Session()
session.headers.update({'Authorization': 'Bearer token'})

limiter = RateLimiter()
limiter.wrap_session(session)

# Now all session requests are rate-limited
response = session.get('https://api.example.com/data')
```
### Check Rate Limit Status

```python
limiter = RateLimiter()

# Make some requests
limiter.request('GET', 'https://api.github.com/users')

# Check status
status = limiter.get_status('api.github.com')
if status:
    print(f"Remaining: {status.remaining}/{status.limit}")
    print(f"Resets in: {status.reset_in} seconds")
    print(f"Utilization: {status.utilization * 100:.1f}%")
```
### Manual Rate Limit Configuration

```python
limiter = RateLimiter()

# Manually set rate limits
limiter.set_limit('api.example.com', limit=100, window='1h')
limiter.set_limit('api.another.com', limit=60, window='1m')

# Window formats: '1h', '30m', '60s', '1d'
```
### Custom Header Mapping

```python
limiter = RateLimiter(
    headers_map={
        'limit': 'X-My-API-Limit',
        'remaining': 'X-My-API-Remaining',
        'reset': 'X-My-API-Reset'
    }
)
```
### Raise Exception Instead of Waiting

```python
from smartratelimit import RateLimiter, RateLimitExceeded

limiter = RateLimiter(raise_on_limit=True)

try:
    response = limiter.request('GET', 'https://api.example.com/data')
except RateLimitExceeded as e:
    print(f"Rate limit exceeded: {e}")
```
### Async Support with httpx

```python
import asyncio

import httpx
from smartratelimit import AsyncRateLimiter

async def main():
    async with AsyncRateLimiter() as limiter:
        async with httpx.AsyncClient() as client:
            response = await limiter.arequest_httpx(
                client, 'GET', 'https://api.github.com/users'
            )
            print(response.json())

asyncio.run(main())
```
### Async Support with aiohttp

```python
import asyncio

import aiohttp
from smartratelimit import AsyncRateLimiter

async def main():
    async with AsyncRateLimiter() as limiter:
        async with aiohttp.ClientSession() as session:
            response = await limiter.arequest_aiohttp(
                session, 'GET', 'https://api.github.com/users'
            )
            data = await response.json()
            print(data)

asyncio.run(main())
```
### Advanced Retry Logic

```python
from smartratelimit import RateLimiter
from smartratelimit.retry import RetryConfig, RetryHandler, RetryStrategy

# Configure retry with exponential backoff
retry_config = RetryConfig(
    max_retries=3,
    strategy=RetryStrategy.EXPONENTIAL,
    base_delay=1.0,
    backoff_factor=2.0,
)
retry_handler = RetryHandler(retry_config)

limiter = RateLimiter()

def make_request():
    return limiter.request('GET', 'https://api.example.com/data')

# Automatically retry on 429, 503, 504
response = retry_handler.retry_sync(make_request)
```
### Metrics Collection

```python
from smartratelimit import RateLimiter
from smartratelimit.metrics import MetricsCollector

limiter = RateLimiter()
metrics = MetricsCollector()

response = limiter.request('GET', 'https://api.github.com/users')
status = limiter.get_status('api.github.com')
metrics.record_request('api.github.com', response.status_code, status)

# Export Prometheus metrics
prometheus_metrics = metrics.export_prometheus()
print(prometheus_metrics)
```
### CLI Tools

```bash
# Check rate limit status
smartratelimit status --endpoint api.github.com

# Probe an endpoint for rate limits
smartratelimit probe https://api.github.com/users

# Clear stored rate limits for one endpoint
smartratelimit clear --endpoint api.github.com

# Clear all rate limits
smartratelimit clear
```
## Supported APIs

The library automatically detects rate limits from headers for:

- ✅ GitHub API
- ✅ Stripe API
- ✅ Twitter API
- ✅ OpenAI API
- ✅ Any API using standard `X-RateLimit-*` headers
- ✅ APIs with `Retry-After` headers (429 responses)
## API Reference

### RateLimiter

#### `__init__(storage='memory', default_limits=None, headers_map=None, raise_on_limit=False)`

Create a new rate limiter.

Parameters:

- `storage` (str): Storage backend. Options:
  - `'memory'` (default): In-memory storage
  - `'sqlite:///path'`: SQLite storage (persistent, single-machine)
  - `'redis://host:port'`: Redis storage (distributed, multi-process)
- `default_limits` (dict): Default limits when headers aren't available. Example: `{'requests_per_minute': 60}`
- `headers_map` (dict): Custom header name mapping
- `raise_on_limit` (bool): If `True`, raise `RateLimitExceeded` instead of waiting
#### `request(method, url, **kwargs) -> requests.Response`

Make a rate-limited HTTP request.

Parameters:

- `method` (str): HTTP method (GET, POST, PUT, DELETE, PATCH)
- `url` (str): Request URL
- `**kwargs`: Additional arguments passed to `requests.request()`

Returns: `requests.Response` object
#### `wrap_session(session: requests.Session) -> None`

Wrap an existing `requests.Session` with rate limiting.

#### `get_status(endpoint: str) -> RateLimitStatus | None`

Get the current rate limit status for an endpoint.

Returns: `RateLimitStatus` object, or `None` if no info is available
#### `set_limit(endpoint: str, limit: int, window: str = '1h') -> None`

Manually set the rate limit for an endpoint.

Parameters:

- `endpoint`: Endpoint URL or domain
- `limit`: Maximum number of requests
- `window`: Time window (`'1h'`, `'1m'`, `'30s'`, `'1d'`)
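The window strings above can be converted to seconds along these lines. This helper is an illustration, not part of the library's public API, and `window_to_seconds` is a hypothetical name:

```python
# Illustrative helper: convert window strings like '1h', '30m', '60s', '1d'
# (a number followed by a unit suffix) into a number of seconds.
def window_to_seconds(window: str) -> int:
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    value, unit = int(window[:-1]), window[-1]
    if unit not in units:
        raise ValueError(f"unsupported window unit: {unit!r}")
    return value * units[unit]
```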
#### `clear(endpoint: str | None = None) -> None`

Clear stored rate limit data.

Parameters:

- `endpoint`: Specific endpoint to clear, or `None` to clear all
### RateLimitStatus

Status information about current rate limits.

Properties:

- `endpoint` (str): Endpoint URL
- `limit` (int): Total rate limit
- `remaining` (int): Remaining requests
- `reset_time` (datetime): When the limit resets
- `window` (timedelta): Time window for the limit
- `reset_in` (float): Seconds until reset (property)
- `is_exceeded` (bool): Whether the limit is exceeded (property)
- `utilization` (float): Utilization as a fraction, 0.0-1.0 (property)
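The derived properties can be computed from the stored fields as sketched below. The field names follow the reference above, but the computations are assumptions about reasonable behavior, not the library's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RateLimitStatus:
    endpoint: str
    limit: int
    remaining: int
    reset_time: datetime
    window: timedelta

    @property
    def reset_in(self) -> float:
        # Seconds until the limit resets (clamped so it never goes negative)
        return max(0.0, (self.reset_time - datetime.now(timezone.utc)).total_seconds())

    @property
    def is_exceeded(self) -> bool:
        return self.remaining <= 0

    @property
    def utilization(self) -> float:
        # Fraction of the limit already consumed, 0.0-1.0
        return (self.limit - self.remaining) / self.limit if self.limit else 0.0
```

For example, a 60-request limit with 45 remaining gives `utilization == 0.25`.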
## Examples

### Web Scraper

```python
from smartratelimit import RateLimiter

limiter = RateLimiter()

for url in urls:
    response = limiter.request('GET', url)
    html = response.text
    # Process HTML...
```
### API Integration in FastAPI

```python
from fastapi import FastAPI
from smartratelimit import RateLimiter

app = FastAPI()
limiter = RateLimiter()

@app.get("/notify")
def notify_user(user_id: str):
    response = limiter.request(
        'POST',
        'https://api.sendgrid.com/v3/mail/send',
        json={'to': user_id, 'message': 'Hello!'}
    )
    return {"status": "sent"}
```
### Batch Processing

```python
from smartratelimit import RateLimiter

limiter = RateLimiter(default_limits={'requests_per_minute': 60})

results = []
for item in items:
    response = limiter.request('POST', 'https://api.example.com/process', json=item)
    results.append(response.json())
```
## Roadmap

### v0.1.0 - MVP

- ✅ Basic rate limiting with token bucket algorithm
- ✅ Automatic header detection
- ✅ In-memory storage
- ✅ `requests` library integration
- ✅ Status monitoring
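The token bucket algorithm mentioned above can be sketched in a few lines. This is a minimal illustration of the technique, not smartratelimit's implementation; the `TokenBucket` class is hypothetical:

```python
import time

class TokenBucket:
    """Minimal token bucket: `rate` tokens refill per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def acquire(self, tokens: float = 1.0) -> bool:
        """Take `tokens` from the bucket; return False if not enough are available."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False
```

A limiter built on this would call `acquire()` before each request and sleep (or raise) when it returns `False`.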
### v0.2.0 - Production Ready

- ✅ SQLite persistence
- ✅ Redis backend for distributed applications
- ✅ Multi-process support
- ✅ Performance benchmarks
- ✅ Comprehensive test coverage

### v0.3.0 (Current) - Advanced Features

- ✅ `httpx` and `aiohttp` async support
- ✅ Advanced retry logic with configurable strategies
- ✅ CLI tools (status, clear, probe commands)
- ✅ Monitoring/metrics export (Prometheus format)
## Contributing

Contributions are welcome! Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

## License

This project is licensed under the MIT License. See the LICENSE file for the full license text.
## Documentation

Comprehensive documentation is available in the docs/ directory:

- Quick Start Guide - Get started in 5 minutes
- Complete Tutorial - Step-by-step guide
- API Reference - Complete API documentation
- Examples - Real-world examples with free APIs
- Storage Backends - SQLite and Redis guide
- Async Guide - Async/await usage
- Retry Strategies - Advanced retry logic
- Metrics Guide - Collecting and exporting metrics
- CLI Guide - Command-line tools
- Advanced Features - Advanced patterns
## Support

- Documentation
- Issue Tracker
- Discussions

## Acknowledgments

Inspired by the need for a simple, automatic rate limiting solution that works with any API without configuration.