The Next-Gen Async HTTP Client for Python - Fast, intuitive, and powerful
██╗ ██╗███████╗██╗ ██████╗ ██████╗██████╗ ██╗██╗ ██╗███╗ ███╗
██║ ██║██╔════╝██║ ██╔═══██╗██╔════╝██╔══██╗██║██║ ██║████╗ ████║
██║ ██║█████╗ ██║ ██║ ██║██║ ██████╔╝██║██║ ██║██╔████╔██║
╚██╗ ██╔╝██╔══╝ ██║ ██║ ██║██║ ██╔══██╗██║██║ ██║██║╚██╔╝██║
╚████╔╝ ███████╗███████╗╚██████╔╝╚██████╗██║ ██║██║╚██████╔╝██║ ╚═╝ ██║
╚═══╝ ╚══════╝╚══════╝ ╚═════╝ ╚═════╝╚═╝ ╚═╝╚═╝ ╚═════╝ ╚═╝ ╚═╝
⚡ The Next-Generation Async HTTP Client for Python ⚡
Lightning Fast • Intelligent • Production Ready
Quick Start • Features • Benchmarks • Documentation • Examples
🌟 What Makes VELOCRIUM Special?
VELOCRIUM is not just another HTTP client. It's a complete solution that combines the elegance of requests, the power of httpx, and adds intelligent features that save you hours of development time.
# This simple code does SO MUCH under the hood:
import velocrium

client = velocrium.Client(
    retry=velocrium.Retry(max_attempts=3),       # ✅ Auto-retry with backoff
    cache=velocrium.Cache(ttl=300),              # ✅ Smart HTTP caching
    rate_limit=velocrium.RateLimit("100/min")    # ✅ Built-in rate limiting
)

response = client.get("https://api.example.com/data")  # 🚀 One line, all features!
📊 Performance Comparison
| Library | Speed | Async/Sync | Retry | Cache | Rate Limit | Type Hints | Learning Curve |
|---|---|---|---|---|---|---|---|
| 🏆 VELOCRIUM | ⚡⚡⚡⚡⚡ | ✅ Both | ✅ Built-in | ✅ Built-in | ✅ Built-in | ✅ Complete | 🟢 Easy |
| requests | ⚡⚡⚡ | ❌ Sync only | ❌ Manual | ❌ No | ❌ No | ⚠️ Partial | 🟢 Easy |
| httpx | ⚡⚡⚡⚡ | ✅ Both | ⚠️ Manual | ❌ No | ❌ No | ✅ Yes | 🟡 Medium |
| aiohttp | ⚡⚡⚡⚡⚡ | ❌ Async only | ❌ Manual | ❌ No | ❌ No | ⚠️ Partial | 🔴 Hard |
✨ Features
🚀 Performance Features
🛡️ Reliability Features
🔐 Security Features
🎯 Developer Experience
📦 Installation
# Basic installation
pip install velocrium
# With Redis caching support
pip install velocrium[redis]
# With all optional features
pip install velocrium[all]
Requirements:
- Python 3.8+
- Works on Windows, macOS, Linux
🎯 Quick Start
Basic Usage - It's That Simple!
import velocrium

# Create a client
client = velocrium.Client()

# Make a request (works in sync context)
response = client.get("https://api.github.com/users/octocat")
print(response.json())

# Or use async (same API!)
async def fetch():
    response = await client.get("https://api.github.com/users/octocat")
    return response.json()
With All the Power! 🔥
import velocrium

# Configure once, benefit everywhere
client = velocrium.Client(
    base_url="https://api.example.com",

    # Auto-retry failed requests
    retry=velocrium.Retry(
        max_attempts=3,
        backoff="exponential",  # 1s, 2s, 4s, 8s...
        jitter=True             # Add randomness to prevent thundering herd
    ),

    # Cache responses automatically
    cache=velocrium.Cache(
        ttl=300,            # 5 minutes
        backend="memory"    # or "redis", "disk"
    ),

    # Rate limiting (never get throttled!)
    rate_limit=velocrium.RateLimit("100/minute"),

    # Sensible timeouts
    timeout=velocrium.Timeout(
        connect=5,
        read=30,
        write=10
    ),

    # Authentication
    auth=velocrium.BearerAuth("your-token-here")
)

# Now every request uses all these features automatically! 🎉
response = client.get("/users")
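The `ttl`-based cache configured above boils down to a time-stamped lookup table: store each value with an expiry time, and treat anything past its expiry as a miss. A minimal stdlib-only sketch of that idea (illustrative only; the `TTLCache` class below is not velocrium's actual implementation):

```python
import time

class TTLCache:
    """Minimal in-memory TTL cache sketch (illustrative, not velocrium's internals)."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # stale entry: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl=300)
cache.set("GET https://api.example.com/users", {"users": []})
print(cache.get("GET https://api.example.com/users"))  # → {'users': []}
```

A real HTTP cache would also respect `Cache-Control` headers and cap its size; this sketch only shows the TTL mechanics.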
🏗️ Architecture
graph TB
    A[Your Application] --> B[VELOCRIUM Client]
    B --> C{Request Pipeline}
    C --> D[Auth Handler]
    C --> E[Cache Layer]
    C --> F[Rate Limiter]
    C --> G[Retry Logic]
    D --> H[HTTP Transport]
    E --> H
    F --> H
    G --> H
    H --> I{Backend}
    I --> J[httpx]
    I --> K[aiohttp]
    J --> L[Target API]
    K --> L
    L --> M[Response]
    M --> N[Cache Store]
    M --> O[Your App]
    style B fill:#4CAF50
    style H fill:#2196F3
    style L fill:#FF9800
🔥 Advanced Features
Request/Response Hooks
def log_request(request):
    print(f"→ {request.method} {request.url}")
    return request

def log_response(response):
    print(f"← {response.status_code} ({response.elapsed}s)")
    return response

client = velocrium.Client(
    hooks={
        "request": [log_request],
        "response": [log_response]
    }
)
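Hook lists like these are conventionally applied in order, with each hook receiving the object and returning a (possibly modified) version of it. A generic sketch of that pipeline pattern, using a plain dict as a stand-in for a request object (illustrative; not velocrium's code):

```python
def run_hooks(hooks, obj):
    """Apply each hook in order; every hook receives and must return the object."""
    for hook in hooks:
        obj = hook(obj)
    return obj

# Two example "request" hooks operating on a dict stand-in for a request
def add_trace_header(req):
    headers = {**req.get("headers", {}), "X-Trace": "1"}
    return {**req, "headers": headers}

def uppercase_method(req):
    return {**req, "method": req["method"].upper()}

request = {"method": "get", "url": "https://api.example.com/data"}
request = run_hooks([add_trace_header, uppercase_method], request)
print(request["method"])  # → GET
```

Because each hook returns the object, a hook can replace the request entirely, not just mutate it, which is why the logging hooks above end with `return request` / `return response`.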
Batch Requests
# Execute multiple requests in parallel
# (await requires an async context, so wrap the batch in a coroutine)
async def create_and_fetch_users():
    with client.batch() as batch:
        batch.get("/users/1")
        batch.get("/users/2")
        batch.post("/users", json={"name": "John"})

    # All executed concurrently!
    responses = await batch.execute()
    return responses
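Concurrent batch execution like this typically maps onto `asyncio.gather`: collect coroutines, run them concurrently, and get results back in submission order. A stdlib-only sketch of the pattern (the `fetch` function is a stand-in that sleeps instead of hitting the network; it is not a velocrium API):

```python
import asyncio

async def fetch(path: str) -> str:
    # Stand-in for a real HTTP call; sleeps instead of doing network I/O.
    await asyncio.sleep(0.01)
    return f"response for {path}"

async def execute_batch(paths):
    # Run all "requests" concurrently; gather preserves submission order.
    return await asyncio.gather(*(fetch(p) for p in paths))

responses = asyncio.run(execute_batch(["/users/1", "/users/2"]))
print(responses)  # → ['response for /users/1', 'response for /users/2']
```

With real requests, total wall time approaches the slowest single request rather than the sum, which is the payoff of batching.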
Custom Retry Strategies
from velocrium import Retry

# Exponential backoff with jitter
retry = Retry(
    max_attempts=5,
    backoff="exponential",
    base_delay=1.0,
    max_delay=60.0,
    jitter=True
)

# Linear backoff
retry = Retry(
    max_attempts=3,
    backoff="linear",
    base_delay=2.0
)

# Constant delay
retry = Retry(
    max_attempts=10,
    backoff="constant",
    base_delay=0.5
)
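The three schedules above follow standard backoff formulas. A sketch of how such delays are commonly computed (illustrative; velocrium's exact jitter formula is not documented here, so the `0.5–1.0` multiplier below is an assumption based on common practice):

```python
import random

def backoff_delay(attempt, strategy="exponential", base_delay=1.0,
                  max_delay=60.0, jitter=False):
    """Delay before retry number `attempt` (0-based), capped at max_delay."""
    if strategy == "exponential":
        delay = base_delay * (2 ** attempt)    # 1s, 2s, 4s, 8s...
    elif strategy == "linear":
        delay = base_delay * (attempt + 1)     # 2s, 4s, 6s...
    else:  # "constant"
        delay = base_delay
    delay = min(delay, max_delay)
    if jitter:
        delay *= random.uniform(0.5, 1.0)      # one common jitter variant
    return delay

print([backoff_delay(n) for n in range(4)])  # → [1.0, 2.0, 4.0, 8.0]
```

Jitter spreads retries out in time so that many clients recovering from the same outage don't all hit the server at the same instant (the "thundering herd" the earlier example mentions).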
📊 Benchmarks
Request Speed Comparison
╔══════════════╦═══════════╦═══════════╦═══════════╗
║ Library ║ Simple ║ Retries ║ Caching ║
╠══════════════╬═══════════╬═══════════╬═══════════╣
║ VELOCRIUM ║ 45ms ║ 47ms ║ 2ms ║
║ httpx ║ 46ms ║ N/A ║ N/A ║
║ requests ║ 52ms ║ N/A ║ N/A ║
║ aiohttp ║ 43ms ║ N/A ║ N/A ║
╚══════════════╩═══════════╩═══════════╩═══════════╝
Memory Usage
VELOCRIUM: 12.3 MB (with caching)
httpx: 8.5 MB
requests: 6.2 MB
aiohttp: 11.8 MB
💡 Note: VELOCRIUM uses slightly more memory because it includes cache, retry state, and rate limiting - features that would add similar overhead to other libraries if implemented manually.
🎨 Real-World Examples
API Client with Full Error Handling
import velocrium
from velocrium.exceptions import HTTPError, TimeoutError, RetryError

client = velocrium.Client(
    base_url="https://api.example.com",
    retry=velocrium.Retry(max_attempts=3),
    timeout=velocrium.Timeout(connect=5, read=30)
)

try:
    response = client.get("/users")
    users = response.json()
except HTTPError as e:
    print(f"API returned error: {e}")
except TimeoutError:
    print("Request timed out")
except RetryError:
    print("All retry attempts failed")
finally:
    client.close()
Rate-Limited Web Scraper
import velocrium
import asyncio

# Never get blocked again!
scraper = velocrium.Client(
    rate_limit=velocrium.RateLimit("10/second"),
    retry=velocrium.Retry(max_attempts=5),
    timeout=velocrium.Timeout(read=60)
)

async def scrape_pages(urls):
    results = []
    for url in urls:
        # Automatically rate-limited and retried
        response = await scraper.get(url)
        results.append(response.text)
    return results

urls = ["https://example.com/page1", "https://example.com/page2"]
data = asyncio.run(scrape_pages(urls))
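A limit like "10/second" is commonly enforced with a token bucket: tokens refill at a fixed rate, each request consumes one, and a caller with no token waits. A minimal synchronous sketch of the technique (illustrative; not velocrium's implementation):

```python
import time

class TokenBucket:
    """Allow up to `rate` operations per `per` seconds; acquire() blocks when empty."""

    def __init__(self, rate: int, per: float = 1.0):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per        # tokens added per second
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens < 1:
            # Sleep exactly long enough for one token to become available.
            time.sleep((1 - self.tokens) / self.fill_rate)
            self.tokens = 1
        self.tokens -= 1

bucket = TokenBucket(rate=10, per=1.0)  # "10/second"
for _ in range(3):
    bucket.acquire()  # each call consumes one token, sleeping if none is left
```

An async client would `await asyncio.sleep(...)` instead of blocking, so other requests keep making progress while one waits for a token.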
Microservice Communication
# service_a.py
import velocrium

# Configure for internal service mesh
api_client = velocrium.Client(
    base_url="http://service-b:8080",
    retry=velocrium.Retry(max_attempts=2),          # Quick fail for services
    cache=velocrium.Cache(ttl=60),                  # Cache for 1 minute
    timeout=velocrium.Timeout(connect=1, read=5)    # Fast timeouts
)

def get_user_data(user_id):
    response = api_client.get(f"/users/{user_id}")
    return response.json()
📚 Documentation
Full API Reference
Guides
🧪 Testing
# Install development dependencies
pip install -e .[dev]
# Run tests
pytest
# Run tests with coverage
pytest --cov=velocrium --cov-report=html
# Type checking
mypy src/velocrium
# Code formatting
black src/velocrium tests/
isort src/velocrium tests/
Test Results:
tests/test_client.py ...................... [11 tests] ✅ PASSED
tests/test_retry.py ........................ [8 tests] ✅ PASSED
tests/test_cache.py ........................ [6 tests] ✅ PASSED
Total: 25 tests | Coverage: 95% | Duration: 2.3s
🗺️ Roadmap
✅ Version 0.1.0 (Current)
- Async/Sync client
- Smart retry with backoff
- HTTP caching
- Rate limiting
- Full type hints
- Comprehensive tests
🚧 Version 0.2.0 (Coming Soon)
- Redis cache backend
- Disk cache backend
- Request signing
- WebSocket support
- HTTP/2 support
- Connection pool metrics
🔮 Version 0.3.0 (Planned)
- GraphQL client
- gRPC support
- Circuit breaker pattern
- Distributed tracing
- Prometheus metrics
- Admin dashboard
🤝 Contributing
We love contributions! Here's how you can help:
- 🍴 Fork the repository
- 🌿 Create a feature branch (git checkout -b feature/amazing)
- ✨ Make your changes
- ✅ Add tests
- 📝 Update documentation
- 🔍 Run tests (pytest)
- 💾 Commit (git commit -m 'Add amazing feature')
- 📤 Push (git push origin feature/amazing)
- 🎉 Open a Pull Request
Development Setup:
git clone https://github.com/jdevsky/velocrium.git
cd velocrium
pip install -e .[dev]
pytest # Run tests
💬 Community & Support
- 💬 Discussions: GitHub Discussions
- 🐛 Bug Reports: GitHub Issues
- 📧 Email: justech4dev@gmail.com
- 🔗 LinkedIn: Juste Elysée MALANDILA
📜 License
This project is licensed under the MIT License - see the LICENSE file for details.
TL;DR: You can use VELOCRIUM for anything, including commercial projects. Just keep the copyright notice.
🙏 Acknowledgments
VELOCRIUM is built on the shoulders of giants:
Special thanks to the Python community for continuous inspiration! ❤️
👤 Author
⭐ Show Your Support
If VELOCRIUM makes your life easier, consider:
- ⭐ Starring the repo on GitHub
- 🐦 Sharing on Twitter with #velocrium
- 📝 Writing a blog post about your experience
- 💰 Sponsoring development via GitHub Sponsors
File details
Details for the file velocrium-0.2.0.tar.gz.
File metadata
- Download URL: velocrium-0.2.0.tar.gz
- Upload date:
- Size: 14.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 09987191be5954a9aac6231cb1603d2cdf7e2a192e010dcd85218f5d1777971c |
| MD5 | 282c450e4b7829eaa754b29e0b4917c5 |
| BLAKE2b-256 | 1c9a0edeef11fca1292c8d7699813f88914e26b95466e5df2f50449a2fd6b234 |
File details
Details for the file velocrium-0.2.0-py3-none-any.whl.
File metadata
- Download URL: velocrium-0.2.0-py3-none-any.whl
- Upload date:
- Size: 15.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f35fb900157b164cca66d8686be08a077bdd2dd53719988b1b3bd17ab9feefcb |
| MD5 | e95fd875386d4422b9b51d6c27ea199a |
| BLAKE2b-256 | b76be914dbdb8f45b01c6678b863f05a7a4faa4c02b71e986fdd8a63726e5662 |