floodr - Fast Parallel HTTP Requests for Python
A high-performance Python library for parallel HTTP requests, built with Rust for speed and reliability. Perfect for bulk API requests, web scraping, and any scenario where you need to fetch multiple URLs concurrently.
Features
- 🚀 Fast: Built with Rust for maximum performance
- 🔄 Async: Full async/await support with Python's asyncio
- 🎯 Simple API: Intuitive interface similar to requests/httpx
- 🏊 Connection Pooling: Automatic connection reuse for better performance
- 🎛️ Configurable: Control timeouts, concurrency limits, and more
- 🛡️ Type Safe: Full type hints and runtime validation
- 📦 Minimal Dependencies: pydantic is the only Python dependency
Installation
From PyPI
pip install floodr
With uv
uv add floodr
From source
# Clone the repository
git clone https://github.com/yourusername/floodr.git
cd floodr
# Create a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install maturin (build tool)
pip install maturin
# Build and install
maturin develop --release
Requirements
- Python 3.9 or higher
- Rust toolchain (only for building from source)
Quick Start
import asyncio
import floodr

async def main():
    # Simple parallel GET requests
    urls = [
        "https://api.github.com/users/github",
        "https://api.github.com/users/torvalds",
        "https://api.github.com/users/rust-lang",
    ]
    responses = await floodr.get(urls)
    for url, resp in zip(urls, responses):
        data = resp.json()
        print(f"{data['name']} has {data['public_repos']} public repos")

asyncio.run(main())
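Under the hood this is the classic asyncio fan-out pattern. As a stdlib-only sketch (with a stub `fetch` coroutine standing in for a real HTTP call), the same structure looks like:

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for a real HTTP GET; sleeps to simulate network I/O.
    await asyncio.sleep(0.01)
    return f"body of {url}"

async def fetch_all(urls):
    # Launch every request concurrently; gather preserves input order.
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = ["https://example.com/a", "https://example.com/b"]
bodies = asyncio.run(fetch_all(urls))
```

floodr's `get()` plays the role of `fetch_all`, with the actual I/O done in Rust.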
API Reference
The floodr library provides a modern API using Pydantic models for request/response handling:
from floodr import Request, Response, request

# Create requests with Pydantic validation
requests = [
    Request(url="https://httpbin.org/get"),
    Request(
        url="https://httpbin.org/post",
        method="POST",
        json={"key": "value"},
        headers={"X-Custom": "header"},
    ),
    Request(
        url="https://httpbin.org/get",
        params={"search": "query", "page": "1"},
    ),
]

# Execute multiple requests in parallel
responses = await request(requests)

# Access response data
for resp in responses:
    print(f"Status: {resp.status_code}")
    print(f"Headers: {resp.headers}")
    print(f"Content: {resp.text}")
    if resp.ok:
        data = resp.json()  # Parse JSON response

# Use the client for connection reuse
from floodr import Client

client = Client(max_connections=2048)
responses = await client.request(requests)

# Control concurrency for large batches
responses = await request(requests, max_concurrent=100)
Request Model
The Request model supports the following fields:
- url (required): the URL to request (validated as a proper URL)
- method: HTTP method (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS); defaults to GET
- headers: dictionary of HTTP headers
- params: URL query parameters (a dict, or a dict with list values for repeated parameters)
- json: JSON body data (automatically serialized)
- data: form data or raw body (string, bytes, or dict)
- timeout: request timeout in seconds
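For intuition, here is a rough pure-Python sketch of the kind of validation the Request model performs (names and error messages are illustrative; the real model is a pydantic class backed by floodr's Rust core):

```python
from dataclasses import dataclass, field
from typing import Optional
from urllib.parse import urlparse

ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"}

@dataclass
class SimpleRequest:
    # Illustrative stand-in for floodr.Request, not the real class.
    url: str
    method: str = "GET"
    headers: dict = field(default_factory=dict)
    timeout: Optional[float] = None

    def __post_init__(self):
        # Reject anything that is not an absolute http(s) URL.
        parsed = urlparse(self.url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            raise ValueError(f"invalid URL: {self.url!r}")
        # Normalize and validate the HTTP method.
        self.method = self.method.upper()
        if self.method not in ALLOWED_METHODS:
            raise ValueError(f"unsupported method: {self.method!r}")
```

The point is that bad inputs fail at construction time, before any request is sent.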
Response Model
The Response model provides:
- status_code: HTTP status code
- headers: response headers as a dict
- content: raw response body as bytes
- text: response body as a string (property)
- ok: True if the status is 2xx (property)
- json(): parse the response body as JSON
- elapsed: time taken for the request, in seconds
- url: final URL after redirects
- error: error message if the request failed (network errors, timeouts, etc.)
- raise_for_status(): raise an exception for error responses
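The interplay of `ok`, `json()`, and `raise_for_status()` can be summarized with a small stand-in class (illustrative only; the real Response is a pydantic model):

```python
import json as _json
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimpleResponse:
    # Illustrative stand-in for floodr's Response model.
    status_code: int
    content: bytes = b""
    error: Optional[str] = None

    @property
    def text(self) -> str:
        return self.content.decode("utf-8", errors="replace")

    @property
    def ok(self) -> bool:
        # Only 2xx statuses with no transport-level error count as ok.
        return self.error is None and 200 <= self.status_code < 300

    def json(self):
        return _json.loads(self.text)

    def raise_for_status(self):
        if self.error:
            raise RuntimeError(self.error)
        if not self.ok:
            raise RuntimeError(f"HTTP {self.status_code}")
```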
Error Handling
floodr provides comprehensive error handling without interrupting batch processing:
from floodr import Request, request

requests = [
    Request(url="https://httpbin.org/status/200"),           # Success
    Request(url="https://httpbin.org/status/404"),           # HTTP error
    Request(url="https://httpbin.org/status/500"),           # Server error
    Request(url="https://invalid-domain.com", timeout=2.0),  # Network error
]

responses = await request(requests)

for req, resp in zip(requests, responses):
    if resp.error:
        # Network error (DNS, connection, timeout)
        print(f"Request failed: {resp.error}")
    elif not resp.ok:
        # HTTP error (4xx, 5xx)
        print(f"HTTP {resp.status_code} error for {resp.url}")
    else:
        # Success
        print(f"Success: {resp.status_code}")

# Or use raise_for_status() for exception-based handling
for resp in responses:
    try:
        resp.raise_for_status()
        # Process successful response
        data = resp.json()
    except Exception as e:
        print(f"Error: {e}")
Error Types
- Network Errors: connection failures, DNS resolution, timeouts
  - status_code: 0
  - error: contains the error message
  - ok: False
- HTTP Errors: 4xx and 5xx responses
  - status_code: the actual HTTP status
  - error: None (a valid HTTP response was received)
  - ok: False
- Validation Errors: invalid URLs, methods, or data
  - Raised immediately when creating Request objects
  - Standard Python exceptions (pydantic ValidationError)
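The first two outcome classes can be restated as a small triage helper (not part of the floodr API, just the rules above in code):

```python
def classify(status_code: int, error):
    # Network failures are reported via `error`, with status_code == 0.
    if error is not None:
        return "network_error"
    if 200 <= status_code < 300:
        return "success"
    # Any other valid HTTP response (4xx, 5xx, ...) is an HTTP error.
    return "http_error"
```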
Concurrency Control
The max_concurrent parameter allows you to control the number of simultaneous requests:
- None (default): automatic concurrency based on batch size
  - For ≤100 requests: all requests run concurrently
  - For >100 requests: limited to batch_size / 10, clamped between 100 and 500
- Custom value: set a specific limit on concurrent requests
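Read literally, the default heuristic amounts to the following (a sketch; the actual computation lives in floodr's Rust core):

```python
def auto_concurrency(batch_size: int) -> int:
    # Default max_concurrent heuristic, as described above.
    if batch_size <= 100:
        return batch_size  # small batches run fully concurrently
    # Larger batches: batch_size / 10, clamped to the 100-500 range.
    return min(500, max(100, batch_size // 10))
```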
This is particularly useful for:
- Avoiding overwhelming target servers
- Managing memory usage for very large batches
- Complying with rate limits
- Optimizing performance based on network conditions
Example:
# Fetch many URLs with controlled concurrency
requests = [Request(url=f"https://api.example.com/item/{i}") for i in range(1000)]
# Automatic concurrency (would use ~100 concurrent requests)
responses = await request(requests)
# Limited concurrency (gentler on the server)
responses = await request(requests, max_concurrent=50)
# Using client
client = Client()
responses = await client.request(requests, max_concurrent=20)
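A max_concurrent cap is conventionally implemented by gating in-flight tasks with a semaphore; this stdlib-only sketch shows the mechanism (floodr enforces its limit in Rust, but the idea is the same):

```python
import asyncio

async def bounded_gather(coros, max_concurrent: int):
    # At most `max_concurrent` coroutines hold the semaphore at once.
    sem = asyncio.Semaphore(max_concurrent)

    async def run_one(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run_one(c) for c in coros))

async def job(i: int) -> int:
    await asyncio.sleep(0.001)  # simulated request
    return i

results = asyncio.run(bounded_gather([job(i) for i in range(10)], max_concurrent=3))
```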
Connection Pool Warming
When you know you'll be making many concurrent requests to a specific domain, you can pre-warm the connection pool to reduce latency:
import floodr
# Pre-establish 100 connections to the domain
await floodr.warmup("https://api.example.com", num_connections=100)
# Now make your actual requests - they'll reuse the warmed connections
requests = [Request(url=f"https://api.example.com/item/{i}") for i in range(100)]
responses = await request(requests) # Much lower latency!
Why Warm Connections?
When making HTTP requests, establishing new connections involves:
- DNS resolution
- TCP handshake
- TLS negotiation (for HTTPS)
This can add 50-200ms per connection. By pre-warming the pool, subsequent requests can reuse existing connections, significantly reducing latency.
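Back-of-envelope arithmetic with the 50-200 ms figure above shows why this matters (illustrative numbers: 20 warmed connections serving 100 requests):

```python
# Handshake cost per new connection (DNS + TCP + TLS), in milliseconds.
setup_low_ms, setup_high_ms = 50, 200
num_requests, warmed_connections = 100, 20

# Cold: every request pays the setup cost. Warm: only the initial
# connections do; the remaining requests reuse them.
cold = (num_requests * setup_low_ms, num_requests * setup_high_ms)
warm = (warmed_connections * setup_low_ms, warmed_connections * setup_high_ms)
saved_ms = (cold[0] - warm[0], cold[1] - warm[1])
```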
Advanced Warming
For more control, use warmup_advanced:
# Warm specific endpoints with detailed results
results = await floodr.warmup_advanced(
    base_url="https://api.example.com",
    paths=["/health", "/api/v1/status", "/api/v1/users"],
    num_connections=50,
    method="HEAD",  # Use HEAD for minimal data transfer
)

# Check warmup results
for result in results:
    print(f"{result['url']}: {result['status']} in {result['elapsed']:.3f}s")
Using with Client
The Client class also supports warming:
client = Client(max_connections=1000)
# Warm the client's connection pool
await client.warmup("https://api.example.com", num_connections=100)
# Use the warmed client
responses = await client.request(requests)
Best Practices
- Warm before bulk requests: If you're about to make 100+ requests to a domain, warm with 10-20% of that number
- Use HEAD requests: The default HEAD method minimizes data transfer during warming
- Consider server limits: Don't warm more connections than the server can handle
- Reuse warmed pools: The global client maintains connections for 5 minutes
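The first and third guidelines combine into a simple rule of thumb (a hypothetical helper; `server_cap` is an assumed per-server connection budget, not a floodr parameter):

```python
def suggested_warm_connections(expected_requests: int, server_cap: int = 100) -> int:
    # Warm roughly 15% of the expected batch (the midpoint of the
    # 10-20% guideline), never exceeding the server's connection budget.
    return min(server_cap, max(1, int(expected_requests * 0.15)))
```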
Performance
floodr is designed for high-performance parallel requests:
- Several times faster than pure-Python solutions like aiohttp for parallel requests (4-7x in the benchmarks below)
- Automatic concurrency management prevents overwhelming servers
- Memory efficient with streaming responses
- Connection pooling for reduced latency
Benchmarks
| Concurrent Requests | aiohttp | httpx | floodr | Speedup |
|---|---|---|---|---|
| 100 | 1.2s | 1.1s | 0.3s | 4x |
| 500 | 5.8s | 5.2s | 0.8s | 7x |
| 1000 | 11.4s | 10.8s | 1.6s | 7x |
Benchmarked against httpbin.org; results may vary with network conditions.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
Development Setup
# Clone and install in development mode
git clone https://github.com/yourusername/floodr.git
cd floodr
pip install -e ".[dev]"
# Run tests
pytest
# Run all tests including integration tests
./scripts/test_all.sh
# Format code
black floodr tests
# Lint
ruff check floodr tests
License
This project is licensed under the MIT License - see the LICENSE file for details.
Technical Details
floodr is a Python library with a Rust backend for maximum performance. It's not intended to be used as a standalone Rust crate, but rather as a Python package that leverages Rust's speed and safety.