Barq
⚠️ Experimental: This project is a proof-of-concept exploring free-threaded Python (PEP 703) for HTTP frameworks. Not production-ready.
A pure-Python HTTP framework built for free-threaded Python 3.13+. No C extensions and no async/await; just threads with true parallelism.
Requirements
- Python 3.13+ with free-threading enabled (`python3.13t`)
- uv package manager
Installation
```shell
uv add barq
```
Development Setup
```shell
git clone https://github.com/grandimam/barq.git
cd barq

# Install
uv sync

# Run
uv run python examples/basic.py

# Test
curl http://localhost:8000/
curl http://localhost:8000/items/1
curl -X POST http://localhost:8000/items -H "Content-Type: application/json" -d '{"name":"Widget","price":9.99}'
```
Running Benchmarks
```shell
# Install dev dependencies
uv sync --dev

# Run benchmark
uv run python benchmarks/run_benchmark.py 1000 10
```
Quick Start
```python
from typing import Annotated

from pydantic import BaseModel

from barq import Barq, Depends

app = Barq()

class Item(BaseModel):
    name: str
    price: float

@app.get("/")
def index() -> dict:
    return {"message": "Hello, World!"}

@app.get("/items/{item_id}")
def get_item(item_id: int) -> dict:
    return {"id": item_id}

@app.post("/items")
def create_item(body: Item) -> Item:
    return body

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000, workers=4)
```
Features
- Pure Python: No C extensions, no Rust, no Cython
- Free-threaded: True parallelism without the GIL (Python 3.13t)
- Type-driven: Pydantic models auto-parsed from request body
- Dependency injection: `Depends()` with request-scoped caching
- Minimal: ~465 lines of code in 4 files
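Request-scoped caching means a dependency runs at most once per request, no matter how many handlers or sub-dependencies ask for it. The sketch below illustrates the pattern in plain Python; the names (`Request`, `resolve`, `get_db`) are invented for illustration and are not Barq's actual internals.

```python
from typing import Any, Callable

class Request:
    """Carries a per-request cache so each dependency runs at most once."""
    def __init__(self) -> None:
        self.cache: dict[Callable[[], Any], Any] = {}

def resolve(dep: Callable[[], Any], request: Request) -> Any:
    # Request-scoped caching: the first call stores the result,
    # later calls within the same request reuse it.
    if dep not in request.cache:
        request.cache[dep] = dep()
    return request.cache[dep]

calls = 0

def get_db() -> str:
    global calls
    calls += 1
    return "connection"

req = Request()
a = resolve(get_db, req)
b = resolve(get_db, req)
assert a == b == "connection"
assert calls == 1  # get_db ran only once for this request
```

A new `Request` starts with an empty cache, so the next request re-runs the dependency from scratch.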
Benchmarks
System
| Component | Value |
|---|---|
| CPU | Apple M2 Pro |
| Cores | 12 |
| Python | 3.13.0 (free-threaded) |
| Platform | Darwin arm64 |
Optimal Configs (500 requests, 10 concurrent clients)
Each framework using its optimal single-process configuration:
- Barq: 4 threads + blocking sqlite3
- FastAPI: async + aiosqlite
| Scenario | Barq (4 threads) | FastAPI (async) | Difference |
|---|---|---|---|
| JSON | 10,009 req/s | 3,829 req/s | Barq: +161% |
| DB Query | 8,501 req/s | 1,996 req/s | Barq: +326% |
| CPU Bound | 866 req/s | 260 req/s | Barq: +232% |
Multiprocess Comparison (500 requests, 10 concurrent clients)
Barq threads vs FastAPI with 4 worker processes:
| Scenario | Barq (4 threads) | FastAPI (4 processes) | Difference |
|---|---|---|---|
| JSON | 10,114 req/s | 5,665 req/s | Barq: +79% |
| DB Query | 9,962 req/s | 1,015 req/s | Barq: +881% |
| CPU Bound | 879 req/s | 1,231 req/s | FastAPI: +29% |
Analysis
- I/O-bound (JSON, DB): Barq wins in both configurations due to shared memory and no IPC overhead
- CPU-bound (async): Barq wins because async cannot parallelize CPU work (blocks event loop)
- CPU-bound (multiprocess): FastAPI wins due to process isolation (no memory contention)
Notes:
- CPU benchmark uses pure Python arithmetic
- C extensions like `hashlib` have internal locks that prevent parallelism even with free-threaded Python
- For CPU-bound async workloads, FastAPI users would typically use `run_in_executor` or multiple processes
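For reference, the `run_in_executor` pattern mentioned above looks like this: the event loop hands CPU-bound work to an executor so it does not block other coroutines. This is standard asyncio, not Barq code.

```python
import asyncio

def cpu_work(n: int) -> int:
    # Pure-Python arithmetic, like the CPU benchmark scenario
    return sum(i * i for i in range(n))

async def main() -> int:
    loop = asyncio.get_running_loop()
    # None uses the loop's default ThreadPoolExecutor; passing a
    # ProcessPoolExecutor instead sidesteps the GIL on standard builds.
    return await loop.run_in_executor(None, cpu_work, 10_000)

result = asyncio.run(main())
```

On a GIL build, a thread executor only helps if the work releases the GIL, which is why a process pool is the usual choice for pure-Python CPU work.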
Architecture
┌─────────────────────────────────────────────────────────┐
│ Barq App │
│ (app.py: routing, DI, handlers) │
├─────────────────────────────────────────────────────────┤
│ Request / Response │
│ (types.py: dataclasses) │
├─────────────────────────────────────────────────────────┤
│ HTTP Parser │
│ (http.py: parse/write HTTP/1.1) │
├─────────────────────────────────────────────────────────┤
│ ThreadPoolExecutor │
│ (server.py: socket handling) │
└─────────────────────────────────────────────────────────┘
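The layering above can be sketched in a few lines: a `ThreadPoolExecutor` accepts sockets and a small parser splits the HTTP/1.1 request line. This is an illustrative stand-in for what `server.py` and `http.py` do, not Barq's actual code; `parse_request_line`, `handle`, and `serve` are hypothetical names.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def parse_request_line(raw: bytes) -> tuple[str, str, str]:
    """Split b'GET /items/1 HTTP/1.1' into (method, path, version)."""
    method, path, version = raw.decode("ascii").strip().split(" ")
    return method, path, version

def handle(conn: socket.socket) -> None:
    # One connection per pool thread; under free-threading these
    # handlers run in true parallel with no event loop involved.
    with conn:
        line = conn.makefile("rb").readline()
        method, path, _ = parse_request_line(line)
        body = f'{{"method": "{method}", "path": "{path}"}}'.encode()
        conn.sendall(
            b"HTTP/1.1 200 OK\r\nContent-Length: "
            + str(len(body)).encode()
            + b"\r\n\r\n"
            + body
        )

def serve(host: str, port: int, workers: int) -> None:
    with socket.create_server((host, port)) as srv, \
         ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            conn, _ = srv.accept()
            pool.submit(handle, conn)
```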
Project Structure
src/barq/
├── __init__.py # exports
├── app.py # Barq, Depends, Router
├── types.py # Request, Response, HTTPException
├── server.py # Server, ThreadPool, SocketReader
└── http.py # HTTPParser, write_response
Why Free-Threaded Python?
Traditional Python has the GIL (Global Interpreter Lock), which prevents true parallelism in threads. Web frameworks work around this using:
- Async/await (FastAPI, Starlette): Cooperative multitasking
- Multiprocessing (Gunicorn, uvicorn): Separate processes with IPC overhead
Free-threaded Python (PEP 703) removes the GIL, enabling:
- Simple synchronous code that runs in parallel
- Shared memory between threads (no serialization)
- Lower overhead than multiprocessing
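You can check at runtime whether your interpreter is a free-threaded build and whether the GIL is actually disabled, using APIs available since Python 3.13:

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded builds (python3.13t), unset otherwise.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() exists on 3.13+; on a free-threaded build it reports
# whether the GIL was re-enabled at runtime (e.g. by an incompatible extension).
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {free_threaded_build}, GIL enabled: {gil_enabled}")
```

Note that importing a C extension not built for free-threading can silently re-enable the GIL, so checking `sys._is_gil_enabled()` after imports is a useful sanity check.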
Limitations
- Experimental—not battle-tested
- HTTP/1.1 only (no HTTP/2, no WebSocket)
- No middleware system (yet)
- C extensions with internal locks don't parallelize
License
MIT