# aiopquic - Async QUIC + WebTransport (picoquic)

High-performance QUIC/HTTP3 library: picoquic-backed, with a qh3-compatible asyncio API.
aiopquic is a Python/Cython binding to picoquic, providing high-performance QUIC transport and WebTransport for asyncio applications.
## Overview
aiopquic exposes picoquic's QUIC implementation through a lock-free SPSC ring buffer architecture that bridges the picoquic network thread with Python's asyncio event loop. It provides an asyncio QUIC/HTTP3 transport API in the spirit of aioquic (and its fork qh3), with similar shapes for `QuicConfiguration`, `QuicConnection`, `connect` / `serve`, and event types, plus a native WebTransport client/server layered on picoquic's H3 + h3zero. It is not a drop-in replacement: semantics differ around backpressure (`send_stream_data` raises `BufferError` when the per-stream ring is full) and flow-control sizing.
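The `BufferError` backpressure contract can be absorbed with a small retry wrapper. This is an illustrative sketch only: `send_with_backpressure` and `fake_send` are hypothetical names, the fake sender stands in for `QuicConnection.send_stream_data`, and the sleep-based backoff is an assumption, not an aiopquic API.

```python
import asyncio

async def send_with_backpressure(send, data: bytes, *, delay: float = 0.001) -> None:
    """Retry a send callable that raises BufferError while the per-stream ring is full."""
    while True:
        try:
            send(data)
            return
        except BufferError:
            await asyncio.sleep(delay)  # yield until the network thread drains the ring

# Demo with a stand-in sender that reports "ring full" for the first two attempts.
attempts = 0

def fake_send(data: bytes) -> None:
    global attempts
    attempts += 1
    if attempts < 3:
        raise BufferError("TX ring full")

asyncio.run(send_with_backpressure(fake_send, b"payload"))
print(attempts)  # 3
```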
## Architecture

- SPSC Ring Buffers -- Lock-free single-producer/single-consumer rings for event passing between threads; separate TX and RX rings per `TransportContext`.
- TX path -- asyncio pushes into a per-stream byte ring; picoquic pulls at wire rate via `prepare_to_send`.
- RX path -- picoquic pushes per-event `StreamChunk`s; ownership transfers at pop for one-copy delivery.
- Cross-platform wake fd -- Linux `eventfd` for efficient asyncio `add_reader()` notification; `pipe()` self-pipe fallback on macOS / BSD.
- Dedicated Network Thread -- picoquic runs in its own thread via `picoquic_start_network_thread()`.
- Cython Bridge -- Thin Cython layer over C callbacks, minimal overhead.
- WebTransport -- `asyncio.webtransport.WebTransportSession` (client + server) over picoquic's `picowt_*` API and h3zero.
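The wake-fd pattern the bridge relies on can be illustrated in pure Python. This is a sketch of the general self-pipe technique using `os.pipe()`, not aiopquic's actual C implementation (which uses `eventfd` on Linux); `wake_count` and `on_wake` are illustrative names.

```python
import asyncio
import os

wake_count = 0  # how many times the loop-side callback fired

async def main() -> None:
    loop = asyncio.get_running_loop()
    r, w = os.pipe()                 # self-pipe: portable wake fd pair
    os.set_blocking(r, False)
    woken = asyncio.Event()

    def on_wake() -> None:
        global wake_count
        os.read(r, 64)               # drain the wake byte
        wake_count += 1              # real code would pop the RX ring here
        woken.set()

    loop.add_reader(r, on_wake)      # the event loop watches the read end
    os.write(w, b"\x01")             # the network thread side writes to wake it
    await woken.wait()
    loop.remove_reader(r)
    os.close(r)
    os.close(w)

asyncio.run(main())
print(wake_count)  # 1
```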
## Features

- QUIC client and server: `connect`, `serve`, `QuicConnectionProtocol`
- Stream data send/receive with FIN signaling, stream reset, stop_sending
- WebTransport client + server: `serve_webtransport`, `WebTransportSession`
- QUIC datagram TX + RX (note: WebTransport datagram TX not yet wired)
- Connection migration / 0-RTT (inherited from picoquic)
- Connection management: create, close, idle timeout, application close codes
- Per-connection multiplexing on the server side via `QuicEngine`
- TLS keylog (NSS Key Log Format) for pcap decryption
- Native `picoquic_ct` / `picohttp_ct` subprocess smoke tests (catches upstream regressions on every submodule update)
## Test Results

Tests pass on Linux and macOS. The interop suite is opt-in (network-dependent).

| Suite | Coverage |
|---|---|
| `test_spsc_ring` | Per-event malloc ring lifecycle |
| `test_buffer` | Cython `Buffer` |
| `test_transport` | Transport lifecycle, wake fd, wake-up, connection management |
| `test_loopback` | 17 tests: handshake, streams, FIN, reset, datagrams, ALPN mismatch, idle timeout, app-close codes, stop_sending, many-streams stress, TX-ring overflow |
| `test_asyncio` | Client/server stream + datagram exchange via `connect` / `serve` |
| `test_baton_pattern` | Pure-QUIC baton-style stream multiplexing (UNI ↔ BIDI) |
| `test_native_picoquic` | `picoquic_ct` / `picohttp_ct` subprocess driver |
| `test_interop` | Real public endpoints (opt-in) |
| `tests/bench/` | Microbenches: ring push/pop, single-shot/sustained/parallel/bidirectional throughput, datagrams, RTT latency, handshake rate, byte-verifying object stress + stream churn + concurrent streams (opt-in via `pytest tests/bench`) |
## Performance

Sustained single-stream throughput, 30 s steady-state, byte-verifying, high-level asyncio API (`QuicConnection.send_stream_data`):
| platform | 1 KiB | 4 KiB | 16 KiB |
|---|---|---|---|
| AMD Ryzen 7 PRO 7840U / WSL2 / Linux 6.6 | 1,570 Mbps | 2,118 Mbps | 2,031 Mbps |
| Apple M-series / macOS Sonoma | 953 Mbps | 1,130 Mbps | 1,104 Mbps |
These numbers are over local UDP loopback at the QUIC default MTU (~1,400 B). The realistic ceiling at that MTU is the kernel's per-syscall `sendmsg` rate, not bandwidth. On the Ryzen WSL2 box, raw `iperf3 -u -l 1400` over loopback maxes out at 3.15 Gbps (≈ 280 K syscalls/s); raise the datagram size and throughput climbs cleanly: 4 KiB → 7.9 Gbps, 8 KiB → 12.8 Gbps, 32 KiB → 33.7 Gbps. So QUIC pinned at MTU sits in a regime where syscall rate, not bandwidth, is the wall.
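The syscall-rate arithmetic above can be sanity-checked directly (a quick calculation using the figures quoted in this section):

```python
# At ~1,400-byte datagrams, 3.15 Gbps of payload implies ~280 K sendmsg calls/s.
GBPS = 3.15
DATAGRAM_BYTES = 1400

bytes_per_s = GBPS * 1e9 / 8           # convert bits/s to bytes/s
syscalls_per_s = bytes_per_s / DATAGRAM_BYTES

print(f"{syscalls_per_s / 1e3:.0f} K syscalls/s")  # 281 K syscalls/s
```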
In that regime, here's where the layers land on Ryzen WSL2:

| Layer | ss_mbps | Of UDP@1400 ceiling |
|---|---|---|
| `iperf3 -u -l 1400` (raw UDP loopback) | 3,150 | 100 % |
| `picoquicdemo -a perf` (picoquic over UDP) | 2,184 | 69 % |
| aiopquic lowlevel (SPSC ring + UDP) | 2,322 | 74 % |
| aiopquic highlevel (asyncio + SPSC + UDP) | 2,031 | 64 % |
| `sim_link_bench` (picoquic only, no kernel UDP) | 11,216 | n/a (off-axis) |
The asyncio wrapper costs ~10 % below the lowlevel SPSC path; picoquic's own QUIC framing/encryption/ACK overhead accounts for ~25 % vs raw UDP. Both are normal for QUIC-over-loopback at MTU.
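As a quick cross-check, the "of ceiling" column follows directly from the throughput column (figures taken from the table above):

```python
# Each layer's throughput as a fraction of the raw UDP loopback ceiling.
CEILING_MBPS = 3150  # iperf3 -u -l 1400 over loopback

layers = {
    "picoquicdemo -a perf": 2184,
    "aiopquic lowlevel": 2322,
    "aiopquic highlevel": 2031,
}
for name, mbps in layers.items():
    print(f"{name}: {mbps / CEILING_MBPS:.0%}")
# picoquicdemo -a perf: 69%
# aiopquic lowlevel: 74%
# aiopquic highlevel: 64%
```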
`sim_link_bench` (`tests/bench/sim_link/`) drives picoquic over its `picoquictest_sim_link` simulated link: packets are routed in-process between two `picoquic_quic_t` instances, with no kernel UDP, no sockets, and no syscall-rate ceiling. It isolates picoquic protocol CPU cost from the loopback wall and is platform-independent. The 11.2 Gbps number above is what picoquic can do without any kernel involvement on this hardware. Build with `./tests/bench/sim_link/build.sh` after `./build_picoquic.sh`.
Calibrate on your own hardware:

```shell
# UDP-over-loopback path (what aiopquic users actually see)
pytest tests/bench/bench_baselines_highlevel.py -s -v                # 30 s default
pytest tests/bench/bench_baselines_highlevel.py -s -v --duration=60

# Protocol-only reference (no kernel UDP)
PICOQUIC_SOLUTION_DIR=third_party/picoquic/ \
    tests/bench/sim_link/sim_link_bench --duration-s 30 --rate-gbps 100
```
Microbenches (ring lifecycle, stream churn, concurrent-streams short bursts) live under `tests/bench/` for development reference. Their reported numbers are not representative of sustained throughput: short measurement windows inflate results with warmup transients (a 100-stream churn case at 256 B per stream measures ~1 ms of work, dominated by setup cost).
## Installation

Wheels for cp312 / cp313 / cp314 on Linux (manylinux_2_34, glibc 2.34+) and macOS arm64 are published to PyPI:

```shell
uv pip install aiopquic    # or: pip install aiopquic
```

For older Linux (glibc 2.28–2.33), install from the sdist; a build toolchain is required.
### From source

```shell
git clone https://github.com/gmarzot/aiopquic.git
cd aiopquic
git submodule update --init --recursive
./bootstrap_python.sh         # creates .venv with uv-managed Python 3.14t and pins cython 3.2+
source .venv/bin/activate
./build_picoquic.sh           # builds picotls, picoquic, native test drivers
uv pip install -e '.[dev]'    # or: pip install -e '.[dev]'
```

On macOS, set `OPENSSL_ROOT_DIR` if Homebrew OpenSSL is not auto-detected (the build script tries `openssl@3`, then `openssl@1.1`).
## Usage

### Low-level Transport API

```python
from aiopquic._binding._transport import TransportContext

server = TransportContext()
server.start(port=4433, cert_file="cert.pem", key_file="key.pem", alpn="moq-00", is_client=False)

client = TransportContext()
client.start(port=0, alpn="moq-00", is_client=True)
client.create_client_connection("127.0.0.1", 4433, sni="localhost", alpn="moq-00")
```
### Asyncio API

```python
from aiopquic.asyncio.client import connect
from aiopquic.quic.configuration import QuicConfiguration

configuration = QuicConfiguration(alpn_protocols=["myproto"], is_client=True)
async with connect("server", 4433, configuration=configuration) as protocol:
    quic = protocol._quic
    stream_id = quic.get_next_available_stream_id()
    quic.send_stream_data(stream_id, payload, end_stream=True)
    protocol.transmit()
```
`payload` is opaque bytes; the library doesn't impose framing. Consumers that want HTTP/3 layer on top of aiopquic's picowt-backed h3zero plumbing; consumers that want WebTransport use `serve_webtransport` / `connect_webtransport`. Most direct users of the asyncio API ship their own protocol bytes (MoQT, custom binary frames, etc.).
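Shipping your own protocol bytes usually starts with a framing layer. A minimal length-prefix sketch of the kind a direct user might run over a QUIC stream follows; the 4-byte big-endian length header and the function names are arbitrary illustrative choices, not an aiopquic convention.

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    """Prefix payload with a 4-byte big-endian length header."""
    return struct.pack("!I", len(payload)) + payload

def decode_frames(buf: bytes) -> tuple[list[bytes], bytes]:
    """Split buf into complete frames; return (frames, leftover bytes)."""
    frames = []
    while len(buf) >= 4:
        (length,) = struct.unpack_from("!I", buf)
        if len(buf) < 4 + length:
            break  # partial frame: keep the bytes and wait for more stream data
        frames.append(buf[4:4 + length])
        buf = buf[4 + length:]
    return frames, buf

# Stream data arrives in arbitrary chunks; here the second frame is cut short.
wire = encode_frame(b"hello") + encode_frame(b"world")[:7]
frames, rest = decode_frames(wire)
print(frames)  # [b'hello']
```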
## WebTransport

```python
from aiopquic.asyncio.webtransport import (
    serve_webtransport,
    WebTransportSession,
)
# See src/aiopquic/asyncio/webtransport.py and tests/ for full examples.
```
## Development

```shell
uv pip install -e '.[dev]'    # or: pip install -e '.[dev]'
python -m pytest tests/ -v -m "not interop and not native"

# Microbenches (opt-in)
python -m pytest tests/bench
```
## Known Limitations

- Free-threaded Python (3.14t) is not yet supported -- the TX-ring producer side, `TransportContext` lifecycle, and the WebTransport engine state currently rely on the GIL for serialization. FT support is deferred until a per-context locking audit lands.
- STOP_SENDING error codes surface as 0 today: picoquic's public stream-error getter only returns the RESET_STREAM code. STOP_SENDING's code lives in `stream->remote_stop_error` in `picoquic_internal.h` (no public getter). A small helper that pulls the field is straightforward future work; see the TODO in `src/aiopquic/_binding/c/callback.h`.
- Per-stream wrapper cleanup before connection close -- per-stream `aiopquic_stream_ctx_t*` wrappers are freed at connection close rather than at stream RESET/FIN. Bounded leak per connection; flagged for follow-up.
## TODO

- Windows support (eventfd alternative: IOCP / WSAEventSelect on the wake-fd path)
- Free-threaded Python (3.14t) support after the producer-side locking audit
- STOP_SENDING error-code surfacing helper (read `remote_stop_error` from `picoquic_internal.h`)
- Per-stream wrapper cleanup on RESET/FIN before connection close
- WebTransport datagram TX path through the C bridge
- Datagram benches: latency percentiles, payload-size sweep, loss / jitter under load (today's `bench_datagram` is fire-and-count throughput only)
- Pure stream open/close microbench (lifecycle rate without payload, separate from `bench_stream_churn_highlevel`, which bundles writes + FIN)
- Submit aiopquic to the QUIC interop runner for cross-implementation coverage
## Resources
- picoquic -- QUIC implementation by Christian Huitema
- picotls -- TLS 1.3 implementation
- Media Over QUIC Working Group
A Marz Research project.
Author: G. S. Marzot <gmarzot@marzresearch.net>
## License
MIT License -- see LICENSE