# debouncer

High-performance async debounce library for Python 3.13+ with message coalescing, designed for LLM-powered applications.
Built with Claude Code — this entire codebase (architecture, implementation, tests, and this README) was written by Claude via Claude Code.
## The problem

When users interact with LLM-powered apps, they often send messages in quick succession:

```
t=0.0s  User: "hi"
t=2.0s  User: "everything ok?"
```

Without debouncing, this triggers two separate LLM API calls. With debouncing, both messages are coalesced into one call:

```
LLM receives: ["hi", "everything ok?"]
```
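The trailing-debounce idea behind this can be sketched with plain asyncio. This is an illustration of the concept only, not the library's internals — `debounced_batches` is a hypothetical name:

```python
import asyncio
from typing import AsyncIterator


async def debounced_batches(
    queue: "asyncio.Queue[str]", delay: float
) -> AsyncIterator[list[str]]:
    """Yield batches of messages separated by >= `delay` seconds of quiet.

    Illustrative sketch only -- not the library's implementation.
    """
    while True:
        first = await queue.get()  # block until a message arrives
        batch = [first]
        while True:
            try:
                # a new message before the timeout resets the quiet timer
                nxt = await asyncio.wait_for(queue.get(), timeout=delay)
                batch.append(nxt)
            except asyncio.TimeoutError:
                break  # `delay` seconds of quiet: flush the batch
        yield batch
```

Each new message restarts the timeout, so the batch only flushes once the user has gone quiet.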
## Install

```shell
uv add debouncer
```

With LangChain integration:

```shell
uv add debouncer[langchain]
```

Or with pip:

```shell
pip install debouncer
```
## Quick start

### Decorator API

```python
from debouncer import debounce

@debounce(delay=2.0, max_wait=10.0)
async def handle(messages: list[str]) -> str:
    return await llm.invoke("\n".join(messages))

# Each call pushes a message; the function fires after 2s of quiet
await handle("hi")
await handle("everything ok?")
# -> handle is called once with ["hi", "everything ok?"]
```
### Imperative API

```python
from debouncer import Debouncer, DebounceConfig

async with Debouncer(config=DebounceConfig(delay=2.0, max_wait=10.0)) as d:
    await d.push("hello")
    await d.push("world")
    batch = await d.next_batch()  # ["hello", "world"]
```
### Async iterator

```python
async for batch in debouncer.batches():
    response = await llm.invoke(batch)
```
## Strategies

| Strategy | Enum | Description |
|---|---|---|
| Trailing | `Strategy.TRAILING` | Default. Resets the timer on each message; `max_wait` caps total buffering time. |
| Adaptive | `Strategy.ADAPTIVE` | Learns from user behavior via an EMA. Fast typists get shorter delays. |
| Actor | `Strategy.ACTOR` | Queue-based with natural backpressure via `asyncio.Queue`. |
```python
from debouncer import Debouncer, DebounceConfig, Strategy

d = Debouncer(config=DebounceConfig(
    delay=2.0,
    max_wait=10.0,
    strategy=Strategy.ADAPTIVE,
))
```
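The EMA idea behind the adaptive strategy can be sketched as follows. The constants, clamping, and the `ema_delay` name are hypothetical — this illustrates the general technique, not the library's exact formula:

```python
def ema_delay(
    gaps: list[float],
    base_delay: float = 2.0,
    alpha: float = 0.3,
    floor: float = 0.5,
    ceiling: float = 10.0,
) -> float:
    """Estimate a per-user delay from observed inter-message gaps.

    Hypothetical sketch: an exponential moving average of the gaps,
    clamped to [floor, ceiling]. Not the library's actual formula.
    """
    estimate = base_delay
    for gap in gaps:
        estimate = alpha * gap + (1 - alpha) * estimate  # standard EMA update
    return min(max(estimate, floor), ceiling)
```

A fast typist (small gaps between messages) pulls the estimate down toward the floor, so their batches flush sooner; a slow typist pushes it up.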
## Session manager

Independent debounce state per user/session:

```python
from debouncer import SessionManager

async with SessionManager(delay=2.0, max_wait=10.0) as mgr:
    await mgr.push("session-123", "hi")
    await mgr.push("session-123", "everything ok?")
    await mgr.push("session-456", "different user")

    batch_123 = await mgr.next_batch("session-123")  # ["hi", "everything ok?"]
    batch_456 = await mgr.next_batch("session-456")  # ["different user"]
```

Idle sessions are garbage-collected automatically.
## LangChain middleware

```python
from langchain.agents import create_agent
from debouncer.integrations.langchain import DebounceMiddleware

agent = create_agent(
    model="gpt-4.1",
    tools=[...],
    middleware=[DebounceMiddleware(delay=2.0, max_wait=10.0)],
)
```

The middleware implements `wrap_model_call` to coalesce consecutive `HumanMessage` objects before they reach the model. Disable coalescing with `coalesce=False`.
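The coalescing step amounts to merging runs of consecutive human messages. With plain `(role, text)` tuples standing in for LangChain message objects, the idea looks like this (a sketch of the concept, not the middleware's code):

```python
def coalesce_human(messages: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Merge runs of consecutive ("human", ...) messages into one,
    joining their text with newlines.

    Sketch using plain tuples; the real middleware operates on
    LangChain HumanMessage objects.
    """
    out: list[tuple[str, str]] = []
    for role, text in messages:
        if role == "human" and out and out[-1][0] == "human":
            # extend the previous human message instead of appending
            out[-1] = ("human", out[-1][1] + "\n" + text)
        else:
            out.append((role, text))
    return out
```

Non-human messages break a run, so only back-to-back user turns are merged.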
## API reference

### DebounceConfig

| Parameter | Type | Default | Description |
|---|---|---|---|
| `delay` | `float` | `2.0` | Quiet-period delay in seconds |
| `max_wait` | `float \| None` | `10.0` | Max buffering time; `None` = no limit |
| `strategy` | `Strategy` | `TRAILING` | Debounce strategy |
### Debouncer

| Method | Description |
|---|---|
| `push(message)` | Add a message to the buffer |
| `next_batch()` | Await the next flushed batch |
| `flush()` | Force-flush immediately |
| `close()` | Close and flush pending messages |
| `batches()` | Async iterator over batches |
| `delay` | Get/set the delay (validated against `max_wait`) |
### @debounce

```python
@debounce                           # defaults: delay=2.0, max_wait=10.0
@debounce(delay=1.0, max_wait=5.0)  # custom config

# wrapper attributes:
handler.debouncer        # access the Debouncer instance
handler.flush()          # force-flush
await handler.close()    # shut down
```
## Development

```shell
# Install dependencies
uv sync --extra dev

# Run tests
uv run pytest tests/ -v

# Run tests with coverage
uv run pytest tests/ -v --cov=src --cov-report=term

# Lint
uv run ruff check src/ tests/

# Format
uv run ruff format src/ tests/

# Type check
uv run mypy src/
```
## License