Time-series and event-streaming checkpointers for LangGraph using TimescaleDB, QuestDB, and Kafka
LangGraph Time-Series Checkpointers
Custom LangGraph checkpointer implementations optimized for Time-Series Databases and event-streaming backends. Drop-in replacements for PostgresSaver that deliver superior write throughput and richer state history for AI agent workloads.
Why Time-Series Memory?
Standard checkpointers (PostgresSaver) store state using row-level locking and WAL-based durability. This is ideal for simple apps, but becomes a bottleneck when you have:
- Multiple agents writing concurrently (fleet management, parallel pipelines)
- High-frequency state updates (trading bots, real-time monitoring)
- Long-running agents accumulating millions of checkpoints over weeks
- Event-sourced pipelines where checkpoints are durable, replayable events
Time-Series databases and event-streaming systems are purpose-built for these exact workloads.
Installation
```shell
pip install langgraph-checkpoint-timeseries
```
Supported Backends
1. TimescaleDB (TimescaleDBSaver)
Ideal when relational agent memory sits alongside IoT or metrics data. Uses UNLOGGED tables and pipeline mode for high-throughput writes, with full PostgreSQL ecosystem compatibility.
2. QuestDB (QuestDBSaver)
Extremely high-throughput ingestion using `timestamp(ts) PARTITION BY DAY` tables. Ideal for write-heavy, append-only workloads with sub-millisecond query latency.
3. Kafka (KafkaSaver)
Event-sourced agent memory on Apache Kafka. Writes are fire-and-forget produce calls (no round-trip wait), making this the fastest option for pure write throughput. Ideal for event-sourced architectures where checkpoints fan out to downstream consumers.
Quick Start
TimescaleDB
```python
from langgraph_checkpoint_timeseries import TimescaleDBSaver

with TimescaleDBSaver.from_conn_string("postgresql://postgres:postgres@localhost:5432/postgres") as saver:
    saver.setup()
    app = workflow.compile(checkpointer=saver)
```
QuestDB
```python
from langgraph_checkpoint_timeseries import QuestDBSaver

with QuestDBSaver.from_conn_string("postgresql://admin:quest@localhost:8812/qdb") as saver:
    saver.setup()
    app = workflow.compile(checkpointer=saver)
```
Kafka
```python
from langgraph_checkpoint_timeseries import KafkaSaver

with KafkaSaver.from_bootstrap_servers("localhost:9092") as saver:
    saver.setup()  # creates topics, replays existing state
    app = workflow.compile(checkpointer=saver)
```
Docker
All backends are available via Docker Compose:
```shell
docker compose up -d
```
| Service | URL |
|---|---|
| TimescaleDB | localhost:5432 |
| QuestDB | localhost:9000 (UI), localhost:8812 (PG wire) |
| Kafka | localhost:9092 |
| Kafka UI | localhost:8080 |
Benchmarks
Benchmarked against the standard PostgresSaver on the same hardware (2026-03-24):
| Scenario | PostgresSaver | TimescaleDB | QuestDB | KafkaSaver | Winner |
|---|---|---|---|---|---|
| Sequential Writes (1K) | 345 ops/s | 411 ops/s | 341 ops/s | 37,959 ops/s | KafkaSaver |
| Concurrent Writes (15Tร200) | 333 ops/s | 1,192 ops/s | 1,145 ops/s | 20,616 ops/s | KafkaSaver |
| High-Volume Writes (5K) | 332 ops/s | 375 ops/s | 378 ops/s | 36,044 ops/s | KafkaSaver |
| History Query (list 100) | 6,788 ops/s | 1,259 ops/s | 577 ops/s | 93,186 ops/s | KafkaSaver |
Concurrent Writes: Where Time-Series Backends Shine Over Postgres
```
PostgresSaver  █                                         333 ops/s
TimescaleDB    ██                                      1,192 ops/s
QuestDB        ██                                      1,145 ops/s
KafkaSaver     ████████████████████████████████████   20,616 ops/s
```
KafkaSaver is ~110x faster than PostgresSaver on sequential writes (~62x under concurrent load). TimescaleDB and QuestDB are ~3.5x faster than PostgresSaver under concurrent load.
Note on Kafka read numbers: The history query advantage for Kafka reflects an in-memory read projection (state is replayed from the topic at startup). Writes are genuinely faster due to async produce eliminating network round-trips. For production, pair Kafka with a secondary read store (Redis, DuckDB) for durable cross-restart reads.
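The read projection mentioned above can be sketched in plain Python: replay an ordered stream of checkpoint events into an in-memory index keyed by thread ID, keeping the latest event per thread. The event shape here (`thread_id`, `checkpoint_id`, `state`) is illustrative, not the library's actual wire format.

```python
from typing import Any

def build_read_projection(events: list[dict[str, Any]]) -> dict[str, dict[str, Any]]:
    """Replay an ordered checkpoint-event stream into a per-thread index.

    Later events for the same thread overwrite earlier ones, so the
    projection always holds the latest checkpoint per thread -- the same
    idea behind replaying a Kafka topic at startup.
    """
    projection: dict[str, dict[str, Any]] = {}
    for event in events:
        projection[event["thread_id"]] = event
    return projection

events = [
    {"thread_id": "agent-1", "checkpoint_id": 1, "state": {"step": "start"}},
    {"thread_id": "agent-2", "checkpoint_id": 1, "state": {"step": "start"}},
    {"thread_id": "agent-1", "checkpoint_id": 2, "state": {"step": "done"}},
]
latest = build_read_projection(events)
print(latest["agent-1"]["state"])  # the later event for agent-1 wins
```

Once this projection is also written to a durable read store, history queries survive restarts without a full topic replay.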
Full results: see benchmark_results.md.
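For context on how ops/s figures like these are typically produced: time a tight loop of checkpoint writes and divide. The harness below is a minimal sketch of that methodology; `NoopSaver` is a hypothetical stand-in, since the published numbers were measured against the real savers and live backends.

```python
import time

class NoopSaver:
    """Hypothetical stand-in for a real checkpointer; put() does no I/O."""
    def put(self, config: dict, checkpoint: dict) -> None:
        pass

def measure_write_ops(saver, n_writes: int = 1000) -> float:
    """Return sequential write throughput in ops/s."""
    start = time.perf_counter()
    for i in range(n_writes):
        saver.put({"configurable": {"thread_id": "bench"}}, {"id": i})
    elapsed = time.perf_counter() - start
    return n_writes / elapsed

print(f"{measure_write_ops(NoopSaver()):,.0f} ops/s")
```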
When to Use This
| Use Case | Recommended Backend |
|---|---|
| Simple apps, prototyping | PostgresSaver |
| Multi-agent, high concurrency | TimescaleDBSaver |
| Maximum write throughput (IoT, trading) | QuestDBSaver |
| Full PostgreSQL ecosystem + time-series | TimescaleDBSaver |
| Event-sourced agents, audit log, fan-out | KafkaSaver |
Multi-Agent / High-Concurrency Systems
When you have multiple AI agents writing state simultaneously (fleet of IoT monitoring agents, parallel customer service bots), TimescaleDB and QuestDB handle write contention far better than standard Postgres. Our benchmarks show ~3.5x throughput under 15-thread concurrent load.
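The concurrency pattern is easy to reproduce in miniature: give each agent its own `thread_id` and write from a thread pool. The in-memory store below is a toy stand-in for a real saver; with a real backend, each `put` call would hit the database, which is where the backends diverge under contention.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class InMemorySaver:
    """Toy thread-safe checkpoint store standing in for a real saver."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._checkpoints: dict[str, list[dict]] = {}

    def put(self, config: dict, checkpoint: dict) -> None:
        thread_id = config["configurable"]["thread_id"]
        with self._lock:
            self._checkpoints.setdefault(thread_id, []).append(checkpoint)

saver = InMemorySaver()

def run_agent(agent_id: int, steps: int = 200) -> None:
    # Each agent writes under its own thread_id, mirroring the 15x200 benchmark.
    config = {"configurable": {"thread_id": f"agent-{agent_id}"}}
    for step in range(steps):
        saver.put(config, {"step": step})

with ThreadPoolExecutor(max_workers=15) as pool:
    list(pool.map(run_agent, range(15)))

total = sum(len(v) for v in saver._checkpoints.values())
print(total)  # 3000 checkpoints across 15 threads
```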
High-Frequency Decision Agents
Trading bots, real-time bidding agents, or any system making hundreds of decisions per second benefit from the optimized ingestion pipelines of time-series databases. UNLOGGED tables skip WAL writes, and disabling synchronous commit removes the commit-time fsync wait.
Event-Sourced & Streaming Pipelines
When agent checkpoints need to be consumed by multiple downstream services (analytics, alerting, replay), KafkaSaver makes each state transition a first-class Kafka event. Any consumer group can subscribe independently, with no database access needed.
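The fan-out pattern can be illustrated in-process: route every checkpoint event to independent handlers. In production, each handler would instead be a separate Kafka consumer group reading the checkpoint topic at its own pace; the handler names and event shape here are illustrative.

```python
from typing import Any, Callable

Handler = Callable[[dict[str, Any]], None]

class CheckpointFanOut:
    """In-process stand-in for Kafka consumer groups: every subscriber
    sees every published event, independently of the others."""
    def __init__(self) -> None:
        self._subscribers: dict[str, Handler] = {}

    def subscribe(self, name: str, handler: Handler) -> None:
        self._subscribers[name] = handler

    def publish(self, event: dict[str, Any]) -> None:
        for handler in self._subscribers.values():
            handler(event)

bus = CheckpointFanOut()
seen: dict[str, list] = {"analytics": [], "alerting": []}
# Analytics records everything; alerting only cares about flagged states.
bus.subscribe("analytics", lambda e: seen["analytics"].append(e))
bus.subscribe(
    "alerting",
    lambda e: seen["alerting"].append(e) if e["state"].get("flagged") else None,
)

bus.publish({"thread_id": "mod-1", "state": {"flagged": False}})
bus.publish({"thread_id": "mod-2", "state": {"flagged": True}})
print(len(seen["analytics"]), len(seen["alerting"]))  # 2 1
```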
Long-Running Agents with Data Retention
Agents that run for weeks or months accumulate millions of checkpoints. Time-series databases offer efficient partition-based cleanup (DROP PARTITION) instead of expensive DELETE operations, keeping performance stable over time.
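As a sketch of what that cleanup looks like (table and column names are assumptions about the schema; TimescaleDB's `drop_chunks` function and QuestDB's `ALTER TABLE ... DROP PARTITION` statement are the real mechanisms):

```python
def retention_statement(backend: str, table: str = "checkpoints",
                        keep_days: int = 30) -> str:
    """Build a partition-drop statement for the given backend.

    Dropping whole partitions/chunks is cheap metadata work, unlike a
    row-by-row DELETE that must scan, lock, and later be vacuumed.
    """
    if backend == "timescaledb":
        # TimescaleDB: drop hypertable chunks older than the cutoff.
        return f"SELECT drop_chunks('{table}', INTERVAL '{keep_days} days');"
    if backend == "questdb":
        # QuestDB: drop whole daily partitions older than the cutoff.
        return (f"ALTER TABLE {table} DROP PARTITION "
                f"WHERE ts < dateadd('d', -{keep_days}, now());")
    raise ValueError(f"unknown backend: {backend}")

print(retention_statement("timescaledb"))
print(retention_statement("questdb", keep_days=90))
```

Run either statement on a schedule (cron, pg_cron, or a TimescaleDB retention policy) to cap checkpoint history.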
Debugging & Compliance Auditing
When you need to answer "What was the agent thinking at 14:03:22?", time-series databases provide native timestamp-indexed queries. Correlate agent decisions with real-world events stored in the same database.
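That point-in-time lookup reduces to a single timestamp-indexed query. A minimal sketch, assuming a `checkpoints` table with `ts`, `thread_id`, and `checkpoint` columns (the actual schema may differ):

```python
def state_at(thread_id: str, at: str, table: str = "checkpoints") -> tuple[str, tuple]:
    """Build a parameterized query for the latest checkpoint at or before
    the given timestamp: 'what was the agent thinking at 14:03:22?'"""
    sql = (
        f"SELECT checkpoint FROM {table} "
        "WHERE thread_id = %s AND ts <= %s "
        "ORDER BY ts DESC LIMIT 1"
    )
    return sql, (thread_id, at)

sql, params = state_at("trader-7", "2026-03-24 14:03:22")
print(sql)
# Execute with any PostgreSQL driver against TimescaleDB or QuestDB's
# PG wire port, e.g.: cur.execute(sql, params)
```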
Examples
See the examples/ directory for practical demos:
- IoT Monitoring Agent (examples/timescaledb_iot_agent.py): streaming sensor data with time-series checkpointing
- Algorithmic Trading Agent (examples/questdb_trading_agent.py): high-frequency state updates and rapid decision preservation
- Event-Sourced Moderation Pipeline (examples/kafka_event_sourced_agent.py): parallel moderation agents with a Kafka audit log and fan-out
Running Tests
```shell
# Ensure Docker services are up
docker compose up -d

# Install dependencies
pip install -e ".[dev]"

# Run all tests
pytest tests/ -v

# Run Kafka tests only
pytest tests/test_kafka.py -v
```
License
MIT. See LICENSE for details.