ContextPilot: Fast Long-Context Inference via Context Reuse
4–13× cache hits | 1.5–3× faster prefill | ~36% token savings across vLLM, SGLang, RAG, AI Agents, and more.
| Documentation | Examples | Benchmarks |
News
- [2026/02] ContextPilot v0.3.2 released, supporting PageIndex and Mem0.
- [2026/01] ContextPilot has been accepted to MLSys 2026 🎉! See you in Bellevue, WA, USA.
About
Long-context workloads (RAG, memory-augmented chat, tool-augmented agents) prepend many context blocks to each prompt. Across requests these blocks often overlap but arrive reordered or duplicated, which changes the token prefix and triggers cache misses and redundant KV recomputation. Common examples include (1) trending-topic QA, (2) closed-domain long-context QA, (3) batched long-context inference, and (4) multi-turn conversations with long-term memory.
ContextPilot sits between context assembly and inference to maximize prefix reuse and remove duplicates:
- Higher throughput & cache hits — boosts prefill throughput and prefix cache hit ratio via context reuse.
- Drop-in solutions — works with PageIndex, Mem0, LMCache, and backends like vLLM / SGLang.
- No compromise in reasoning quality — can even improve with extremely long contexts.
- Widely tested — validated across diverse RAG and agentic workloads.
ContextPilot maintains a Context Index of cached content. For each request it applies Reorder (align shared blocks into a common prefix) and/or Deduplicate (replace repeated blocks with reference hints), plus cache-aware scheduling to maximize prefix sharing. The optimized prompt is then sent through the OpenAI-compatible API, and a POST /evict endpoint keeps the index in sync when the KV cache is reclaimed.
For more design details, see the Paper and Documentation.
Performance at a Glance
ContextPilot significantly speeds up DeepSeek-R1-671B offline inference on a GPU cluster with minimal accuracy impact: 64.68% vs 64.15% F1 on MultihopRAG and 41.08% vs 40.20% F1 on NarrativeQA.
On consumer-grade or professional-grade GPUs (e.g., 4090, A6000), ContextPilot delivers consistent speedups across popular LLMs and long-context workloads—see the Evaluation section of the Paper for full performance results.
Installation
Requirements: Python >= 3.10

```bash
pip install contextpilot
```

Or install from source:

```bash
git clone https://github.com/EfficientContext/ContextPilot.git
cd ContextPilot
pip install -e .
```
More detailed installation instructions are available in the docs.
Getting Started
ContextPilot offers two core optimizations—reorder and deduplicate—to reduce long-context inefficiencies.
Context Ordering
cp.reorder() places shared blocks at the beginning of the prompt so consecutive requests share the longest possible common prefix, enabling KV-cache reuse. To preserve answer quality, ContextPilot injects an importance ranking so the model still prioritizes blocks in their original relevance order.
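To make the mechanism concrete, here is a toy sketch of the principle only, not ContextPilot's implementation (which also injects the importance ranking and maintains the Context Index across requests):

```python
def toy_reorder(blocks, cached_order):
    """Move blocks that are already cached to the front, in their cached
    order, so this prompt shares the longest prefix with the previous one."""
    shared = [b for b in cached_order if b in blocks]
    fresh = [b for b in blocks if b not in shared]
    return shared + fresh

cached = ["doc3", "doc1", "doc7"]    # block order from the previous request
current = ["doc1", "doc9", "doc3"]   # this request, in retrieval order
print(toy_reorder(current, cached))  # ['doc3', 'doc1', 'doc9']: prefix 'doc3 doc1' hits the cache
```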
Context Deduplication
In multi-turn conversations, successive turns frequently gather many of the same context blocks, wasting tokens and compute.
cp.deduplicate() compares the current turn's context blocks against prior turns (tracked by conversation_id). Duplicate blocks are replaced with lightweight reference hints (e.g., "See Doc 3 from previous context"); only genuinely new blocks are sent in full, typically reducing duplicated tokens by 30–60%. See automatic context deduplication.
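In the same spirit, a toy sketch of the deduplication idea; the real hint format and per-conversation tracking are handled by cp.deduplicate() and conversation_id:

```python
def toy_deduplicate(blocks, seen):
    """Replace blocks already sent earlier in the conversation with reference
    hints; `seen` maps block text to the doc number it was first sent as."""
    out = []
    for block in blocks:
        if block in seen:
            out.append(f"(See Doc {seen[block]} from previous context)")
        else:
            seen[block] = len(seen) + 1
            out.append(f"Doc {seen[block]}: {block}")
    return out

seen = {}
print(toy_deduplicate(["alpha", "beta"], seen))  # turn 1: both sent in full
print(toy_deduplicate(["beta", "gamma"], seen))  # turn 2: 'beta' becomes a hint
```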
Quick Start with Context Ordering
Add one call (cp_live.optimize()) before inference to rearrange context blocks so that shared content aligns into a common prefix, enabling cache reuse. An importance ranking in the prompt preserves accuracy.
| Mode | When to Use | How It Works |
|---|---|---|
| Online | Multi-turn (e.g., chatbot + Mem0) | Tracks previously cached blocks; moves overlapping ones to the prefix each turn |
| Offline | Batch / single-shot | Globally reorders and schedules all requests for maximum prefix sharing |
Both modes work with any OpenAI-compatible endpoint (vLLM, SGLang, etc.) — no changes to your inference deployment. They support both direct API calls (shown below) and HTTP server deployment (see the online usage guide).
Accelerating Online Inference
Consider a multi-turn chatbot with Mem0 or RAG, where each turn's context blocks partially overlap with earlier turns. cp_live.optimize() moves shared blocks to the prefix so the engine reuses cached KV states.
```python
from openai import OpenAI

# Step 1: Import ContextPilot
import contextpilot as cp

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Step 2: Create a ContextPilot instance
cp_live = cp.ContextPilot(use_gpu=False)

for query in queries:
    contexts = get_contexts(query)  # Mem0, Retriever, ...

    # Step 3: Optimize context ordering and build ready-to-use messages
    messages = cp_live.optimize(contexts, query)

    response = client.chat.completions.create(
        model="Qwen/Qwen3-4B",
        messages=messages,
    )
    print(f"Q: {query}\nA: {response.choices[0].message.content}\n")
```
Note: When the engine evicts KV-cache entries under memory pressure, ContextPilot's index can go stale. Install the eviction patch for SGLang or vLLM to keep the index in sync. See the online usage guide.
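Conceptually, the patch forwards each engine eviction to ContextPilot's POST /evict endpoint so stale blocks leave the Context Index. A minimal sketch of that call follows; the server address and payload shape are assumptions (the patches handle the real wiring for you):

```python
import requests

# Hypothetical address and schema: the SGLang/vLLM eviction patches
# send the real request with whatever fields ContextPilot expects.
requests.post(
    "http://localhost:8765/evict",
    json={"evicted_blocks": ["<block-id-1>", "<block-id-2>"]},
    timeout=5,
)
```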
Accelerating Offline Inference
Given a batch of requests with overlapping context blocks, cp_batch.optimize_batch() globally reorders blocks and schedules the execution order so queries with similar contexts run consecutively, maximizing cache reuse. See the offline usage guide for details. Offline mode can also be deployed as an HTTP server without eviction sync (see Stateless Mode).
```python
import asyncio

import openai

# Step 1: Import ContextPilot
import contextpilot as cp

BASE_URL = "http://localhost:30000/v1"

# Step 2: Create a ContextPilot instance
cp_batch = cp.ContextPilot(use_gpu=False)

all_contexts = [get_contexts(q) for q in queries]  # Mem0, Retriever, ...

# Step 3: Optimize: reorder, schedule, and build prompts in one call.
# `order[i]` is the original query index of the i-th scheduled request.
messages_batch, order = cp_batch.optimize_batch(all_contexts, queries)

# Send all requests concurrently (gather preserves the scheduled order)
async def generate_all():
    ac = openai.AsyncOpenAI(base_url=BASE_URL, api_key="EMPTY")
    return await asyncio.gather(*[
        ac.chat.completions.create(model="Qwen/Qwen3-4B", messages=m)
        for m in messages_batch
    ])

# Map each response back to its original query via `order`
for resp, idx in zip(asyncio.run(generate_all()), order):
    print(f"Q: {queries[idx]}\nA: {resp.choices[0].message.content}\n")
```
For a detailed walkthrough with concrete examples, see the Quick Start Guide. For more fine-grained control, you can also use cp.reorder() and cp.deduplicate() directly — see the API reference and multi-turn deduplication guide.
Adoption Examples
See the adoption examples for Mem0 integration, PageIndex RAG, offline batch scheduling, and multi-turn deduplication.
Citation
```bibtex
@inproceedings{contextpilot2026,
  title     = {ContextPilot: Fast Long-Context Inference via Context Reuse},
  author    = {Jiang, Yinsicheng and Huang, Yeqi and Cheng, Liang and Deng, Cheng and Sun, Xuan and Mai, Luo},
  booktitle = {Proceedings of the 9th Conference on Machine Learning and Systems (MLSys 2026)},
  year      = {2026},
  url       = {https://arxiv.org/abs/2511.03475}
}
```
Contributing
We welcome and value all contributions! Please feel free to submit issues and pull requests.
Download files
File details
Details for the file contextpilot-0.3.5.tar.gz.

File metadata
- Download URL: contextpilot-0.3.5.tar.gz
- Size: 134.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 53c27f33767202684dd086b5e884919b687cc0be2bc11846edc02a506e67e9d2 |
| MD5 | 53f7108ba119312db453d66901333b30 |
| BLAKE2b-256 | c70627add05a0e7f1c8ae340a1f0c8bf420d979a2f845af3e16a54e75e89554e |
File details
Details for the file contextpilot-0.3.5-py3-none-any.whl.

File metadata
- Download URL: contextpilot-0.3.5-py3-none-any.whl
- Size: 111.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 462ca487d7687421d16d7fb19687b06fea6a195aecce24567239af9ae2aca783 |
| MD5 | 99bfa7297633e5bd3a745ee7f9caa2eb |
| BLAKE2b-256 | fa8d4b6539ccd53a7ba0d32650dd5358e8ffa2e2158b6f605b18536769885b95 |