
Efficient Retrieval-Augmented Generation with Accuracy-Preserving Context Reuse



ContextPilot: Fast Long-Context Inference via Context Reuse



| Documentation | Examples | Benchmarks |

News

  • [2026/02] ContextPilot v0.3.2 released, supporting PageIndex and Mem0.
  • [2026/01] ContextPilot has been accepted to MLSys 2026 🎉! See you in Bellevue, WA, USA.
  • [2025/12] ContextPilot v0.2.0 released.

About

ContextPilot is a fast optimization system at the context-engineering layer for agentic workloads:

  1. High Throughput & Cache Hit Ratio: Boosts prefill throughput and prefix-cache hit ratio with intelligent context reuse (see the sketch after this list).
  2. Strong Compatibility: Works with popular RAG libraries (PageIndex), agentic memory layers (Mem0), KV-cache optimization engines (LMCache), and inference engines (vLLM and SGLang).
  3. Negligible Accuracy Loss: Delivers significant performance improvements with minimal to no accuracy degradation across a range of benchmarks.
  4. Widely Tested: Validated with a wide range of RAG and agentic AI applications.
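
The cache-hit gains in item 1 come from a simple fact: two requests only share a KV-cache prefix if their shared documents appear first, in the same order. The toy sketch below is illustrative only (it is not ContextPilot's actual algorithm, and all document names are made up):

def common_prefix_len(a, b):
    """Length of the shared leading segment of two document lists."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

req1 = ["doc_A", "doc_B", "doc_C"]
req2 = ["doc_C", "doc_A", "doc_D"]  # shares doc_A and doc_C with req1

print(common_prefix_len(req1, req2))  # 0 -> no prefix-cache reuse as issued

# Move the shared documents to the front, in a canonical order:
shared = [d for d in req1 if d in req2]
req1_opt = shared + [d for d in req1 if d not in shared]
req2_opt = shared + [d for d in req2 if d not in shared]

print(common_prefix_len(req1_opt, req2_opt))  # 2 -> both shared docs served from cache

ContextPilot applies this idea at scale, choosing document orders and an execution schedule that maximize shared prefixes across a whole batch or session.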

Target Workloads

  1. Trending Topic QA — Search and generation for breaking news and trending topics that fall outside the model's training knowledge
  2. Closed-Domain Long-Context QA — QA over specialized corpora (novels, financial reports, legal documents) with retrieval or in-context search
  3. Large-Batch Long-Context Execution — High-throughput inference where many requests share overlapping contexts; ContextPilot maximizes prefix reuse regardless of the search method
  4. Multi-Turn Conversations with Long-Term Memory — Persistent context reuse across turns (e.g. Mem0)

Benchmark and Performance

System Performance


On DeepSeek-R1, ContextPilot (Stateless) matches SGLang's accuracy, achieving 64.68% vs. 64.15% F1 on MultihopRAG and 41.08% vs. 40.20% F1 on NarrativeQA.

Accuracy on MT-RAG Benchmark (Online Scheduling)

Method          Qwen3-4B   Llama3.1-8B   Qwen3-30B-A3B
LMCache            62.56         68.46           75.12
CacheBlend         50.33         56.52               X
RadixCache         62.56         68.46           75.12
ContextPilot       64.27         68.12           75.81

ContextPilot delivers 4-13x improvements in cache hit rates and 1.5-3.5x reductions in prefill latency for large-batch RAG workloads, while maintaining or improving accuracy.

Furthermore, in our tests ContextPilot reduced input-token costs by around 36% with GPT-5.2.
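
As a rough model of where such savings come from (illustrative only: the cached-token discount and hit ratios below are made-up assumptions, and actual provider pricing varies):

# Back-of-envelope model of input-token savings from prefix caching.
# Hypothetical assumptions: cached input tokens are billed at a 90%
# discount, and context reuse raises the cached fraction from 10% to 50%.
def effective_input_cost(tokens, cached_frac, price, cached_discount):
    cached = tokens * cached_frac
    fresh = tokens - cached
    return fresh * price + cached * price * (1 - cached_discount)

PRICE = 1.0  # arbitrary units per input token
baseline = effective_input_cost(100_000, cached_frac=0.10, price=PRICE, cached_discount=0.9)
optimized = effective_input_cost(100_000, cached_frac=0.50, price=PRICE, cached_discount=0.9)
print(f"input-cost saving: {1 - optimized / baseline:.0%}")  # ~40% under these assumptions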

See Benchmarks in the documentation for GPU vs CPU performance analysis and detailed benchmark methodology.

Getting Started

Installation

Requirements: Python >= 3.10

pip install contextpilot

Or install from source:

git clone https://github.com/Edinburgh-AgenticAI/ContextPilot.git
cd ContextPilot
pip install -e .
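
To sanity-check the installation, import the package from a fresh shell (this prints where it was installed):

python -c "import contextpilot; print(contextpilot.__file__)"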

More detailed installation instructions are available in the docs.

Quick Start

In stateful mode, ContextPilot tracks cached state across turns so that overlapping documents are moved to the prefix for KV-cache reuse:

from openai import OpenAI
import contextpilot as cp

client = OpenAI(base_url="http://localhost:30000/v1", api_key="...")
cp_live = cp.ContextPilot(use_gpu=False)

# Simulated per-turn memory search (e.g. from mem0)
# Each turn retrieves different but partially overlapping documents
turn_memories = [
    ["Transformers use self-attention", "GPT is based on transformers", "BERT is bidirectional"],
    ["RNNs use hidden states", "GPT is based on transformers", "LSTMs solve vanishing gradients"],
    ["Attention computes QKV", "Transformers use self-attention", "GPT is based on transformers"],
]
queries = ["What are transformers?", "How do RNNs compare?", "Explain attention in detail."]

for turn_idx, (query, mems) in enumerate(zip(queries, turn_memories)):
    # 1. Reorder for prefix sharing (handles cold start & incremental)
    # .reorder() accepts a single list or list-of-lists
    reordered, indices = cp_live.reorder(mems)
    ctx = reordered[0]  # single context per turn
    # Turn 2: "GPT is based on transformers" ← moved to prefix (shared with turn 1)
    # Turn 3: "Transformers …", "GPT …"     ← both moved to prefix

    # 2. Generate answer with reordered context
    docs_section = "\n".join(f"[{i+1}] {doc}" for i, doc in enumerate(ctx))
    # Map original importance order (mems) → 1-based positions in reordered ctx
    pos = {doc: i + 1 for i, doc in enumerate(ctx)}
    importance_ranking = ">".join(str(pos[doc]) for doc in mems if doc in pos)
    # System prompt = documents + importance ranking (after </documents>, doesn't affect prefix sharing)
    response = client.chat.completions.create(
        model="Qwen/Qwen3-4B",
        messages=[
            {"role": "system", "content": (
                f"Answer the question based on the provided documents.\n\n"
                f"<documents>\n{docs_section}\n</documents>\n\n"
                f"Read the documents in this importance ranking: {importance_ranking}\n"
                f"Prioritize information from higher-ranked documents."
            )},
            {"role": "user", "content": query},
        ],
    )
    print(f"[Turn {turn_idx+1}] Q: {query}")
    print(f"A: {response.choices[0].message.content}\n")

Note: Stateful mode works without eviction sync — ContextPilot tracks the previous ordering and reorders new contexts to maximize prefix cache hits. For production deployments with limited KV-cache capacity, install the eviction patch for your inference engine (SGLang or vLLM) to keep the index in sync. See the online usage guide for HTTP server setup.

Offline / Online Stateless — same API, just pass the full batch at once:

from openai import OpenAI
import contextpilot as cp

client = OpenAI(base_url="http://localhost:30000/v1", api_key="...") # Your inference engine URL and API key
cp_batch = cp.ContextPilot(use_gpu=False)

queries = ["What is AI?", "Explain neural networks", "What is deep learning?"]
all_contexts = [
    ["Doc about AI", "Doc about ML", "Doc about computing"],
    ["Doc about neural nets", "Doc about deep learning"],
    ["Doc about ML", "Doc about AI", "Doc about deep learning basics"],
]

# One call: builds index, reorders docs for prefix sharing, and schedules execution order
# .reorder() returns (reordered_contexts, original_indices)
reordered_ctx, order = cp_batch.reorder(all_contexts)

# Build all requests in the optimized execution order
requests = []
for ctx, orig_idx in zip(reordered_ctx, order):
    docs_section = "\n".join(f"[{i+1}] {doc}" for i, doc in enumerate(ctx))
    pos = {doc: i + 1 for i, doc in enumerate(ctx)}
    importance_ranking = ">".join(
        str(pos[doc]) for doc in all_contexts[orig_idx] if doc in pos
    )
    # System prompt = documents + importance ranking (after </documents>, doesn't affect prefix sharing)
    requests.append({
        "model": "Qwen/Qwen3-4B",
        "messages": [
            {"role": "system", "content": (
                f"Answer the question based on the provided documents.\n\n"
                f"<documents>\n{docs_section}\n</documents>\n\n"
                f"Read the documents in this importance ranking: {importance_ranking}\n"
                f"Prioritize information from higher-ranked documents."
            )},
            {"role": "user", "content": queries[orig_idx]},
        ],
    })

# Send concurrently — inference engine processes them in order for max cache reuse
import asyncio, openai

async def generate_all(batch):
    aclient = openai.AsyncOpenAI(base_url="http://localhost:30000/v1", api_key="...")
    tasks = [aclient.chat.completions.create(**req) for req in batch]
    return await asyncio.gather(*tasks)

responses = asyncio.run(generate_all(requests))
for resp, orig_idx in zip(responses, order):
    print(f"Q: {queries[orig_idx]}\nA: {resp.choices[0].message.content}\n")

For online stateless scheduling via HTTP server, see the online usage guide.

Documentation

Check out the ContextPilot documentation for comprehensive guides.

Examples

Go hands-on with our examples, which demonstrate how to address different use cases with ContextPilot.

Contributing

We welcome and value all contributions! Please feel free to submit issues and pull requests.

Citation

We will include the paper citation soon!
