# ctxrot
Understand your ReAct agent's context window and fight context rot.
> **Note:** ctxrot currently supports only DSPy >= 3.1.3 and may produce misaligned output. Please report any issues you encounter; the API may change.
## Install

```bash
uv add ctxrot
```
## Quick start

### 1. Attach the callback
```python
import dspy
from ctxrot import CtxRotCallback

callback = CtxRotCallback(db_path="ctxrot.db", store_content=True)

dspy.configure(
    lm=dspy.LM("openai/gpt-5.4-mini"),
    callbacks=[callback],
)
```
- A new session is created automatically each time a DSPy module starts. Every LM call and tool call is recorded to SQLite.
- Set `store_content=True` to also store full prompts and completions (needed for repetition analysis).
### 2. Run your agent as usual

```python
react = dspy.ReAct(MySignature, tools=[tool_a, tool_b])
result = react(question="What is the capital of France?")
```
### 3. View the dashboard

```bash
ctxrot --db ctxrot.db
```
## More examples

In the `examples` directory, see:

- `cot_simple.py` — simplest "hello world" with ChainOfThought
- `react_multihop.py` — ReAct multi-hop QA with Wikipedia search
- `custom_module.py` — custom DSPy module pipeline
- `rlm_reasoning.py` — RLM iterative reasoning loop (requires Deno)
## CLI commands

```bash
# Launch the TUI dashboard
ctxrot --db ctxrot.db

# Reset the database
ctxrot reset --db ctxrot.db

# WIP: use an RLM for context analysis (requires API key / API base)
ctxrot deep-analyze --db ctxrot.db --session <session_id>
```
## How it works

A SQLite database is created at `db_path`. The callback hooks into DSPy's `BaseCallback` and populates tables at runtime: a session row on `on_module_start`, an LM call row on `on_lm_end`, and a tool call row on `on_tool_end`.
```
Your DSPy agent  →  CtxRotCallback  →  SQLite  →  TUI dashboard / analysis
  (unchanged)        (just listens)     (local)
```
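The hook-to-table flow above can be sketched as a tiny standalone recorder. This is a simplified illustration only: the table names, columns, and method signatures here are made up for clarity and are not ctxrot's actual schema or DSPy's exact callback API.

```python
import sqlite3
import time
import uuid


class MiniRecorder:
    """Toy version of the callback-to-SQLite pattern: one session row per
    module start, one row per LM call, one row per tool call."""

    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute("CREATE TABLE sessions (id TEXT, started_at REAL)")
        self.db.execute(
            "CREATE TABLE lm_calls (session_id TEXT, prompt_tokens INT, completion_tokens INT)"
        )
        self.db.execute(
            "CREATE TABLE tool_calls (session_id TEXT, tool_name TEXT, duration REAL)"
        )
        self.session_id = None

    def on_module_start(self):
        # A new session is created each time a module starts.
        self.session_id = str(uuid.uuid4())
        self.db.execute(
            "INSERT INTO sessions VALUES (?, ?)", (self.session_id, time.time())
        )

    def on_lm_end(self, prompt_tokens, completion_tokens):
        self.db.execute(
            "INSERT INTO lm_calls VALUES (?, ?, ?)",
            (self.session_id, prompt_tokens, completion_tokens),
        )

    def on_tool_end(self, tool_name, duration):
        self.db.execute(
            "INSERT INTO tool_calls VALUES (?, ?, ?)",
            (self.session_id, tool_name, duration),
        )


rec = MiniRecorder()
rec.on_module_start()
rec.on_lm_end(prompt_tokens=1200, completion_tokens=80)
rec.on_tool_end("wiki_search", duration=0.42)
```

Because everything lands in plain SQLite, any tool that speaks SQL can inspect a recorded session after the fact.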
## What it tracks

- **Per LM call** — prompt tokens, completion tokens, cache read/write tokens, cost, duration
- **Per tool call** — tool name, duration, estimated output tokens
- **Per session** — model, start time, mode (`react`, `chainofthought`, etc.)
- **Optionally** — full prompt messages and completion text (`store_content=True`)
## Context rot detection

As context grows, agents start repeating themselves and producing less useful output. ctxrot detects this with two local signals (no LLM calls; token counting via tokie). This requires `store_content=True`. DSPy structural markers (`[[ ## ... ## ]]`) are stripped before comparison.
**Repetition** (per-iteration):
| Metric | What it measures | How |
|---|---|---|
| `ngram_jaccard` | Word-level overlap vs. previous completion | Jaccard similarity of word 3-gram sets; >0.4 suggests looping. |
| `sequence_similarity` | Character-level similarity vs. previous completion | `rapidfuzz.fuzz.ratio / 100`; catches paraphrased repetition. |
| `cumulative_max` | Max overlap vs. any prior completion | Max `ngram_jaccard` across all earlier iterations; catches non-consecutive loops. |
**Efficiency** (per-iteration):

```
efficiency_ratio = completion_tokens / prompt_tokens
```

A declining ratio across iterations means the model generates less output relative to its input, a sign that the context window is saturated.
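A simple sketch of how a declining efficiency ratio can be flagged across iterations (the per-iteration token counts below are made up for illustration):

```python
# (prompt_tokens, completion_tokens) per ReAct iteration — invented numbers.
# The prompt grows as the trajectory accumulates; completions shrink.
iterations = [(1_000, 300), (2_500, 280), (5_000, 150), (9_000, 60)]

ratios = [completion / prompt for prompt, completion in iterations]

# A strictly declining ratio is one signal the context window is saturating.
declining = all(b < a for a, b in zip(ratios, ratios[1:]))
```

In this made-up trace the ratio falls from 0.30 to under 0.01, so `declining` is true.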
## LLM analysis

Uses an RLM to perform semantic analysis on a recorded session. The RLM receives session metadata, growth curves, and pre-computed rot metrics, and can pull full prompt/completion text on demand via tools (`get_completion_text`, `get_messages_json`, `get_tool_output`, `compute_repetition_score`). Output is a structured markdown report: session overview, context growth pattern, efficiency trends, repetition analysis, tool impact, rot diagnosis (severity + onset iteration), and recommendations.

> **Note:** Deno is required.
```
Usage: ctxrot deep-analyze [OPTIONS]

Options:
  --db         -d  TEXT  SQLite database path              [default: ctxrot.db]
  --session    -s  TEXT  Session ID (latest if omitted)
  --query      -q  TEXT  Focus area or question            [default: comprehensive analysis]
  --model      -m  TEXT  Main LM for RLM reasoning         [default: openai/gpt-5.4]
  --sub-model      TEXT  Sub LM for semantic analysis      [default: openai/gpt-5.4-mini]
  --max-iters      INT   Max RLM REPL iterations           [default: 15]
  --max-calls      INT   Max sub-LLM calls                 [default: 30]
  --api-key        TEXT  API key (or OPENAI_API_KEY env)
  --api-base       TEXT  API base URL (or OPENAI_API_BASE env)
  --env-file       TEXT  Path to .env file                 [default: .env]
  --json                 Output full result as JSON
  --verbose    -v        Show RLM reasoning steps
  --yes        -y        Skip cost warning confirmation
```
## Download files
### Source distribution: ctxrot-0.1.0.tar.gz

- Upload date:
- Size: 174.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.16

| Algorithm | Hash digest |
|---|---|
| SHA256 | `03b0356173fe41e6bbfea6f09f85f6953acb4bd13895f6e60918521958ded88b` |
| MD5 | `e3c01ab6bd4699b264a7e5bd1ac6ab42` |
| BLAKE2b-256 | `36914d0b4b31b58521b9dd8b4b4970ae3341f8eafd45b62529cdfe089200c99c` |
### Built distribution: ctxrot-0.1.0-py3-none-any.whl

- Upload date:
- Size: 36.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.16

| Algorithm | Hash digest |
|---|---|
| SHA256 | `ca45c171f7261945c0fcf010e617a324ec747866d81912c340bcc63907bc7889` |
| MD5 | `44a9602c7f741f2eb464e6ab3a06feaa` |
| BLAKE2b-256 | `bdbe26ca2da3152fb174fec2b0a66dff16b3312195c090528d2cfac878546799` |