
Context Engineering CLI — Reduce AI agent token waste with compressed codebase skeletons, task-focused instructions, and session checkpoints. Zero AI, pure parsing.

Project description

ctxl — Context Engineering CLI for AI Agents

Reduce token waste. Prevent hallucination. Zero AI used.

ctxl (pronounced "contextual") is a developer CLI tool that helps you manage the context window of AI coding agents (GitHub Copilot, Cursor, Claude, etc.) by generating compressed codebase skeletons, task-focused instructions, and session checkpoints — all using deterministic parsing, not AI.

The Problem

AI coding agents (Copilot, Cursor, etc.) read your entire codebase to build context. A 3,000-line PySpark file can burn thousands of tokens every time the agent reads it. Over a session, your context window fills with noise, leading to:

  • 🔴 Hallucination — the model starts making things up
  • 🔴 Token waste — you pay for irrelevant context
  • 🔴 Lost focus — the agent forgets your actual task

The Solution

ctxl gives you three commands that prepare your environment before the AI agent reads it:

ctxl map          # Generate a compressed codebase skeleton (~95% token reduction)
ctxl init         # Generate task-focused Copilot instructions
ctxl checkpoint   # Save session state for safe /clear workflows

Zero AI models. Zero API calls. Zero tokens burned by this tool.

Installation

pip install ctxl-cli

Quick Start

ctxl map — Codebase Skeleton

Generate a compressed structural map of your codebase with line numbers:

ctxl map                      # Map current directory
ctxl map ./src                # Map a specific directory
ctxl map -e .py               # Only Python files
ctxl map -o codebase.md       # Save to file
ctxl map --clipboard          # Copy to clipboard for pasting into AI chat

Before (raw file, ~750 tokens):

class DataPipeline:
    def __init__(self, spark, config):
        self.spark = spark
        self.config = config
        self.source_path = config.get("source_path", "/data/raw")
        # ... 60 more lines of implementation
    
    def clean_data(self, df, drop_nulls=True):
        string_cols = [f.name for f in df.schema.fields ...]
        # ... 20 more lines

After (ctxl map output, ~50 tokens):

L22: class DataPipeline:
    L25: def __init__(self, spark: SparkSession, config: Dict)
    L33: def load_data(self, table_name: str, filters: Optional[Dict] = None) -> DataFrame
    L42: def clean_data(self, df: DataFrame, drop_nulls: bool = True) -> DataFrame
    L52: def transform(self, df: DataFrame, rules: List[Dict]) -> DataFrame
    L66: def validate(self, df: DataFrame) -> bool
    L75: def save(self, df: DataFrame, partition_cols: List[str] = None) -> str

Line numbers (L22:, L42:) let the AI agent navigate directly to the right location.

ctxl init — Copilot Instructions

Generate a .github/copilot-instructions.md file that GitHub Copilot reads natively:

ctxl init "Fix the data pipeline ETL bug"
ctxl init "Add authentication" -f auth.py -f models.py
ctxl init "Refactor tests" --no-map
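
The generated file is plain Markdown that Copilot reads from `.github/`. As a rough illustration, the command could be approximated by a helper like the one below; the headings and template are assumptions for the sketch, not ctxl's documented output format:

```python
from pathlib import Path

def write_instructions(task: str, focus_files=None, repo_root: str = ".") -> Path:
    """Write a task-focused .github/copilot-instructions.md.

    A minimal sketch of the idea behind `ctxl init`; the headings and
    template below are assumptions, not ctxl's real output.
    """
    lines = ["# Current Task", "", task, ""]
    if focus_files:  # mirrors the -f flag: restrict attention to named files
        lines += ["## Focus Files", ""]
        lines += [f"- {name}" for name in focus_files]
    out = Path(repo_root) / ".github" / "copilot-instructions.md"
    out.parent.mkdir(parents=True, exist_ok=True)  # create .github/ if missing
    out.write_text("\n".join(lines) + "\n")
    return out
```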

ctxl checkpoint — Session State

Save your progress before running /clear in Copilot Chat:

ctxl checkpoint save \
    -t "Fix ETL pipeline" \
    --done "Found the bug in clean_data()" \
    --state "Pipeline runs but output has wrong column order" \
    --next "Fix column ordering in transform()" \
    --file "data_pipeline.py"

ctxl checkpoint list          # List all checkpoints
ctxl checkpoint show          # Show latest checkpoint
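
Conceptually, a checkpoint is just structured session state written to disk. A minimal sketch of the idea (the field names and the `.ctxl/` directory are assumptions, not ctxl's documented storage format):

```python
import json
import time
from pathlib import Path

def save_checkpoint(task, done, state, next_step, files, directory=".ctxl"):
    """Persist session state to disk so it survives a /clear.

    A sketch of the idea behind `ctxl checkpoint save`; field names and
    the `.ctxl/` directory are assumptions, not ctxl's real format.
    """
    Path(directory).mkdir(parents=True, exist_ok=True)
    checkpoint = {
        "task": task,        # what you are working on (-t)
        "done": done,        # what has been accomplished (--done)
        "state": state,      # current state of the code (--state)
        "next": next_step,   # the next action to take (--next)
        "files": files,      # files involved in the task (--file)
        "saved_at": int(time.time()),
    }
    path = Path(directory) / f"checkpoint-{checkpoint['saved_at']}.json"
    path.write_text(json.dumps(checkpoint, indent=2))
    return path
```

After a `/clear`, the saved JSON can be pasted back into the chat to restore the agent's working context.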

Supported Languages

ctxl map uses Tree-sitter for parsing and supports:

Language     Extensions
Python       .py
JavaScript   .js, .jsx
TypeScript   .ts, .tsx
Java         .java

More languages can be added easily via Tree-sitter grammars.

How It Works

Your Codebase (10,000+ tokens)
        │
        ▼
   Tree-sitter Parser (deterministic, local, free)
        │
        ▼
   AST → Extract signatures + line numbers
        │
        ▼
   Compressed Skeleton (~500 tokens)
        │
        ▼
   AI Agent reads skeleton instead of raw code
        │
        ▼
   90-95% fewer tokens burned 🎉
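
The pipeline above can be sketched for Python files with the standard library's `ast` module. ctxl itself uses Tree-sitter (which also handles the other supported languages), and its real output keeps type annotations; this sketch only mirrors the output shape:

```python
import ast

def skeleton(source: str) -> str:
    """Compress Python source into signatures prefixed with line numbers.

    A dependency-free sketch of the `ctxl map` step using the stdlib
    `ast` module instead of Tree-sitter. Type annotations and return
    types are omitted here for brevity.
    """
    entries = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            entries.append((node.lineno, node.col_offset, f"class {node.name}:"))
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append((node.lineno, node.col_offset, f"def {node.name}({args})"))
    entries.sort()  # ast.walk is breadth-first, so restore source order
    return "\n".join(
        " " * col + f"L{lineno}: {sig}" for lineno, col, sig in entries
    )
```

Running this over a file produces the same `L22: class DataPipeline:` style of output shown earlier: one line per definition, indented to reflect nesting.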

License

MIT

Download files

Source Distribution

ctxl_cli-0.1.0.tar.gz (15.1 kB)

Built Distribution

ctxl_cli-0.1.0-py3-none-any.whl (14.7 kB)

File details

Details for the file ctxl_cli-0.1.0.tar.gz.

File metadata

  • Size: 15.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Algorithm     Hash digest
SHA256        c8e5cdbfd905deaf052feef4db7ea1010158ebc7994005974f4b1b51d797d230
MD5           f418f3a88510dacc9f33bdcb32640951
BLAKE2b-256   7d76b1920e9a06790228312c8b539376e15c5ba5cbaafac423004d8d4ddd7856

File details

Details for the file ctxl_cli-0.1.0-py3-none-any.whl.

File metadata

  • Size: 14.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Algorithm     Hash digest
SHA256        4abf1b22c34765576e505b705d0d2db912f1a60bbaec16119cd2eceb2fa86953
MD5           1ea2c977e6b6fe45c75b17e712397a2f
BLAKE2b-256   d13272eea9e24a5bc3d73ceeec8bc0d00c8e325abc9ce2c591b4c9c5cbe542eb
