
LLM Context


Smart context management for LLM development workflows. Share relevant project files instantly through intelligent selection and rule-based filtering.

The Problem

Getting the right context into LLM conversations is friction-heavy:

  • Manually finding and copying relevant files wastes time
  • Too much context hits token limits; too little misses important details
  • AI requests for additional files require manual fetching
  • Hard to track what changed during development sessions

The Solution

llm-context provides focused, task-specific project context through composable rules.

For humans using chat interfaces:

lc-select   # Smart file selection
lc-context  # Copy formatted context to clipboard
# Paste and work - AI can access additional files via MCP

For AI agents with CLI access:

lc-preview tmp-prm-auth    # Validate rule selects right files
lc-context tmp-prm-auth    # Get focused context for sub-agent

For AI agents in chat (MCP tools):

  • lc_outlines - Generate excerpted context from current rule
  • lc_preview - Validate rule effectiveness before use
  • lc_missing - Fetch specific files/implementations on demand

Note: This project was developed in collaboration with several Claude Sonnets (3.5, 3.6, 3.7, 4.0) and Groks (3, 4), using LLM Context itself to share code during development. All code is heavily human-curated by @restlessronin.

Installation

uv tool install "llm-context>=0.6.0"
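llm-context is published on PyPI, so if you don't use uv, a plain pip install should work just as well:

pip install "llm-context>=0.6.0"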

Quick Start

Human Workflow (Clipboard)

# One-time setup
cd your-project
lc-init

# Daily usage
lc-select
lc-context
# Paste into your LLM chat

MCP Integration (Recommended)

Add to your Claude Desktop config (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}

Restart Claude Desktop. The AI can now access additional files during conversations without manual copying.
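For Claude Code, the server can be registered from the shell instead; this sketch assumes a current Claude Code CLI with the mcp add subcommand:

claude mcp add llm-context -- uvx --from llm-context lc-mcp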

Agent Workflow (CLI)

AI agents with shell access use llm-context to create focused contexts:

# Agent explores codebase
lc-outlines

# Agent creates focused rule for specific task
# (via Skill or lc-rule-instructions)

# Agent validates rule
lc-preview tmp-prm-oauth-task

# Agent uses context for sub-task
lc-context tmp-prm-oauth-task

Agent Workflow (MCP)

AI agents in chat environments use MCP tools:

# Explore codebase structure
lc_outlines(root_path, rule_name)

# Validate rule effectiveness  
lc_preview(root_path, rule_name)

# Fetch specific files/implementations
lc_missing(root_path, param_type, data, timestamp)
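# param_type selects what to fetch; the MCP example under Common Workflows
# below passes "f" to request full files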

Core Concepts

Rules: Task-Specific Context Descriptors

Rules are Markdown files with YAML frontmatter that describe what context to provide for a task:

---
description: "Debug API authentication"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/tests/auth/**"]
---
Focus on authentication system and related tests.
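Saved under .llm-context/rules/ as, say, tmp-prm-auth-debug.md (a hypothetical name for this example), the rule can be validated and used directly:

lc-preview tmp-prm-auth-debug   # check file selection and size
lc-context tmp-prm-auth-debug   # generate the focused context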

Five Rule Categories

  • Prompt Rules (prm-): Generate project contexts (e.g., lc/prm-developer)
  • Filter Rules (flt-): Control file inclusion (e.g., lc/flt-base, lc/flt-no-files)
  • Instruction Rules (ins-): Provide guidelines (e.g., lc/ins-developer)
  • Style Rules (sty-): Enforce coding standards (e.g., lc/sty-python)
  • Excerpt Rules (exc-): Configure content extraction (e.g., lc/exc-base)
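On disk this is simpler than it sounds; a minimal sketch, assuming the default layout where project rules live in .llm-context/rules/ and built-ins are referenced via the lc/ prefix:

.llm-context/rules/
  flt-repo-base.md   # project filter rule (see Project Customization below)
  prm-code.md        # project prompt rule
# built-ins such as lc/prm-developer and lc/flt-base ship with llm-context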

Rule Composition

Build complex rules from simpler ones. A composed rule merges file selection from each listed filter and content extraction from each excerpter, while the instructions list pulls in guideline and style rules:

---
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [lc/flt-base, project-filters]
  excerpters: [lc/exc-base]
---

Essential Commands

Command              Purpose
lc-init              Initialize project configuration
lc-select            Select files based on current rule
lc-context           Generate and copy context
lc-context -p        Include prompt instructions
lc-context -m        Format as separate message
lc-context -nt       No tools (manual workflow)
lc-set-rule <name>   Switch active rule
lc-preview <rule>    Validate rule selection and size
lc-outlines          Get code structure excerpts
lc-missing           Fetch files/implementations (manual MCP)

AI-Assisted Rule Creation

Let AI help create focused, task-specific rules. Two approaches depending on your environment:

Claude Skill (Interactive, Claude Desktop/Code)

How it works: a global skill guides you through creating rules interactively, examining your codebase as needed via MCP tools.

Setup:

lc-init  # Installs skill to ~/.claude/skills/
# Restart Claude Desktop or Claude Code

Usage:

# 1. Share project context
lc-context  # Any rule - overview included

# 2. Paste into Claude, then ask:
# "Create a rule for refactoring authentication to JWT"
# "I need a rule to debug the payment processing"

Claude will:

  1. Use project overview already in context
  2. Examine specific files via lc-missing as needed
  3. Ask clarifying questions about scope
  4. Generate optimized rule (tmp-prm-<task>.md)
  5. Provide validation instructions (sketched below)
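For example, if the generated rule is tmp-prm-jwt-auth.md (a hypothetical name for the JWT task above), validation follows the usual pattern:

lc-preview tmp-prm-jwt-auth
lc-context tmp-prm-jwt-auth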

Skill documentation (progressively disclosed):

  • Skill.md - Quick workflow, decision patterns
  • PATTERNS.md - Common rule patterns
  • SYNTAX.md - Detailed reference
  • EXAMPLES.md - Complete walkthroughs
  • TROUBLESHOOTING.md - Problem solving

Instruction Rules (Works Anywhere)

How it works: load comprehensive rule-creation documentation into context and work with any LLM.

Usage:

# 1. Load framework
lc-set-rule lc/prm-rule-create
lc-select
lc-context -nt

# 2. Paste into any LLM
# "I need a rule for adding OAuth integration"

# 3. LLM generates focused rule using framework

# 4. Use the new rule
lc-set-rule tmp-prm-oauth
lc-select
lc-context

Included documentation:

  • lc/ins-rule-intro - Introduction and overview
  • lc/ins-rule-framework - Complete decision framework

Comparison

Aspect             Skill                             Instruction Rules
Setup              Automatic with lc-init            Already available
Interaction        Interactive, uses lc-missing      Static documentation
File examination   Automatic via MCP                 Manual or via AI
Best for           Claude Desktop/Code               Any LLM, any environment
Updates            Automatic with version upgrades   Built-in to rules

Both require sharing project context first. Both produce equivalent results.

Project Customization

Create Base Filters

cat > .llm-context/rules/flt-repo-base.md << 'EOF'
---
description: "Repository-specific exclusions"
compose:
  filters: [lc/flt-base]
gitignores:
  # gitignore-style patterns excluded from full-file inclusion
  full-files: ["*.md", "/tests", "/node_modules"]
  # gitignore-style patterns excluded from excerpted (outline) content
  excerpted-files: ["*.md", "/tests"]
---
EOF

Create Development Rule

cat > .llm-context/rules/prm-code.md << 'EOF'
---
description: "Main development rule"
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [flt-repo-base]
  excerpters: [lc/exc-base]
---
Additional project-specific guidelines and context.
EOF

lc-set-rule prm-code

Deployment Patterns

Choose format based on your LLM environment:

Pattern                      Command                     Use Case
System Message               lc-context -p               AI Studio, etc.
Single User Message          lc-context -p -m            Grok, etc.
Separate Messages            lc-prompt + lc-context -m   Flexible placement
Project Files (included)     lc-context                  Claude Projects, etc.
Project Files (searchable)   lc-context -m               Force into context

See Deployment Patterns for details.

Key Features

  • Intelligent Selection: Rules automatically include/exclude appropriate files
  • Context Validation: Preview size and selection before generation
  • Code Excerpting: Extract structure while reducing tokens (15+ languages)
  • MCP Integration: AI accesses additional files without manual intervention
  • Composable Rules: Build complex contexts from reusable patterns
  • AI-Assisted Creation: Interactive skill or documentation-based approaches
  • Agent-Friendly: CLI and MCP interfaces for autonomous operation

Common Workflows

Daily Development (Human)

lc-set-rule prm-code
lc-select
lc-context
# Paste into chat - AI accesses more files via MCP if needed

Focused Task (Human or Agent)

# Share project context first
lc-context

# Then create focused rule:
# Via Skill: "Create a rule for [task]"
# Via Instructions: lc-set-rule lc/prm-rule-create && lc-context -nt

# Validate and use
lc-preview tmp-prm-task
lc-context tmp-prm-task

Agent Context Provisioning (CLI)

# Agent validates rule effectiveness
lc-preview tmp-prm-refactor-auth

# Agent generates context for sub-agent
lc-context tmp-prm-refactor-auth > /tmp/context.md
# Sub-agent reads context and executes task

Agent Context Provisioning (MCP)

# Agent validates rule
preview = lc_preview(root_path="/path/to/project", rule_name="tmp-prm-task")

# Agent generates context
context = lc_outlines(root_path="/path/to/project")

# Agent fetches additional files as needed
files = lc_missing(root_path, "f", "['/proj/src/auth.py']", timestamp)

Path Format

All paths in generated context are project-relative but carry a project-name prefix:

/{project-name}/src/module/file.py
/{project-name}/tests/test_module.py

This enables multi-project context composition without path conflicts.

In rules, patterns are project-relative without the prefix:

also-include:
  full-files:
    - "/src/auth/**"      # ✓ Correct
    - "/myproject/src/**" # ✗ Wrong - don't include project name

License

Apache License, Version 2.0. See LICENSE for details.
