
libcontext


Context-efficient API references for LLM toolchains — structured, on-demand, not always-on.

libcontext inspects any installed Python package via static AST analysis (no code execution) and generates compact Markdown API references. It integrates with Claude Code, GitHub Copilot (via a /lib skill), and VS Code / Cursor (via an MCP server) to provide progressive disclosure — only loading API context when you actually need it, avoiding context window pollution.

Why This Exists

LLMs can often use popular libraries correctly from training data alone. For well-known packages like requests or flask, libcontext adds little value. The real problems arise in specific scenarios:

  • Internal / private libraries — Zero training data exists. The model has never seen the API.
  • Niche open-source packages — Sparse or outdated training data leads to hallucinated methods and wrong signatures. GPT-4o achieves only 38% valid invocations on low-frequency APIs (Amazon Science, ICSE 2025).
  • New versions of any library — Training data has a cutoff. The model knows v2, you're using v3.

Even when an LLM could read source files directly, structured API summaries are more context-efficient: providing API documentation via retrieval improves pass rates by 83–220% compared to no documentation, while consuming far fewer tokens than raw source code (arXiv 2503.15231, March 2025).

Dumping entire API references into always-on instruction files wastes context window on every interaction. Selective retrieval outperforms always-on injection — always-on docs actually hurt performance on well-known APIs (Amazon Science, ICSE 2025).

libcontext addresses this with progressive disclosure: overview first, then drill into specific modules only when needed.

When libcontext makes the biggest difference

| Scenario | Impact | Why |
| --- | --- | --- |
| Internal / private libraries | Critical | Zero training data — the model has never seen the API |
| Niche open-source packages | High | Sparse training data; 19.7% of LLM package suggestions are hallucinated (USENIX Security 2025) |
| New versions of any library | High | Training cutoff — the LLM knows v2, you're using v3 |
| Popular, stable libraries | Low | The LLM already has good knowledge from training data — libcontext adds little here |

What libcontext does NOT do

  • Replace reading source code — LLMs with tool access (Claude Code, Cursor) can read files directly. For popular libraries, that's often sufficient.
  • Guarantee correctness — Even with perfect API docs, LLMs still make errors. Research shows pass rates of 74–91% with target documentation, not 100% (arXiv 2503.15231).
  • Provide usage examples — libcontext extracts signatures and docstrings, not example code. Research indicates examples have the highest impact on code generation quality.

Quick Start

# Install globally with uv (recommended — available in all projects)
uv tool install libcontext

# Install the /lib skill into your Claude Code project
libctx install --skills

# Now in Claude Code, just type:
#   /lib requests
# Claude will progressively discover the API for you

For VS Code with MCP support:

uv tool install "libcontext[mcp]"
libctx install --mcp --target vscode

How It Works

Progressive Disclosure (Skill / MCP)

Instead of dumping everything upfront, libcontext follows a progressive workflow:

Step 1: Overview          Step 2: Drill down          Step 3: Search
libctx inspect requests   libctx inspect requests     libctx inspect requests
  --overview                --module requests.api       --search Session

  Module list with          Full signatures,            Find specific
  class/function names      docstrings, parameters      classes or methods
  (no signatures)           for one module              across all modules

The /lib skill (Claude Code, GitHub Copilot) and MCP server (Claude Code, VS Code, Cursor) automate this workflow — the AI assistant decides what to inspect based on the task at hand.

Direct CLI Usage

# Full API reference to stdout
libctx inspect requests

# Compact overview — module names with class/function names
libctx inspect requests --overview -q

# Detailed API for a single module
libctx inspect requests --module requests.api -q

# Search for a specific class or function
libctx inspect requests --search Session -q

# Write to a file with marker injection
libctx inspect requests -o .github/copilot-instructions.md

# Multiple libraries at once
libctx inspect requests httpx pydantic -o context.md

# JSON output (programmatic consumption)
libctx inspect requests --format json
libctx inspect requests --search Session --format json

# Filter search by type
libctx inspect requests --search Session --type class -q

# Compare two API snapshots
libctx inspect requests --format json > old.json
# ... upgrade requests ...
libctx inspect requests --format json > new.json
libctx diff old.json new.json

# Bypass disk cache
libctx inspect requests --no-cache

# Cache management
libctx cache list                # show cached packages with size and age
libctx cache clear               # clear all cached API data
libctx cache clear requests      # clear only the entries for one package

AST Analysis

  1. Parsing — Reads .py and .pyi source files using Python's ast module. No code is ever executed.
  2. Stub merging — Discovers colocated and standalone stub packages; merges signatures from stubs with docstrings from sources.
  3. Extraction — Classes, functions, methods, parameters, type annotations, decorators, type aliases, and docstrings.
  4. Compact rendering — Structured Markdown (or JSON) optimised for LLM context windows.
  5. Disk cache — Results are cached on disk and revalidated via (version, mtime, file_count) to avoid re-parsing unchanged packages.
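The parsing and extraction steps above can be illustrated with Python's stdlib `ast` module. This is a minimal sketch of the general technique, not libcontext's actual code — `extract_signatures` is a hypothetical helper:

```python
import ast

# Example source — parsed, never executed
SOURCE = '''
def get(url: str, timeout: float = 5.0) -> "Response":
    """Send a GET request."""
'''

def extract_signatures(source: str) -> list[str]:
    """Rebuild compact 'def name(args) -> ret' strings by walking the AST."""
    tree = ast.parse(source)
    signatures = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            positional = node.args.args
            # Pad defaults so they line up with their parameters
            defaults = [None] * (len(positional) - len(node.args.defaults))
            defaults += list(node.args.defaults)
            parts = []
            for arg, default in zip(positional, defaults):
                part = arg.arg
                if arg.annotation is not None:
                    part += f": {ast.unparse(arg.annotation)}"
                if default is not None:
                    part += f" = {ast.unparse(default)}"
                parts.append(part)
            ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            signatures.append(f"def {node.name}({', '.join(parts)}){ret}")
    return signatures

print(extract_signatures(SOURCE))
print(ast.get_docstring(ast.parse(SOURCE).body[0]))
```

Because only the syntax tree is inspected, no import side effects or module-level code ever run — which is what makes this safe for untrusted or heavyweight packages.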

Installation

Global install (recommended)

Install once, use across all projects — no per-project dependency needed:

uv tool install libcontext              # CLI only
uv tool install "libcontext[mcp]"       # with MCP server (requires Python 3.10+)

Update later with uv tool upgrade libcontext.

One-off usage without installing: uvx --from libcontext libctx inspect requests

Per-project install

If you prefer to add libcontext as a project dependency:

uv add libcontext           # basic
uv add libcontext[mcp]      # with MCP server

Or with pip:

pip install libcontext
pip install libcontext[mcp]

Development install

git clone https://github.com/Syclaw/libcontext.git
cd libcontext
uv sync --all-extras

Integration Setup

The install command configures your project for AI-assisted library discovery:

# Claude Code — install the /lib skill
libctx install --skills

# Claude Code — install MCP server config
libctx install --mcp

# VS Code / Cursor — install MCP server config
libctx install --mcp --target vscode

# GitHub Copilot — install the skill
libctx install --skills --target github

# Everything at once
libctx install --all --target all

| Flag | What it installs |
| --- | --- |
| `--skills` | `/lib` skill for on-demand API discovery |
| `--mcp` | MCP server configuration for tool-based access |
| `--all` | Both skills and MCP |

| Target | Skills location | MCP location |
| --- | --- | --- |
| `claude` (default) | `.claude/skills/lib/SKILL.md` | `.mcp.json` |
| `github` | `.github/skills/lib/SKILL.md` | |
| `vscode` | | `.vscode/mcp.json` |

Using the /lib Skill (Claude Code)

After libctx install --skills, type /lib <package> in Claude Code:

/lib requests              → overview, then drill into modules
/lib requests requests.api → jump straight to a specific module

Claude will automatically run libctx commands to discover the API progressively.

Using the MCP Server

After libctx install --mcp, the MCP server provides tools:

  • get_package_overview — structural overview of a package
  • get_module_api — detailed API for a single module
  • search_api — search by name or docstring (with optional kind filter and format for JSON output)
  • get_api_json — full package or single-module API as structured JSON
  • diff_api — compare two API snapshots and report changes with breaking change detection
  • refresh_cache — clear both in-memory and disk caches

Python API

from libcontext import collect_package, render_package

# Full API reference
pkg = collect_package("requests")
print(render_package(pkg))

from libcontext import collect_package, render_package_overview, render_module, search_package

pkg = collect_package("requests")

# Overview — module names with class/function names
print(render_package_overview(pkg))

# Single module — full signatures and docstrings
for mod in pkg.non_empty_modules:
    if mod.name == "requests.api":
        print(render_module(mod))

# Search — find specific classes or functions
print(search_package(pkg, "Session"))

# Search with type filter
print(search_package(pkg, "Session", kind="class"))

import dataclasses, json
from libcontext import collect_package, diff_packages, render_diff
from libcontext.models import PackageInfo, _serialize_envelope

# JSON serialization (roundtrip-safe)
pkg = collect_package("requests")
data = _serialize_envelope(dataclasses.asdict(pkg))
print(json.dumps(data, indent=2))

# Reconstruct from JSON
pkg_restored = PackageInfo.from_dict(data["data"])

# API diff between versions
old_pkg = PackageInfo.from_dict(old_data)
new_pkg = PackageInfo.from_dict(new_data)
result = diff_packages(old_pkg, new_pkg)
print(render_diff(result))

Configuration (Optional)

Library authors can customise what libcontext exposes by adding a [tool.libcontext] section to their pyproject.toml. The library does not need to depend on libcontext.

[tool.libcontext]
include_modules = ["mylib.core", "mylib.models"]
exclude_modules = ["mylib._internal", "mylib.tests"]
include_private = false
max_readme_lines = 150
extra_context = """
This library uses the Repository pattern for data access.
All async operations use httpx internally.
"""

Architecture

| Module | Role |
| --- | --- |
| `models.py` | Dataclasses for packages, modules, classes, functions, and diff results |
| `inspector.py` | Static AST analysis — signatures, docstrings, decorators, type aliases |
| `collector.py` | Package discovery, module collection, stub merging, and disk cache integration |
| `config.py` | Reads `[tool.libcontext]` from `pyproject.toml` |
| `renderer.py` | LLM-optimised Markdown generation (full, overview, module, search, diff) |
| `diff.py` | API diff between two package versions with breaking change detection |
| `cache.py` | Persistent disk cache with mtime/file-count invalidation and LRU eviction |
| `cli.py` | CLI entry point — `inspect`, `install`, `diff`, and `cache` subcommands |
| `mcp_server.py` | MCP server for Claude Code / VS Code / Cursor integration (optional) |
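The mtime/file-count invalidation mentioned for the cache can be sketched as a revalidation key: a cached entry is reused only while the package version, newest source mtime, and `.py` file count are all unchanged. This `cache_key` function is a hypothetical illustration of the idea, not libcontext's API:

```python
import tempfile
from pathlib import Path

def cache_key(package_dir: Path, version: str) -> tuple[str, float, int]:
    """Hypothetical revalidation key: (version, newest mtime, .py file count).
    Any edit, addition, removal, or upgrade changes at least one component."""
    files = list(package_dir.rglob("*.py"))
    newest = max((f.stat().st_mtime for f in files), default=0.0)
    return (version, newest, len(files))

# Demo on a throwaway package directory
with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp)
    (pkg / "core.py").write_text("def f(): ...\n")
    key_before = cache_key(pkg, "1.0")
    (pkg / "new.py").write_text("def g(): ...\n")  # adding a file changes the key
    key_after = cache_key(pkg, "1.0")
    print(key_before[2], key_after[2])
```

Checking metadata like this is far cheaper than hashing file contents, at the cost of missing edits that preserve both mtime and file count — a common trade-off for this kind of cache.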

Development

uv sync --all-extras
uv run pytest --cov=libcontext
uv run ruff check src/ tests/
uv run ruff format src/ tests/
uv run mypy src/libcontext

See CONTRIBUTING.md for detailed contribution guidelines.

Dependencies

See DEPENDENCIES.md for the full list of dependencies and their licenses.

License

MIT — see LICENSE for details.
