
lmctx


Context Kernel for LLM APIs. Standardize what happens before and after every model call, while keeping execution in your own runtime.

  • Before call: adapter.plan(context, spec) builds provider-ready payloads and diagnostics
  • After call: adapter.ingest(context, response, spec=...) normalizes output back into Context
  • Boundary: lmctx never sends HTTP requests, executes tools, or orchestrates loops

Why lmctx

  • Append-only, snapshot-friendly context model (Context) with immutable-by-default updates
  • Unified part model (Part) for text, images, files, tool calls/results, thinking, compaction
  • Loss-resistant round-trips for opaque provider payloads through provider_raw and blob references
  • Pluggable blob storage (InMemoryBlobStore, FileBlobStore, or custom BlobStore)
  • Provider adapters + auto routing via AutoAdapter on (provider, endpoint, api_version)
  • Explainable planning through RequestPlan (included, excluded, warnings, errors); see the sketch after this list
  • Minimal dependencies (core package has no runtime deps; provider SDKs are optional extras)
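
RequestPlan makes planning observable before any tokens are spent. A minimal sketch, assuming the included, excluded, warnings, and errors fields named above are plain collections and that the payload lives on plan.request as in the integration example below:

from lmctx import AutoAdapter, Context, RunSpec

ctx = Context().user("Summarize this document.")
spec = RunSpec(provider="openai", endpoint="responses.create", model="gpt-4o-mini")

plan = AutoAdapter().plan(ctx, spec)

# Inspect diagnostics before executing the call in your own runtime.
for warning in plan.warnings:
    print("plan warning:", warning)
if plan.errors:
    raise RuntimeError(f"request could not be planned: {plan.errors}")

print("request keys:", sorted(plan.request))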

Install

pip install lmctx

# provider extras (optional)
pip install 'lmctx[openai]'
pip install 'lmctx[anthropic]'
pip install 'lmctx[google]'
pip install 'lmctx[bedrock]'
pip install 'lmctx[all]'
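
To verify the install (assuming the package exposes a __version__ attribute, which is not confirmed above):

python -c "import lmctx; print(lmctx.__version__)"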

5-Minute Integration

from openai import OpenAI

from lmctx import AutoAdapter, Context, RunSpec
from lmctx.spec import Instructions

# 1) Build conversation state
ctx = Context().user("What is the capital of France?")

# 2) Describe runtime call settings
spec = RunSpec(
    provider="openai",
    endpoint="responses.create",
    model="gpt-4o-mini",
    instructions=Instructions(system="You are concise and accurate."),
)

# 3) Build request payload with lmctx
router = AutoAdapter()
plan = router.plan(ctx, spec)

# 4) Execute with provider SDK in your own code
client = OpenAI()
response = client.responses.create(**plan.request)

# 5) Normalize response back into Context
ctx = router.ingest(ctx, response, spec=spec)

assistant = ctx.last(role="assistant")
if assistant:
    print(assistant.parts[0].text)
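
Because Context is append-only, a follow-up turn reuses the same plan, execute, ingest cycle; a sketch continuing the conversation above:

# 6) Multi-turn: append a follow-up and repeat the cycle
ctx = ctx.user("And what is its population?")
plan = router.plan(ctx, spec)
response = client.responses.create(**plan.request)
ctx = router.ingest(ctx, response, spec=spec)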

Core Types

  • Context: Append-only conversation log (messages, cursor, usage_log, blob_store)
  • Part / Message: Canonical content model shared across adapters
  • RunSpec: Call configuration (provider, endpoint, model, tools, schema, extras)
  • RequestPlan: Planned payload + diagnostics for observability and debugging
  • BlobReference / BlobStore: Out-of-line binary/opaque payload storage with integrity verification
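
A short sketch of how these types compose. The blob_store keyword on Context and the import path for InMemoryBlobStore are assumptions based on the descriptions above:

from lmctx import Context, InMemoryBlobStore  # import path for InMemoryBlobStore assumed

# Updates return new snapshots instead of mutating in place.
ctx0 = Context(blob_store=InMemoryBlobStore())  # blob_store kwarg assumed
ctx1 = ctx0.user("hello")
assert ctx0 is not ctx1  # append-only, snapshot-friendly

last = ctx1.last(role="user")
if last:
    print(last.parts[0].text)  # Message.parts holds Part objects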

Built-in Adapters

Each adapter is selected by its (provider, endpoint) pair in RunSpec and maps to one SDK call:

  • OpenAIResponsesAdapter (openai / responses.create): client.responses.create(**plan.request)
  • OpenAIResponsesCompactAdapter (openai / responses.compact): client.responses.compact(**plan.request)
  • OpenAIChatCompletionsAdapter (openai / chat.completions): client.chat.completions.create(**plan.request)
  • OpenAIImagesAdapter (openai / images.generate): client.images.generate(**plan.request)
  • AnthropicMessagesAdapter (anthropic / messages.create): client.messages.create(**plan.request)
  • GoogleGenAIAdapter (google / models.generate_content): client.models.generate_content(**plan.request)
  • BedrockConverseAdapter (bedrock / converse): client.converse(**plan.request)
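
Switching providers only changes the RunSpec selector; AutoAdapter routes to the matching adapter from the list above. A sketch targeting Anthropic (the model id is illustrative, and whether plan.request fills provider-required fields such as max_tokens is adapter-dependent):

from anthropic import Anthropic

from lmctx import AutoAdapter, Context, RunSpec

ctx = Context().user("What is the capital of France?")
spec = RunSpec(
    provider="anthropic",
    endpoint="messages.create",
    model="claude-3-5-haiku-latest",  # illustrative model id
)

router = AutoAdapter()
plan = router.plan(ctx, spec)

client = Anthropic()
response = client.messages.create(**plan.request)
ctx = router.ingest(ctx, response, spec=spec)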

Examples

Scripts are in examples/:

  • Core (no API keys): quickstart.py, multimodal.py, blob_stores.py, tool_calling.py
  • OpenAI: api_openai_responses.py, api_openai_compact.py, api_openai_chat.py, api_openai_images.py
  • Anthropic: api_anthropic.py, api_anthropic_compact.py
  • Google: api_google_genai.py, api_google_image_generation.py
  • Bedrock: api_bedrock.py

Run one:

uv run python examples/quickstart.py

Recorded Logs

Example outputs can be stored locally under examples/logs/ (git-ignored by default). See docs/logs.md for the example-to-log mapping and regeneration commands.

Development

See CONTRIBUTING.md for full guidelines.

uv sync --all-extras --dev
make check

Requirements

  • Python >=3.10,<3.15

License

Apache License 2.0. See LICENSE.
