A simple framework for LLM-powered applications

LLMProc

A Unix-inspired framework for building powerful LLM applications that lets you spawn specialized models, manage large outputs, and enhance context with file preloading.

LLMProc treats language models as processes: spawn them, fork them, link them together, and handle their I/O with a familiar Unix-like approach.

Installation

# Install with uv (recommended)
uv pip install llmproc               # Base package
uv pip install "llmproc[openai]"     # For OpenAI models
uv pip install "llmproc[anthropic]"  # For Anthropic models
uv pip install "llmproc[all]"        # All providers

See MISC.md for additional installation options and provider configurations.

Quick Start

Python usage

import asyncio
from llmproc import LLMProgram, register_tool

@register_tool()
def calculate(expression: str) -> dict:
    return {"result": eval(expression, {"__builtins__": {}})}

async def main():
    # You can load a program from a TOML file
    program = LLMProgram.from_toml('examples/anthropic/claude-3-5-haiku.toml')

    # Or create a program with the Python API
    program = (
        LLMProgram(
            model_name="claude-3-7-sonnet-20250219",
            provider="anthropic",
            system_prompt="You are a helpful assistant.",
            parameters={"max_tokens": 1024}  # Required parameter
        )
        .set_enabled_tools([calculate])  # Enable the calculate tool
    )

    # Start and use the process
    process = await program.start()
    result = await process.run('What is 125 * 48?')
    print(process.get_last_message())

asyncio.run(main())
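The `calculate` tool above uses `eval` for brevity. Even with `__builtins__` stripped, `eval` on model-generated input is risky; a safer sketch (illustrative, not part of llmproc) walks the expression AST against a whitelist of operators:

```python
import ast
import operator

# Whitelisted binary operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def safe_calculate(expression: str) -> float:
    """Evaluate basic arithmetic without eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))
```

A function like this could be registered with `@register_tool()` in place of the eval-based version.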

CLI usage

# Start interactive session
llmproc-demo ./examples/anthropic/claude-3-5-haiku.toml

# Single prompt
llmproc-demo ./examples/anthropic/claude-3-5-sonnet.toml -p "What is Python?"

# Read from stdin
cat questions.txt | llmproc-demo ./examples/anthropic/claude-3-7-sonnet.toml -n

# Use Gemini models
llmproc-demo ./examples/gemini/gemini-2.0-flash-direct.toml

Features

Supported Model Providers

  • OpenAI: GPT-4o, GPT-4o-mini, GPT-4.5
  • Anthropic: Claude 3 Haiku, Claude 3.5/3.7 Sonnet (direct API and Vertex AI)
  • Google: Gemini 1.5 Flash/Pro, Gemini 2.0 Flash, Gemini 2.5 Pro (direct API and Vertex AI)

LLMProc offers a Unix-inspired toolkit for building sophisticated LLM applications:

Process Management - Unix-like LLM Orchestration

Large Content Handling - Sophisticated I/O Management

  • File Descriptor System - Unix-like pagination for large outputs
  • Reference ID System - Mark up and reference specific pieces of content
  • Smart Content Pagination - Optimized line-aware chunking for content too large for context windows
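Line-aware chunking of the kind described can be sketched as follows. This is an illustrative implementation, not llmproc's actual file descriptor code: it breaks pages on line boundaries where possible and hard-wraps only lines longer than a page.

```python
def paginate_lines(text: str, max_chars: int = 1000) -> list[str]:
    """Split text into pages, preferring line boundaries over mid-line cuts."""
    pages, current = [], ""
    for line in text.splitlines(keepends=True):
        # Hard-wrap any single line longer than a whole page.
        while len(line) > max_chars:
            if current:
                pages.append(current)
                current = ""
            pages.append(line[:max_chars])
            line = line[max_chars:]
        # Start a new page rather than splitting this line.
        if current and len(current) + len(line) > max_chars:
            pages.append(current)
            current = ""
        current += line
    if current:
        pages.append(current)
    return pages
```

Concatenating the pages reproduces the original text exactly, so a model can request page N without losing content at the seams.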

Additional Features

  • File Preloading - Enhance context by loading files into system prompts
  • Environment Info - Add runtime context like working directory
  • Prompt Caching - Automatic prompt caching for Claude models, saving up to 90% of input tokens on cached prefixes (enabled by default)
  • Reasoning/Thinking models - Claude 3.7 Thinking and OpenAI Reasoning models
  • MCP Protocol - Standardized interface for tool usage
  • Tool Aliases - Provide simpler, intuitive names for tools
  • Cross-provider support - Currently supports Anthropic, OpenAI, and Google Gemini

Demo Tools

LLMProc includes demo command-line tools for quick experimentation:

llmproc-demo

Interactive CLI for testing LLM configurations:

llmproc-demo ./examples/anthropic/claude-3-5-haiku.toml  # Interactive session
llmproc-demo ./config.toml -p "What is Python?"          # Single prompt
cat questions.txt | llmproc-demo ./config.toml -n        # Pipe mode

Commands: exit or quit to end the session

llmproc-prompt

View the compiled system prompt without making API calls:

llmproc-prompt ./config.toml                 # Display to stdout
llmproc-prompt ./config.toml -o prompt.txt   # Save to file
llmproc-prompt ./config.toml -E              # Without environment info

Use Cases

  • Claude Code - A minimal Claude Code implementation, with support for preloading CLAUDE.md, spawning, and MCP

Documentation

Documentation Index: Start here for guided learning paths

For advanced usage and implementation details, see MISC.md.

Design Philosophy

LLMProc treats LLMs as computing processes:

  • Each model is a process defined by a program (a TOML file)
  • Each process maintains state between executions
  • Each process interacts with the system through defined interfaces

The library functions as a kernel:

  • Implements system calls for LLM processes
  • Manages resources across processes
  • Creates a standardized interface with the environment

Roadmap

Future development plans:

  1. Exec System Call for process replacement
  2. Process State Serialization & Restoration
  3. Retry mechanism with exponential backoff
  4. Enhanced error handling and reporting
  5. Support for streaming
  6. File Descriptor System Phase 3 enhancements
  7. Gemini advanced features (tool calling, multimodal, token counting, streaming)
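Roadmap item 3, a retry mechanism with exponential backoff, typically follows a pattern like this generic sketch. The names, defaults, and exception types here are illustrative assumptions, not a planned llmproc API:

```python
import random
import time

def retry_with_backoff(func, max_attempts=5, base_delay=0.5, max_delay=30.0,
                       retriable=(ConnectionError, TimeoutError)):
    """Call func(), retrying transient failures with jittered exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return func()
        except retriable:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the last error.
            # Double the delay each attempt, cap it, and add random jitter
            # so concurrent clients don't retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The same shape adapts naturally to `asyncio` (swap `time.sleep` for `await asyncio.sleep`), which matters since llmproc's API is async.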

License

Apache License 2.0
