LLMProc

A simple framework for LLM-powered applications

LLMProc: A Unix-inspired operating system for language models. Like processes in an OS, LLMs execute instructions, make system calls, manage resources, and communicate with each other - enabling powerful multi-model applications with sophisticated I/O management.


Installation

For Users

# Install base package
pip install llmproc

# Install with specific provider support
pip install "llmproc[openai]"        # For OpenAI models
pip install "llmproc[anthropic]"     # For Anthropic models  
pip install "llmproc[vertex]"        # For Vertex AI
pip install "llmproc[gemini]"        # For Google Gemini

# Install with all providers
pip install "llmproc[all]"

For Developers

If you're contributing to llmproc, clone the repository and use:

# Create virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install everything (package + all providers + dev tools)
uv sync --all-extras --all-groups

See CONTRIBUTING.md for the complete developer setup guide.

Quick Start

Python usage

# Full example: examples/multiply_example.py
import asyncio
from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}  # Expected: π * e = 8.539734222677128


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")

    print(process.get_last_message())


if __name__ == "__main__":
    asyncio.run(main())

Configuration Options (TOML, YAML, or Dict)

Load program configuration in multiple ways:

# Load from TOML (traditional)
program = LLMProgram.from_toml("config.toml")

# Or load from YAML
program = LLMProgram.from_yaml("config.yaml")

# Format auto-detection
program = LLMProgram.from_file("config.yaml")  # Detects YAML from extension

# Dictionary-based configuration
program = LLMProgram.from_dict({
    "model": {"name": "claude-3-7-sonnet", "provider": "anthropic"},
    "prompt": {"system_prompt": "You are a helpful assistant."},
    "parameters": {"max_tokens": 1000}
})

# Extract subsections from configuration files
import yaml  # requires PyYAML

with open("multi_agent.yaml") as f:
    config = yaml.safe_load(f)
agent_config = config["agents"]["assistant"]  # Extract a specific subsection
program = LLMProgram.from_dict(agent_config)  # Create program from subsection

See examples/projects/swe-agent for a complete YAML configuration example, including dictionary-based configuration and subsection extraction. For a full reference of available fields, see the YAML Configuration Schema.
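For orientation, a minimal YAML file matching the dictionary structure above might look like the following. This is a sketch whose field names simply mirror the from_dict example; consult the YAML Configuration Schema for the authoritative field list.

```yaml
# config.yaml -- illustrative sketch, field names mirror the from_dict example
model:
  name: "claude-3-7-sonnet-20250219"
  provider: "anthropic"

prompt:
  system_prompt: "You are a helpful assistant."

parameters:
  max_tokens: 1024
```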

CLI usage

# Start interactive session
llmproc-demo ./examples/anthropic.toml  # or ./examples/openai.yaml ... or any other config file

# Single prompt
llmproc ./examples/openai.toml -p "What is Python?"  # non-interactive
llmproc ./examples/openai.toml -p "add details" -a  # append to config prompt

# Read from stdin
cat questions.txt | llmproc ./examples/anthropic.toml

# List available builtin tools
llmproc ./examples/min_claude_code_read_only.yaml -p 'give me a list of builtin tools in llmproc'

Features

Supported Model Providers

  • OpenAI: GPT-4o, GPT-4o-mini, GPT-4.5, GPT-4.1, o1, o3, o4-mini, and more
  • Anthropic: Claude 3 Haiku, Claude 3.5/3.7 Sonnet, Claude 4 Sonnet/Opus (direct API and Vertex AI)
  • Google: Gemini 1.5 Flash/Pro, Gemini 2.0 Flash, Gemini 2.5 Pro (direct API and Vertex AI)

LLMProc offers a Unix-inspired toolkit for building sophisticated LLM applications:

Process Management - Unix-like LLM Orchestration

Large Content Handling - Sophisticated I/O Management

  • File Descriptor System - Unix-like pagination for large outputs
  • Reference ID System - Mark up and reference specific pieces of content
  • Smart Content Pagination - Optimized line-aware chunking for content too large for context windows
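To make the pagination idea concrete, here is a minimal sketch of line-aware chunking: output is split into pages that never cut a line in half, so each page remains readable on its own. This is an illustrative algorithm only, not llmproc's actual file descriptor implementation.

```python
# Illustrative sketch only -- not llmproc's actual implementation.
def paginate_lines(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into pages of at most max_chars, breaking on newlines.

    A single line longer than max_chars becomes its own oversized page
    in this simplified sketch.
    """
    pages: list[str] = []
    current: list[str] = []
    size = 0
    for line in text.splitlines(keepends=True):
        # Start a new page if adding this line would overflow the limit
        if current and size + len(line) > max_chars:
            pages.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        pages.append("".join(current))
    return pages

pages = paginate_lines("one\n" * 3000, max_chars=4000)
print(len(pages))  # 3 pages of 1000 lines each
```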


Additional Features

  • File Preloading - Enhance context by loading files into system prompts
  • Environment Info - Add runtime context like working directory
  • Prompt Caching - Automatic prompt caching for Claude models, cutting repeated prompt token costs by up to 90% (enabled by default)
  • Reasoning/Thinking models - Claude 3.7 Thinking and OpenAI Reasoning models (configured in anthropic.yaml or openai.yaml)
  • Token-efficient tools - Claude 3.7 optimized tool calling (configured in anthropic.yaml)
  • MCP Protocol - Standardized interface for tool usage
  • Tool Aliases - Provide simpler, intuitive names for tools
  • Dictionary-based Configuration - Create programs from dictionaries for subsection extraction
  • YAML configuration support - Use .yaml files with the same structure as TOML
  • Cross-provider support - Currently supports Anthropic, OpenAI, and Google Gemini
  • New CLI tools - llmproc for single prompts and llmproc-demo for interactive sessions
  • Synchronous API - Create blocking processes with program.start_sync()
  • Standard error logging - Use the write_stderr tool and LLMProcess.get_stderr_log()
  • Flexible callbacks - Callback functions and methods may be synchronous or asynchronous
  • Instance methods as tools - Register object methods directly for stateful tools
  • API retry configuration - Exponential backoff settings via environment variables
  • Spawn the current program - Leave program_name blank in the spawn tool
  • Unified tool configuration - Built-in and MCP tools share the same ToolConfig
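The "instance methods as tools" bullet above relies on a plain Python pattern: a bound method carries both its object's state and the docstring and type annotations a tool registry needs. A minimal sketch of such a stateful tool class (the Counter class and increment method are illustrative names, not part of llmproc's API):

```python
class Counter:
    """Stateful object whose bound method can be registered as a tool."""

    def __init__(self) -> None:
        self.count = 0

    def increment(self, amount: int = 1) -> dict:
        """Increment the counter and return the new value."""
        self.count += amount
        return {"count": self.count}

counter = Counter()

# The bound method keeps a reference to `counter`, so repeated tool
# calls share state across the process's lifetime, e.g.:
#   program = LLMProgram(..., tools=[counter.increment])
print(counter.increment()["count"])   # 1
print(counter.increment(5)["count"])  # 6
```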

Demo Tools

LLMProc includes demo command-line tools for quick experimentation:

llmproc-demo

Interactive CLI for testing LLM configurations:

llmproc-demo ./config.yaml  # Interactive session

Commands: exit or quit to end the session

llmproc

Non-interactive CLI for running a single prompt:

llmproc ./config.yaml -p "What is Python?"      # Single prompt
cat questions.txt | llmproc ./config.yaml       # Read from stdin
llmproc ./config.yaml -p "extra" -a             # Append on top of config

llmproc-prompt

View the compiled system prompt without making API calls:

llmproc-prompt ./config.yaml                 # Display to stdout
llmproc-prompt ./config.yaml -o prompt.txt   # Save to file
llmproc-prompt ./config.yaml -E              # Without environment info

Use Cases

  • Claude Code - A minimal Claude Code implementation, with support for preloading CLAUDE.md, spawning, and MCP

Documentation

Documentation Index: Start here for guided learning paths

For advanced usage and implementation details, see MISC.md. For design rationales and API decisions, see FAQ.md.

Design Philosophy

LLMProc treats LLMs as processes in a Unix-inspired operating system framework:

  • LLMs function as processes that execute prompts and make tool calls
  • Tools operate at both user and kernel levels, with system tools able to modify process state
  • The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, and IPC
  • This architecture provides a foundation for evolving toward a more complete LLM operating system

For in-depth explanations of these design decisions, see our API Design FAQ.

Roadmap

  • Persistent children & inter-process communication
  • LLMProc MCP server
  • Streaming API support
  • Process state serialization & restoration
  • Feature parity for OpenAI/Gemini models

License

Apache License 2.0
