# LLMProc

A simple framework for LLM-powered applications.

LLMProc is a Unix-inspired operating system for language models. Like processes in an OS, LLMs execute instructions, make system calls, manage resources, and communicate with each other, enabling powerful multi-model applications with sophisticated I/O management.
## Installation

### For Users

```bash
# Install the base package
pip install llmproc

# Install with specific provider support
pip install "llmproc[openai]"     # For OpenAI models
pip install "llmproc[anthropic]"  # For Anthropic models
pip install "llmproc[vertex]"     # For Vertex AI
pip install "llmproc[gemini]"     # For Google Gemini

# Install with all providers
pip install "llmproc[all]"
```
### For Developers

If you're contributing to llmproc, clone the repository and use:

```bash
# Create a virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install everything (package + all providers + dev tools)
uv sync --all-extras --all-groups
```

See CONTRIBUTING.md for the complete developer setup guide.
## Quick Start

### Python usage

```python
# Full example: examples/multiply_example.py
import asyncio

from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")
    print(process.get_last_message())  # Expected: π * e = 8.539734222677128


if __name__ == "__main__":
    asyncio.run(main())
```
### Configuration Options (TOML, YAML, or Dict)

Load program configuration in multiple ways:

```python
import yaml

from llmproc import LLMProgram

# Load from TOML (traditional)
program = LLMProgram.from_toml("config.toml")

# Or load from YAML
program = LLMProgram.from_yaml("config.yaml")

# Format auto-detection
program = LLMProgram.from_file("config.yaml")  # Detects YAML from the extension

# Dictionary-based configuration
program = LLMProgram.from_dict({
    "model": {"name": "claude-3-7-sonnet", "provider": "anthropic"},
    "prompt": {"system_prompt": "You are a helpful assistant."},
    "parameters": {"max_tokens": 1000},
})

# Extract subsections from configuration files
with open("multi_agent.yaml") as f:
    config = yaml.safe_load(f)
agent_config = config["agents"]["assistant"]  # Extract a specific subsection
program = LLMProgram.from_dict(agent_config)  # Create a program from the subsection
```
See examples/projects/swe-agent for a complete YAML configuration example with dictionary-based configuration and subsection extraction. For a full reference of available fields, see YAML Configuration Schema.
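For reference, the subsection-extraction snippet reads a file shaped roughly like the sketch below. This is illustrative only: the `agents`/`assistant` nesting comes from the snippet itself, and the field names mirror the dictionary form shown above, but the YAML Configuration Schema is the authoritative reference for the actual structure.

```yaml
# multi_agent.yaml (hypothetical layout for illustration, not the official schema)
agents:
  assistant:
    model:
      name: claude-3-7-sonnet
      provider: anthropic
    prompt:
      system_prompt: You are a helpful assistant.
    parameters:
      max_tokens: 1000
```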
### CLI usage

```bash
# Start an interactive session
llmproc-demo ./examples/anthropic.toml  # or ./examples/openai.yaml, or any other config file

# Single prompt
llmproc ./examples/openai.toml -p "What is Python?"  # non-interactive
llmproc ./examples/openai.toml -p "add details" -a   # append to the config prompt

# Read from stdin
cat questions.txt | llmproc ./examples/anthropic.toml

# List available builtin tools
llmproc ./examples/min_claude_code_read_only.yaml -p 'give me a list of builtin tools in llmproc'
```
## Features

### Supported Model Providers
- OpenAI: GPT-4o, GPT-4o-mini, GPT-4.5, GPT-4.1, o1, o3, o4-mini, etc
- Anthropic: Claude 3 Haiku, Claude 3.5/3.7 Sonnet, Claude 4 Sonnet/Opus (direct API and Vertex AI)
- Google: Gemini 1.5 Flash/Pro, Gemini 2.0 Flash, Gemini 2.5 Pro (direct API and Vertex AI)
LLMProc offers a Unix-inspired toolkit for building sophisticated LLM applications:
### Process Management - Unix-like LLM Orchestration

- Program Linking - Spawn specialized LLM processes for delegated tasks
- Fork Tool - Create process copies with shared conversation state
- GOTO (Time Travel) - Reset conversations to previous points, with a context-compaction demo
- Tool Access Control - Secure multi-process environments with READ/WRITE/ADMIN permissions
### Large Content Handling - Sophisticated I/O Management
- File Descriptor System - Unix-like pagination for large outputs
- Reference ID System - Mark up and reference specific pieces of content
- Smart Content Pagination - Optimized line-aware chunking for content too large for context windows
### Usage Examples
- See the Python SDK documentation for the fluent API
- Use Function-Based Tools to register Python functions as tools
- Create Context-Aware Meta-Tools to let LLMs modify their own runtime parameters
- Start with a simple configuration (or TOML equivalent) for quick experimentation
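As a sketch of the function-based tools pattern: a plain Python function with type hints and a docstring is passed in the `tools` list (as in the Quick Start), and the schema is generated automatically from the signature. The function below is ordinary Python; the name `get_word_count` and the wiring comment are illustrative, not part of llmproc's API.

```python
def get_word_count(text: str) -> dict:
    """Count the words in a piece of text.

    The type hints and docstring are what automatic schema
    generation reads, so keep them accurate.
    """
    return {"count": len(text.split())}


# Hypothetical wiring, mirroring the Quick Start example:
# program = LLMProgram(..., tools=[get_word_count])
```

Because the tool is just a function, it can be unit-tested directly without starting a process.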
### Additional Features
- File Preloading - Enhance context by loading files into system prompts
- Environment Info - Add runtime context like working directory
- Prompt Caching - Automatic 90% token savings for Claude models (enabled by default)
- Reasoning/Thinking models - Claude 3.7 Thinking and OpenAI Reasoning models (configured in anthropic.yaml or openai.yaml)
- Token-efficient tools - Claude 3.7 optimized tool calling (configured in anthropic.yaml)
- MCP Protocol - Standardized interface for tool usage
- Tool Aliases - Provide simpler, intuitive names for tools
- Dictionary-based Configuration - Create programs from dictionaries for subsection extraction
- YAML configuration support - Use `.yaml` files with the same structure as TOML
- Cross-provider support - Currently supports Anthropic, OpenAI, and Google Gemini
- New CLI tools - `llmproc` for single prompts and `llmproc-demo` for interactive sessions
- Synchronous API - Create blocking processes with `program.start_sync()`
- Standard error logging - Use the `write_stderr` tool and `LLMProcess.get_stderr_log()`
- Flexible callbacks - Callback functions and methods may be synchronous or asynchronous
- Instance methods as tools - Register object methods directly for stateful tools
- API retry configuration - Exponential backoff settings via environment variables
- Spawn the current program - Leave `program_name` blank in the spawn tool
- Unified tool configuration - Built-in and MCP tools share the same `ToolConfig`
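The "instance methods as tools" feature can be pictured with an ordinary stateful class: the method itself is plain Python, and per the feature list the bound method is registered directly. The `TodoList` class and the wiring comment below are illustrative sketches, not llmproc's API surface.

```python
class TodoList:
    """A stateful object whose bound methods can serve as tools."""

    def __init__(self) -> None:
        self.items: list[str] = []

    def add_item(self, item: str) -> dict:
        """Add an item to the todo list and report the new total."""
        self.items.append(item)
        return {"added": item, "total": len(self.items)}


todos = TodoList()
# Hypothetical wiring, following the Quick Start pattern:
# program = LLMProgram(..., tools=[todos.add_item])
```

State lives on the instance, so successive tool calls see the accumulated items.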
## Demo Tools

LLMProc includes demo command-line tools for quick experimentation:

### llmproc-demo

Interactive CLI for testing LLM configurations:

```bash
llmproc-demo ./config.yaml  # Interactive session
```

Commands: `exit` or `quit` to end the session.
### llmproc

Non-interactive CLI for running a single prompt:

```bash
llmproc ./config.yaml -p "What is Python?"  # Single prompt
cat questions.txt | llmproc ./config.yaml   # Read from stdin
llmproc ./config.yaml -p "extra" -a         # Append on top of the config prompt
```
### llmproc-prompt

View the compiled system prompt without making API calls:

```bash
llmproc-prompt ./config.yaml                # Display to stdout
llmproc-prompt ./config.yaml -o prompt.txt  # Save to file
llmproc-prompt ./config.yaml -E             # Without environment info
```
## Use Cases

- Claude Code - A minimal Claude Code implementation, with support for preloading CLAUDE.md, spawning, and MCP
## Documentation
Documentation Index: Start here for guided learning paths
- Examples: Sample configurations and use cases
- API Docs: Detailed API documentation
- Python SDK: Fluent API and program creation
- Function-Based Tools: Register Python functions as tools with automatic schema generation
- File Descriptor System: Handling large outputs
- Program Linking: LLM-to-LLM communication
- GOTO (Time Travel): Conversation time travel
- MCP Feature: Model Context Protocol for tools
- Tool Aliases: Using simpler names for tools
- Gemini Integration: Google Gemini models usage guide
- Testing Guide: Testing and validation
- For a tutorial with all options, see tutorial-config.toml
- For the formal specification, see yaml_config_schema.yaml
For advanced usage and implementation details, see MISC.md. For design rationales and API decisions, see FAQ.md.
## Design Philosophy
LLMProc treats LLMs as processes in a Unix-inspired operating system framework:
- LLMs function as processes that execute prompts and make tool calls
- Tools operate at both user and kernel levels, with system tools able to modify process state
- The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, and IPC
- This architecture provides a foundation for evolving toward a more complete LLM operating system
For in-depth explanations of these design decisions, see our API Design FAQ.
## Roadmap

- Persistent children & inter-process communication
- llmproc MCP server
- Streaming API support
- Process state serialization & restoration
- Feature parity for OpenAI/Gemini models
## License
Apache License 2.0
## File details

### llmproc-0.8.0.tar.gz

- Download URL: llmproc-0.8.0.tar.gz
- Size: 441.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `1815a4844db57c6764675c996298a3c38d63fca6593435d9fb9021e363dc7675` |
| MD5 | `eb51fa1aef7fc12dd95682d315fa4603` |
| BLAKE2b-256 | `2fcbdfd429f895104d58b6f2ac428d303971080034e54737a1bcaa608ffd5ca7` |

The following attestation bundle was made for llmproc-0.8.0.tar.gz:

- Publisher: release.yml on cccntu/llmproc
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llmproc-0.8.0.tar.gz
- Subject digest: `1815a4844db57c6764675c996298a3c38d63fca6593435d9fb9021e363dc7675`
- Sigstore transparency entry: 220029163
- Permalink: cccntu/llmproc@4c4f241dab47996f3fbbd9551b63b903a1acc28e
- Branch / Tag: refs/tags/v0.8.0
- Owner: https://github.com/cccntu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4c4f241dab47996f3fbbd9551b63b903a1acc28e
- Trigger Event: push
### llmproc-0.8.0-py3-none-any.whl

- Download URL: llmproc-0.8.0-py3-none-any.whl
- Size: 156.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7543b1619a4aa4282c0d1e92d6b4cbf00987c690dc6425028eaa2e11509135dc` |
| MD5 | `ff361ddeec891ed5c6c5153f057d78b8` |
| BLAKE2b-256 | `706160d62944dce7d56a6488756ce26ea32f234eec3784d0426b9a1805e352ba` |

The following attestation bundle was made for llmproc-0.8.0-py3-none-any.whl:

- Publisher: release.yml on cccntu/llmproc
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llmproc-0.8.0-py3-none-any.whl
- Subject digest: `7543b1619a4aa4282c0d1e92d6b4cbf00987c690dc6425028eaa2e11509135dc`
- Sigstore transparency entry: 220029165
- Permalink: cccntu/llmproc@4c4f241dab47996f3fbbd9551b63b903a1acc28e
- Branch / Tag: refs/tags/v0.8.0
- Owner: https://github.com/cccntu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4c4f241dab47996f3fbbd9551b63b903a1acc28e
- Trigger Event: push