# LLMProc

A Unix-inspired framework for building powerful LLM applications that lets you spawn specialized models, manage large outputs, and enhance context with file preloading.

LLMProc treats language models as processes: spawn them, fork them, link them together, and handle their I/O with a familiar Unix-like approach.
## Installation

```bash
# Install with uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .

# Set environment variables
export OPENAI_API_KEY="your-key"     # For OpenAI models
export ANTHROPIC_API_KEY="your-key"  # For Claude models
```

The package supports `.env` files for environment variables.
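For example, a `.env` file in the project root can hold the same keys shown above (the file name is the dotenv convention; the key values here are placeholders):

```
# .env — loaded automatically at startup
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
```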
## Quick Start

### Python usage

```python
import asyncio

from llmproc import LLMProgram, register_tool


@register_tool()
def calculate(expression: str) -> dict:
    """Evaluate an arithmetic expression (built-ins disabled)."""
    return {"result": eval(expression, {"__builtins__": {}})}


async def main():
    # Load a program from a TOML file...
    program = LLMProgram.from_toml('examples/anthropic/claude-3-5-haiku.toml')

    # ...or create one with the Python API
    program = (
        LLMProgram(
            model_name="claude-3-7-sonnet-20250219",
            provider="anthropic",
            system_prompt="You are a helpful assistant.",
        )
        .add_tool(calculate)
    )

    # Start and use the process
    process = await program.start()
    await process.run('What is 125 * 48?')
    print(process.get_last_message())


asyncio.run(main())
```
### CLI usage

```bash
# Start an interactive session
llmproc-demo ./examples/anthropic/claude-3-5-haiku.toml

# Single prompt
llmproc-demo ./examples/anthropic/claude-3-5-sonnet.toml -p "What is Python?"

# Read from stdin
cat questions.txt | llmproc-demo ./examples/anthropic/claude-3-7-sonnet.toml -n
```
## Features

LLMProc offers a complete toolkit for building sophisticated LLM applications:

### Basic Configuration

- Minimal Setup - Start with a simple Claude configuration
- File Preloading - Enhance context by loading files into the system prompt
- Environment Info - Add runtime context such as the working directory and platform
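A TOML program combining these options might look like the sketch below. The section and key names here are illustrative assumptions, not the documented schema — consult `reference.toml` for the authoritative configuration reference:

```toml
# Hypothetical sketch — key names are illustrative, not the documented schema
[model]
name = "claude-3-5-haiku-20241022"
provider = "anthropic"

[prompt]
system_prompt = "You are a helpful assistant."

[preload]
files = ["README.md", "CLAUDE.md"]   # injected into the system prompt

[env_info]
variables = ["working_directory", "platform"]
```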
### Developer Experience

- Python SDK - Create programs with intuitive method chaining
- Function-Based Tools - Register Python functions as tools, with type safety and automatic schema conversion

### Process Management

- Program Linking - Spawn specialized LLM processes and delegate tasks to them
- Fork Tool - Create process copies that share conversation state

### Large Content Handling

- File Descriptor System - Unix-like pagination for large outputs
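The idea behind such a system can be sketched in plain Python (this is a conceptual illustration, not LLMProc's actual API): a large output is stored under a generated descriptor, and only one page at a time enters the context window.

```python
# Illustrative sketch of Unix-style paging for large tool outputs.
class FileDescriptorStore:
    def __init__(self, page_size: int = 1000):
        self.page_size = page_size
        self._store: dict[str, str] = {}
        self._next = 1

    def write(self, content: str) -> tuple[str, str]:
        """Store content; return (descriptor, first page)."""
        fd = f"fd:{self._next}"
        self._next += 1
        self._store[fd] = content
        return fd, self.read(fd, page=1)

    def read(self, fd: str, page: int) -> str:
        """Return one page of a stored output by descriptor."""
        start = (page - 1) * self.page_size
        return self._store[fd][start:start + self.page_size]


store = FileDescriptorStore(page_size=10)
fd, first = store.write("0123456789abcdefghij")
print(fd, first)          # fd:1 0123456789
print(store.read(fd, 2))  # abcdefghij
```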
### More Features

- Prompt Caching - Automatic token savings of up to 90% for Claude models (enabled by default)
- Reasoning/Thinking Models - Claude 3.7 Thinking and OpenAI reasoning models
- MCP Protocol - Standardized interface for tool usage
- Cross-Provider Support - Currently supports Anthropic, OpenAI, and Anthropic on Vertex AI
## Demo Tools

LLMProc includes demo command-line tools for quick experimentation.

### llmproc-demo

Interactive CLI for testing LLM configurations:

```bash
llmproc-demo ./examples/anthropic/claude-3-5-haiku.toml   # Interactive session
llmproc-demo ./config.toml -p "What is Python?"           # Single prompt
cat questions.txt | llmproc-demo ./config.toml -n         # Pipe mode
```

Type `exit` or `quit` to end a session.

### llmproc-prompt

View the compiled system prompt without making API calls:

```bash
llmproc-prompt ./config.toml                 # Display to stdout
llmproc-prompt ./config.toml -o prompt.txt   # Save to a file
llmproc-prompt ./config.toml -E              # Omit environment info
```
## Use Cases

- Claude Code - A minimal Claude Code implementation, with support for preloading CLAUDE.md, spawning, and MCP
## Documentation

- Examples: Sample configurations and use cases
- API Docs: Detailed API documentation
- Python SDK: Fluent API and program creation
- Function-Based Tools: Python function tools with type hints
- File Descriptor System: Handling large outputs
- Program Linking: LLM-to-LLM communication
- MCP Feature: Model Context Protocol for tools
- Testing Guide: Testing and validation
- For a complete reference, see `reference.toml`

For advanced usage and implementation details, see MISC.md.
## Design Philosophy

LLMProc treats LLMs as computing processes:

- Each model is a process defined by a program (a TOML file)
- Each process maintains state between executions
- Each process interacts with the system through defined interfaces

The library functions as a kernel:

- It implements system calls for LLM processes
- It manages resources across processes
- It provides a standardized interface to the environment
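The analogy can be illustrated in plain Python (a conceptual sketch, not LLMProc's internals): a process keeps conversation state between runs the way a Unix process keeps memory between system calls, and forking duplicates that state.

```python
# Conceptual sketch of the process model — plain Python, not LLMProc internals.
class Process:
    def __init__(self, program: dict):
        self.program = program        # immutable definition: model, prompt, tools
        self.state: list[dict] = []   # mutable conversation history

    def run(self, user_input: str, reply: str) -> None:
        """One execution step: state persists across calls."""
        self.state.append({"role": "user", "content": user_input})
        self.state.append({"role": "assistant", "content": reply})

    def fork(self) -> "Process":
        """Duplicate the process: same program, copied state (cf. Unix fork)."""
        child = Process(self.program)
        child.state = list(self.state)
        return child
```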
## Roadmap

Future development plans:

- Exec system call for process replacement
- Process state serialization and restoration
- Retry mechanism with exponential backoff
- Enhanced error handling and reporting
- Support for streaming
- File Descriptor System Phase 3 enhancements
- Support for Gemini models
## License

Apache License 2.0