
A simple framework for LLM-powered applications


LLMProc



LLMProc is a Unix-inspired runtime that treats LLMs as processes. Build production-ready LLM programs with fully customizable YAML/TOML files, or experiment with meta-tools such as fork/spawn and goto via the Python SDK. Learn more at llmproc.com.

🔥 Check out our GitHub Actions examples to see LLMProc successfully automating code implementation, conflict resolution, and more!


Why LLMProc over Claude Code?

| Feature | LLMProc | Claude Code |
|---|---|---|
| License / openness | ✅ Apache-2.0 | ❌ Closed, minified JS |
| Token overhead | ✅ Zero; you send exactly what you want | ❌ 12-13k tokens (system prompt + built-in tools) |
| Custom system prompt | ✅ Yes | 🟡 Append-only (via CLAUDE.md) |
| Tool selection | ✅ Opt-in; pick only the tools you need | 🟡 Opt-out via `--disallowedTools`* |
| Tool schema override | ✅ Supports aliases and description overrides | ❌ Not possible |
| Configuration | ✅ Single YAML/TOML "LLM program" | 🟡 Limited config options |
| Scripting / SDK | ✅ Python SDK with function tools | ❌ JS-only CLI |

*`--disallowedTools` can remove built-in tools, but not MCP tools.

Installation

# Basic install - includes Anthropic support
pip install llmproc

# Install with all providers (extras are also available individually: openai, gemini, vertex, anthropic)
pip install "llmproc[all]"

# Or run without installing (requires uv)
uvx llmproc --help
uvx llmproc-demo --help
uvx llmproc-install-actions --help

# Run GitHub Actions installer directly without installing llmproc
uvx --from llmproc llmproc-install-actions

Note: Only Anthropic models currently support full tool calling. OpenAI and Gemini models have limited feature parity. For development setup, see CONTRIBUTING.md.

Quick Start

Python usage

# Full example: examples/multiply_example.py
import asyncio
from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}  # Expected: π * e = 8.539734222677128


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")

    print(process.get_last_message())


if __name__ == "__main__":
    asyncio.run(main())
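
Under the hood, the SDK derives a tool schema from a function's signature and docstring (the "automatic schema generation" mentioned below). The following is a simplified, illustrative sketch of how such derivation can work; it is not llmproc's actual internals:

```python
import inspect
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON-Schema type names.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean", dict: "object"}


def function_to_schema(fn):
    """Build a JSON-Schema-style tool description from a Python function."""
    hints = get_type_hints(fn)
    params = inspect.signature(fn).parameters
    properties = {
        name: {"type": PY_TO_JSON.get(hints.get(name), "string")}
        for name in params
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "input_schema": {
            "type": "object",
            "properties": properties,
            "required": list(params),
        },
    }


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}


schema = function_to_schema(multiply)
print(schema["input_schema"]["properties"])  # {'a': {'type': 'number'}, 'b': {'type': 'number'}}
```

This is why plain Python functions with type hints and docstrings are enough for tool registration: everything the model needs to call the tool is already in the signature.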

Configuration

LLMProc supports TOML, YAML, and dictionary-based configurations. See examples for various configuration patterns and the YAML Configuration Schema for all available options.
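
For illustration, a minimal YAML program mirroring the Python quick start above. The key layout here is an assumption based on the Python constructor arguments; consult the YAML Configuration Schema for the authoritative field names:

```yaml
# Hypothetical layout; see the YAML Configuration Schema for exact keys.
model_name: claude-3-7-sonnet-20250219
provider: anthropic
system_prompt: "You're a helpful assistant."
parameters:
  max_tokens: 1024
```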

CLI Usage

  • llmproc - Execute an LLM program. Use --json mode to pipe output for automation (see GitHub Actions examples)
  • llmproc-demo - Interactive debugger for LLM programs/processes

Run with --help for full usage details:

llmproc --help
llmproc-demo --help

Features

Production Ready

  • Claude 3.7/4 models with full tool calling support
  • Python SDK - Register functions as tools with automatic schema generation
  • Async and sync APIs - Use await program.start() or program.start_sync()
  • TOML/YAML configuration - Define LLM programs declaratively
  • MCP protocol - Connect to external tool servers
  • Built-in tools - File operations, calculator, spawning processes
  • Tool customization - Aliases, description overrides, parameter descriptions
  • Automatic optimizations - Prompt caching, retry logic with exponential backoff

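The retry behavior noted above follows the familiar exponential-backoff pattern. As a self-contained sketch (illustrative only; llmproc's built-in retry logic may differ in its exact parameters):

```python
import random
import time


# Illustrative retry helper with exponential backoff and jitter.
def with_retries(fn, max_attempts=4, base_delay=0.05):
    """Call fn(), retrying on exception with exponentially growing delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))


attempts = {"n": 0}


def flaky():
    """Simulated API call that fails twice before succeeding."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"


print(with_retries(flaky))  # prints "ok" after two simulated failures
```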
GitHub Actions Examples

Real-world automation using LLMProc:

Setup: To use these actions, you'll need the workflow files and LLM program configs (linked below), plus these secrets in your repository settings:

  • ANTHROPIC_API_KEY: API key for Claude
  • LLMPROC_WRITE_TOKEN: GitHub personal access token with write permissions (contents, pull-requests)

Run the installer in your repository root to download workflows automatically:

# Option 1: If you have llmproc installed
llmproc-install-actions

# Run non-interactively (answers yes to all prompts)
llmproc-install-actions --yes

# Option 2: Run directly without installing (requires uv)
uvx --from llmproc llmproc-install-actions

The installer checks that you're in a git repository, shows which files will be downloaded, warns about any existing files that would be overwritten, and provides step-by-step instructions for committing the files and setting up the required secrets.

In Development

  • OpenAI/Gemini models - Basic support, tool calling not yet implemented
  • Streaming API - Real-time token streaming (planned)
  • Process persistence - Save/restore conversation state

Experimental Features

These cutting-edge features bring Unix-inspired process management to LLMs:

  • Process Forking - Create copies of running LLM processes with full conversation history, enabling parallel exploration of different solution paths

  • Program Linking - Connect multiple LLM programs together, allowing specialized models to collaborate (e.g., a coding expert delegating to a debugging specialist)

  • GOTO/Time Travel - Reset conversations to previous states, perfect for backtracking when the LLM goes down the wrong path or for exploring alternative approaches

  • File Descriptor System - Handle massive outputs elegantly with Unix-like pagination, reference IDs, and smart chunking - no more truncated responses

  • Tool Access Control - Fine-grained permissions (READ/WRITE/ADMIN) for multi-process environments, ensuring security when multiple LLMs collaborate

  • Meta-Tools - LLMs can modify their own runtime parameters! Create tools that let models adjust temperature, max_tokens, or other settings on the fly for adaptive behavior
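
The fork semantics above can be pictured with a toy model: a forked process starts from a deep copy of the parent's conversation history, so the two can then diverge independently. This is a conceptual illustration only, not the llmproc API:

```python
import copy


class ToyProcess:
    """Toy stand-in for an LLM process: just a conversation history."""

    def __init__(self, history=None):
        self.history = history if history is not None else []

    def run(self, prompt, reply):
        self.history.append({"user": prompt, "assistant": reply})

    def fork(self):
        # The child gets an independent deep copy of the full history.
        return ToyProcess(copy.deepcopy(self.history))


parent = ToyProcess()
parent.run("Explore option A or B?", "Let's consider both.")
child = parent.fork()
child.run("Take option A.", "Pursuing A...")
parent.run("Take option B.", "Pursuing B...")
# parent and child share the first turn but diverge afterwards
```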

Documentation

📚 Documentation Index - Comprehensive guides and API reference


Design Philosophy

LLMProc treats LLMs as processes in a Unix-inspired runtime framework:

  • LLMs function as processes that execute prompts and make tool calls
  • Tools operate at both user and kernel levels, with system tools able to modify process state
  • The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, IPC, file descriptors, and more
  • This architecture provides a foundation for evolving toward a more complete LLM runtime

For in-depth explanations of these design decisions, see our API Design FAQ.

License

Apache License 2.0

Project details


Download files

Download the file for your platform.

Source Distribution

llmproc-0.9.2.tar.gz (489.6 kB)

Uploaded Source

Built Distribution


llmproc-0.9.2-py3-none-any.whl (161.0 kB)

Uploaded Python 3

File details

Details for the file llmproc-0.9.2.tar.gz.

File metadata

  • Download URL: llmproc-0.9.2.tar.gz
  • Upload date:
  • Size: 489.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llmproc-0.9.2.tar.gz:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 3db948b0b54ada759f49ea3969970c5e9573a4cdb65fcdbf2fbc627f5324fa94 |
| MD5 | 11fb77abdc14f4b11481b2855b7febd6 |
| BLAKE2b-256 | 79ce837a92d64af003aa1803b9f522ccc154f5a7083ed6514857b4c0715fc6ef |


Provenance

The following attestation bundles were made for llmproc-0.9.2.tar.gz:

Publisher: release.yml on cccntu/llmproc

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llmproc-0.9.2-py3-none-any.whl.

File metadata

  • Download URL: llmproc-0.9.2-py3-none-any.whl
  • Upload date:
  • Size: 161.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llmproc-0.9.2-py3-none-any.whl:

| Algorithm | Hash digest |
|---|---|
| SHA256 | ae5821b7956024b939c392a5c9e59e0cf0c50af9bb272acec3e78ee5c8ec46c4 |
| MD5 | 6b7e23b1e608f113adc6e0b786d3be70 |
| BLAKE2b-256 | d394f9802907513e27c75b9249c977c4c4a0cc70c08671c6d4f9ede58422977e |


Provenance

The following attestation bundles were made for llmproc-0.9.2-py3-none-any.whl:

Publisher: release.yml on cccntu/llmproc

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
