LLMProc

A simple framework for LLM-powered applications

LLMProc: a Unix-inspired runtime that treats LLMs as processes. Build production-ready LLM programs with fully customizable YAML/TOML files, or experiment with meta-tools (fork/spawn, goto, and more) via the Python SDK. Learn more at llmproc.com.

🔥 Check out our LLMProc GitHub Actions to see LLMProc successfully automating code implementation, conflict resolution, and more!

📋 Latest Updates: See v0.9.3 Release Notes for cost control features, enhanced callbacks, and more.

LLMProc GitHub Actions

Automate your development workflow with LLMProc-powered GitHub Actions:

  • @llmproc /resolve - Automatically resolve merge conflicts
  • @llmproc /ask <question> - Answer questions on issues/PRs
  • @llmproc /code <request> - Implement features from comments

[!TIP] Quick Setup: Run this command in your repository to automatically install workflows and get setup instructions:

uvx --from llmproc llmproc-install-actions
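
Once the workflows are installed, trigger them by commenting on an issue or pull request. The request text below is only an illustration:

@llmproc /code Add input validation to the CLI argument parsing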

Why LLMProc over Claude Code?

| Feature | LLMProc | Claude Code |
| --- | --- | --- |
| License / openness | ✅ Apache-2.0 | ❌ Closed, minified JS |
| Token overhead | ✅ Zero. You send exactly what you want | ❌ 12-13k tokens (system prompt + builtin tools) |
| Custom system prompt | ✅ Yes | 🟡 Append-only (via CLAUDE.md) |
| Tool selection | ✅ Opt-in; pick only the tools you need | 🟡 Opt-out via --disallowedTools* |
| Tool schema override | ✅ Supports alias and description overrides | ❌ Not possible |
| Configuration | ✅ Single YAML/TOML "LLM Program" | 🟡 Limited config options |
| Scripting / SDK | ✅ Python SDK with function tools | ❌ JS-only CLI |

*--disallowedTools allows removing builtin tools, but not MCP tools.

Installation

pip install llmproc

Run without installing

uvx llmproc

[!IMPORTANT] You'll need an API key from your chosen provider (Anthropic, OpenAI, etc.). Set it as an environment variable: export ANTHROPIC_API_KEY=your_key_here

Quick Start

Python usage

# Full example: examples/multiply_example.py
import asyncio
from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}  # Expected: π * e = 8.539734222677128


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")

    print(process.get_last_message())


if __name__ == "__main__":
    asyncio.run(main())

Configuration

[!NOTE] LLMProc supports TOML, YAML, and dictionary-based configurations. Check out the examples directory for various configuration patterns and the YAML Configuration Schema for all available options.
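
The sketch below shows what a minimal YAML program might look like, mirroring the Python Quick Start above. The section and key names are assumptions based on the LLMProgram constructor arguments; see the YAML Configuration Schema for the authoritative field names.

# program.yaml - illustrative sketch; field names are assumptions, check the schema
model:
  name: "claude-3-7-sonnet-20250219"
  provider: "anthropic"

prompt:
  system_prompt: "You're a helpful assistant."

parameters:
  max_tokens: 1024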

CLI Usage

  • llmproc - Execute an LLM program. Use --json mode to pipe output for automation (see GitHub Actions examples and the sketch below)
  • llmproc-demo - Interactive debugger for LLM programs/processes
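
As a rough sketch of automation with --json output (the program-file argument is an assumption about the CLI interface; check the CLI documentation for the exact invocation):

llmproc ./program.yaml --json > result.json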

Features

Production Ready

  • Claude 3.7/4 models with full tool calling support
  • Python SDK - Register functions as tools with automatic schema generation
  • Async and sync APIs - Use await program.start() or program.start_sync() (see the sketch after this list)
  • TOML/YAML configuration - Define LLM programs declaratively
  • MCP protocol - Connect to external tool servers
  • Built-in tools - File operations, calculator, spawning processes
  • Tool customization - Aliases, description overrides, parameter descriptions
  • Automatic optimizations - Prompt caching, retry logic with exponential backoff
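
As a sketch of the synchronous API mentioned above, assuming the sync process exposes the same run()/get_last_message() interface as the async Quick Start example:

from llmproc import LLMProgram

program = LLMProgram(
    model_name="claude-3-7-sonnet-20250219",
    provider="anthropic",
    system_prompt="You're a helpful assistant.",
    parameters={"max_tokens": 1024},
)
# start_sync() is the synchronous counterpart of `await program.start()`.
process = program.start_sync()
# Assumption: run() blocks on a sync process and the reply is read the same way.
process.run("Summarize the Unix process model in one sentence.")
print(process.get_last_message())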

In Development

  • OpenAI/Gemini models - Basic support, tool calling not yet implemented
  • Streaming API - Real-time token streaming (planned)
  • Process persistence - Save/restore conversation state

Experimental Features

These cutting-edge features bring Unix-inspired process management to LLMs:

  • Process Forking - Create copies of running LLM processes with full conversation history, enabling parallel exploration of different solution paths

  • Program Linking - Connect multiple LLM programs together, allowing specialized models to collaborate (e.g., a coding expert delegating to a debugging specialist)

  • GOTO/Time Travel - Reset conversations to previous states, perfect for backtracking when the LLM goes down the wrong path or for exploring alternative approaches

  • File Descriptor System - Handle massive outputs elegantly with Unix-like pagination, reference IDs, and smart chunking - no more truncated responses

  • Tool Access Control - Fine-grained permissions (READ/WRITE/ADMIN) for multi-process environments, ensuring security when multiple LLMs collaborate

  • Meta-Tools - LLMs can modify their own runtime parameters! Create tools that let models adjust temperature, max_tokens, or other settings on the fly for adaptive behavior

Documentation

📚 Documentation Index - Comprehensive guides and API reference


Design Philosophy

LLMProc treats LLMs as processes in a Unix-inspired runtime framework:

  • LLMs function as processes that execute prompts and make tool calls
  • Tools operate at both user and kernel levels, with system tools able to modify process state
  • The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, IPC, file descriptors, and more
  • This architecture provides a foundation for evolving toward a more complete LLM runtime

For in-depth explanations of these design decisions, see our API Design FAQ.

License

Apache License 2.0

Download files

Download the file for your platform.

Source Distribution

llmproc-0.9.4.tar.gz (493.8 kB)


Built Distribution


llmproc-0.9.4-py3-none-any.whl (166.0 kB)


File details

Details for the file llmproc-0.9.4.tar.gz.

File metadata

  • Download URL: llmproc-0.9.4.tar.gz
  • Size: 493.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llmproc-0.9.4.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 8b691ccca4a286d4aa2764525db7b348fd71489fd7241ca40a91347215110fe6 |
| MD5 | 1b55c9b4998d8df73f0bc128dbe7aac5 |
| BLAKE2b-256 | 51735f75e7cf8f7c3b3586373ecb866748455f1a1bef57c9dc24730d12765547 |


Provenance

The following attestation bundles were made for llmproc-0.9.4.tar.gz:

Publisher: release.yml on cccntu/llmproc

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llmproc-0.9.4-py3-none-any.whl.

File metadata

  • Download URL: llmproc-0.9.4-py3-none-any.whl
  • Size: 166.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llmproc-0.9.4-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | baf8073c6982e69e3de422a53297a35fa2be10ce258bc81067c1c35e347b984b |
| MD5 | 9bd99049bbb0a764cdd8dde76db6912d |
| BLAKE2b-256 | 34339c527a41727c96bafcfed2966e5135e225aa1b83114807ed9853cc3b2d78 |


Provenance

The following attestation bundles were made for llmproc-0.9.4-py3-none-any.whl:

Publisher: release.yml on cccntu/llmproc

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
