LLM-in-the-loop rule-based expert system

theow

þēow - Old English for "servant" or "bondman."


Theow is an observable, programmatic LLM agent that auto-heals failing Python functions at runtime. Wrap any function with @theow.mark(), and when it raises, theow intercepts the exception, diagnoses it, and retries transparently. Every LLM call, tool execution, and token spend is traced via OpenTelemetry. Zero prompt engineering. Zero code changes beyond the decorator.

from theow import Theow
from theow.tools import read_file, write_file, run_command

agent = Theow(llm="anthropic/claude-sonnet-4-20250514")
agent.tool()(read_file)
agent.tool()(write_file)

@agent.mark(
    context_from=lambda task, exc: {"stage": "deploy", "error": str(exc)},
    explorable=True,
)
def deploy(task):
    ...  # when this fails, theow heals it

Why theow

Theow simplifies programmatic LLM agents through three complementary layers.

flowchart LR
    D["@mark'd function"] -->|raises| R["Resolver"]
    R -->|rule found| A["Execute action"]
    R -->|no rule| E["Explorer"]
    E -->|LLM diagnoses & writes rule| A
    A --> V["Re-run function"]
    V -->|pass| Done
    V -->|new error| R

Layer 1: Conversational agent

Theow wraps PydanticAI and the GitHub Copilot SDK into a single interface. Give it a prompt, a set of tools, and a token/call budget. Theow runs a conversation loop with the LLM, executing tool calls until the task is done or budget runs out.

agent = Theow(llm="anthropic/claude-sonnet-4-20250514")
agent.tool()(read_file)
agent.tool()(run_command)

agent.run("Fix the broken config file in ./config/", tools=agent.get_tools())

PydanticAI can do this on its own. Theow wraps it into a simpler API, adds a unified interface across the PydanticAI providers (15+) and the Copilot SDK (which is a different protocol entirely), and manages a custom conversation loop with signals, budget tracking, and nudging. You can optionally enable middleware for guardrails on LLM input/output and Logfire for OpenTelemetry instrumentation. The next two layers are where theow diverges from plain LLM wrappers.
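The conversation loop is simple in outline: call the LLM, execute any tool calls it requests, and stop when the task finishes or the budget runs out. A schematic sketch of that loop (not theow's internals; `run_loop`, `fake_llm`, and the message shapes are illustrative stand-ins):

```python
# Schematic of a budgeted conversation loop: call the LLM, execute any
# tool calls it requests, stop when done or when the call budget is spent.
# All names here are illustrative stand-ins, not theow's actual internals.

def run_loop(llm, tools, prompt, max_calls=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_calls):                      # budget: cap LLM calls
        reply = llm(messages)
        if reply["type"] == "tool_call":            # model wants a tool
            result = tools[reply["name"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:                                       # plain answer: task done
            return reply["content"]
    return None                                     # budget exhausted

# Stub LLM: asks for one tool call, then answers.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "read_file", "args": {"path": "cfg"}}
    return {"type": "answer", "content": "fixed"}

print(run_loop(fake_llm, {"read_file": lambda path: "contents"}, "Fix it"))  # → fixed
```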

Layer 2: Explorer

The explorer takes an error context and diagnoses the problem using internal prompts and whatever tools you've registered. No prompt engineering required.

Beyond finding a fix, the explorer converts the LLM's solution into a rule-action pair: a YAML rule that pattern-matches the error, paired with a Python action that fixes it. These pairs persist to disk and get indexed in ChromaDB for retrieval. Remote store support is planned. See how exploration works for the full flow.
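A rule-action pair might look roughly like the following. This is a hypothetical illustration of the shape only; the field names are assumptions, not theow's actual schema:

```yaml
# Hypothetical rule (field names illustrative, not theow's real schema):
# pattern-match the error text, point at a Python action that repairs it.
name: missing-config-file
tags: [filesystem, config]
match:
  error_pattern: "FileNotFoundError: .*config\\.yaml"
action:
  type: python
  module: actions.restore_config   # action module written by the explorer
```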

agent = Theow(llm="anthropic/claude-sonnet-4-20250514")
agent.tool()(read_file)
agent.tool()(write_file)

context = {
    "error": "FileNotFoundError: config.yaml not found",
    "stderr": "Traceback ...\nFileNotFoundError: config.yaml not found",
}

rule = agent.explore(context, tools=agent.get_tools())
# rule is a validated Rule object, or None if exploration failed

See explore() API reference.

Layer 3: Resolver

The resolver checks if a matching rule already exists before calling the LLM. It tries explicit name/tag filtering first, then semantic search over the rule database. If a rule matches, its action runs immediately. No LLM call, no tokens spent.
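The two-stage lookup order can be sketched schematically. Here the "semantic" stage is a toy word-overlap score standing in for theow's ChromaDB retrieval; everything is illustrative:

```python
# Two-stage lookup sketch: explicit name/tag filtering first, similarity
# search second. Toy stand-in for theow's resolver; names are illustrative.

rules = [
    {"name": "missing-config", "tags": {"config"}, "text": "config.yaml not found"},
    {"name": "bad-perms", "tags": {"filesystem"}, "text": "permission denied"},
]

def resolve(query_text, name=None, tags=None):
    # Stage 1: cheap explicit filtering by name or tags.
    for r in rules:
        if name == r["name"] or (tags and tags & r["tags"]):
            return r
    # Stage 2: fall back to similarity over rule text (toy scorer in
    # place of real embedding search).
    def score(r):
        return len(set(query_text.split()) & set(r["text"].split()))
    best = max(rules, key=score)
    return best if score(best) > 0 else None

print(resolve("config.yaml not found")["name"])   # → missing-config
```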

agent = Theow(theow_dir=".theow")

context = {
    "error": "FileNotFoundError: config.yaml not found",
    "stderr": "Traceback ...\nFileNotFoundError: config.yaml not found",
}

rule = agent.resolve(context)
if rule:
    agent.execute_rule(rule, context)

See resolve() API reference and execute_rule() API reference.

The resolver can optionally invoke the explorer when no rule matches, creating a closed loop: fail -> resolve -> (miss) -> explore -> create rule -> resolve next time. First failure: the LLM investigates and writes a rule. Second failure of the same kind: the rule fires instantly. As rules accumulate, LLM calls decrease. Since failure modes are finite, they may reach zero.
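That closed loop can be sketched with a toy in-memory version. The rule store and explorer here are stand-ins, not theow's implementation; the point is only that exploring once makes every later identical failure free:

```python
# Toy closed loop: look up a cached rule for an error; on a miss, "explore"
# (stand-in for the LLM) and persist a rule, so the next identical failure
# resolves without any LLM call. Illustrative only, not theow's code.

rule_store = {}          # error signature -> fix action
llm_calls = 0            # count simulated LLM invocations

def explore(error):
    global llm_calls
    llm_calls += 1       # exploring costs an LLM call
    rule_store[error] = lambda: "fixed: " + error   # persist new rule
    return rule_store[error]

def resolve_or_explore(error):
    action = rule_store.get(error)    # resolver: cache hit costs nothing
    if action is None:
        action = explore(error)       # miss: explorer writes a rule
    return action()

resolve_or_explore("FileNotFoundError")   # first failure: 1 LLM call
resolve_or_explore("FileNotFoundError")   # second failure: rule fires, 0 calls
print(llm_calls)   # → 1
```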

Rules can also define LLM actions. Instead of running Python code, the matched rule triggers a conversation with a pre-stored prompt from your prompt library. This gives you deterministic routing with dynamic execution. For structural awareness during exploration, you can plug in CodeGraph, a tree-sitter based code graph that lets the LLM query symbols, call chains, and class hierarchies instead of reading entire files.

The decorator

The resolver-explorer pair is assembled into @theow.mark(), which wraps any Python function for automatic recovery:

@agent.mark(
    context_from=lambda task, exc: {"error": str(exc), "task_id": task.id},
    explorable=True,    # allow LLM exploration on novel errors
    max_retries=3,      # rules to try per error
    max_depth=3,        # chase cascading errors
)
def process(task):
    ...

When process() raises, the decorator intercepts the exception, calls context_from with the original arguments and the exception to build a context dict, automatically enriches it with the full traceback and exception type, then hands it to the resolver. If no rule matches and explorable=True, the explorer takes over. If a fix works, the function is retried transparently within the same call stack, so the caller never knows recovery happened.
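The interception pattern itself is plain Python. A minimal stand-in sketch of the mechanism (the resolver here is a stub and theow's real decorator does much more, but the catch/enrich/fix/retry shape is the same):

```python
import functools

# Minimal sketch of the @mark mechanism: catch the exception, build a
# context, ask a (stubbed) resolver for a fix, apply it, retry in place.
# Illustrative only; not theow's actual implementation.

def mark(context_from, resolver, max_retries=3):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(max_retries):
                try:
                    return fn(*args, **kwargs)     # caller sees a normal call
                except Exception as exc:
                    ctx = context_from(*args, exc=exc)
                    ctx["exc_type"] = type(exc).__name__   # auto-enrichment
                    fix = resolver(ctx)
                    if fix is None:
                        raise                      # no rule: re-raise as-is
                    fix()                          # apply the action, retry
            return fn(*args, **kwargs)
        return wrapper
    return decorate

# Demo: a function that fails until shared state is repaired.
state = {"ok": False}

@mark(lambda task, exc: {"error": str(exc)},
      resolver=lambda ctx: lambda: state.update(ok=True))
def process(task):
    if not state["ok"]:
        raise RuntimeError("not ready")
    return "done"

print(process("t1"))   # → done
```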

The decorator also handles deep recovery (when a fix reveals a new error underneath, theow keeps the changes and continues against the new error), model escalation (cheap model first, strong model as fallback), and lifecycle hooks (setup/teardown callbacks around each recovery attempt). For the full set of configuration options including provider setup, see the linked docs. Theow also ships with a CLI for running explorations from the command line.
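Model escalation in particular follows a simple pattern: attempt recovery with the cheap model first and escalate to the strong one only on failure. A hedged sketch under that assumption (the model names and `try_fix` are illustrative, not theow's API):

```python
# Escalation sketch: try each model tier in order, stopping at the first
# that produces a fix. Names here are illustrative stand-ins.

def recover(context, try_fix, models=("cheap-model", "strong-model")):
    for model in models:
        result = try_fix(model, context)   # run one recovery attempt
        if result is not None:
            return model, result           # report which tier succeeded
    return None, None                      # all tiers exhausted

# Stub: the cheap model can't solve this context, the strong model can.
def try_fix(model, context):
    return "patched" if model == "strong-model" else None

print(recover({"error": "E"}, try_fix))   # → ('strong-model', 'patched')
```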

Quick start

pip install theow

from theow import Theow
from theow.tools import read_file, write_file, run_command

agent = Theow(
    theow_dir=".theow",
    llm="anthropic/claude-sonnet-4-20250514",
)

agent.tool()(read_file)
agent.tool()(write_file)
agent.tool()(run_command)

@agent.mark(
    context_from=lambda task, exc: {"error": str(exc)},
    explorable=True,
)
def process(task):
    ...

Set your provider's API key and enable exploration:

ANTHROPIC_API_KEY=sk-... THEOW_EXPLORE=1 python my_script.py
