
🐴 Bridle

Harness an agent into a deterministic program with typed I/O.

Bridle is a Python library for writing agents the way you'd write any program: control flow in code, judgment in the model. Three primitives are typed holes the model fills with values — step, branch, loop. Two decorators compose them: @agent, @tool. Wrappers add behavior: cache, retry, timeout, with_model, fallback, mock, log.

It reads like async/await for LLM decisions.

Status

v0.1.0 — Anthropic-only, sync, single-agent. See What's next for the v0.2.0 roadmap.

Install

pip install bridle-ai

The PyPI distribution is bridle-ai; the import is bridle.

Quickstart

import bridle
from bridle import agent, branch, cache, loop, retry, step, tool
from bridle.models.anthropic import install
from pydantic import BaseModel


class Topic(BaseModel):
    title: str


class Plan(BaseModel):
    topics: list[Topic]


class Source(BaseModel):
    url: str
    summary: str


class Brief(BaseModel):
    headline: str
    body: str


@tool
def search(query: str) -> list[str]:
    """Search the web. Returns up to 10 result URLs."""
    ...


@agent(input=str, output=Brief, model="claude-sonnet-4-6")
def brief_writer(topic: str) -> Brief:
    plan = cache(step("draft a research plan", schema=Plan, context=topic))

    sources: list[Source] = []
    for t in plan.topics:
        found = loop(
            f"gather sources on {t.title}",
            schema=Source,
            until=lambda acc: len(acc) >= 3,
            tools=[search],
        )
        sources.extend(found)

    if not branch("is the evidence sufficient?", context=sources):
        return brief_writer(f"{topic} — go deeper on whatever's underdocumented")

    return retry(step("write the brief", schema=Brief, context=(topic, sources)), attempts=2)


install()  # registers the Anthropic adapter as the active model client
result = bridle.resolve(brief_writer("the weather on Mars"))
print(result.headline)

The model never picks the next state. It produces typed values; your Python decides where to go next. Every primitive is mockable; every run is observable.

The four primitives

| Primitive | What it does |
| --- | --- |
| `step(prompt, *, schema, context=None, tools=())` | The atomic unit: the model works toward a typed return, calling tools as needed, until its output satisfies the schema. |
| `branch(prompt, *, schema=bool, context=None)` | A step constrained to a single typed decision. Defaults to `bool`; pass an `Enum` or `Literal` for multi-way. |
| `loop(prompt, *, schema, until, tools=(), max_iterations=32)` | Repeat a step until a pure-Python predicate is satisfied. Raises `LoopExhaustedError` at the cap. |
| `@agent(input=, output=, model=, token_budget=)` | Wrap a Python function whose body uses primitives. Validates I/O. Inner steps inherit the agent's model. |

@tool registers a Python function as a tool the model can call. The parameter schema is extracted from type hints; the docstring becomes the description.
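The extraction @tool performs can be approximated in plain Python. This is an illustrative sketch only (the helper `tool_schema` is hypothetical, not Bridle's internals): type hints become the parameter schema, the docstring becomes the description.

```python
import inspect
from typing import get_type_hints


def tool_schema(fn):
    """Approximate how a @tool-style decorator might derive a tool schema
    from a function's type hints and docstring (illustrative only)."""
    hints = get_type_hints(fn)
    # Map each parameter name to a readable type name.
    params = {
        name: getattr(hints[name], "__name__", str(hints[name]))
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
        "returns": str(hints.get("return", "")),
    }


def search(query: str) -> list[str]:
    """Search the web. Returns up to 10 result URLs."""
    ...


schema = tool_schema(search)
# schema["parameters"] is {"query": "str"}; the docstring is the description.
```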

The wrapper algebra

Every wrapper takes a Call and returns a Call. They compose freely.

cache(retry(timeout(step("..."), seconds=10), attempts=3))
| Wrapper | What it does |
| --- | --- |
| `cache(call, *, key=None, backend=None, ttl=None)` | Memoize results. The default key hashes kind + schema + context + prompt + tools. |
| `retry(call, *, attempts=3, on=BridleError, backoff=None)` | Re-evaluate on failure. Each attempt clones the inner call. |
| `timeout(call, *, seconds)` | Abort if the call runs past the deadline. Raises `bridle.TimeoutError`. |
| `with_model(call, "model-id")` | Per-call model override (highest layer of model resolution). |
| `fallback(call, *alternates)` | Try each in turn until one succeeds. |
| `mock(call, value)` | Replace dispatch with a constant. For tests. |
| `log(call, *, level="INFO")` | Stream the wrapper's subtree of the trace to a Python logger. |
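The Call-in, Call-out shape is what makes the composition work. A minimal plain-Python sketch of the same algebra, with a call modeled as a zero-argument callable (these stand-in `retry` and `cache` functions are illustrative, not Bridle's implementations):

```python
def retry(call, *, attempts=3):
    """Re-run a call up to `attempts` times on any exception."""
    def wrapped():
        last_error = None
        for _ in range(attempts):
            try:
                return call()
            except Exception as error:
                last_error = error
        raise last_error
    return wrapped


def cache(call):
    """Memoize a call's first successful result."""
    memo = {}
    def wrapped():
        if "value" not in memo:
            memo["value"] = call()
        return memo["value"]
    return wrapped


attempts_made = {"n": 0}

def flaky_call():
    attempts_made["n"] += 1
    if attempts_made["n"] < 2:
        raise RuntimeError("transient failure")
    return "ok"


composed = cache(retry(flaky_call, attempts=3))
first = composed()   # retry absorbs the first failure
second = composed()  # served from the memo; flaky_call is not re-run
```

Because each wrapper returns the same shape it accepts, nesting order is the only thing that changes behavior: `cache(retry(...))` caches the recovered result, while `retry(cache(...))` would retry around a cache lookup.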

Model selection

Bridle ships zero default models. Set one of three layers:

bridle.configure(model="claude-sonnet-4-6")     # process-wide
@agent(model="claude-opus-4-7", ...)             # per-agent
with_model(step("..."), "claude-haiku-4-5")     # per-call (highest precedence)

Resolution order: per-call → per-agent → process. If none is set, Bridle raises ConfigurationError with a message that lists all three places.
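The precedence rule is simple enough to state as a first-match scan. A sketch of the documented order (the function and its `RuntimeError` are hypothetical stand-ins; Bridle raises `ConfigurationError`):

```python
def resolve_model(per_call=None, per_agent=None, process_wide=None):
    """Return the first configured layer: per-call, then per-agent,
    then process-wide. Raise if no layer is set."""
    for configured in (per_call, per_agent, process_wide):
        if configured is not None:
            return configured
    raise RuntimeError("no model configured at any of the three layers")


# Per-agent beats process-wide; per-call would beat both.
chosen = resolve_model(per_agent="claude-opus-4-7", process_wide="claude-sonnet-4-6")
```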

The trace

Every primitive emits structured events into a Trace you can inspect, replay, or stream.

from bridle import Trace
from bridle.trace import set_active_trace

trace = Trace()
set_active_trace(trace)

bridle.resolve(brief_writer("..."))

print(trace.to_jsonl())          # one JSON line per event
print(trace.tree())              # nested view: agent → step → model_request → ...
trace.subscribe(lambda e: ...)   # live observer

Event kinds: call_start, call_end, model_request, model_response, tool_call, tool_result, cache_hit, cache_miss, retry.
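The inspect/replay/stream triad can be illustrated with a toy event sink. `MiniTrace` below is a stand-in for the idea only, not Bridle's `Trace`: events append to a list, live subscribers are notified on emit, and `to_jsonl` writes one JSON object per line.

```python
import json


class MiniTrace:
    """Toy trace: collect events, notify subscribers, serialize as JSONL."""

    def __init__(self):
        self.events = []
        self._subscribers = []

    def subscribe(self, callback):
        # Live observer: called once per event as it is emitted.
        self._subscribers.append(callback)

    def emit(self, kind, **data):
        event = {"kind": kind, **data}
        self.events.append(event)
        for callback in self._subscribers:
            callback(event)

    def to_jsonl(self):
        # One JSON line per event.
        return "\n".join(json.dumps(event) for event in self.events)


trace = MiniTrace()
seen = []
trace.subscribe(seen.append)
trace.emit("call_start", name="brief_writer")
trace.emit("cache_hit", key="abc123")
```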

Token budgets

Budgets are soft and per-agent: usage accumulates after each model response, and the next step raises TokenBudgetExceededError if it would breach the budget.

@agent(input=Q, output=R, model="claude-sonnet-4-6", token_budget=100_000)
def deep_research(q: Q) -> R:
    ...
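The "soft" accounting amounts to two hooks: record usage after each response, refuse the next step once the cap is crossed. A sketch of that bookkeeping (both classes here are hypothetical stand-ins, not Bridle's internals):

```python
class TokenBudgetExceeded(Exception):
    """Stand-in for Bridle's TokenBudgetExceededError (sketch only)."""


class BudgetMeter:
    """Soft budget: an in-flight step may overshoot, but the next step
    is refused once accumulated usage reaches the cap."""

    def __init__(self, budget: int):
        self.budget = budget
        self.used = 0

    def check_before_step(self):
        # Called before dispatching the next step.
        if self.used >= self.budget:
            raise TokenBudgetExceeded(f"used {self.used} of {self.budget} tokens")

    def record_response(self, tokens: int):
        # Called after each model response.
        self.used += tokens


meter = BudgetMeter(budget=1_000)
meter.check_before_step()   # allowed: nothing used yet
meter.record_response(600)
meter.check_before_step()   # still under budget
meter.record_response(600)  # usage is now 1_200, over the soft cap
```

A subsequent `check_before_step()` would raise; the response that pushed usage over the line is never clawed back, which is what makes the budget soft rather than hard.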

Caching

Caching is opt-in per call via the cache wrapper. Backends ship for memory and file; Redis is reserved for v0.2.0.

from bridle.cache.file import FileCache
bridle.set_cache(FileCache("./.bridle-cache"))

# Now any cache(...) wrapped call writes to disk.
plan = cache(step("draft a plan", schema=Plan, context=topic))

The default key is deterministic across runs: it hashes the call's kind, schema fingerprint, context, prompt, and tools.
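Determinism across runs follows from hashing a canonical encoding of those fields. A sketch of that idea (the function name, field names, and encoding are illustrative assumptions, not Bridle's actual key scheme):

```python
import hashlib
import json


def default_cache_key(kind, schema_name, context, prompt, tools=()):
    """Hash the call's identifying fields in a canonical, order-stable
    encoding so the same call yields the same key on every run."""
    payload = json.dumps(
        {
            "kind": kind,
            "schema": schema_name,
            "context": context,
            "prompt": prompt,
            "tools": sorted(tools),
        },
        sort_keys=True,   # canonical field order
        default=str,      # fall back to str() for non-JSON values
    )
    return hashlib.sha256(payload.encode()).hexdigest()


k1 = default_cache_key("step", "Plan", "mars weather", "draft a plan")
k2 = default_cache_key("step", "Plan", "mars weather", "draft a plan")
k3 = default_cache_key("step", "Plan", "venus weather", "draft a plan")
# k1 == k2; changing any field (here, the context) changes the key.
```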

What's next

v0.2.0 (planned):

  • Async-first execution
  • parallel primitive, native to async (restoring the four primitives the original brief proposed)
  • Multi-agent coordination
  • Streaming primitives
  • Redis-backed cache
  • Sealed inner traces (seal=True on @agent)
  • Model abstraction beyond Anthropic
  • True durable execution for long-running agents

License

MIT.
