The tool-calling loop for LLM agents — iterator-first, protocol-hooked, one dependency.

looplet

CI codecov PyPI version Python 3.11+ License: Apache 2.0 Status: Beta

Not pydantic-ai-harness — that's a capability library for pydantic-ai. looplet is a framework-agnostic loop library. Works with any LLM backend, one dependency.

The tool-calling loop you can actually step through.

Every agent framework gives you agent.run(). looplet gives you for step in loop(...): — and that's the whole product.

from looplet import composable_loop

for step in composable_loop(llm=llm, tools=tools, task=task, ...):
    print(step.pretty())          # → "#1 ✓ search(query='…') → 12 items [182ms]"
    if step.tool_result.error:
        break                     # your loop, your control flow

Who this is for

  • You're building an agent for a non-trivial domain (security, research, ops, robotics) and framework magic is in your way.
  • You want to iterate on behaviour at a single step — add a hook, filter a result, veto a tool call — without learning a graph DSL.
  • You need vendor independence: no Claude-only, no AWS-only, no "everything through Pydantic."
  • You're a library author who wants to embed a reactive loop in your own package without pulling in dozens of transitive deps.

How it compares

|  | looplet | claude-agent-sdk | strands-agents | pydantic-ai | langgraph |
| --- | --- | --- | --- | --- | --- |
| You own the loop (iterator) | for step in loop(...) | ❌ async stream | ❌ closed agent() | run_sync() | ❌ graph |
| Provider-agnostic | ✅ | ❌ Claude-only |  |  |  |
| No subprocess / bundled binary | ✅ |  |  |  |  |
| Hooks as Protocol objects | ✅ | ⚠️ dict callbacks | ⚠️ inheritance | ⚠️ Capability | ⚠️ nodes |
| Fail-closed permissions | ✅ built in | ⚠️ hooks only |  | ⚠️ deferred tools |  |
| Crash-resume checkpoints | ✅ |  |  |  | ⚠️ add-on |
| Built-in evals | ✅ pytest-style |  |  |  |  |
| OSI license | Apache-2.0 | Anthropic terms | Apache-2.0 | MIT | MIT |
| Core runtime deps | 1 | CLI binary | several | many | many |

Install

pip install looplet                    # core only
pip install "looplet[openai]"          # + OpenAI / Ollama / any OAI-compat
pip install "looplet[anthropic]"       # + Anthropic
pip install "looplet[all]"             # both

60-second example

from looplet import composable_loop, LoopConfig, DefaultState, BaseToolRegistry, ToolSpec
from looplet.backends import OpenAIBackend
from openai import OpenAI

llm = OpenAIBackend(OpenAI(), model="gpt-4o-mini")

tools = BaseToolRegistry()
tools.register(ToolSpec(name="greet", description="Greet someone.",
                        parameters={"name": "str"},
                        execute=lambda *, name: {"greeting": f"Hello, {name}!"}))
tools.register(ToolSpec(name="done", description="Finish.",
                        parameters={"answer": "str"},
                        execute=lambda *, answer: {"answer": answer}))

for step in composable_loop(
    llm=llm, tools=tools,
    state=DefaultState(max_steps=5),
    config=LoopConfig(max_steps=5),
    task={"goal": "Greet Alice, then finish."},
):
    print(step.pretty())

Runs against any OpenAI-compatible endpoint (OpenAI, Ollama, Together, Groq, vLLM, …). Set OPENAI_BASE_URL and OPENAI_MODEL to your provider.
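
For example, pointing the same code at a local Ollama server takes two environment variables. The base URL below assumes Ollama's default port (11434) and its OpenAI-compatible endpoint; the model name is whatever you have pulled locally:

```shell
export OPENAI_BASE_URL=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
export OPENAI_MODEL=llama3.1                       # any model you have pulled locally

python -m looplet.examples.hello_world
```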

What looplet gives you

  • Composable loop — composable_loop yields Steps you can observe or interrupt. Hooks (pre_prompt, pre_dispatch, post_dispatch, check_done, should_stop, on_loop_end) layer behaviour without forking the loop.
  • Tool registry — ToolSpec + JSON-schema catalog, concurrent batching, auto-ctx threading, structured ToolError categories.
  • Permissions — declarative PermissionEngine with ALLOW/DENY/ASK rules, argument matchers, human-in-the-loop handler, audit log.
  • Context management — compact_chain of prune / summarise / truncate strategies triggered on budget pressure.
  • Checkpoints — FileCheckpointStore + resume_loop_state() preserve session log, conversation, step offset, and budgets across crash-resume.
  • Provenance — ProvenanceSink captures the exact prompts the LLM saw and the trajectory the loop took, in a diff-friendly directory.
  • Evals — pytest-style eval_* functions discovered, batched, and run from the CLI. Your debug output becomes your regression suite.
  • MCP + skills — MCPToolAdapter bridges MCP servers without the MCP SDK; Skill bundles tools + prompt fragment + context.
  • Backends — sync / async / streaming adapters for OpenAI and Anthropic. Bring your own via the LLMBackend protocol.
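
Because hooks are Protocol objects rather than base classes, any object with a matching method satisfies the contract. A minimal sketch of the idea (the hook signature here is an assumption for illustration, not looplet's published API):

```python
from typing import Any, Protocol


class PreDispatchHook(Protocol):
    """Shape of a pre_dispatch hook. The real looplet signature may differ;
    this sketch only illustrates the Protocol-based hook style."""

    def pre_dispatch(self, tool_name: str, args: dict[str, Any]) -> bool: ...


class DenyShell:
    """Veto bash tool calls, allow everything else. No inheritance needed:
    any class with a matching pre_dispatch method satisfies the Protocol."""

    def pre_dispatch(self, tool_name: str, args: dict[str, Any]) -> bool:
        return tool_name != "bash"


hook: PreDispatchHook = DenyShell()
print(hook.pre_dispatch("bash", {"cmd": "rm -rf /tmp/x"}))  # → False (vetoed)
print(hook.pre_dispatch("search", {"query": "docs"}))       # → True
```

Structural typing is what keeps hooks composable: you can stack several small objects like this without a shared base class or a graph DSL.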

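The "fail-closed" semantics behind the permissions feature can be shown generically. This is not looplet's actual PermissionEngine API, just a sketch of what fail-closed means: a tool call that matches no rule is denied, never silently allowed.

```python
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK = "ask"  # defer to a human-in-the-loop handler


# Hypothetical rule table for illustration: first matching tool name wins.
RULES: list[tuple[str, Decision]] = [
    ("search", Decision.ALLOW),
    ("bash", Decision.ASK),
]


def evaluate(tool_name: str) -> Decision:
    for name, decision in RULES:
        if name == tool_name:
            return decision
    return Decision.DENY  # fail closed: unmatched tools are denied, not allowed


print(evaluate("search").value)     # → allow
print(evaluate("bash").value)       # → ask
print(evaluate("delete_db").value)  # → deny
```

The design choice is the last line of evaluate: a permission system that defaults to ALLOW fails open the moment a new tool is registered without a rule.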
Learn more

| Doc | What's in it |
| --- | --- |
| docs/tutorial.md | Build your first agent in 5 steps |
| HOOK_GUIDE.md | Writing and composing hooks |
| docs/evals.md | pytest-style agent evaluation |
| PROVENANCE_GUIDE.md | Capturing prompts + trajectories |
| docs/recipes.md | Ollama, OTel, MCP, cost accounting, checkpoints |
| ROADMAP.md | What's planned, what's frozen, what's out of scope |
| CONTRIBUTING.md | Dev setup, conventions, PR checklist |
| CHANGELOG.md | Release notes |

Every public symbol has a docstring and the package ships a py.typed marker.

Examples

python -m looplet.examples.hello_world                            # 30-line starter
python -m looplet.examples.coding_agent "implement fizzbuzz"      # bash/read/write/edit/grep
python -m looplet.examples.coding_agent --trace ./traces/         # save full trajectory

Stability

looplet follows SemVer. Pre-1.0, minor versions may introduce breaking changes as the design stabilises — pin conservatively:

looplet>=0.1.6,<0.2

See ROADMAP.md § v1.0 API contract for what's frozen and the path to 1.0.

Contributing

Contributions welcome — bug reports, docs, backends, examples, evals. Start with CONTRIBUTING.md and docs/good-first-issues.md. Security issues go through SECURITY.md.

License

Apache 2.0 — see LICENSE.
