LLM in a loop with tools, MCP, sessions, and structured outputs.
InnerLoop
Pure Python SDK for LLM agent loops.
uv add innerloop # or: pip install innerloop
Why InnerLoop?
- The agentic loop — The loop is the fundamental building block. Tools, iteration, and structured output handled for you.
- Pure Python — No Node.js, no subprocess, no external service. Just uv add or pip install and go.
- Auditable — Minimal codebase, optional dependencies. Easy to review and trust.
- Provider agnostic — Same API across 20+ providers: Anthropic, OpenAI, Google, Azure, Mistral, xAI, Groq, Ollama, and more.
- Built-in tools — File system, bash, web, and todos.
- Skills — Claude Code-compatible prompt templates for domain expertise on demand.
- Context overflow protection — Protect your agent's context window with automatic tool truncation.
- Security-conscious — Workdir sandboxing for file tools. Bash allow/deny lists.
- Structured output — Pydantic, msgspec, dict, or JSON Schema with validation retries.
- Sessions — JSONL persistence. Human-readable. Resume anytime.
- Observability — stdlib logging with OpenTelemetry, Logfire, and Weave integration.
- Flexible API — Sync or async. Streaming or blocking. Agent loop or single-shot call.
Quick Start
Create a loop with tools and let it run until the task is done.
from innerloop import Loop, tool
@tool
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: 72°F"
loop = Loop(model="anthropic/claude-haiku-4-5", tools=[get_weather])
response = loop.run("What's the weather in NYC and LA?")
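Conceptually, the loop keeps calling the model, executing any tools it requests, and feeding the results back until the model produces a final answer. The stdlib-only sketch below illustrates that shape with a scripted stand-in model and tool; it is not InnerLoop's internals.

```python
# Minimal agent-loop sketch (illustrative only).
# A scripted "model" stands in for a real LLM: it first requests a tool
# call, then, once a tool result is in the transcript, answers in text.

def fake_model(messages):
    # A real loop would call the provider API here.
    if any(m["role"] == "tool" for m in messages):
        return {"type": "text", "text": "It is 72°F in NYC."}
    return {"type": "tool_call", "name": "get_weather", "args": {"city": "NYC"}}

TOOLS = {"get_weather": lambda city: f"Weather in {city}: 72°F"}

def run_loop(prompt, model, tools, max_turns=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        action = model(messages)
        if action["type"] == "text":  # model is done: return the final text
            return action["text"]
        result = tools[action["name"]](**action["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("max turns exceeded")
```

The max_turns guard matters in practice: a loop without an iteration cap can run away if the model keeps requesting tools.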
Any Provider
Switch providers by changing the model string.
Loop(model="anthropic/claude-haiku-4-5")
Loop(model="openai/gpt-5-mini")
Loop(model="google/gemini-2.5-flash")
Loop(model="azure/gpt-5-mini", base_url="https://example.openai.azure.com")
Loop(model="mistral/mistral-large-latest")
Loop(model="xai/grok-4-1-fast-reasoning")
Loop(model="deepseek/deepseek-reasoner")
Loop(model="groq/openai/gpt-oss-120b")
Loop(model="ollama/qwen3:8b")
Loop(model="openrouter/openai/gpt-oss-120b")
Structured Output
Force the model to return validated data with automatic retries on failure.
from pydantic import BaseModel
from innerloop import Loop
class City(BaseModel):
name: str
country: str
population: int
loop = Loop(model="openai/gpt-5-mini")
response = loop.run("Tell me about Tokyo", response_format=City, validation_retries=3)
print(response.output) # City(name='Tokyo', country='Japan', population=13929286)
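The retry behavior that validation_retries enables can be sketched with stdlib-only code. The scripted model and the parse_city validator below are hypothetical stand-ins, not InnerLoop's implementation.

```python
import json

# Sketch of validation retries (illustrative): on a failure, the model is
# asked again with the validation error, until the output parses and
# validates or the retry budget is exhausted.

def parse_city(raw: str) -> dict:
    data = json.loads(raw)
    if not isinstance(data.get("population"), int):
        raise ValueError("population must be an int")
    return data

def run_with_retries(model, validator, retries: int = 3):
    error = None
    for _ in range(retries + 1):
        raw = model(error)  # a real loop would feed the error back in the prompt
        try:
            return validator(raw)
        except (ValueError, json.JSONDecodeError) as exc:
            error = str(exc)
    raise ValueError(f"validation failed after {retries} retries: {error}")

# Scripted model: the first reply fails validation, the second succeeds.
replies = iter([
    '{"name": "Tokyo", "population": "lots"}',
    '{"name": "Tokyo", "country": "Japan", "population": 13929286}',
])
city = run_with_retries(lambda error: next(replies), parse_city)
```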
Sessions
Keep conversation history across multiple calls with automatic JSONL persistence.
loop = Loop(model="anthropic/claude-haiku-4-5")
with loop.session() as ask:
ask("Remember: the secret word is 'banana'")
response = ask("What's the secret word?")
print(response.session_id) # "20251207144323-SA9MWJ"
Resume anytime:
loop = Loop(model="anthropic/claude-haiku-4-5", session="20251207144323-SA9MWJ")
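The JSONL idea behind sessions (one JSON object per line, appended per turn) can be illustrated with a stdlib-only sketch. The field names here are invented for illustration, not InnerLoop's actual on-disk format.

```python
import json
import os
import tempfile
from pathlib import Path

# Illustrative JSONL persistence: append one JSON object per turn,
# reload by parsing each line independently.

def append_turn(path, role, content):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"role": role, "content": content}) + "\n")

def load_session(path):
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines]

# Demo round-trip in a throwaway directory.
path = os.path.join(tempfile.mkdtemp(), "session.jsonl")
append_turn(path, "user", "Remember: the secret word is 'banana'")
append_turn(path, "assistant", "Got it.")
history = load_session(path)
```

Because every line is standalone JSON, sessions stay greppable and easy to inspect with tools like jq.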
Streaming
Get tokens as they arrive for responsive interfaces.
from innerloop import Loop, TextEvent
loop = Loop(model="anthropic/claude-haiku-4-5")
for event in loop.stream("Write a poem"):
if isinstance(event, TextEvent):
print(event.text, end="", flush=True)
Async
Use async/await for non-blocking I/O.
response = await loop.arun("Hello!")
async for event in loop.astream("Write a story"):
...
Vision
Analyze images with vision-capable models.
from innerloop import Loop
loop = Loop(model="anthropic/claude-3-5-haiku-latest") # Or openai/gpt-4o-mini, google/gemini-2.0-flash
response = loop.run(
prompt="What do you see in this image?",
images=["https://example.com/photo.jpg"]
)
Works with local files, multiple images, structured output, and streaming. See demos/vision.py for a multi-provider example.
Built-in Tools
Pre-built tools for file, web, and task operations with security sandboxing.
from innerloop import Loop, FS_TOOLS, SAFE_FS_TOOLS, WEB_TOOLS
loop = Loop(
model="anthropic/claude-haiku-4-5",
tools=FS_TOOLS,
workdir="./my-project", # Sandboxed - can't escape this directory
)
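Workdir sandboxing of this kind is commonly implemented by resolving every requested path against the sandbox root and refusing anything that escapes it, including "../" tricks. A sketch (not InnerLoop's actual code):

```python
import tempfile
from pathlib import Path

def resolve_in_workdir(workdir, user_path):
    """Resolve user_path inside workdir; refuse anything that escapes it."""
    root = Path(workdir).resolve()
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"{user_path!r} escapes the workdir")
    return candidate

# Demo against a throwaway sandbox root.
root = tempfile.mkdtemp()
inside = resolve_in_workdir(root, "src/main.py")
try:
    resolve_in_workdir(root, "../outside.txt")
    escaped = True
except PermissionError:
    escaped = False
```

Resolving both sides before comparing is the important step: a naive string-prefix check can be defeated by symlinks or relative segments.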
Truncation
Give the LLM control over what data it needs while protecting your context window.
read("log.txt", head=0, tail=100) # Last 100 lines
read("main.py", head=50, tail=50) # First 50 + last 50
grep("TODO", head=20, tail=0) # First 20 matches
Safety cap at 50KB / 2000 lines prevents runaway outputs.
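Head/tail truncation of this sort takes only a few lines; the sketch below is illustrative, and InnerLoop's exact marker text and caps may differ.

```python
# Keep the first `head` and last `tail` lines, eliding the middle.

def truncate_lines(lines, head, tail):
    if head + tail >= len(lines):
        return lines[:]  # small enough: return everything
    kept = []
    if head:
        kept.extend(lines[:head])
    kept.append(f"... {len(lines) - head - tail} lines truncated ...")
    if tail:
        kept.extend(lines[-tail:])
    return kept

log = [f"line {i}" for i in range(1000)]
tail_only = truncate_lines(log, 0, 100)  # like read(..., head=0, tail=100)
```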
Bash Security
Control what shell commands the model can run.
from innerloop import bash
safe_bash = bash(
use={"make": "Run builds", "git": "Version control"},
deny=["rm -rf", "sudo"],
)
strict_bash = bash(allow=["make", "git", "uv"])
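Filtering like this usually reduces to checking the command string against deny patterns and its first token against the allow list. A sketch of that logic (not InnerLoop's exact semantics):

```python
import shlex

def command_permitted(command, allow=None, deny=()):
    """Deny patterns win; if an allow list is set, the first token must be on it."""
    if any(pattern in command for pattern in deny):
        return False
    if allow is not None:
        try:
            first = shlex.split(command)[0]
        except (ValueError, IndexError):  # unparseable or empty command
            return False
        return first in allow
    return True
```

Substring-based deny lists are a coarse backstop, which is why an explicit allow list is the stricter of the two modes.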
Skills
Load domain-specific prompt templates on demand—compatible with Claude Code skills.
from pathlib import Path
from innerloop import Loop
loop = Loop(
model="anthropic/claude-sonnet-4",
skills_paths=[Path("~/.claude/skills"), Path(".claude/skills")],
)
response = loop.run("Review the code in src/ for quality issues")
# LLM invokes the code-reviewer skill if relevant
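Discovery of Claude Code-style skills, where each skill is a directory containing a SKILL.md prompt template, can be sketched like this (illustrative, not InnerLoop's loader):

```python
import tempfile
from pathlib import Path

def discover_skills(paths):
    """Scan skill directories for SKILL.md files; return name -> template."""
    skills = {}
    for base in paths:
        base = Path(base).expanduser()
        if not base.is_dir():
            continue
        for md in sorted(base.glob("*/SKILL.md")):
            skills[md.parent.name] = md.read_text(encoding="utf-8")
    return skills

# Demo with a throwaway skills directory.
root = Path(tempfile.mkdtemp())
(root / "code-reviewer").mkdir()
(root / "code-reviewer" / "SKILL.md").write_text(
    "Review code for quality.", encoding="utf-8"
)
found = discover_skills([root])
```

Later paths overwrite earlier ones in this sketch, which is one plausible way to let project-local skills shadow user-level ones.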
One-Shot Calls
For simple extractions that don't need tool iteration, call() skips the loop entirely.
from pydantic import BaseModel
from innerloop import call
class Contact(BaseModel):
name: str
email: str
result = call(
prompt="Extract: John Smith, john@acme.com",
model="openai/gpt-5-mini",
response_format=Contact,
)
print(result.output) # Contact(name="John Smith", email="john@acme.com")
Documentation
Getting Started
- Getting Started — Installation and first steps
- Agent Loop — How the loop executes tools
- Custom Tools — Build tools with the @tool decorator
- Structured Output — Pydantic, msgspec, and JSON Schema
- One-Shot Calls — Single LLM calls without looping
- Recipes — Common patterns and examples
Sessions & Observability
- Sessions — Multi-turn conversations
- Session Logging — JSONL format, analysis with jq/hl/visidata
- Observability — Runtime logging with OpenTelemetry/Logfire/Weave
Reference
- Streaming — Event types and real-time output
- Vision — Image input for vision models
- Providers — Supported providers and configuration
- Built-in Tools — Filesystem, web, and todo tools
- Skills — Claude Code-compatible prompt templates
- Bash Tool — Security modes and command filtering
- Truncation — Preventing context overflow
- Configuration — All configuration options
- Security — Sandboxing and safety
Contributing
- Contributing — Development setup and guidelines
- Testing — Running and writing tests
Essays
- Building an Agentic Loop — What is an agent loop and how to build one
Examples
- demos/ — Runnable examples for every feature
- demos/README.py — Tests all code examples from this README (keep in sync)
License
MIT
File details
Details for the file innerloop-0.0.1.dev16.tar.gz.
File metadata
- Download URL: innerloop-0.0.1.dev16.tar.gz
- Upload date:
- Size: 100.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9d1238a711bc58bc97ae5de2bf1556857e23c76faf782572a46a7a4d1e738a02 |
| MD5 | 4a37aa10a7ec2b57ac87eb5adfa87219 |
| BLAKE2b-256 | 0a0c51a8ade9bc6a58bcc1f2e73fdddf7aa8c13224a6bcae85050ef006165fa2 |
Provenance
The following attestation bundles were made for innerloop-0.0.1.dev16.tar.gz:
Publisher: on-release-main.yml on botassembly/innerloop
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: innerloop-0.0.1.dev16.tar.gz
- Subject digest: 9d1238a711bc58bc97ae5de2bf1556857e23c76faf782572a46a7a4d1e738a02
- Sigstore transparency entry: 774152532
- Sigstore integration time:
- Permalink: botassembly/innerloop@f6eec828c7392956c6d39e39b1c6b3c08e767dd6
- Branch / Tag: refs/tags/v0.0.1dev16
- Owner: https://github.com/botassembly
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: on-release-main.yml@f6eec828c7392956c6d39e39b1c6b3c08e767dd6
- Trigger Event: release
File details
Details for the file innerloop-0.0.1.dev16-py3-none-any.whl.
File metadata
- Download URL: innerloop-0.0.1.dev16-py3-none-any.whl
- Upload date:
- Size: 124.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 90bb2536ac620591a3d97a9b65db1eefab43471f0f8f5ce332adc4208b7c1d10 |
| MD5 | 838707dc6acb8ad53b2295ec533e47ca |
| BLAKE2b-256 | f4ed630c67290a2680dcbe0bf87c3f64df7edca82cbfbabc9916c11d79e9105b |
Provenance
The following attestation bundles were made for innerloop-0.0.1.dev16-py3-none-any.whl:
Publisher: on-release-main.yml on botassembly/innerloop
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: innerloop-0.0.1.dev16-py3-none-any.whl
- Subject digest: 90bb2536ac620591a3d97a9b65db1eefab43471f0f8f5ce332adc4208b7c1d10
- Sigstore transparency entry: 774152533
- Sigstore integration time:
- Permalink: botassembly/innerloop@f6eec828c7392956c6d39e39b1c6b3c08e767dd6
- Branch / Tag: refs/tags/v0.0.1dev16
- Owner: https://github.com/botassembly
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: on-release-main.yml@f6eec828c7392956c6d39e39b1c6b3c08e767dd6
- Trigger Event: release