# InitRunner

Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon.

Website · Docs · InitHub · Discord
Define an agent in one YAML file. Chat with it. When it works, let it run autonomously. When you trust it, deploy it as a daemon that reacts to cron schedules, file changes, webhooks, and Telegram messages. Same file the whole way. No rewrite between prototyping and production.
## Quickstart

```sh
curl -fsSL https://initrunner.ai/install.sh | sh
initrunner setup   # wizard: pick provider, model, API key
```

Or: `uv pip install "initrunner[recommended]"` / `pipx install "initrunner[recommended]"`. See Installation.
## Starters

Eight starters you can run in one command. Browse the full catalog with `initrunner run --list`. The model is auto-detected from your API key.

| Starter | What it does |
|---|---|
| `helpdesk` | Q&A agent over your docs (Markdown, PDF, HTML, Word) with citations and per-user memory |
| `scholar` | Three-agent research team: planner, web researcher, synthesizer, with shared memory |
| `reviewer` | Multi-perspective code review: architect, security, maintainer |
| `reader` | Index a codebase, chat about architecture, remember patterns across sessions |
| `scout` | Web research with structured briefings and sourced citations |
| `writer` | Topic-to-article pipeline: researcher, writer, editor/fact-checker, driven by webhook or cron |
| `mail` | Monitors inbox, triages, drafts replies, alerts Slack on urgent mail |
| `librarian` | Knowledge-base Q&A agent with document ingestion |
## Build your own

```sh
initrunner new "a research assistant that summarizes papers"
# generates role.yaml, then asks: "Run it now? [Y/n]"

initrunner new "a regex explainer" --run "what does ^[a-z]+$ match?"
# generate and execute in one command

initrunner run --ingest ./docs/   # skip YAML entirely, just chat with your docs
```

Browse community agents at InitHub: `initrunner search "code review"` / `initrunner install alice/code-reviewer`.

Docker:

```sh
docker run --rm -it -e OPENAI_API_KEY ghcr.io/vladkesler/initrunner:latest run -i
```
## One file, four modes

Here's a role file:

```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
```

That file works four ways:

```sh
initrunner run reviewer.yaml -i                            # interactive REPL
initrunner run reviewer.yaml -p "Review PR #42"            # one prompt, one response
initrunner run reviewer.yaml -a -p "Audit the whole repo"  # autonomous loop
initrunner run reviewer.yaml --daemon                      # runs on triggers
```
The `model:` block is optional. Omit it and InitRunner auto-detects from your API key. Works with Anthropic, OpenAI, Google, Groq, Mistral, Cohere, xAI, OpenRouter, Ollama, and any OpenAI-compatible endpoint.
## Autonomous

Add `-a` and the agent builds a task list, works each item, reflects on progress, and stops when everything's done. Four reasoning strategies control how: `react` (default), `todo_driven`, `plan_execute`, `reflexion`.

```yaml
spec:
  autonomy:
    compaction: { enabled: true, threshold: 30 }
  guardrails:
    max_iterations: 15
    autonomous_token_budget: 100000
    autonomous_timeout_seconds: 600
```

Spin guards catch loops without progress. History compaction summarizes old context so long runs don't exhaust the token window. Iteration, token, and wall-clock caps bound every run. See Autonomy · Guardrails.
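The strategy itself is selected in the role file; the feature table later in this README shows the key path as `spec: { reasoning: { pattern: plan_execute } }`, so a minimal sketch picking a different strategy looks like this:

```yaml
spec:
  reasoning:
    pattern: todo_driven   # one of: react (default), todo_driven, plan_execute, reflexion
```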
## Daemon

Add triggers and switch to `--daemon`. The agent runs continuously; each event fires one prompt-response cycle.

```yaml
spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
    - type: telegram
      allowed_user_ids: [123456789]
```

Six trigger types: `cron`, `webhook`, `file_watch`, `heartbeat`, `telegram`, `discord`. The daemon hot-reloads role changes without restarting and runs up to four triggers concurrently. See Triggers.
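A webhook trigger isn't shown above; as a hedged sketch, it might look like the following — the `path` field and the `{body}` placeholder are assumptions, so check the Triggers docs for the actual schema:

```yaml
spec:
  triggers:
    - type: webhook
      path: /notify   # assumed field name; the real schema is in the Triggers docs
      prompt_template: "Webhook received: {body}. Summarize it and act."
```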
## Autopilot

`--autopilot` is `--daemon` plus the autonomous loop on every trigger. A Telegram message like "find me flights from NYC to London next week" in daemon mode gets one LLM turn. In autopilot, the agent searches flights, compares options, checks dates, and replies with a shortlist.

```sh
initrunner run role.yaml --autopilot
```

Or go selective: set `autonomous: true` on individual triggers and leave the rest single-shot.

```yaml
spec:
  triggers:
    - type: telegram
      autonomous: true             # think, research, then reply
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
      autonomous: true             # plan, gather data, write, review
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
      # default: single response
```
## Memory across modes
Semantic memory (facts the agent learns), episodic memory (what happened in past sessions), and procedural memory (how the agent prefers to solve things) persist across interactive sessions, autonomous runs, and daemon triggers. After each session, an LLM consolidates durable facts into the store. Knowledge accumulates over time, not just within a single run.
## Agents that learn

Point your agent at a directory. It extracts, chunks, embeds, and indexes your documents automatically. During conversation, the agent searches the index and cites what it finds. New and changed files re-index on every run.

```yaml
spec:
  ingest:
    auto: true
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    semantic:
      max_memories: 1000
```

```sh
cd ~/myproject
initrunner run reader -i   # indexes your code, then starts Q&A
```
Consolidation is the interesting part. After each session, an LLM reads the conversation and distills it into the semantic store. Facts the agent learns during a Tuesday debugging session show up when it's reviewing code on Thursday. Shared memory across flows lets teams of agents build knowledge together. See Memory · Ingestion · RAG Quickstart.
## Security

Five controls ship with the framework and turn on via config keys. Roles without a `security:` section get safe defaults.
**Input validation.** A content policy engine (blocked patterns, prompt length limits, optional LLM topic classifier) plus an input guard capability validate prompts before the agent starts.

**Tool authorization.** The InitGuard ABAC policy engine checks every tool call and delegation against CEL policies. Per-tool allow/deny glob patterns enforce argument-level permissions.

**Sandboxed code execution.** Audit hooks stop `python` tools from writing outside allowed paths, spawning subprocesses, reaching private IPs, loading native libraries, or starting new threads. For stronger isolation, Bubblewrap (on Linux) or Docker (anywhere) runs `shell` and `python` tools with no network, a read-only filesystem, and memory and CPU caps.

**Tamper-evident audit trail.** Every run writes to an append-only SQLite audit log, HMAC-SHA256 signed over the previous record's hash. `initrunner audit verify-chain` detects any middle-row mutation, reorder, or deletion. Secrets are scrubbed on write.

**Encrypted credential vault.** `initrunner vault init` creates `~/.initrunner/vault.enc`, encrypted with Fernet + scrypt from your passphrase. API keys resolve from env vars first, then the vault, so existing `api_key_env:` and `${VAR}` placeholders keep working.
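In a role file, that resolution order might be exercised like this — a sketch assuming `api_key_env` sits under the `model:` block, which is an inference from the key name above rather than a documented placement:

```yaml
spec:
  model:
    provider: openai
    name: gpt-5-mini
    api_key_env: OPENAI_API_KEY   # looked up in the environment first, then the vault
```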
```yaml
spec:
  security:
    audit_hooks_enabled: true
    block_private_ips: true
    input_guard:
      max_prompt_chars: 10000
      blocked_patterns: ["(?i)rm -rf /"]
```
See Security · Bubblewrap · Docker sandbox · Agent Policy · Credential Vault · Audit Chain · Guardrails.
## Cost control

USD budgets cap daemon spend. Hit the cap and triggers stop firing until the window resets.

```yaml
spec:
  guardrails:
    daemon_daily_cost_budget: 5.00    # USD per day
    daemon_weekly_cost_budget: 25.00  # USD per week
```
Cost estimation uses genai-prices to compute spend per model and provider. Every run logs its cost to the audit trail. The dashboard plots cost across agents and time ranges. See Cost Tracking.
## Multi-agent orchestration

Chain agents into flows. One agent's output feeds the next.

```yaml
apiVersion: initrunner/v1
kind: Flow
metadata: { name: email-chain }
spec:
  agents:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
      sink: { type: delegate, strategy: sense, target: [researcher, responder] }
    researcher: { role: roles/researcher.yaml }
    responder: { role: roles/responder.yaml }
```

```sh
initrunner flow up flow.yaml
```
Sense routing picks the right target per message using keyword scoring first (zero API calls); only ambiguous cases fall back to an LLM tiebreak.
Team mode gives multiple perspectives on one task without a full flow. Define personas in one file with three strategies: sequential handoff, parallel execution, or debate (multi-round argumentation with synthesis). See Patterns Guide · Team Mode · Flow.
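A team definition might be sketched as follows — the `kind: Team`, `personas`, and `strategy` field names here are illustrative guesses, not the documented schema, so see the Team Mode docs for the real file format:

```yaml
apiVersion: initrunner/v1
kind: Team                   # hypothetical kind name
metadata: { name: design-review }
spec:
  strategy: debate           # or: sequential, parallel
  personas:
    - name: architect
      role: "Evaluate structure and long-term maintainability."
    - name: security
      role: "Probe for vulnerabilities and unsafe defaults."
```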
## MCP and interfaces

Agents consume any MCP server as a tool source (stdio, SSE, streamable-http). Going the other direction, expose your agents as MCP tools so Claude Code, Cursor, and Windsurf can call them:

```sh
initrunner mcp serve agent.yaml            # agent becomes an MCP tool
initrunner mcp toolkit --tools search,sql  # expose raw tools, no LLM needed
```
See MCP Gateway.
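Consuming an MCP server as a tool source might look like this in a role file — the `type: mcp` tool entry and its field names are assumptions on my part, so consult the MCP Gateway docs for the actual configuration:

```yaml
spec:
  tools:
    - type: mcp                # hypothetical tool type name
      transport: stdio         # stdio, sse, or streamable-http
      command: ["npx", "-y", "@modelcontextprotocol/server-filesystem", "."]
```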
## Dashboard: run agents, build flows, dig through audit trails

```sh
pip install "initrunner[dashboard]"
initrunner dashboard   # opens http://localhost:8100
```

Also available as a native desktop window (`initrunner desktop`). See Dashboard.
## Everything else

| Feature | Command / config | Docs |
|---|---|---|
| Skills (reusable tool + prompt bundles) | `spec: { skills: [../skills/web-researcher] }` | Skills |
| API server (OpenAI-compatible endpoint) | `initrunner run agent.yaml --serve --port 3000` | Server |
| A2A server (agent-to-agent protocol) | `initrunner a2a serve agent.yaml` | A2A |
| Multimodal (images, audio, video, docs) | `initrunner run role.yaml -p "Describe" -A photo.png` | Multimodal |
| Structured output (validated JSON schemas) | `spec: { output: { schema: {...} } }` | Structured Output |
| Evals (test agent output quality) | `initrunner test role.yaml -s eval.yaml` | Evals |
| Capabilities (native PydanticAI features) | `spec: { capabilities: [Thinking, WebSearch] }` | Capabilities |
| Observability (OpenTelemetry) | `spec: { observability: { enabled: true } }` | Observability |
| Reasoning (structured thinking patterns) | `spec: { reasoning: { pattern: plan_execute } }` | Reasoning |
| Tool search (on-demand tool discovery) | `spec: { tool_search: { enabled: true } }` | Tool Search |
| Configure (switch provider/model) | `initrunner configure role.yaml --provider groq` | Providers |
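As one example from the table above, the structured output one-liner expands into a full schema block; a sketch (the key path `spec.output.schema` comes from the table, while the schema body itself is illustrative):

```yaml
spec:
  output:
    schema:
      type: object
      properties:
        verdict: { type: string, enum: [approve, request_changes] }
        issues:
          type: array
          items: { type: string }
      required: [verdict]
```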
## Architecture

```
initrunner/
  agent/      Role schema, loader, executor, self-registering tools
  runner/     Single-shot, REPL, autonomous, daemon execution modes
  flow/       Multi-agent orchestration via flow.yaml
  triggers/   Cron, file watcher, webhook, heartbeat, Telegram, Discord
  stores/     Document + memory stores (LanceDB, zvec)
  ingestion/  Extract -> chunk -> embed -> store pipeline
  mcp/        MCP server integration and gateway
  audit/      Append-only SQLite audit trail with secret scrubbing
  services/   Shared business logic layer
  cli/        Typer + Rich CLI entry point
```
Built on PydanticAI. See CONTRIBUTING.md for dev setup.
## Distribution

- **InitHub**: Browse and install community agents at hub.initrunner.ai. Publish your own with `initrunner publish`.
- **OCI registries**: Push role bundles to any OCI-compliant registry: `initrunner publish oci://ghcr.io/org/my-agent --tag 1.0.0`. See OCI Distribution.
## Documentation
| Area | Key docs |
|---|---|
| Getting started | Installation · Setup · Tutorial · CLI Reference |
| Quickstarts | RAG · Docker · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Providers |
| Intelligence | Reasoning · Intent Sensing · Autonomy · Structured Output |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Patterns Guide · Flow · Delegation · Team Mode · Triggers |
| Interfaces | Dashboard · API Server · MCP Gateway · A2A |
| Distribution | OCI Distribution · Shareable Templates |
| Security | Security Model · Runtime Sandbox · Bubblewrap · Docker Sandbox · Credential Vault · Audit Chain · Agent Policy · Guardrails |
| Operations | Audit · Cost Tracking · Reports · Evals · Doctor · Observability · CI/CD |
## Examples

```sh
initrunner examples list                 # browse all agents, teams, and flows
initrunner examples copy code-reviewer   # copy to current directory
```
## Upgrading

Run `initrunner doctor --role role.yaml` to check any role file for deprecated fields, schema errors, and spec version issues. Add `--fix` to auto-repair. Use `--flow flow.yaml` to validate an entire flow and its referenced roles. See Deprecations.
## Community
- Discord: chat, ask questions, share roles
- GitHub Issues: bug reports and feature requests
- Changelog: release notes
- CONTRIBUTING.md: dev setup and PR guidelines
## License

Licensed under MIT or Apache-2.0, at your option.
## Project details

Version 2026.4.17. Published as a source distribution and a built distribution (wheel).
### File: initrunner-2026.4.17.tar.gz

- Type: source distribution
- Size: 2.6 MB
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0, CPython/3.13.12
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3b688bdd253d4df1301def842466e6470eb611e028e72b139c0fc3e11d345fa9` |
| MD5 | `bbbb2a5d6dc23db433e1c3a40a313ee5` |
| BLAKE2b-256 | `27a943e127111b14d6a930c0817e8c6d20e76a09fe9e828aff4a6001d9e0edb9` |
Provenance: an attestation bundle was published for initrunner-2026.4.17.tar.gz by `release.yml` on vladkesler/initrunner.

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: initrunner-2026.4.17.tar.gz
- Subject digest: 3b688bdd253d4df1301def842466e6470eb611e028e72b139c0fc3e11d345fa9
- Sigstore transparency entry: 1370757519
- Permalink: vladkesler/initrunner@0e9d1df25f8f9c8d570cc44542cc00795c20d5dc
- Branch / tag: refs/tags/v2026.4.17
- Owner: https://github.com/vladkesler
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: release.yml@0e9d1df25f8f9c8d570cc44542cc00795c20d5dc
- Trigger event: push
### File: initrunner-2026.4.17-py3-none-any.whl

- Type: built distribution (Python 3)
- Size: 1.2 MB
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0, CPython/3.13.12
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `ee50dbf17f31b0d87f49ba9aef5252b5a22b525d901e351d51707829e94abeab` |
| MD5 | `e45d5608605c7daa5fe268845d644d2a` |
| BLAKE2b-256 | `1165d638ffb3833fafdecbe878b10d58e745134a14307948dce6728e995e3428` |
Provenance: an attestation bundle was published for initrunner-2026.4.17-py3-none-any.whl by `release.yml` on vladkesler/initrunner.

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: initrunner-2026.4.17-py3-none-any.whl
- Subject digest: ee50dbf17f31b0d87f49ba9aef5252b5a22b525d901e351d51707829e94abeab
- Sigstore transparency entry: 1370757596
- Permalink: vladkesler/initrunner@0e9d1df25f8f9c8d570cc44542cc00795c20d5dc
- Branch / tag: refs/tags/v2026.4.17
- Owner: https://github.com/vladkesler
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: release.yml@0e9d1df25f8f9c8d570cc44542cc00795c20d5dc
- Trigger event: push