InitRunner

Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon
Website · Docs · InitHub · Discord
Define an agent in one YAML file. Chat with it. When it works, let it run autonomously. When you trust it, deploy it as a daemon that reacts to cron schedules, file changes, webhooks, and Telegram messages. Same file the whole way. No rewrite between prototyping and production.
```bash
initrunner run researcher -i                            # chat with it
initrunner run researcher -a -p "Audit this codebase"   # let it work alone
initrunner run researcher --daemon                      # runs 24/7, reacts to triggers
```
Quickstart
```bash
curl -fsSL https://initrunner.ai/install.sh | sh
initrunner setup   # wizard: pick provider, model, API key
```
Or: uv pip install "initrunner[recommended]" / pipx install "initrunner[recommended]". See Installation.
Starters
Run initrunner run --list for the full catalog. The model is auto-detected from your API key.
| Starter | What it does |
|---|---|
| helpdesk | Drop your docs in, get a Q&A agent with citations and memory |
| deep-researcher | 3-agent pipeline: planner, web researcher, synthesizer |
| code-review-team | Multi-perspective review: architect, security, maintainer |
| codebase-analyst | Index a repo, chat about architecture, learns patterns across sessions |
| content-pipeline | Researcher, writer, editor/fact-checker via webhook or cron |
| email-agent | Monitors inbox, triages, drafts replies, alerts Slack on urgent mail |
Build your own
```bash
initrunner new "a research assistant that summarizes papers"
# generates role.yaml, then asks: "Run it now? [Y/n]"

initrunner new "a regex explainer" --run "what does ^[a-z]+$ match?"
# generate and execute in one command

initrunner run --ingest ./docs/   # skip YAML entirely, just chat with your docs
```
Browse community agents at InitHub:

```bash
initrunner search "code review"
initrunner install alice/code-reviewer
```
Docker:
```bash
docker run --rm -it -e OPENAI_API_KEY ghcr.io/vladkesler/initrunner:latest run -i
```
One file, four modes
Here's a role file:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
```
That file works four ways:
```bash
initrunner run reviewer.yaml -i                            # interactive REPL
initrunner run reviewer.yaml -p "Review PR #42"            # one prompt, one response
initrunner run reviewer.yaml -a -p "Audit the whole repo"  # autonomous: plans, executes, reflects
initrunner run reviewer.yaml --daemon                      # runs continuously, fires on triggers
```
The model: section is optional. Omit it and InitRunner auto-detects from your API key. Works with Anthropic, OpenAI, Google, Groq, Mistral, Cohere, xAI, OpenRouter, Ollama, and any OpenAI-compatible endpoint.
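If you'd rather pin a model explicitly (say, a local Ollama model), it's one line in the role file; the model name below is an arbitrary example, not a recommendation:

```yaml
spec:
  model: { provider: ollama, name: llama3.1 }
```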
Autonomous mode
Add -a and the agent stops being a chatbot. It builds a task list, works through each item, reflects on its own progress, and finishes when everything is done. Four reasoning strategies control how: react (default), todo_driven, plan_execute, and reflexion.
```yaml
spec:
  autonomy:
    compaction: { enabled: true, threshold: 30 }
  guardrails:
    max_iterations: 15
    autonomous_token_budget: 100000
    autonomous_timeout_seconds: 600
```
Spin guards catch the agent if it loops without making progress. History compaction summarizes old context so long runs don't blow up the token window. Budget enforcement, iteration limits, and wall-clock timeouts keep everything bounded. See Autonomy · Guardrails.
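The reasoning strategy is also picked in the role file. A sketch:

```yaml
spec:
  reasoning:
    pattern: plan_execute   # one of: react (default), todo_driven, plan_execute, reflexion
```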
Daemon mode
Add triggers and switch to --daemon. The agent runs continuously, reacting to events. Each event fires a prompt-response cycle.
```yaml
spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"   # 09:00 every Monday
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
    - type: telegram
      allowed_user_ids: [123456789]
```

```bash
initrunner run role.yaml --daemon   # runs until Ctrl+C
```
Six trigger types: cron, webhook, file_watch, heartbeat, telegram, discord. The daemon hot-reloads role changes without restarting and runs up to 4 triggers concurrently. See Triggers.
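For example, a webhook trigger might look roughly like the sketch below. The `path` field and the `{body}` placeholder are assumptions for illustration, not documented schema; see the Triggers docs for the real field names:

```yaml
spec:
  triggers:
    - type: webhook
      path: /hooks/deploy   # assumed field name, for illustration only
      prompt_template: "Webhook payload: {body}. Summarize what changed."   # assumed placeholder
```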
Autopilot
--autopilot is --daemon where every trigger gets the full autonomous loop. Someone messages your Telegram bot "find me flights from NYC to London next week." In daemon mode, you get one shot at an answer. In autopilot, the agent searches, compares options, checks dates, and sends back something worth reading.
```bash
initrunner run role.yaml --autopilot
```
You can also be selective. Set autonomous: true on individual triggers and leave the rest single-shot:
```yaml
spec:
  triggers:
    - type: telegram
      autonomous: true   # think, research, then reply
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
      autonomous: true   # plan, gather data, write, review
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
      # default: quick single response
```
Memory carries across everything
Episodic, semantic, and procedural memory persist across interactive sessions, autonomous runs, and daemon triggers. After each session, consolidation extracts durable facts from the conversation using an LLM. The agent accumulates knowledge over time, not just within a single run.
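The consolidation step can be sketched in a few lines. This is purely illustrative, not InitRunner's implementation; `extract_facts` is a stand-in for the LLM call:

```python
def consolidate(transcript: list[str], extract_facts) -> list[str]:
    """Distill a session transcript into durable facts for the semantic store.

    Illustrative sketch only; `extract_facts` stands in for an LLM call
    that returns a list of short factual statements.
    """
    session = "\n".join(transcript)
    facts = extract_facts(f"List durable facts learned in this session:\n{session}")
    # Deduplicate while preserving order before persisting.
    seen: set[str] = set()
    return [f for f in facts if not (f in seen or seen.add(f))]
```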
Agents that learn
Point your agent at a directory. It extracts, chunks, embeds, and indexes your documents automatically. During conversation, the agent searches the index and cites what it finds. New and changed files are re-indexed on every run without manual intervention.
```yaml
spec:
  ingest:
    auto: true
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    semantic:
      max_memories: 1000
```

```bash
cd ~/myproject
initrunner run codebase-analyst -i   # indexes your code, then starts Q&A
```
The interesting part is consolidation. After each session, an LLM reads what happened and distills it into the semantic store. Facts the agent learns during a Tuesday debugging session show up when it's reviewing code on Thursday. Shared memory across flows lets teams of agents build knowledge together. See Memory · Ingestion · RAG Quickstart.
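The chunking stage of that pipeline is conceptually simple. Here is a naive fixed-size chunker with overlap, as a sketch of the general technique rather than InitRunner's actual chunker:

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap so context at chunk
    boundaries is not lost. Illustrative only; real pipelines usually split
    on sentence or section boundaries instead of raw character offsets."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```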
Security is config, not plumbing
Most agent frameworks treat security as "add auth middleware when you get to production." InitRunner ships security integrated and ready to use. You turn it on with config keys, not with a weekend of plumbing.
Agents accept untrusted input. Content policy engine (blocked patterns, prompt length limits, optional LLM topic classifier) and an input guard capability validate prompts before the agent starts.
Agents call tools with real consequences. InitGuard ABAC policy engine checks every tool call and delegation against CEL policies. Per-tool allow/deny glob patterns enforce argument-level permissions.
Agents run code. PEP 578 audit-hook sandbox restricts filesystem writes, blocks subprocess spawning, blocks private-IP network access, and prevents dangerous imports. Docker container sandboxing adds read-only rootfs, memory/CPU limits, and network isolation on top.
Everything is logged. Append-only SQLite audit trail with automatic secret scrubbing. Regex patterns redact GitHub tokens, AWS keys, Stripe keys, and more from both prompts and outputs.
These are opt-in via the security: config key, not on by magic. Roles without a security: section get safe defaults. The point is that these capabilities exist in the box rather than being something you bolt on six months into production.
```bash
export INITRUNNER_POLICY_DIR=./policies
initrunner run role.yaml   # tool calls + delegation checked against policies
```
See Agent Policy · Security · Guardrails.
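A policy file might look roughly like this. The file layout and field names are assumptions for illustration; only the condition's CEL syntax itself is standard. Check the Agent Policy docs for the actual schema:

```yaml
# Hypothetical policy layout; field names are illustrative, the condition is CEL.
rules:
  - name: no-writes-outside-workspace
    effect: deny
    condition: >
      tool.name == "filesystem" &&
      !call.args.path.startsWith("/workspace")
```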
Cost control
Token budgets are table stakes. InitRunner also enforces USD cost budgets. Set a daily or weekly dollar cap on a daemon and it stops firing triggers when the threshold is hit.
```yaml
spec:
  guardrails:
    daemon_daily_cost_budget: 5.00    # USD per day
    daemon_weekly_cost_budget: 25.00  # USD per week
```
Cost estimation uses genai-prices to calculate actual spend per model and provider. Every run logs its cost to the audit trail. The dashboard shows cost analytics across agents and time ranges. See Cost Tracking.
Multi-agent orchestration
Chain agents into flows. One agent's output feeds into the next. Sense routing auto-picks the right target per message using keyword scoring first (zero API calls), with an LLM tiebreak only when the keywords are ambiguous:
```yaml
apiVersion: initrunner/v1
kind: Flow
metadata: { name: email-chain }
spec:
  agents:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
      sink: { type: delegate, strategy: sense, target: [researcher, responder] }
    researcher: { role: roles/researcher.yaml }
    responder: { role: roles/responder.yaml }
```

```bash
initrunner flow up flow.yaml
```
Team mode is for when you want multiple perspectives on one task without a full flow. Define personas in a single file with three strategies: sequential handoff, parallel execution, or debate (multi-round argumentation with synthesis). See Patterns Guide · Team Mode · Flow.
MCP and interfaces
Agents consume any MCP server as a tool source (stdio, SSE, streamable-http). Going the other direction, expose your agents as MCP tools so Claude Code, Cursor, and Windsurf can call them:
```bash
initrunner mcp serve agent.yaml             # agent becomes an MCP tool
initrunner mcp toolkit --tools search,sql   # expose raw tools, no LLM needed
```
See MCP Gateway.
Dashboard: run agents, build flows, dig through audit trails
```bash
pip install "initrunner[dashboard]"
initrunner dashboard   # opens http://localhost:8100
```
Also available as a native desktop window (initrunner desktop). See Dashboard.
Everything else
| Feature | Command / config | Docs |
|---|---|---|
| Skills (reusable tool + prompt bundles) | `spec: { skills: [../skills/web-researcher] }` | Skills |
| API server (OpenAI-compatible endpoint) | `initrunner run agent.yaml --serve --port 3000` | Server |
| A2A server (agent-to-agent protocol) | `initrunner a2a serve agent.yaml` | A2A |
| Multimodal (images, audio, video, docs) | `initrunner run role.yaml -p "Describe" -A photo.png` | Multimodal |
| Structured output (validated JSON schemas) | `spec: { output: { schema: {...} } }` | Structured Output |
| Evals (test agent output quality) | `initrunner test role.yaml -s eval.yaml` | Evals |
| Capabilities (native PydanticAI features) | `spec: { capabilities: [Thinking, WebSearch] }` | Capabilities |
| Observability (OpenTelemetry) | `spec: { observability: { enabled: true } }` | Observability |
| Reasoning (structured thinking patterns) | `spec: { reasoning: { pattern: plan_execute } }` | Reasoning |
| Tool search (on-demand tool discovery) | `spec: { tool_search: { enabled: true } }` | Tool Search |
| Configure (switch provider/model) | `initrunner configure role.yaml --provider groq` | Providers |
Architecture
```
initrunner/
  agent/       Role schema, loader, executor, self-registering tools
  runner/      Single-shot, REPL, autonomous, daemon execution modes
  flow/        Multi-agent orchestration via flow.yaml
  triggers/    Cron, file watcher, webhook, heartbeat, Telegram, Discord
  stores/      Document + memory stores (LanceDB, zvec)
  ingestion/   Extract -> chunk -> embed -> store pipeline
  mcp/         MCP server integration and gateway
  audit/       Append-only SQLite audit trail with secret scrubbing
  services/    Shared business logic layer
  cli/         Typer + Rich CLI entry point
```
Built on PydanticAI. See CONTRIBUTING.md for dev setup.
Distribution
InitHub: Browse and install community agents at hub.initrunner.ai. Publish your own with initrunner publish.
OCI registries: Push role bundles to any OCI-compliant registry: initrunner publish oci://ghcr.io/org/my-agent --tag 1.0.0. See OCI Distribution.
Cloud deploy:
Documentation
| Area | Key docs |
|---|---|
| Getting started | Installation · Setup · Tutorial · CLI Reference |
| Quickstarts | RAG · Docker · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Providers |
| Intelligence | Reasoning · Intent Sensing · Autonomy · Structured Output |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Patterns Guide · Flow · Delegation · Team Mode · Triggers |
| Interfaces | Dashboard · API Server · MCP Gateway · A2A |
| Distribution | OCI Distribution · Shareable Templates |
| Security | Security Model · Agent Policy · Guardrails |
| Operations | Audit · Cost Tracking · Reports · Evals · Doctor · Observability · CI/CD |
Examples
```bash
initrunner examples list                 # browse all agents, teams, and flows
initrunner examples copy code-reviewer   # copy to current directory
```
Upgrading
Run initrunner doctor --role role.yaml to check any role file for deprecated fields, schema errors, and spec version issues. Add --fix to auto-repair. See Deprecations.
Community
- Discord: chat, ask questions, share roles
- GitHub Issues: bug reports and feature requests
- Changelog: release notes
- CONTRIBUTING.md: dev setup and PR guidelines
License
Licensed under MIT or Apache-2.0, at your option.
v2026.4.11