
Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon


InitRunner


Website · Docs · InitHub · Discord

English · 简体中文 · 日本語

YAML-first AI agent platform. Define an agent's role, tools, knowledge base, and memory in one file. Run it as an interactive chat, a one-shot command, an autonomous agent, a daemon with cron/webhook/file-watch triggers, a Telegram/Discord bot, or an OpenAI-compatible API. RAG and persistent memory work out of the box. Manage everything from a web dashboard or native desktop app. Install with curl or pip, no containers required.

initrunner run helpdesk -i                                    # docs Q&A with RAG + memory
initrunner run deep-researcher -p "Compare vector databases"  # 3-agent research team
initrunner run code-review-team -p "Review the latest commit" # multi-perspective code review

15 curated starters, 60+ examples, or define your own.

v2026.4.3: Autonomous execution docs, compose/team runs in Launchpad, dimension-specific reflexion, budget-aware continuation prompts, finalize_plan() tool, Electric Charcoal dashboard. See the Changelog.

Quickstart

curl -fsSL https://initrunner.ai/install.sh | sh
initrunner setup        # wizard: pick provider, model, API key

Or: uv pip install "initrunner[recommended]" / pipx install "initrunner[recommended]". See Installation.

Try a starter

Run initrunner run --list for the full catalog. The model is auto-detected from your API key.

| Starter | What it does | Kind |
| --- | --- | --- |
| helpdesk | Drop your docs in, get a Q&A agent with citations and memory | Agent (RAG) |
| code-review-team | Multi-perspective review: architect, security, maintainer | Team |
| deep-researcher | 3-agent pipeline: planner, web researcher, synthesizer with shared memory | Team |
| codebase-analyst | Index your repo, chat about architecture, learn patterns across sessions | Agent (RAG) |
| web-researcher | Search the web and produce structured briefings with citations | Agent |
| content-pipeline | Topic researcher, writer, editor/fact-checker via webhook or cron | Compose |
| telegram-assistant | Telegram bot with memory and web search | Agent (Daemon) |
| email-agent | Monitors inbox, triages messages, drafts replies, alerts Slack on urgent mail | Agent (Daemon) |
| support-desk | Sense-routed intake: auto-routes to researcher, responder, or escalator | Compose |
| memory-assistant | Personal assistant that remembers across sessions | Agent |

RAG starters auto-ingest on first run. Just cd into your project:

cd ~/myproject
initrunner run codebase-analyst -i   # indexes your code, then starts Q&A

Build your own

initrunner new "a research assistant that summarizes papers"  # generates a role.yaml
initrunner run --ingest ./docs/    # or skip YAML entirely, just chat with your docs

Browse and install community agents from InitHub: initrunner search "code review" / initrunner install alice/code-reviewer.

Docker, no install needed:

docker run -d -e OPENAI_API_KEY -p 8100:8100 \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest        # dashboard
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest run -i # chat

See the Docker guide for more.

Define an Agent in YAML

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
initrunner run reviewer.yaml -p "Review the latest commit"

The model: section is optional; omit it and InitRunner auto-detects from your API key. Works with Anthropic, OpenAI, Google, Groq, Mistral, Cohere, xAI, OpenRouter, Ollama, and any OpenAI-compatible endpoint. 28 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, Slack, MCP, audio, PDF extraction, CSV analysis, image generation) and you can add your own in a single file.
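Key-based auto-detection can be pictured as a simple prefix check. This is an illustrative sketch, not InitRunner's actual logic; the `detect_provider` name and the fallback behavior are assumptions, though the key prefixes themselves are the providers' documented formats:

```python
def detect_provider(api_key: str) -> str:
    """Guess a provider from its API key prefix (illustrative mapping only)."""
    prefixes = {
        "sk-ant-": "anthropic",   # Anthropic keys start with sk-ant-
        "sk-or-": "openrouter",   # OpenRouter keys start with sk-or-
        "gsk_": "groq",           # Groq keys start with gsk_
        "sk-": "openai",          # plain sk- falls through to OpenAI
    }
    # dicts preserve insertion order, so the more specific
    # prefixes are checked before the generic "sk-"
    for prefix, provider in prefixes.items():
        if api_key.startswith(prefix):
            return provider
    return "openai-compatible"    # unknown key: assume a compatible endpoint
```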

Why InitRunner

A YAML file is the agent. Tools, knowledge sources, memory, triggers, model, and guardrails are all declared in one place. You can read it and immediately understand what the agent does. You can diff it, review it in a PR, hand it to a teammate. When you want to switch from GPT to Claude, you change one line. When you want to add RAG, you add an ingest: section.

The same file runs as an interactive chat (-i), a one-shot command (-p "..."), an autonomous agent (-a), a cron/webhook/file-watch daemon (--daemon), or an OpenAI-compatible API (--serve). You don't pick a deployment mode upfront and build around it. You pick it at runtime with a flag.

What this gets you in practice: your agent config lives in version control next to your code. New team members read the YAML and understand what the agent does. You review agent changes in PRs like any other config. The agent you prototyped interactively is the same one you deploy as a daemon or API. Same file, different flag.

How It Compares

| | InitRunner | LangChain | CrewAI | AutoGen |
| --- | --- | --- | --- | --- |
| Agent config | YAML file | Python chains + config | Python classes | Python classes |
| RAG | --ingest ./docs/ (one flag) | Loaders + splitters + vectorstore | RAG tool or custom | External setup |
| Memory | Built-in, on by default | Add-on (multiple options) | Short/long-term memory | External |
| Multi-agent | compose.yaml or kind: Team | LangGraph | Crew definition | Group chat |
| Autonomous execution | -a flag + YAML guardrails | Custom agent loop | Sequential process | Conversation loop |
| Deployment modes | Same YAML: REPL / daemon / API | Custom per mode | CLI or Kickoff | Custom |
| Model switching | Change 1 YAML line | Swap LLM class | Config per agent | Config per agent |
| Custom tools | 1 file, 1 decorator | @tool decorator | @tool decorator | Function call |
| Bot deployment | --telegram / --discord flag | Separate integration | Separate integration | Separate integration |
| Migration | --pydantic-ai / --langchain import | N/A | N/A | N/A |

What You Get

Knowledge and memory

Point your agent at a directory. It extracts, chunks, embeds, and indexes your documents. During conversation, the agent searches the index automatically and cites what it finds. Memory persists across sessions.

spec:
  ingest:
    auto: true
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    semantic:
      max_memories: 1000
initrunner run role.yaml -i   # auto-ingests on first run, memory + search ready

See Ingestion · Memory · RAG Quickstart.
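The chunking step of an extract-chunk-embed-index pipeline can be sketched in a few lines. The window size and overlap below are illustrative defaults, not InitRunner's settings; overlap exists so that a sentence straddling a chunk boundary is still retrievable from at least one chunk:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size windows (illustrative sketch)."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks = []
    step = size - overlap          # each window starts `step` chars after the last
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break                  # final window already covers the tail
    return chunks
```

Each chunk would then be embedded and written to the vector store, with the search index serving them back at query time.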

Triggers and daemons

Turn any agent into a daemon that reacts to cron schedules, file changes, webhooks, or heartbeats:

spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
initrunner run role.yaml --daemon   # runs until stopped

See Triggers · Telegram · Discord.

Multi-agent orchestration

Chain agents together. One agent's output feeds into the next. Sense routing auto-picks the right target per message (keyword matching first, with a single LLM call to break ties):

apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-chain }
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
      sink: { type: delegate, strategy: sense, target: [researcher, responder] }
    researcher: { role: roles/researcher.yaml }
    responder: { role: roles/responder.yaml }

Run with initrunner compose up compose.yaml. See Patterns Guide · Compose.

Reasoning and tool management

Control how your agent thinks, not just what it does:

spec:
  reasoning:
    pattern: plan_execute    # plans upfront, then executes each step
    auto_plan: true
  tools:
    - type: think            # internal scratchpad with self-critique
      critique: true
    - type: todo             # structured task list for multi-step work

Four reasoning patterns: react, todo_driven, plan_execute, and reflexion. See Reasoning.

Agents with many tools waste context and make worse tool choices. Tool search hides tools behind on-demand keyword discovery: the agent sees only search_tools and a few pinned tools, then discovers what it needs per turn. BM25 scoring, no API calls, typically saving 60-80% of context. See Tool Search.
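BM25 ranking over tool descriptions needs no model calls at all; it is plain term statistics. The sketch below shows the standard Okapi BM25 formula applied to a tool catalog. It is illustrative of the technique the docs name, not InitRunner's implementation; the function name, tokenization, and parameters are assumptions:

```python
import math

def bm25_rank(query: str, docs: dict[str, str],
              k1: float = 1.5, b: float = 0.75) -> list[str]:
    """Rank tool descriptions against a query with Okapi BM25 (sketch)."""
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    n = len(tokenized)
    avgdl = sum(len(toks) for toks in tokenized.values()) / n
    terms = query.lower().split()
    # document frequency: how many descriptions contain each query term
    df = {t: sum(1 for toks in tokenized.values() if t in toks) for t in terms}

    def score(toks: list[str]) -> float:
        s = 0.0
        for t in terms:
            f = toks.count(t)                       # term frequency in this doc
            if f == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        return s

    return sorted(tokenized, key=lambda name: score(tokenized[name]), reverse=True)
```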

Autonomous execution

Most runs are one turn: you prompt, the agent responds. Add -a and the agent keeps going. It builds a todo list, works through each item, and finishes when everything is done. You set the budget (iterations, tokens, and time) so it can't run away.

spec:
  autonomy:
    compaction: { enabled: true, threshold: 30 }
  guardrails:
    max_iterations: 15
    autonomous_token_budget: 100000
initrunner run role.yaml -a -p "Scan this repo for security issues and file a report"

Works with triggers too: set autonomous: true on any trigger and the daemon runs the full loop instead of a single response. See Autonomy · Guardrails.
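Guardrails like these amount to a bounded loop: keep stepping until the agent reports done or a budget runs out, whichever comes first. A minimal sketch, assuming a `step` callback that returns a done flag and a token count (nothing here is InitRunner's actual executor):

```python
def run_autonomous(step, max_iterations: int = 15,
                   token_budget: int = 100_000) -> tuple[str, int, int]:
    """Run `step(i)` until done or a budget is exhausted.

    `step` returns (done, tokens_used). Returns (stop_reason,
    iterations_run, total_tokens). Illustrative loop only.
    """
    tokens = 0
    for i in range(max_iterations):
        done, used = step(i)
        tokens += used
        if done:
            return ("done", i + 1, tokens)
        if tokens >= token_budget:
            return ("token_budget", i + 1, tokens)   # cost cap hit first
    return ("max_iterations", max_iterations, tokens)
```

Whichever limit trips first ends the run, so a misbehaving agent is bounded in both turns and spend.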

Architecture

initrunner/
  agent/        Role schema, loader, executor, 28 self-registering tools
  runner/       Single-shot, REPL, autonomous, daemon execution modes
  compose/      Multi-agent orchestration via compose.yaml
  triggers/     Cron, file watcher, webhook, heartbeat, Telegram, Discord
  stores/       Document + memory stores (LanceDB, zvec)
  ingestion/    Extract -> chunk -> embed -> store pipeline
  mcp/          MCP server integration and gateway
  audit/        Append-only SQLite audit trail
  services/     Shared business logic layer
  cli/          Typer + Rich CLI entry point

Built on PydanticAI for the agent framework, Pydantic for config validation, LanceDB for vector search. See CONTRIBUTING.md for dev setup.

Security

InitRunner ships with an embedded initguard policy engine. Agents get identity from their role metadata (name, team, tags, author), and every tool call and delegation is checked against your policies:

  • Tool-level authorization: agents can only call tools their policy allows
  • Delegation policy: controls which agents can hand off to which others
  • Content filtering: input guardrails with configurable content policy
  • PEP 578 sandboxing: audit hooks for dangerous operations
  • Docker isolation: optional sandboxed execution environment
  • Token budgets and rate limiting: prevent runaway costs
  • Env var scrubbing: sensitive keys stripped from subprocess environments
  • Append-only audit trail: every tool call logged to SQLite
export INITRUNNER_POLICY_DIR=./policies
initrunner run role.yaml                  # tool calls + delegation checked against policies

See Agent Policy · Security · Guardrails.
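At its core, tool-level authorization is an allowlist keyed on agent identity. The sketch below shows the shape of such a check; the policy format and function are hypothetical illustrations, and initguard's real policy schema may differ:

```python
def tool_allowed(agent: dict, tool: str, policies: list[dict]) -> bool:
    """Return True if any policy matching the agent's tags permits the tool.

    Each policy matches when its match_tags are a subset of the agent's
    tags. Default is deny. Illustrative only, not initguard's format.
    """
    for policy in policies:
        if set(policy.get("match_tags", [])) <= set(agent.get("tags", [])):
            if tool in policy.get("allow_tools", []):
                return True
    return False   # deny by default: no matching policy grants the tool
```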

User Interfaces

InitRunner Dashboard
Dashboard: agents, activity, compositions, and teams at a glance

pip install "initrunner[dashboard]"
initrunner dashboard                  # opens http://localhost:8100

Browse agents, run prompts, build compositions visually, configure reasoning patterns, and review audit trails. Also available as a native desktop window (initrunner desktop). See Dashboard docs.

More Capabilities

| Feature | Command / config | Docs |
| --- | --- | --- |
| Skills (reusable tool + prompt bundles) | spec: { skills: [../skills/web-researcher] } | Skills |
| Team mode (multi-persona on one task) | kind: Team + spec: { personas: {…} } | Team Mode |
| API server (OpenAI-compatible endpoint) | initrunner run agent.yaml --serve --port 3000 | Server |
| Multimodal (images, audio, video, docs) | initrunner run role.yaml -p "Describe" -A photo.png | Multimodal |
| Structured output (validated JSON schemas) | spec: { output: { schema: {…} } } | Structured Output |
| Evals (test agent output quality) | initrunner test role.yaml -s eval.yaml | Evals |
| MCP gateway (expose agents as MCP tools) | initrunner mcp serve agent.yaml | MCP Gateway |
| MCP toolkit (tools without an agent) | initrunner mcp toolkit | MCP Gateway |
| Capabilities (native PydanticAI features) | spec: { capabilities: [Thinking, WebSearch] } | Capabilities |
| Observability (OpenTelemetry integration) | spec: { observability: { enabled: true } } | Observability |
| Configure (switch provider/model on any role) | initrunner configure role.yaml --provider groq | Providers |

Distribution

InitHub: Browse and install community agents at hub.initrunner.ai. Publish your own with initrunner publish. See Registry.

OCI registries: Push role bundles to any OCI-compliant registry: initrunner publish oci://ghcr.io/org/my-agent --tag 1.0.0. See OCI Distribution.

Cloud deploy:

Deploy on Railway Deploy to Render

Documentation

| Area | Key docs |
| --- | --- |
| Getting started | Installation · Setup · RAG Quickstart · Tutorial · CLI Reference · Docker · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers |
| Intelligence | Reasoning · Intent Sensing · Tool Search · Autonomy |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Patterns Guide · Compose · Delegation · Team Mode · Autonomy · Triggers |
| Interfaces | Dashboard · API Server · MCP Gateway |
| Distribution | OCI Distribution · Shareable Templates |
| Operations | Security · Agent Policy · Guardrails · Audit · Reports · Evals · Doctor · Observability · CI/CD |

Examples

initrunner examples list               # 60+ agents, teams, and compose projects
initrunner examples copy code-reviewer # copy to current directory

Upgrading

Run initrunner doctor --role role.yaml to check any role file for deprecated fields, schema errors, and spec version issues. Add --fix to auto-repair, or --fix --yes for CI. See Deprecations.

Community & Contributing

Contributions welcome! See CONTRIBUTING.md for dev setup and PR guidelines.

License

Licensed under MIT or Apache-2.0, at your option.

