Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon

InitRunner


Website · Docs · InitHub · Discord

English · 简体中文 · 日本語

What's new in 2026.4.15 — encrypted credential vault · HMAC-signed audit chain · bidirectional Slack adapter (Socket Mode) · one-word starter names · full changelog

Define an agent in one YAML file. Chat with it. When it works, let it run autonomously. When you trust it, deploy it as a daemon that reacts to cron schedules, file changes, webhooks, and Telegram messages. Same file the whole way. No rewrite between prototyping and production.

initrunner run researcher -i                            # chat with it
initrunner run researcher -a -p "Audit this codebase"   # let it work alone
initrunner run researcher --daemon                      # runs 24/7, reacts to triggers

Quickstart

curl -fsSL https://initrunner.ai/install.sh | sh
initrunner setup        # wizard: pick provider, model, API key

Or: uv pip install "initrunner[recommended]" / pipx install "initrunner[recommended]". See Installation.

Starters

Run initrunner run --list for the full catalog. The model is auto-detected from your API key.

Starter What it does
helpdesk Drop your docs in, get a Q&A agent with citations and memory
scholar 3-agent pipeline: planner, web researcher, synthesizer
reviewer Multi-perspective review: architect, security, maintainer
reader Index a repo, chat about architecture, learn patterns across sessions
writer Researcher, writer, editor/fact-checker via webhook or cron
mail Monitors inbox, triages, drafts replies, alerts Slack on urgent mail

Build your own

initrunner new "a research assistant that summarizes papers"
# generates role.yaml, then asks: "Run it now? [Y/n]"

initrunner new "a regex explainer" --run "what does ^[a-z]+$ match?"
# generate and execute in one command

initrunner run --ingest ./docs/    # skip YAML entirely, just chat with your docs

Browse community agents at InitHub: initrunner search "code review" / initrunner install alice/code-reviewer.

Docker:

docker run --rm -it -e OPENAI_API_KEY ghcr.io/vladkesler/initrunner:latest run -i

One file, four modes

Here's a role file:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true

That file works four ways:

initrunner run reviewer.yaml -i              # interactive REPL
initrunner run reviewer.yaml -p "Review PR #42"  # one prompt, one response
initrunner run reviewer.yaml -a -p "Audit the whole repo"  # autonomous: plans, executes, reflects
initrunner run reviewer.yaml --daemon        # runs continuously, fires on triggers

The model: section is optional. Omit it and InitRunner auto-detects from your API key. Works with Anthropic, OpenAI, Google, Groq, Mistral, Cohere, xAI, OpenRouter, Ollama, and any OpenAI-compatible endpoint.
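Auto-detection can be pictured as a priority scan over well-known provider environment variables. This is a hypothetical sketch — the variable names and ordering here are assumptions for illustration, not InitRunner's actual detection logic:

```python
import os

# Hypothetical provider detection: check well-known provider env vars
# in a fixed priority order and return the first one that is set.
PROVIDER_ENV_VARS = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("google", "GEMINI_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("mistral", "MISTRAL_API_KEY"),
]

def detect_provider(env=os.environ):
    for provider, var in PROVIDER_ENV_VARS:
        if env.get(var):
            return provider
    return None  # no key found: fall back to an explicit model: section

print(detect_provider({"OPENAI_API_KEY": "sk-..."}))
```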

Autonomous mode

Add -a and the agent stops being a chatbot. It builds a task list, works through each item, reflects on its own progress, and finishes when everything is done. Four reasoning strategies control how: react (default), todo_driven, plan_execute, and reflexion.

spec:
  autonomy:
    compaction: { enabled: true, threshold: 30 }
  guardrails:
    max_iterations: 15
    autonomous_token_budget: 100000
    autonomous_timeout_seconds: 600

Spin guards catch the agent if it loops without making progress. History compaction summarizes old context so long runs don't blow up the token window. Budget enforcement, iteration limits, and wall-clock timeouts keep everything bounded. See Autonomy · Guardrails.
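How those bounds compose can be sketched as a single loop. The parameter names mirror the guardrails: keys above, but this is an illustration of the idea, not the runner's real code:

```python
import time

# Illustrative bounded autonomous loop: wall-clock timeout, token
# budget, and iteration cap all terminate the run independently.
def run_bounded(step, *, max_iterations=15, token_budget=100_000,
                timeout_seconds=600):
    start, tokens_used = time.monotonic(), 0
    for i in range(max_iterations):
        if time.monotonic() - start > timeout_seconds:
            return "timeout"
        done, tokens = step(i)          # one plan/act/reflect iteration
        tokens_used += tokens
        if tokens_used > token_budget:
            return "budget_exhausted"
        if done:
            return "finished"
    return "iteration_limit"

# A fake step that declares itself done on the third iteration.
print(run_bounded(lambda i: (i == 2, 1_000)))
```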

Daemon mode

Add triggers and switch to --daemon. The agent runs continuously, reacting to events. Each event fires a prompt-response cycle.

spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
    - type: telegram
      allowed_user_ids: [123456789]

initrunner run role.yaml --daemon   # runs until Ctrl+C

Six trigger types: cron, webhook, file_watch, heartbeat, telegram, discord. The daemon hot-reloads role changes without restarting and runs up to 4 triggers concurrently. See Triggers.

Autopilot

--autopilot is --daemon where every trigger gets the full autonomous loop. Someone messages your Telegram bot "find me flights from NYC to London next week." In daemon mode, you get one shot at an answer. In autopilot, the agent searches, compares options, checks dates, and sends back something worth reading.

initrunner run role.yaml --autopilot

You can also be selective. Set autonomous: true on individual triggers and leave the rest single-shot:

spec:
  triggers:
    - type: telegram
      autonomous: true          # think, research, then reply
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
      autonomous: true          # plan, gather data, write, review
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
      # default: quick single response

Memory carries across everything

Episodic, semantic, and procedural memory persist across interactive sessions, autonomous runs, and daemon triggers. After each session, consolidation extracts durable facts from the conversation using an LLM. The agent accumulates knowledge over time, not just within a single run.
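Consolidation could be sketched like this. `extract_facts` stands in for the LLM call InitRunner makes after each session; here it is a trivial stub so the example runs offline, and the eviction policy is an assumption:

```python
from collections import deque

# Hedged sketch of post-session consolidation into a semantic store.
class SemanticStore:
    def __init__(self, max_memories=1000):
        self.facts = deque(maxlen=max_memories)  # oldest facts evicted first

    def consolidate(self, transcript, extract_facts):
        # extract_facts is the LLM distillation step, stubbed below.
        for fact in extract_facts(transcript):
            self.facts.append(fact)

store = SemanticStore(max_memories=3)
stub = lambda t: [line for line in t.splitlines() if line.startswith("FACT:")]
store.consolidate("FACT: repo uses uv\nchit-chat\nFACT: CI runs on push", stub)
print(list(store.facts))
```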

Agents that learn

Point your agent at a directory. It extracts, chunks, embeds, and indexes your documents automatically. During conversation, the agent searches the index and cites what it finds. New and changed files are re-indexed on every run without manual intervention.

spec:
  ingest:
    auto: true
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    semantic:
      max_memories: 1000

cd ~/myproject
initrunner run reader -i   # indexes your code, then starts Q&A
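The chunking stage of that extract-chunk-embed-index pipeline might look roughly like this. The window and overlap sizes are illustrative, not InitRunner's actual defaults:

```python
# Sliding-window chunking with overlap, so context at chunk boundaries
# appears in two chunks and survives retrieval.
def chunk(text, size=200, overlap=50):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pieces = chunk("x" * 500, size=200, overlap=50)
print([len(p) for p in pieces])
```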

The interesting part is consolidation. After each session, an LLM reads what happened and distills it into the semantic store. Facts the agent learns during a Tuesday debugging session show up when it's reviewing code on Thursday. Shared memory across flows lets teams of agents build knowledge together. See Memory · Ingestion · RAG Quickstart.

Security ships with the framework

Most agent frameworks treat security as "add auth middleware when you get to production." InitRunner ships these controls in the box. Turn them on with config keys.

Agents accept untrusted input. Content policy engine (blocked patterns, prompt length limits, optional LLM topic classifier) and an input guard capability validate prompts before the agent starts.

Agents call tools with real consequences. InitGuard ABAC policy engine checks every tool call and delegation against CEL policies. Per-tool allow/deny glob patterns enforce argument-level permissions.

Agents run code. PEP 578 audit-hook sandbox restricts filesystem writes, blocks subprocess spawning, blocks private-IP network access, and prevents dangerous imports. Docker container sandboxing adds read-only rootfs, memory/CPU limits, and network isolation on top.
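A minimal version of the audit-hook idea, blocking only subprocess spawning. The real sandbox covers far more events (file writes, sockets, imports); this sketch shows the mechanism:

```python
import subprocess
import sys

# PEP 578 audit hooks fire on sensitive interpreter events. Raising from
# a hook aborts the operation, and hooks cannot be removed once
# installed, which is what makes them usable for sandboxing.
BLOCKED_EVENTS = {"subprocess.Popen", "os.system"}

def _guard(event, args):
    if event in BLOCKED_EVENTS:
        raise RuntimeError(f"sandbox: blocked audit event {event!r}")

sys.addaudithook(_guard)

try:
    subprocess.run(["echo", "hi"])
except RuntimeError as exc:
    print(exc)
```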

Everything is logged. Append-only SQLite audit trail with automatic secret scrubbing. Regex patterns redact GitHub tokens, AWS keys, Stripe keys, and more from both prompts and outputs.
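Scrubbing boils down to regex substitution over prompts and outputs before they hit the audit log. The two patterns below cover the public GitHub personal-access-token and AWS access-key-ID formats; InitRunner ships a larger set:

```python
import re

# Illustrative secret patterns: GitHub PATs (ghp_ + 36 alphanumerics)
# and AWS access key IDs (AKIA + 16 uppercase alphanumerics).
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scrub(text):
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(scrub("token ghp_" + "a" * 36 + " and key AKIA" + "B" * 16))
```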

All of these are opt-in via the security: config key. Roles without a security: section get safe defaults.

export INITRUNNER_POLICY_DIR=./policies
initrunner run role.yaml    # tool calls + delegation checked against policies

See Agent Policy · Security · Guardrails.

Cost control

Most frameworks track token budgets. InitRunner also enforces USD cost budgets. Set a daily or weekly dollar cap on a daemon and it stops firing triggers when the threshold is hit.

spec:
  guardrails:
    daemon_daily_cost_budget: 5.00    # USD per day
    daemon_weekly_cost_budget: 25.00  # USD per week

Cost estimation uses genai-prices to calculate actual spend per model and provider. Every run logs its cost to the audit trail. The dashboard shows cost analytics across agents and time ranges. See Cost Tracking.
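The arithmetic behind the cap is straightforward: per-million-token prices times usage, checked against the budget before each trigger fires. The prices below are made up for illustration; InitRunner gets real rates from genai-prices:

```python
# (input, output) USD per million tokens -- illustrative figures only.
PRICES_PER_MTOK = {"gpt-5-mini": (0.25, 2.00)}

def run_cost(model, input_tokens, output_tokens):
    inp, out = PRICES_PER_MTOK[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

def should_fire(spent_today, run_estimate, daily_budget=5.00):
    # A trigger only fires if the estimated run fits under the daily cap.
    return spent_today + run_estimate <= daily_budget

cost = run_cost("gpt-5-mini", 120_000, 8_000)
print(round(cost, 4), should_fire(4.99, cost))
```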

Multi-agent orchestration

Chain agents into flows. One agent's output feeds into the next. Sense routing auto-picks the right target per message using keyword scoring first (zero API calls), with an LLM tiebreak only when the keywords are ambiguous:

apiVersion: initrunner/v1
kind: Flow
metadata: { name: email-chain }
spec:
  agents:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
      sink: { type: delegate, strategy: sense, target: [researcher, responder] }
    researcher: { role: roles/researcher.yaml }
    responder: { role: roles/responder.yaml }

initrunner flow up flow.yaml
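The keyword-scoring stage of sense routing could look like this sketch. The keyword lists are illustrative, and the LLM tiebreak (not shown) would only run when the top scores are too close to call:

```python
# Zero-API-call routing: score each target agent by keyword overlap,
# and signal ambiguity when the margin between the top two is too small.
KEYWORDS = {
    "researcher": {"find", "search", "sources", "compare"},
    "responder": {"reply", "answer", "draft", "thanks"},
}

def route(message, margin=1):
    words = set(message.lower().split())
    scores = {agent: len(words & kw) for agent, kw in KEYWORDS.items()}
    best, runner_up = sorted(scores.values(), reverse=True)[:2]
    if best - runner_up < margin:
        return None  # ambiguous: escalate to the LLM tiebreak
    return max(scores, key=scores.get)

print(route("please search for sources and compare them"))
print(route("ok"))
```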

Team mode is for when you want multiple perspectives on one task without a full flow. Define personas in a single file with three strategies: sequential handoff, parallel execution, or debate (multi-round argumentation with synthesis). See Patterns Guide · Team Mode · Flow.

MCP and interfaces

Agents consume any MCP server as a tool source (stdio, SSE, streamable-http). Going the other direction, expose your agents as MCP tools so Claude Code, Cursor, and Windsurf can call them:

initrunner mcp serve agent.yaml          # agent becomes an MCP tool
initrunner mcp toolkit --tools search,sql  # expose raw tools, no LLM needed

See MCP Gateway.

InitRunner Dashboard
Dashboard: run agents, build flows, dig through audit trails

pip install "initrunner[dashboard]"
initrunner dashboard                  # opens http://localhost:8100

Also available as a native desktop window (initrunner desktop). See Dashboard.

Everything else

Feature Command / config Docs
Skills (reusable tool + prompt bundles) spec: { skills: [../skills/web-researcher] } Skills
API server (OpenAI-compatible endpoint) initrunner run agent.yaml --serve --port 3000 Server
A2A server (agent-to-agent protocol) initrunner a2a serve agent.yaml A2A
Multimodal (images, audio, video, docs) initrunner run role.yaml -p "Describe" -A photo.png Multimodal
Structured output (validated JSON schemas) spec: { output: { schema: {...} } } Structured Output
Evals (test agent output quality) initrunner test role.yaml -s eval.yaml Evals
Capabilities (native PydanticAI features) spec: { capabilities: [Thinking, WebSearch] } Capabilities
Observability (OpenTelemetry) spec: { observability: { enabled: true } } Observability
Reasoning (structured thinking patterns) spec: { reasoning: { pattern: plan_execute } } Reasoning
Tool search (on-demand tool discovery) spec: { tool_search: { enabled: true } } Tool Search
Configure (switch provider/model) initrunner configure role.yaml --provider groq Providers

Architecture

initrunner/
  agent/        Role schema, loader, executor, self-registering tools
  runner/       Single-shot, REPL, autonomous, daemon execution modes
  flow/         Multi-agent orchestration via flow.yaml
  triggers/     Cron, file watcher, webhook, heartbeat, Telegram, Discord
  stores/       Document + memory stores (LanceDB, zvec)
  ingestion/    Extract -> chunk -> embed -> store pipeline
  mcp/          MCP server integration and gateway
  audit/        Append-only SQLite audit trail with secret scrubbing
  services/     Shared business logic layer
  cli/          Typer + Rich CLI entry point

Built on PydanticAI. See CONTRIBUTING.md for dev setup.

Distribution

InitHub: Browse and install community agents at hub.initrunner.ai. Publish your own with initrunner publish.

OCI registries: Push role bundles to any OCI-compliant registry: initrunner publish oci://ghcr.io/org/my-agent --tag 1.0.0. See OCI Distribution.

Cloud deploy:

Deploy on Railway · Deploy to Render

Documentation

Area Key docs
Getting started Installation · Setup · Tutorial · CLI Reference
Quickstarts RAG · Docker · Discord Bot · Telegram Bot
Agents & tools Tools · Tool Creation · Tool Search · Skills · Providers
Intelligence Reasoning · Intent Sensing · Autonomy · Structured Output
Knowledge & memory Ingestion · Memory · Multimodal Input
Orchestration Patterns Guide · Flow · Delegation · Team Mode · Triggers
Interfaces Dashboard · API Server · MCP Gateway · A2A
Distribution OCI Distribution · Shareable Templates
Security Security Model · Agent Policy · Guardrails
Operations Audit · Cost Tracking · Reports · Evals · Doctor · Observability · CI/CD

Examples

initrunner examples list               # browse all agents, teams, and flows
initrunner examples copy code-reviewer # copy to current directory

Upgrading

Run initrunner doctor --role role.yaml to check any role file for deprecated fields, schema errors, and spec version issues. Add --fix to auto-repair. Use --flow flow.yaml to validate an entire flow and its referenced roles. See Deprecations.

Community

License

Licensed under MIT or Apache-2.0, at your option.

