
Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon


InitRunner



Website · Docs · InitHub · Discord · Issues

Define AI agents in YAML. Run them as CLI tools, Telegram bots, Discord bots, API servers, or autonomous daemons. Built-in RAG, persistent memory, 25+ built-in tools, policy-based authorization. Any model.

One YAML file is all it takes to go from idea to running agent - with document search, persistent memory, and tools wired in automatically. Start with initrunner chat for a zero-config assistant, then scale to bots, pipelines, and API servers without rewriting anything.

v1.39.0 -- Architecture cleanup: legacy tools migrated to auto-discovery, the run() god-function split into focused helpers, the services layer fully enforced, and 91 new security tests for authz and middleware. See the Changelog.

Quickstart

Install and configure:

curl -fsSL https://initrunner.ai/install.sh | sh -s -- --extras all
initrunner setup        # wizard: pick provider, model, API key

Or install with a package manager: uv tool install "initrunner[all]" / pipx install "initrunner[all]". See Installation and Setup.

Use a premade agent from InitHub

Browse hub.initrunner.ai or search from the terminal:

initrunner search "code review"                                    # find agents
initrunner install alice/code-reviewer                             # install one
initrunner run alice/code-reviewer -p "Review the latest commit"   # run it

See Registry docs for version pinning, updates, and OCI sources.

Or build your own

initrunner new "a research assistant that summarizes papers"  # AI-generates a role.yaml
initrunner chat --ingest ./docs/   # or skip YAML entirely -- chat with your docs, memory on by default

Fork a hub agent as a starting point: initrunner new --from hub:alice/code-reviewer. See Chat and Tutorial.

Or run with Docker, no install needed:

docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest chat

See the Docker guide for RAG, Telegram, API server, and more examples.

Define Agent Roles in YAML

When you need more control, define an agent as a YAML file:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true

initrunner run reviewer.yaml -p "Review the latest commit"

That's it. No Python, no boilerplate. Using Claude? pipx install "initrunner[anthropic]" and set model: { provider: anthropic, name: claude-sonnet-4-5-20250929 }.

Quick Chat demo - ask a question, send the answer to Slack

User Interfaces

|         | Terminal UI (tui)                     | Web Dashboard (ui)                          |
|---------|---------------------------------------|---------------------------------------------|
| Launch  | initrunner tui                        | initrunner ui                               |
| Install | pip install initrunner[tui]           | pip install initrunner[dashboard]           |
| Roles   | Create from template, edit via forms  | Form builder with live preview, AI generate |
| Chat    | Streaming chat with token counts      | SSE streaming with file attachments         |
| Extras  | Audit log, memory, daemon event log   | Audit detail panel, memory, trigger monitor |
| Style   | k9s-style keyboard-driven (Textual)   | Server-rendered HTML (HTMX + DaisyUI)       |

See TUI docs · Dashboard docs · API Server docs

Why InitRunner

Zero config to start. initrunner chat gives you an AI assistant with persistent memory and document search out of the box. No YAML, no setup beyond an API key.

Config, not code. Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 25+ built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, think, script, and more) work out of the box. Need a custom tool? One file, one decorator.
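The "one file, one decorator" tool pattern can be sketched as below. The decorator name, import path, and registry are stand-ins invented for illustration, not InitRunner's actual API; they only show the shape such a tool file takes.

```python
# Hypothetical sketch of a one-file custom tool. InitRunner's real
# decorator and import path may differ; TOOLS is a stand-in registry.
TOOLS: dict = {}

def tool(fn):
    """Stand-in decorator: register a function as an agent tool by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())
```

The agent sees the function name, docstring, and type hints, which is why a single decorated function is enough to describe a tool.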

Version-control your agents. Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.

Prototype to production. Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.

How It Compares

|                 | InitRunner                                  | Build from scratch                    | LangChain                              |
|-----------------|---------------------------------------------|---------------------------------------|----------------------------------------|
| Setup           | curl install script + API key               | Install 5-10 packages, write glue code | pip install langchain + adapters      |
| Agent config    | One YAML file                               | Python classes + wiring               | Python chains + config objects         |
| RAG             | --ingest ./docs/ (one flag)                 | Embed, store, retrieve, prompt - DIY  | Loaders > splitters > vectorstore chain |
| Bot deployment  | --telegram / --discord flag                 | Build bot framework integration       | Separate bot framework + adapter       |
| Model switching | --model flag, aliases, or change YAML       | Rewrite client code                   | Swap LLM class + adjust prompts        |
| Multi-agent     | compose.yaml with delegation + auto-routing | Custom orchestration layer            | Agent executor + custom routing        |

What Can You Build?

  • A Telegram bot that answers questions about your codebase - point it at your repo, deploy with one flag
  • A cron job that monitors competitors and sends daily digests - cron trigger + web scraper + Slack sink
  • A document Q&A agent for your team's knowledge base - ingest PDFs and Markdown, serve as an API
  • A code review bot triggered by new commits - file-watch trigger + git tools + structured output
  • A multi-agent pipeline with auto-routing: intake > researcher / responder / escalator - sense routing picks the right target per message (initrunner examples copy support-desk)
  • A personal assistant that remembers everything - persistent memory across sessions, no setup

Features

Start with the code-reviewer above. Each step adds one capability - no rewrites, just add a section to your YAML.

Knowledge & memory

Point at your docs for RAG - a search_documents tool is auto-registered. Add memory for persistent recall across sessions:

spec:
  ingest:
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    store_path: ./memory.db
    semantic:
      max_memories: 1000

initrunner ingest role.yaml   # extract | chunk | embed | store
initrunner run role.yaml -i --resume   # search_documents + memory ready

See Ingestion · Memory · RAG Quickstart.
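Of the four ingest stages (extract | chunk | embed | store), chunking is the one worth understanding first: documents are split into overlapping windows so each embedding keeps local context. A minimal character-based sketch, purely illustrative and not InitRunner's implementation:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows for embedding.

    Illustrative only: a production chunker is typically token-aware
    and format-aware, not a fixed character window like this one.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk shares `overlap` characters with its neighbor, so a sentence cut at a window boundary still appears whole in one of the two adjacent chunks.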

Triggers

Turn it into a daemon that reacts to events - cron, file watch, webhook, heartbeat, Telegram, or Discord:

spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."

initrunner run role.yaml --daemon   # runs until stopped

See Triggers · Telegram · Discord.
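The schedule "0 9 * * 1" above fires Mondays at 09:00. A toy matcher (illustrative only; it handles plain numeric and * fields, not ranges, lists, or steps, and is not InitRunner's scheduler) shows how the five cron fields map onto a timestamp:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Toy cron matcher for plain numeric/* fields only."""
    minute, hour, dom, month, dow = expr.split()

    def ok(field: str, value: int) -> bool:
        return field == "*" or int(field) == value

    # cron counts Sunday as 0; Python's weekday() counts Monday as 0
    return (ok(minute, dt.minute) and ok(hour, dt.hour) and ok(dom, dt.day)
            and ok(month, dt.month) and ok(dow, (dt.weekday() + 1) % 7))
```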

Compose agents

Orchestrate multiple agents into a pipeline - one agent's output feeds into the next. Use strategy: sense to auto-route messages to the right target:

apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-pipeline }
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
      sink: { type: delegate, strategy: sense, target: [researcher, responder] }
    researcher: { role: roles/researcher.yaml }
    responder: { role: roles/responder.yaml }

Run with initrunner compose up pipeline.yaml. See Compose · Delegation.
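Conceptually, strategy: sense classifies each incoming message and forwards it to the best-matching target from the list. A keyword-scoring toy sketches the idea; InitRunner's actual intent sensing is model-based, and the keywords below are invented for illustration:

```python
def sense_route(message: str, targets: dict[str, list[str]]) -> str:
    """Toy router: pick the target whose keyword list best matches the message."""
    words = message.lower().split()
    scores = {name: sum(kw in words for kw in kws) for name, kws in targets.items()}
    return max(scores, key=scores.get)

# Hypothetical keyword profiles for the pipeline's two delegation targets
targets = {
    "researcher": ["find", "investigate", "compare"],
    "responder": ["reply", "thanks", "answer"],
}
```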

Security & Authorization

Built-in security with an optional Cerbos agent-as-principal policy engine. Agents derive a Cerbos identity from role.metadata (name, team, tags, author), and tool-level authorization and delegation policy are enforced across CLI, compose, daemon, API, and pipeline:

pip install initrunner[authz]
export INITRUNNER_CERBOS_ENABLED=true
export INITRUNNER_CERBOS_AGENT_CHECKS=true  # per-agent identity checks
initrunner run role.yaml   # tool calls + delegation checked against Cerbos policies

Also includes content filtering, PEP 578 sandboxing, Docker isolation, token budgets, and rate limiting out of the box. See Agent Policy · Security · Guardrails.
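Under this model, a policy might look like the sketch below. The fence follows Cerbos's standard resource-policy schema, but the resource kind (tool), action name, and attribute names are assumptions for illustration, not InitRunner's documented mapping:

```yaml
# policies/tool_policy.yaml - hypothetical resource/attribute names
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: default
  resource: tool              # assumed resource kind for agent tool calls
  rules:
    - actions: ["invoke"]
      effect: EFFECT_ALLOW
      roles: ["agent"]
      condition:
        match:
          expr: request.resource.attr.name != "shell"   # allow all tools except shell
```

Because Cerbos denies by default, the shell tool simply never matches an allow rule.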

More capabilities

| Feature                                                   | Command / config                                     | Docs              |
|-----------------------------------------------------------|------------------------------------------------------|-------------------|
| Skills - reusable tool + prompt bundles, auto-discovered  | spec: { skills: [../skills/web-researcher] }         | Skills            |
| Team mode - multi-persona on one task                     | kind: Team + spec: { personas: {…} }                 | Team Mode         |
| API server - OpenAI-compatible endpoint                   | initrunner run agent.yaml --serve --port 3000        | Server            |
| Multimodal - images, audio, video, docs                   | initrunner run role.yaml -p "Describe" -A photo.png  | Multimodal        |
| Structured output - validated JSON schemas                | spec: { output: { schema: {…} } }                    | Structured Output |
| Evals - test agent output quality                         | initrunner test role.yaml -s eval.yaml               | Evals             |
| MCP gateway - expose agents as MCP tools                  | initrunner mcp serve agent.yaml                      | MCP Gateway       |
| MCP toolkit - tools without an agent                      | initrunner mcp toolkit                               | MCP Gateway       |
| Configure - switch provider/model on any role             | initrunner configure role.yaml --provider groq       | Providers         |
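Because --serve exposes an OpenAI-compatible endpoint, any OpenAI client can talk to a served agent with a standard chat-completions payload. The port matches the --serve example above; treating the agent's name as the model id is an assumption for illustration:

```python
import json

# Chat-completions request for an agent started with:
#   initrunner run agent.yaml --serve --port 3000
payload = {
    "model": "agent",  # assumed: the agent's name stands in for the model id
    "messages": [{"role": "user", "content": "Review the latest commit"}],
    "stream": False,
}
body = json.dumps(payload)
# POST `body` to http://localhost:3000/v1/chat/completions
# with Content-Type: application/json (plus auth if configured).
```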

See Tutorial for a guided walkthrough.

Distribution & Deployment

InitHub

initrunner search "code review"                          # browse InitHub
initrunner install alice/code-reviewer                   # install from InitHub
initrunner install alice/code-reviewer@1.2.0             # pin a version

See Registry.

initrunner login                        # browser-based device code auth
initrunner login --token <TOKEN>        # CI/headless
initrunner publish                      # publish from current agent directory

See Publishing Guide.

OCI registry

Publish and install complete role bundles to any OCI-compliant container registry:

initrunner publish oci://ghcr.io/org/my-agent --tag 1.0.0       # from current dir
initrunner publish ./my-agent/ oci://ghcr.io/org/my-agent --tag 1.0.0  # or pass a path
initrunner install oci://ghcr.io/org/my-agent:1.0.0

See OCI Distribution.

Cloud deploy

One-click deploys are available for Railway and Render. For Fly.io and others, see the Cloud Deployment Guide.

Documentation

| Area               | Key docs                                                                                                     |
|--------------------|--------------------------------------------------------------------------------------------------------------|
| Getting started    | Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Docker · Discord Bot · Telegram Bot |
| Agents & tools     | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers                                  |
| Knowledge & memory | Ingestion · Memory · Multimodal Input                                                                         |
| Orchestration      | Compose · Delegation · Team Mode · Autonomy · Triggers · Intent Sensing                                       |
| Interfaces         | Dashboard · TUI · API Server · MCP Gateway                                                                    |
| Distribution       | OCI Distribution · Shareable Templates                                                                        |
| Operations         | Security · Agent Policy · Guardrails · Audit · Reports · Evals · Doctor · Observability · CI/CD               |

See docs/ for the full index.

Examples

initrunner examples list               # see all available examples
initrunner examples copy code-reviewer # copy to current directory

The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines.

Community & Contributing

Contributions welcome! See CONTRIBUTING.md for dev setup and PR guidelines. For security vulnerabilities, see SECURITY.md.

License

Licensed under MIT or Apache-2.0, at your option.

