
Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon


InitRunner

InitRunner mascot

Python 3.11+ PyPI version PyPI downloads GitHub stars Docker pulls MIT License Tests v1.12.0 Ruff PydanticAI Website Discord

Website · Docs · Discord · Issues

Define AI agents in YAML. Run them as CLI tools, Telegram bots, Discord bots, API servers, or autonomous daemons. Built-in RAG, persistent memory, 40+ tools. Any model.

One YAML file is all it takes to go from idea to running agent - with document search, persistent memory, and tools wired in automatically. Start with initrunner chat for a zero-config assistant, then scale to bots, pipelines, and API servers without rewriting anything.

v1.13.0 -- Docker container sandbox for tool execution, shared streaming and transport modules, tool architecture refactoring for MCP reuse, and security hardening. See the Changelog.

30-Second Quickstart

curl -fsSL https://initrunner.ai/install.sh | sh -s -- --extras all

Then run the setup wizard:

initrunner setup

The wizard walks you through provider, API key, model, and first agent — you'll have a working role in under a minute.

Prefer a package manager? uv tool install "initrunner[all]", pipx install "initrunner[all]", or pip install "initrunner[all]" all work. Note that bare pip install may fail on modern Linux due to PEP 668 — use uv, pipx, or the shell installer instead.

Try It

initrunner chat --ingest ./docs/   # chat with your docs, memory on by default
>>> summarize the getting started guide
The guide covers installation, creating your first agent with a role.yaml file, ...

>>> what retrieval strategies does it mention?
The docs describe three strategies: full-text search, semantic similarity, ...

>>> /quit

No YAML, no config files. Add --tool-profile all to enable every built-in tool.

Define Agent Roles in YAML

When you need more control, define an agent as a YAML file:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
initrunner run reviewer.yaml -p "Review the latest commit"

That's it. No Python, no boilerplate. Using Claude? pipx install "initrunner[anthropic]" and set model: { provider: anthropic, name: claude-opus-4-6 }.

InitRunner Quick Chat
Quick Chat - ask a question, send the answer to Slack

Why InitRunner

Zero config to start. initrunner chat gives you an AI assistant with persistent memory and document search out of the box. No YAML, no setup beyond an API key.

Config, not code. Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 20+ built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, think, script, and more) work out of the box. Need a custom tool? One file, one decorator.

Version-control your agents. Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.

Prototype to production. Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.

How It Compares

InitRunner vs. building from scratch vs. LangChain:

  • Setup: curl -fsSL https://initrunner.ai/install.sh | sh + API key, vs. installing 5-10 packages and writing glue code, vs. pip install langchain + adapters
  • Agent config: one YAML file, vs. Python classes + wiring, vs. Python chains + config objects
  • RAG: --ingest ./docs/ (one flag), vs. DIY embed/store/retrieve/prompt, vs. a loaders > splitters > vectorstore chain
  • Bot deployment: a --telegram / --discord flag, vs. building a bot framework integration, vs. a separate bot framework + adapter
  • Model switching: change model.provider in YAML, vs. rewriting client code, vs. swapping the LLM class and adjusting prompts
  • Multi-agent: compose.yaml with delegation, vs. a custom orchestration layer, vs. an agent executor + custom routing

What Can You Build?

  • A Telegram bot that answers questions about your codebase - point it at your repo, deploy with one flag
  • A cron job that monitors competitors and sends daily digests - cron trigger + web scraper + Slack sink
  • A document Q&A agent for your team's knowledge base - ingest PDFs and Markdown, serve as an API
  • A code review bot triggered by new commits - file-watch trigger + git tools + structured output
  • A multi-agent pipeline: inbox watcher > triager > responder - define in compose.yaml, run with one command
  • A personal assistant that remembers everything - persistent memory across sessions, no setup

Quickstart

1. Install

curl -fsSL https://initrunner.ai/install.sh | sh -s -- --extras all

Or with a package manager:

uv tool install "initrunner[all]"   # recommended (fast, PEP 668-safe)
pipx install "initrunner[all]"      # also PEP 668-safe
pip install "initrunner[all]"       # may fail on modern Linux (PEP 668)

Common extras: anthropic (Claude), ingest (PDF/DOCX), dashboard (web UI), all (everything). See Installation docs for the full extras table and platform notes.

2. Run the setup wizard

initrunner setup

The wizard guides you through:

  • Provider — OpenAI, Anthropic, Google, Groq, Mistral, Cohere, Bedrock, xAI, or Ollama
  • API key — auto-detects existing keys, validates, and saves to ~/.initrunner/.env
  • Model — pick from a curated list for your provider
  • Intent — chatbot, knowledge/RAG, memory, Telegram bot, Discord bot, API agent, daemon, or bundled example
  • Tools — select and configure tools with intent-specific defaults
  • Connectivity test — verifies everything works before you start

At the end you get a ready-to-run role.yaml and a configured initrunner chat session. See Setup docs for all flags and non-interactive usage.

Alternative: manual configuration

If you prefer to skip the wizard, set your API key directly:

export OPENAI_API_KEY=sk-...          # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-...   # Claude

You can also store keys in ~/.initrunner/.env — it's loaded automatically by all commands. Environment variables set in the shell take precedence over .env values.
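That precedence can be sketched in a few lines of Python. This is a simplified model of the usual dotenv behavior (existing environment variables are not overridden), not InitRunner's actual loader:

```python
import os

# Simplified model of .env loading: keys already exported in the
# shell are left untouched; only missing keys are filled in.
dotenv_values = {"OPENAI_API_KEY": "sk-from-dotenv"}

os.environ["OPENAI_API_KEY"] = "sk-from-shell"  # exported in the shell
for key, value in dotenv_values.items():
    os.environ.setdefault(key, value)  # no-op if the key already exists

print(os.environ["OPENAI_API_KEY"])  # sk-from-shell
```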

3. Start chatting

initrunner chat                        # zero-config chat with persistent memory
initrunner chat --resume               # resume previous session + auto-recall memories
initrunner chat --ingest ./docs/       # chat with your documents (instant RAG)
initrunner chat --tool-profile all     # chat with all tools enabled
initrunner chat --telegram             # one-command Telegram bot
initrunner chat --telegram --allowed-user-ids 123456789  # restrict access
initrunner run role.yaml -p "Hello!"   # one-shot prompt
initrunner run role.yaml -i            # interactive REPL

Embedding note: --ingest uses OpenAI embeddings by default (text-embedding-3-small). Anthropic and other non-OpenAI users also need OPENAI_API_KEY set, or can switch embedding providers in their role YAML. See RAG Quickstart.
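Switching the embedding provider is a role-YAML change. The exact keys are documented in the RAG Quickstart; purely as an illustration (the field names below are not authoritative), the shape is roughly:

```yaml
spec:
  ingest:
    sources: ["./docs/**/*.md"]
    embedding:            # illustrative keys - see RAG Quickstart for the real schema
      provider: ollama
      model: nomic-embed-text
```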

Memory is on by default - the agent remembers facts across sessions. Use --no-memory to disable. See Chat docs for all options, and CLI Reference for the full command list.

From Simple to Powerful

Start with the code-reviewer above. Each step adds one capability - no rewrites, just add a section to your YAML.

1. Add knowledge & memory

Point at your docs for RAG - a search_documents tool is auto-registered. Add memory for persistent recall across sessions:

spec:
  ingest:
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    store_path: ./memory.db
    max_memories: 1000
initrunner ingest role.yaml   # extract | chunk | embed | store
initrunner run role.yaml -i --resume   # search_documents + memory ready

See Ingestion · Memory · RAG Quickstart.

2. Add skills

Compose reusable bundles of tools and prompts. Each skill is defined by a SKILL.md file - reference it by path, either the skill directory or the file itself:

spec:
  skills:
    - ../skills/web-researcher
    - ../skills/code-tools.md

The agent inherits each skill's tools and prompt instructions automatically. Run initrunner init --skill my-skill to scaffold one. See Skills.
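The real SKILL.md schema is documented in Skills; purely as an illustration (none of these fields are authoritative), a skill file might look like:

```markdown
# web-researcher

Search the web before answering and cite the pages you used.

## Tools
- web_search
- http
```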

3. Add triggers

Turn it into a daemon that reacts to events - cron, file watch, webhook, Telegram, or Discord:

spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
initrunner daemon role.yaml   # runs until stopped

See Triggers · Telegram · Discord.

4. Compose agents

Orchestrate multiple agents into a pipeline - one agent's output feeds into the next:

apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-pipeline }
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager: { role: roles/triager.yaml }

Run with initrunner compose up pipeline.yaml. See Compose · Delegation.

5. Team up agents

Run multiple personas on the same task in a single file - each persona sees the previous output:

apiVersion: initrunner/v1
kind: Team
metadata:
  name: code-review-team
  description: Multi-perspective code review
spec:
  model: { provider: openai, name: gpt-5-mini }
  personas:
    architect: "review for design patterns, SOLID principles, and architecture issues"
    security: "find security vulnerabilities, injection risks, auth issues"
    maintainer: "check readability, naming, test coverage gaps, docs"
  tools:
    - type: filesystem
      root_path: .
      read_only: true
    - type: git
      repo_path: .
      read_only: true
  guardrails:
    max_tokens_per_run: 50000
    team_token_budget: 150000
initrunner run team.yaml -p "Review the latest commit"

See Team Mode.

6. Serve as an API

Turn any agent into an OpenAI-compatible endpoint - drop-in for Open WebUI, Vercel AI SDK, or any OpenAI client:

initrunner serve support-agent.yaml --port 3000

See Server docs for client examples and Open WebUI integration.
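Because the endpoint speaks the OpenAI chat-completions protocol, any OpenAI client works once its base URL points at http://localhost:3000/v1. The request body is the standard shape, sketched below in plain Python (the support-agent model name is an assumption; use whatever role you served):

```python
import json

# Standard OpenAI chat-completions request body, as any client would
# POST to http://localhost:3000/v1/chat/completions.
payload = {
    "model": "support-agent",  # assumed role name; match what you served
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
    ],
    "stream": False,
}

print(json.dumps(payload, indent=2))
```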

7. Attach files and media

Send images, audio, video, and documents alongside your prompts:

initrunner run role.yaml -p "Describe this image" -A photo.png
initrunner run role.yaml -p "Compare these" -A before.png -A after.png

In the REPL, use /attach to queue files. See Multimodal Input.

8. Get structured output

Force the agent to return validated JSON matching a schema - ideal for pipelines and automation. Add an output section with a JSON schema and the agent's response is validated against it:

initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# => {"status": "approved", "amount": 250.0}
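For the classifier above, the output section might look roughly like this. The surrounding field names are illustrative (see Structured Output for the actual schema); the inner JSON-schema body is standard:

```yaml
spec:
  output:                 # illustrative keys - see Structured Output docs
    schema:
      type: object
      properties:
        status: { type: string, enum: [approved, rejected] }
        amount: { type: number }
      required: [status, amount]
```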

See Structured Output for inline schemas, external schema files, and pipeline integration.

9. Test your agents

Define eval suites in YAML to verify output quality, tool usage, and performance:

# eval-suite.yaml
cases:
  - name: search-test
    prompt: "Find info about Docker"
    assertions:
      - type: tool_calls
        expected: ["web_search"]
      - type: llm_judge
        criteria: ["Response explains Docker clearly"]
      - type: max_latency
        limit_ms: 30000
initrunner test role.yaml -s eval-suite.yaml -v -j 4 -o results.json

See Evals.

10. Expose as MCP tools

Turn any agent into an MCP server that Claude Code, Claude Desktop, Gemini CLI, Codex CLI, Cursor, and Windsurf can call directly:

initrunner mcp serve researcher.yaml writer.yaml reviewer.yaml

Each role becomes a tool. Configure in Claude Desktop's claude_desktop_config.json:

{
  "mcpServers": {
    "initrunner": {
      "command": "initrunner",
      "args": ["mcp", "serve", "roles/agent.yaml"]
    }
  }
}

See MCP Gateway docs for SSE/HTTP transports, pass-through mode, and multi-agent setups.

MCP Toolkit (no LLM required)

Expose InitRunner tools directly as an MCP server — no agent, no API key needed for default tools:

initrunner mcp toolkit                        # web search, page fetch, CSV, datetime
initrunner mcp toolkit --tools sql --tools http  # add opt-in tools
initrunner mcp toolkit -c toolkit.yaml        # YAML config with env var interpolation

Compatible with Claude Code, Claude Desktop, Gemini CLI, Codex CLI, Cursor, and Windsurf. Add to your MCP config (.mcp.json for Claude Code, claude_desktop_config.json for Claude Desktop, etc.):

{
  "mcpServers": {
    "initrunner-toolkit": {
      "command": "initrunner",
      "args": ["mcp", "toolkit"]
    }
  }
}

Community Roles

Browse, install, and run roles shared by the community:

initrunner search "code review"                          # browse the community index
initrunner install code-reviewer                         # download, validate, confirm
initrunner install user/repo:roles/agent.yaml@v1.0       # install from any GitHub repo
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i # run an installed role

Every install shows a security summary and asks for confirmation. See docs/agents/registry.md for details.

Docker

Available on GHCR and Docker Hub. The image ships with all extras pre-installed.

# Interactive chat with memory
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest chat

# Chat with cherry-picked tools
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data -v .:/workspace \
    ghcr.io/vladkesler/initrunner:latest \
    chat --tools git --tools filesystem

# Enable all built-in tools at once
#   chat --tool-profile all

# Chat with your documents (instant RAG)
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data -v ./docs:/docs \
    ghcr.io/vladkesler/initrunner:latest chat --ingest /docs

# Ingest documents for a role, then query
docker run --rm -e OPENAI_API_KEY \
    -v ./roles:/roles -v ./docs:/docs -v initrunner-data:/data \
    ghcr.io/vladkesler/initrunner:latest ingest /roles/rag-agent.yaml
docker run --rm -it -e OPENAI_API_KEY \
    -v ./roles:/roles -v initrunner-data:/data \
    ghcr.io/vladkesler/initrunner:latest run /roles/rag-agent.yaml -i

# Telegram bot
docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest \
    chat --telegram

# OpenAI-compatible API server on port 8000
docker run -d -e OPENAI_API_KEY -v ./roles:/roles \
    -p 8000:8000 ghcr.io/vladkesler/initrunner:latest \
    serve /roles/my-agent.yaml --host 0.0.0.0

# Web dashboard at http://localhost:8420
docker run -d -e OPENAI_API_KEY -v ./roles:/roles -v initrunner-data:/data \
    -p 8420:8420 ghcr.io/vladkesler/initrunner:latest ui --role-dir /roles

Or use docker compose up with the included docker-compose.yml (copy examples/.env.example to .env first). Example roles are seeded automatically on first boot. To use your own roles, uncomment the ./roles:/data/roles volume mount in the compose file.

Docker Sandbox for Tool Execution

Shell, Python, and script tools can run inside Docker containers for kernel-level isolation — network namespaces, cgroups, read-only rootfs, memory/CPU limits. Enable it in your role YAML:

security:
  docker:
    enabled: true        # run tools in containers
    image: python:3.12-slim
    network: none        # no network access
    memory_limit: 256m
    cpu_limit: 1.0
    read_only_rootfs: true
    bind_mounts:
      - source: ./data
        target: /data
        read_only: true

Run initrunner doctor to verify Docker is available. See docs/security/docker-sandbox.md for the full configuration reference.

Cloud Deploy

Deploy the InitRunner dashboard to a cloud platform with one click:

Deploy on Railway · Deploy to Render

Fly.io: See Cloud Deployment Guide.

All deploys include the web dashboard with example roles pre-loaded. Set your LLM provider API key and a dashboard password during setup. See the full guide.

User Interfaces

Terminal UI (tui) vs. Web Dashboard (ui):

  • Launch: initrunner tui · initrunner ui
  • Install: pip install "initrunner[tui]" · pip install "initrunner[dashboard]"
  • Roles: create from template, edit via forms · form builder with live preview, AI generate
  • Chat: streaming chat with token counts · SSE streaming with file attachments
  • Extras: audit log, memory, daemon event log · audit detail panel, memory, trigger monitor
  • Style: k9s-style keyboard-driven (Textual) · server-rendered HTML (HTMX + DaisyUI)

See TUI docs · Dashboard docs · API Server docs

Documentation

Area Key docs
Getting started Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Discord Bot · Telegram Bot
Agents & tools Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers
Knowledge & memory Ingestion · Memory · Multimodal Input
Orchestration Compose · Delegation · Team Mode · Autonomy · Triggers · Intent Sensing
Interfaces Dashboard · TUI · API Server · MCP Gateway
Operations Security · Guardrails · Audit · Reports · Evals · Doctor · Observability · CI/CD

See docs/ for the full index.

Examples

initrunner examples list               # see all available examples
initrunner examples copy code-reviewer # copy to current directory

The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines covering code review, support bots, data analysis, web monitoring, and multi-agent orchestration.

Community & Support

If you find InitRunner useful, consider giving it a star - it helps others discover the project.

Contributing

Contributions welcome! See CONTRIBUTING.md for dev setup, PR guidelines, and quality checks. Share your roles by pushing to a public GitHub repo - anyone can install them with initrunner install user/repo. For security vulnerabilities, see SECURITY.md.

License

MIT - see LICENSE for details.



