InitRunner

Define AI agent roles in YAML and run them anywhere -- CLI, API server, or autonomous daemon.

Website · Docs · Discord · Issues
Define AI agents in YAML. Run them as CLI tools, Telegram bots, Discord bots, API servers, or autonomous daemons. Built-in RAG, persistent memory, 40+ tools. Any model.
One YAML file is all it takes to go from idea to running agent - with document search, persistent memory, and tools wired in automatically. Start with initrunner chat for a zero-config assistant, then scale to bots, pipelines, and API servers without rewriting anything.
v1.20.0 -- Model aliases (`~/.initrunner/models.yaml`), `--model` runtime override on all commands, inline `provider:model` syntax. See the Changelog.
Contents
- 30-Second Quickstart
- Try It
- Define Agent Roles in YAML
- Why InitRunner
- How It Compares
- What Can You Build?
- Quickstart
- From Simple to Powerful
- Community Roles
- OCI Registry Distribution
- Docker
- Cloud Deploy
- User Interfaces
- Documentation
- Examples
- Community & Support
- Contributing
- License
30-Second Quickstart
curl -fsSL https://initrunner.ai/install.sh | sh -s -- --extras all
Then run the setup wizard:
initrunner setup
The wizard walks you through provider, API key, model, and first agent — you'll have a working role in under a minute.
Prefer a package manager?
`uv tool install "initrunner[all]"`, `pipx install "initrunner[all]"`, or `pip install "initrunner[all]"` all work. Note that bare `pip install` may fail on modern Linux due to PEP 668 — use `uv`, `pipx`, or the shell installer instead.
Try It
initrunner chat --ingest ./docs/ # chat with your docs, memory on by default
>>> summarize the getting started guide
The guide covers installation, creating your first agent with a role.yaml file, ...
>>> what retrieval strategies does it mention?
The docs describe three strategies: full-text search, semantic similarity, ...
>>> /quit
No YAML, no config files. Add --tool-profile all to enable every built-in tool.
Define Agent Roles in YAML
When you need more control, define an agent as a YAML file:
apiVersion: initrunner/v1
kind: Agent
metadata:
name: code-reviewer
description: Reviews code for bugs and style issues
spec:
role: |
You are a senior engineer. Review code for correctness and readability.
Use git tools to examine changes and read files for context.
model: { provider: openai, name: gpt-5-mini }
tools:
- type: git
repo_path: .
- type: filesystem
root_path: .
read_only: true
initrunner run reviewer.yaml -p "Review the latest commit"
That's it. No Python, no boilerplate. Using Claude? pipx install "initrunner[anthropic]" and set model: { provider: anthropic, name: claude-opus-4-6 }.
*Quick Chat - ask a question, send the answer to Slack*
Why InitRunner
Zero config to start. initrunner chat gives you an AI assistant with persistent memory and document search out of the box. No YAML, no setup beyond an API key.
Config, not code. Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 20+ built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, think, script, and more) work out of the box. Need a custom tool? One file, one decorator.
Version-control your agents. Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.
Prototype to production. Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.
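Concretely, the same `reviewer.yaml` from above moves between modes by changing only the command, never the YAML (these are the commands documented throughout this README; they assume InitRunner is installed and an API key is set):

```sh
initrunner run reviewer.yaml -p "Review the latest commit"   # one-shot CLI
initrunner run reviewer.yaml -i                              # interactive REPL
initrunner daemon reviewer.yaml                              # trigger-driven daemon
initrunner serve reviewer.yaml --port 3000                   # OpenAI-compatible API
```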
How It Compares
| | InitRunner | Build from scratch | LangChain |
|---|---|---|---|
| Setup | `curl -fsSL https://initrunner.ai/install.sh \| sh` + API key | Install 5-10 packages, write glue code | `pip install langchain` + adapters |
| Agent config | One YAML file | Python classes + wiring | Python chains + config objects |
| RAG | `--ingest ./docs/` (one flag) | Embed, store, retrieve, prompt - DIY | Loaders > splitters > vectorstore chain |
| Bot deployment | `--telegram` / `--discord` flag | Build bot framework integration | Separate bot framework + adapter |
| Model switching | `--model` flag, aliases, or change YAML | Rewrite client code | Swap LLM class + adjust prompts |
| Multi-agent | `compose.yaml` with delegation + auto-routing | Custom orchestration layer | Agent executor + custom routing |
What Can You Build?
- A Telegram bot that answers questions about your codebase - point it at your repo, deploy with one flag
- A cron job that monitors competitors and sends daily digests - cron trigger + web scraper + Slack sink
- A document Q&A agent for your team's knowledge base - ingest PDFs and Markdown, serve as an API
- A code review bot triggered by new commits - file-watch trigger + git tools + structured output
- A multi-agent pipeline with auto-routing: intake > researcher / responder / escalator - sense routing picks the right target per message (`initrunner examples copy support-desk`)
- A personal assistant that remembers everything - persistent memory across sessions, no setup
Quickstart
1. Install
curl -fsSL https://initrunner.ai/install.sh | sh -s -- --extras all
Or with a package manager:
uv tool install "initrunner[all]" # recommended (fast, PEP 668-safe)
pipx install "initrunner[all]" # also PEP 668-safe
pip install "initrunner[all]" # may fail on modern Linux (PEP 668)
Common extras: anthropic (Claude), ingest (PDF/DOCX), dashboard (web UI), all (everything). See Installation docs for the full extras table and platform notes.
2. Run the setup wizard
initrunner setup
The wizard guides you through:
- Provider — OpenAI, Anthropic, Google, Groq, Mistral, Cohere, Bedrock, xAI, or Ollama
- API key — auto-detects existing keys, validates, and saves to `~/.initrunner/.env`
- Model — pick from a curated list for your provider
- Intent — chatbot, knowledge/RAG, memory, Telegram bot, Discord bot, API agent, daemon, or bundled example
- Tools — select and configure tools with intent-specific defaults
- Connectivity test — verifies everything works before you start
At the end you get a ready-to-run role.yaml and a configured initrunner chat session. See Setup docs for all flags and non-interactive usage.
Alternative: manual configuration
If you prefer to skip the wizard, set your API key directly:
export OPENAI_API_KEY=sk-... # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-... # Claude
You can also store keys in ~/.initrunner/.env — it's loaded automatically by all commands. Environment variables set in the shell take precedence over .env values.
3. Start chatting
initrunner chat # zero-config chat with persistent memory
initrunner chat --resume # resume previous session + auto-recall memories
initrunner chat --ingest ./docs/ # chat with your documents (instant RAG)
initrunner chat --tool-profile all # chat with all tools enabled
initrunner chat --model smart # use a model alias (defined in ~/.initrunner/models.yaml)
initrunner chat --telegram # one-command Telegram bot
initrunner chat --telegram --allowed-user-ids 123456789 # restrict access
initrunner run role.yaml -p "Hello!" # one-shot prompt
initrunner run role.yaml -i # interactive REPL
Embedding note: `--ingest` uses OpenAI embeddings by default (`text-embedding-3-small`). Anthropic and other non-OpenAI users also need `OPENAI_API_KEY` set, or can switch embedding providers in their role YAML. See RAG Quickstart.
Memory is on by default - the agent remembers facts across sessions. Use --no-memory to disable. See Chat docs for all options, and CLI Reference for the full command list.
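The `--model smart` alias above resolves through `~/.initrunner/models.yaml`, which maps short names to concrete models using the inline `provider:model` syntax. The exact file schema below is an assumption for illustration; check the Changelog and CLI Reference for the real format:

```yaml
# ~/.initrunner/models.yaml -- illustrative sketch; key names are assumed
aliases:
  smart: openai:gpt-5-mini          # used as: initrunner chat --model smart
  claude: anthropic:claude-opus-4-6
```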
From Simple to Powerful
Start with the code-reviewer above. Each step adds one capability - no rewrites, just add a section to your YAML.
1. Add knowledge & memory
Point at your docs for RAG - a search_documents tool is auto-registered. Add memory for persistent recall across sessions:
spec:
ingest:
sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
memory:
store_path: ./memory.db
semantic:
max_memories: 1000
initrunner ingest role.yaml # extract | chunk | embed | store
initrunner run role.yaml -i --resume # search_documents + memory ready
See Ingestion · Memory · RAG Quickstart.
2. Add skills
Compose reusable bundles of tools and prompts. Each skill is a SKILL.md file - reference it by path:
spec:
skills:
- ../skills/web-researcher
- ../skills/code-tools.md
The agent inherits each skill's tools and prompt instructions automatically. Run initrunner init --skill my-skill to scaffold one. See Skills.
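A skill referenced by directory path resolves to a `SKILL.md` inside it. The sketch below shows the general shape; the frontmatter field names are assumptions for illustration, not the documented schema — see the Skills docs for the real format:

```markdown
<!-- ../skills/web-researcher/SKILL.md (illustrative sketch) -->
---
name: web-researcher
description: Search the web and summarize findings with sources
---
When researching, prefer primary sources and always cite the URLs you used.
```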
3. Add triggers
Turn it into a daemon that reacts to events - cron, file watch, webhook, heartbeat, Telegram, or Discord:
spec:
triggers:
- type: cron
schedule: "0 9 * * 1"
prompt: "Generate the weekly status report."
- type: file_watch
paths: [./src]
prompt_template: "File changed: {path}. Review it."
initrunner daemon role.yaml # runs until stopped
See Triggers · Telegram · Discord.
4. Compose agents
Orchestrate multiple agents into a pipeline - one agent's output feeds into the next. Use strategy: sense to auto-route messages to the right target:
apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-pipeline }
spec:
services:
inbox-watcher:
role: roles/inbox-watcher.yaml
sink: { type: delegate, target: triager }
triager:
role: roles/triager.yaml
sink: { type: delegate, strategy: sense, target: [researcher, responder] }
researcher: { role: roles/researcher.yaml }
responder: { role: roles/responder.yaml }
Run with initrunner compose up pipeline.yaml. The triager's output is auto-routed to the best-matching target using intent sensing - zero-cost keyword scoring with optional LLM tiebreak. Use strategy: keyword for zero API calls, or omit for fan-out to all targets. See Compose · Delegation.
5. Team up agents
Run multiple personas on the same task in a single file - each persona sees the previous output:
apiVersion: initrunner/v1
kind: Team
metadata:
name: code-review-team
description: Multi-perspective code review
spec:
model: { provider: openai, name: gpt-5-mini }
personas:
architect: "review for design patterns, SOLID principles, and architecture issues"
security: "find security vulnerabilities, injection risks, auth issues"
maintainer: "check readability, naming, test coverage gaps, docs"
tools:
- type: filesystem
root_path: .
read_only: true
- type: git
repo_path: .
read_only: true
guardrails:
max_tokens_per_run: 50000
team_token_budget: 150000
initrunner run team.yaml -p "Review the latest commit"
See Team Mode.
6. Serve as an API
Turn any agent into an OpenAI-compatible endpoint - drop-in for Open WebUI, Vercel AI SDK, or any OpenAI client:
initrunner serve support-agent.yaml --port 3000
See Server docs for client examples and Open WebUI integration.
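Because the endpoint speaks the OpenAI chat-completions protocol, a stock client can simply point at it. Sketch using the official `openai` Python SDK; the base URL matches the command above, but the model name and api-key handling are assumptions not tested against a live server:

```python
from openai import OpenAI

# Point a standard OpenAI client at the local InitRunner server.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed-locally")
resp = client.chat.completions.create(
    model="support-agent",  # assumption: served roles are addressed by name
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(resp.choices[0].message.content)
```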
7. Attach files and media
Send images, audio, video, and documents alongside your prompts:
initrunner run role.yaml -p "Describe this image" -A photo.png
initrunner run role.yaml -p "Compare these" -A before.png -A after.png
In the REPL, use /attach to queue files. See Multimodal Input.
8. Get structured output
Force the agent to return validated JSON matching a schema - ideal for pipelines and automation. Add an output section with a JSON schema and the agent's response is validated against it:
initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# => {"status": "approved", "amount": 250.0}
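An output section for the classifier above might look like the following. The key names (`output`, `json_schema`) are assumptions for illustration; see the Structured Output docs for the real schema:

```yaml
spec:
  output:                 # illustrative sketch; consult Structured Output docs
    json_schema:
      type: object
      properties:
        status: { type: string, enum: [approved, rejected, review] }
        amount: { type: number }
      required: [status, amount]
```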
See Structured Output for inline schemas, external schema files, and pipeline integration.
9. Test your agents
Define eval suites in YAML to verify output quality, tool usage, and performance:
# eval-suite.yaml
cases:
- name: search-test
prompt: "Find info about Docker"
assertions:
- type: tool_calls
expected: ["web_search"]
- type: llm_judge
criteria: ["Response explains Docker clearly"]
- type: max_latency
limit_ms: 30000
initrunner test role.yaml -s eval-suite.yaml -v -j 4 -o results.json
See Evals.
10. Expose as MCP tools
Turn any agent into an MCP server that Claude Code, Claude Desktop, Gemini CLI, Codex CLI, Cursor, and Windsurf can call directly:
initrunner mcp serve researcher.yaml writer.yaml reviewer.yaml
Each role becomes a tool. Configure in Claude Desktop's claude_desktop_config.json:
{
"mcpServers": {
"initrunner": {
"command": "initrunner",
"args": ["mcp", "serve", "roles/agent.yaml"]
}
}
}
See MCP Gateway docs for SSE/HTTP transports, pass-through mode, and multi-agent setups.
MCP Toolkit (no LLM required)
Expose InitRunner tools directly as an MCP server — no agent, no API key needed for default tools:
initrunner mcp toolkit # web search, page fetch, CSV, datetime
initrunner mcp toolkit --tools sql --tools http # add opt-in tools
initrunner mcp toolkit -c toolkit.yaml # YAML config with env var interpolation
Compatible with Claude Code, Claude Desktop, Gemini CLI, Codex CLI, Cursor, and Windsurf. Add to your MCP config (.mcp.json for Claude Code, claude_desktop_config.json for Claude Desktop, etc.):
{
"mcpServers": {
"initrunner-toolkit": {
"command": "initrunner",
"args": ["mcp", "toolkit"]
}
}
}
Community Roles
Browse, install, and run roles shared by the community:
initrunner search "code review" # browse the community index
initrunner install code-reviewer # download, validate, confirm
initrunner install user/repo:roles/agent.yaml@v1.0 # install from any GitHub repo
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i # run an installed role
Every install shows a security summary and asks for confirmation. See docs/agents/registry.md for details.
OCI Registry Distribution
Publish and install complete role bundles (with skills, schemas, and data files) to any OCI-compliant container registry:
initrunner login ghcr.io # authenticate
initrunner publish role.yaml oci://ghcr.io/org/my-agent --tag 1.0.0 # publish
initrunner install oci://ghcr.io/org/my-agent:1.0.0 # install
initrunner info oci://ghcr.io/org/my-agent:1.0.0 # inspect
Bundles include the role definition, resolved skills, schema files, and any explicitly included data files. OCI references use the oci:// prefix and work alongside existing GitHub and community index installs. See OCI Distribution docs for authentication, bundle format, and full command reference.
Docker
Available on GHCR and Docker Hub. The image ships with all extras pre-installed.
# Interactive chat with memory
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest chat
# Chat with cherry-picked tools
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data -v .:/workspace \
ghcr.io/vladkesler/initrunner:latest \
chat --tools git --tools filesystem
# Enable all built-in tools at once
# chat --tool-profile all
# Chat with your documents (instant RAG)
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data -v ./docs:/docs \
ghcr.io/vladkesler/initrunner:latest chat --ingest /docs
# Ingest documents for a role, then query
docker run --rm -e OPENAI_API_KEY \
-v ./roles:/roles -v ./docs:/docs -v initrunner-data:/data \
ghcr.io/vladkesler/initrunner:latest ingest /roles/rag-agent.yaml
docker run --rm -it -e OPENAI_API_KEY \
-v ./roles:/roles -v initrunner-data:/data \
ghcr.io/vladkesler/initrunner:latest run /roles/rag-agent.yaml -i
# Telegram bot
docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest \
chat --telegram
# OpenAI-compatible API server on port 8000
docker run -d -e OPENAI_API_KEY -v ./roles:/roles \
-p 8000:8000 ghcr.io/vladkesler/initrunner:latest \
serve /roles/my-agent.yaml --host 0.0.0.0
# Web dashboard at http://localhost:8420
docker run -d -e OPENAI_API_KEY -v ./roles:/roles -v initrunner-data:/data \
-p 8420:8420 ghcr.io/vladkesler/initrunner:latest ui --role-dir /roles
Or use docker compose up with the included docker-compose.yml (copy examples/.env.example to .env first). Example roles are seeded automatically on first boot. To use your own roles, uncomment the ./roles:/data/roles volume mount in the compose file.
Docker Sandbox for Tool Execution
Shell, Python, and script tools can run inside Docker containers for kernel-level isolation — network namespaces, cgroups, read-only rootfs, memory/CPU limits. Enable it in your role YAML:
security:
docker:
enabled: true # run tools in containers
image: python:3.12-slim
network: none # no network access
memory_limit: 256m
cpu_limit: 1.0
read_only_rootfs: true
bind_mounts:
- source: ./data
target: /data
read_only: true
Run initrunner doctor to verify Docker is available. See docs/security/docker-sandbox.md for the full configuration reference.
Cloud Deploy
Deploy the InitRunner dashboard to a cloud platform with one click:
Fly.io: See Cloud Deployment Guide.
All deploys include the web dashboard with example roles pre-loaded. Set your LLM provider API key and a dashboard password during setup. See the full guide.
User Interfaces
| | Terminal UI (`tui`) | Web Dashboard (`ui`) |
|---|---|---|
| Launch | `initrunner tui` | `initrunner ui` |
| Install | `pip install initrunner[tui]` | `pip install initrunner[dashboard]` |
| Roles | Create from template, edit via forms | Form builder with live preview, AI generate |
| Chat | Streaming chat with token counts | SSE streaming with file attachments |
| Extras | Audit log, memory, daemon event log | Audit detail panel, memory, trigger monitor |
| Style | k9s-style keyboard-driven (Textual) | Server-rendered HTML (HTMX + DaisyUI) |
See TUI docs · Dashboard docs · API Server docs
Documentation
| Area | Key docs |
|---|---|
| Getting started | Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Compose · Delegation · Team Mode · Autonomy · Triggers · Intent Sensing |
| Interfaces | Dashboard · TUI · API Server · MCP Gateway |
| Distribution | OCI Distribution · Shareable Templates |
| Operations | Security · Guardrails · Audit · Reports · Evals · Doctor · Observability · CI/CD |
See docs/ for the full index.
Examples
initrunner examples list # see all available examples
initrunner examples copy code-reviewer # copy to current directory
The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines covering code review, support bots, data analysis, web monitoring, and multi-agent orchestration.
Community & Support
- Discord - InitRunner Hub - Chat, ask questions, share roles
- GitHub Issues - Bug reports and feature requests
- Changelog - Release notes and version history
If you find InitRunner useful, consider giving it a star - it helps others discover the project.
Contributing
Contributions welcome! See CONTRIBUTING.md for dev setup, PR guidelines, and quality checks. Share your roles by pushing to a public GitHub repo - anyone can install them with initrunner install user/repo. For security vulnerabilities, see SECURITY.md.
License
MIT - see LICENSE for details.