Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon
InitRunner
Website · Docs · Discord · Issues
Define AI agents in YAML. Run them as CLI tools, Telegram bots, Discord bots, API servers, or autonomous daemons. Built-in RAG, persistent memory, 40+ tools. Any model.
One YAML file is all it takes to go from idea to running agent - with document search, persistent memory, and tools wired in automatically. Start with initrunner chat for a zero-config assistant, then scale to bots, pipelines, and API servers without rewriting anything.
v1.10.0 -- Think tool, script tool, and MCP gateway mode. Expose agents to Claude Desktop, Cursor, and any MCP client. See the Changelog.
30-Second Quickstart
pip install "initrunner[all]"
export OPENAI_API_KEY=sk-...
initrunner chat --ingest ./my-docs/
That's it. You have an AI agent that knows your docs and remembers across sessions.
--ingest embeds documents with OpenAI by default. Using another provider? See RAG Quickstart to configure embeddings.
Try It
initrunner chat --ingest ./docs/ # chat with your docs, memory on by default
>>> summarize the getting started guide
The guide covers installation, creating your first agent with a role.yaml file, ...
>>> what retrieval strategies does it mention?
The docs describe three strategies: full-text search, semantic similarity, ...
>>> /quit
No YAML, no config files. Add --tool-profile all to enable every built-in tool.
Define Agent Roles in YAML
When you need more control, define an agent as a YAML file:
apiVersion: initrunner/v1
kind: Agent
metadata:
name: code-reviewer
description: Reviews code for bugs and style issues
spec:
role: |
You are a senior engineer. Review code for correctness and readability.
Use git tools to examine changes and read files for context.
model: { provider: openai, name: gpt-5-mini }
tools:
- type: git
repo_path: .
- type: filesystem
root_path: .
read_only: true
initrunner run reviewer.yaml -p "Review the latest commit"
That's it. No Python, no boilerplate. Using Claude? pip install "initrunner[anthropic]" and set model: { provider: anthropic, name: claude-opus-4-6 }.
Quick Chat - ask a question, send the answer to Slack
Why InitRunner
Zero config to start. initrunner chat gives you an AI assistant with persistent memory and document search out of the box. No YAML, no setup beyond an API key.
Config, not code. Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 18 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, think, script, and more) work out of the box. Need a custom tool? One file, one decorator.
Version-control your agents. Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.
Prototype to production. Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.
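The "one file, one decorator" claim for custom tools can be sketched as follows. Note this is a hypothetical illustration: the decorator below is a stand-in written for this example, not InitRunner's actual API - see the Tool Creation docs for the real import path and signature.

```python
# Hypothetical sketch of a custom tool. The `tool` decorator here is a
# stand-in defined locally; InitRunner's real decorator and module path
# may differ (see the Tool Creation docs).

def tool(fn):
    """Stand-in for the framework's tool decorator (assumption)."""
    fn.is_tool = True  # mark the function so a registry could discover it
    return fn

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

print(word_count("one file, one decorator"))  # -> 4
```

The point is the shape, not the specifics: a plain function plus a decorator, with the docstring and type hints doubling as the tool's description for the model.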
How It Compares
| | InitRunner | Build from scratch | LangChain |
|---|---|---|---|
| Setup | pip install initrunner + API key | Install 5-10 packages, write glue code | pip install langchain + adapters |
| Agent config | One YAML file | Python classes + wiring | Python chains + config objects |
| RAG | --ingest ./docs/ (one flag) | Embed, store, retrieve, prompt - DIY | Loaders > splitters > vectorstore chain |
| Bot deployment | --telegram / --discord flag | Build bot framework integration | Separate bot framework + adapter |
| Model switching | Change model.provider in YAML | Rewrite client code | Swap LLM class + adjust prompts |
| Multi-agent | compose.yaml with delegation | Custom orchestration layer | Agent executor + custom routing |
What Can You Build?
- A Telegram bot that answers questions about your codebase - point it at your repo, deploy with one flag
- A cron job that monitors competitors and sends daily digests - cron trigger + web scraper + Slack sink
- A document Q&A agent for your team's knowledge base - ingest PDFs and Markdown, serve as an API
- A code review bot triggered by new commits - file-watch trigger + git tools + structured output
- A multi-agent pipeline: inbox watcher > triager > responder - define in compose.yaml, run with one command
- A personal assistant that remembers everything - persistent memory across sessions, no setup
Quickstart
1. Install
curl -fsSL https://initrunner.ai/install.sh | sh
Or with a package manager:
pip install "initrunner[all]" # everything included
pip install initrunner # core only (OpenAI)
uv tool install initrunner # or with uv
Common extras: anthropic (Claude), ingest (PDF/DOCX), dashboard (web UI), all (everything). See Installation docs for the full extras table and platform notes.
2. Set your API key
export OPENAI_API_KEY=sk-... # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-... # Claude
You can also store keys in ~/.initrunner/.env - it's loaded automatically by all commands. Environment variables set in the shell take precedence over .env values.
Or run initrunner setup - it walks through provider, key, and first role interactively, and stores the key in ~/.initrunner/.env for you.
3. Start chatting
initrunner chat # zero-config chat with persistent memory
initrunner chat --resume # resume previous session + auto-recall memories
initrunner chat --ingest ./docs/ # chat with your documents (instant RAG)
initrunner chat --tool-profile all # chat with all tools enabled
initrunner chat --telegram # one-command Telegram bot
initrunner chat --telegram --allowed-user-ids 123456789 # restrict access
initrunner run role.yaml -p "Hello!" # one-shot prompt
initrunner run role.yaml -i # interactive REPL
Embedding note: --ingest uses OpenAI embeddings by default (text-embedding-3-small). Anthropic and other non-OpenAI users also need OPENAI_API_KEY set, or can switch embedding providers in their role YAML. See RAG Quickstart.
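A role-YAML embedding override might look like the following sketch. The embedding block and its field names are assumptions for illustration, not the documented schema - check the RAG Quickstart for the actual keys.

```yaml
spec:
  ingest:
    sources: ["./docs/**/*.md"]
    # Hypothetical embedding override - field names are assumptions:
    embedding:
      provider: openai
      model: text-embedding-3-small
```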
Memory is on by default - the agent remembers facts across sessions. Use --no-memory to disable. See Chat docs for all options, and CLI Reference for the full command list.
From Simple to Powerful
Start with the code-reviewer above. Each step adds one capability - no rewrites, just add a section to your YAML.
1. Add knowledge & memory
Point at your docs for RAG - a search_documents tool is auto-registered. Add memory for persistent recall across sessions:
spec:
ingest:
sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
memory:
store_path: ./memory.db
max_memories: 1000
initrunner ingest role.yaml # extract | chunk | embed | store
initrunner run role.yaml -i --resume # search_documents + memory ready
See Ingestion · Memory · RAG Quickstart.
2. Add skills
Compose reusable bundles of tools and prompts. Each skill is a SKILL.md file - reference it by path:
spec:
skills:
- ../skills/web-researcher
- ../skills/code-tools.md
The agent inherits each skill's tools and prompt instructions automatically. Run initrunner init --skill my-skill to scaffold one. See Skills.
3. Add triggers
Turn it into a daemon that reacts to events - cron, file watch, webhook, Telegram, or Discord:
spec:
triggers:
- type: cron
schedule: "0 9 * * 1"
prompt: "Generate the weekly status report."
- type: file_watch
paths: [./src]
prompt_template: "File changed: {path}. Review it."
initrunner daemon role.yaml # runs until stopped
See Triggers · Telegram · Discord.
4. Compose agents
Orchestrate multiple agents into a pipeline - one agent's output feeds into the next:
apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-pipeline }
spec:
services:
inbox-watcher:
role: roles/inbox-watcher.yaml
sink: { type: delegate, target: triager }
triager: { role: roles/triager.yaml }
Run with initrunner compose up pipeline.yaml. See Compose · Delegation.
5. Team up agents
Run multiple personas on the same task in a single file - each persona sees the previous output:
apiVersion: initrunner/v1
kind: Team
metadata:
name: code-review-team
description: Multi-perspective code review
spec:
model: { provider: openai, name: gpt-5-mini }
personas:
architect: "review for design patterns, SOLID principles, and architecture issues"
security: "find security vulnerabilities, injection risks, auth issues"
maintainer: "check readability, naming, test coverage gaps, docs"
tools:
- type: filesystem
root_path: .
read_only: true
- type: git
repo_path: .
read_only: true
guardrails:
max_tokens_per_run: 50000
team_token_budget: 150000
initrunner run team.yaml -p "Review the latest commit"
See Team Mode.
6. Serve as an API
Turn any agent into an OpenAI-compatible endpoint - drop-in for Open WebUI, Vercel AI SDK, or any OpenAI client:
initrunner serve support-agent.yaml --port 3000
See Server docs for client examples and Open WebUI integration.
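Because the server speaks the standard OpenAI chat-completions protocol, any OpenAI-compatible client can talk to it. The sketch below builds a request by hand with the standard library; the model id "support-agent" is an assumption (check the Server docs for how roles map to model ids), and the POST itself is left commented out since it requires the server to be running.

```python
import json

# Base URL from `initrunner serve support-agent.yaml --port 3000`
BASE_URL = "http://localhost:3000/v1"

# Standard OpenAI chat-completions request body; the model id
# "support-agent" is an assumption about how the role is exposed.
payload = {
    "model": "support-agent",
    "messages": [{"role": "user", "content": "How do I reset my password?"}],
}
body = json.dumps(payload)

# To actually send it (server must be running):
#   import urllib.request
#   req = urllib.request.Request(
#       f"{BASE_URL}/chat/completions",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
print(body)
```

Any SDK that accepts a custom base URL (the official openai package, Vercel AI SDK, etc.) works the same way - point it at the server and use the role as the model name.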
7. Attach files and media
Send images, audio, video, and documents alongside your prompts:
initrunner run role.yaml -p "Describe this image" -A photo.png
initrunner run role.yaml -p "Compare these" -A before.png -A after.png
In the REPL, use /attach to queue files. See Multimodal Input.
8. Get structured output
Force the agent to return validated JSON matching a schema - ideal for pipelines and automation. Add an output section with a JSON schema and the agent's response is validated against it:
initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# => {"status": "approved", "amount": 250.0}
See Structured Output for inline schemas, external schema files, and pipeline integration.
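An output section matching the classifier example above might be sketched like this. The exact key names are assumptions - see the Structured Output docs for the documented schema - but the schema body is plain JSON Schema, which the example response would validate against.

```yaml
spec:
  # Hypothetical output section - key names are assumptions:
  output:
    schema:
      type: object
      properties:
        status: { type: string, enum: [approved, rejected] }
        amount: { type: number }
      required: [status, amount]
```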
9. Test your agents
Define eval suites in YAML to verify output quality, tool usage, and performance:
# eval-suite.yaml
cases:
- name: search-test
prompt: "Find info about Docker"
assertions:
- type: tool_calls
expected: ["web_search"]
- type: llm_judge
criteria: ["Response explains Docker clearly"]
- type: max_latency
limit_ms: 30000
initrunner test role.yaml -s eval-suite.yaml -v -j 4 -o results.json
See Evals.
10. Expose as MCP tools
Turn any agent into an MCP server that Claude Desktop, Claude Code, and Cursor can call directly:
initrunner mcp serve researcher.yaml writer.yaml reviewer.yaml
Each role becomes a tool. Configure in Claude Desktop's claude_desktop_config.json:
{
"mcpServers": {
"initrunner": {
"command": "initrunner",
"args": ["mcp", "serve", "roles/agent.yaml"]
}
}
}
See MCP Gateway docs for SSE/HTTP transports, pass-through mode, and multi-agent setups.
Community Roles
Browse, install, and run roles shared by the community:
initrunner search "code review" # browse the community index
initrunner install code-reviewer # download, validate, confirm
initrunner install user/repo:roles/agent.yaml@v1.0 # install from any GitHub repo
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i # run an installed role
Every install shows a security summary and asks for confirmation. See docs/agents/registry.md for details.
Docker
Available on GHCR and Docker Hub. The image ships with all extras pre-installed.
# Interactive chat with memory
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest chat
# Chat with cherry-picked tools
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data -v .:/workspace \
ghcr.io/vladkesler/initrunner:latest \
chat --tools git --tools filesystem
# Enable all built-in tools at once
# chat --tool-profile all
# Chat with your documents (instant RAG)
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data -v ./docs:/docs \
ghcr.io/vladkesler/initrunner:latest chat --ingest /docs
# Ingest documents for a role, then query
docker run --rm -e OPENAI_API_KEY \
-v ./roles:/roles -v ./docs:/docs -v initrunner-data:/data \
ghcr.io/vladkesler/initrunner:latest ingest /roles/rag-agent.yaml
docker run --rm -it -e OPENAI_API_KEY \
-v ./roles:/roles -v initrunner-data:/data \
ghcr.io/vladkesler/initrunner:latest run /roles/rag-agent.yaml -i
# Telegram bot
docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest \
chat --telegram
# OpenAI-compatible API server on port 8000
docker run -d -e OPENAI_API_KEY -v ./roles:/roles \
-p 8000:8000 ghcr.io/vladkesler/initrunner:latest \
serve /roles/my-agent.yaml --host 0.0.0.0
# Web dashboard at http://localhost:8420
docker run -d -e OPENAI_API_KEY -v ./roles:/roles -v initrunner-data:/data \
-p 8420:8420 ghcr.io/vladkesler/initrunner:latest ui --role-dir /roles
Or use docker compose up with the included docker-compose.yml (copy examples/.env.example to .env first). Example roles are seeded automatically on first boot. To use your own roles, uncomment the ./roles:/data/roles volume mount in the compose file.
Cloud Deploy
Deploy the InitRunner dashboard to a cloud platform with one click:
- Fly.io - see the Cloud Deployment Guide.
All deploys include the web dashboard with example roles pre-loaded. Set your LLM provider API key and a dashboard password during setup.
User Interfaces
| | Terminal UI (tui) | Web Dashboard (ui) |
|---|---|---|
| Launch | initrunner tui | initrunner ui |
| Install | pip install initrunner[tui] | pip install initrunner[dashboard] |
| Roles | Create from template, edit via forms | Form builder with live preview, AI generate |
| Chat | Streaming chat with token counts | SSE streaming with file attachments |
| Extras | Audit log, memory, daemon event log | Audit detail panel, memory, trigger monitor |
| Style | k9s-style keyboard-driven (Textual) | Server-rendered HTML (HTMX + DaisyUI) |
See TUI docs · Dashboard docs · API Server docs
Documentation
| Area | Key docs |
|---|---|
| Getting started | Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Compose · Delegation · Team Mode · Autonomy · Triggers · Intent Sensing |
| Interfaces | Dashboard · TUI · API Server · MCP Gateway |
| Operations | Security · Guardrails · Audit · Reports · Evals · Doctor · Observability · CI/CD |
See docs/ for the full index.
Examples
initrunner examples list # see all available examples
initrunner examples copy code-reviewer # copy to current directory
The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines covering code review, support bots, data analysis, web monitoring, and multi-agent orchestration.
Community & Support
- Discord - InitRunner Hub - Chat, ask questions, share roles
- GitHub Issues - Bug reports and feature requests
- Changelog - Release notes and version history
If you find InitRunner useful, consider giving it a star - it helps others discover the project.
Contributing
Contributions welcome! See CONTRIBUTING.md for dev setup, PR guidelines, and quality checks. Share your roles by pushing to a public GitHub repo - anyone can install them with initrunner install user/repo. For security vulnerabilities, see SECURITY.md.
License
MIT - see LICENSE for details.