
InitRunner

Python 3.11+ · MIT License

Website · Docs · Discord · Issues

Define AI agents in YAML. Run them as CLI tools, Telegram bots, Discord bots, API servers, or autonomous daemons. Built-in RAG, persistent memory, 40+ tools. Any model.

One YAML file is all it takes to go from idea to running agent - with document search, persistent memory, and tools wired in automatically. Start with initrunner chat for a zero-config assistant, then scale to bots, pipelines, and API servers without rewriting anything.

v1.5.0 - Stable release. See the Changelog for details.

30-Second Quickstart

pip install "initrunner[all]"
export OPENAI_API_KEY=sk-...
initrunner chat --ingest ./my-docs/

That's it. You have an AI agent that knows your docs and remembers across sessions.

--ingest embeds documents with OpenAI by default. Using another provider? See RAG Quickstart to configure embeddings.

Try It

initrunner chat --ingest ./docs/   # chat with your docs, memory on by default
>>> summarize the getting started guide
The guide covers installation, creating your first agent with a role.yaml file, ...

>>> what retrieval strategies does it mention?
The docs describe three strategies: full-text search, semantic similarity, ...

>>> /quit

No YAML, no config files. Add --tool-profile all to enable every built-in tool.

Define Agent Roles in YAML

When you need more control, define an agent as a YAML file:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true

initrunner run reviewer.yaml -p "Review the latest commit"

No Python, no boilerplate. Using Claude? Run pip install "initrunner[anthropic]" and set model: { provider: anthropic, name: claude-opus-4-6 }.

Interactive REPL - chat with any agent from the terminal

Why InitRunner

Zero config to start. initrunner chat gives you an AI assistant with persistent memory and document search out of the box. No YAML, no setup beyond an API key.

Config, not code. Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 16 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, and more) work out of the box. Need a custom tool? One file, one decorator.

Version-control your agents. Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.

Prototype to production. Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.

How It Compares

|                 | InitRunner                       | Build from scratch                     | LangChain                               |
|-----------------|----------------------------------|----------------------------------------|-----------------------------------------|
| Setup           | pip install initrunner + API key | Install 5-10 packages, write glue code | pip install langchain + adapters        |
| Agent config    | One YAML file                    | Python classes + wiring                | Python chains + config objects          |
| RAG             | --ingest ./docs/ (one flag)      | Embed, store, retrieve, prompt - DIY   | Loaders > splitters > vectorstore chain |
| Bot deployment  | --telegram / --discord flag      | Build bot framework integration        | Separate bot framework + adapter        |
| Model switching | Change model.provider in YAML    | Rewrite client code                    | Swap LLM class + adjust prompts         |
| Multi-agent     | compose.yaml with delegation     | Custom orchestration layer             | Agent executor + custom routing         |

What Can You Build?

  • A Telegram bot that answers questions about your codebase - point it at your repo, deploy with one flag
  • A cron job that monitors competitors and sends daily digests - cron trigger + web scraper + Slack sink
  • A document Q&A agent for your team's knowledge base - ingest PDFs and Markdown, serve as an API
  • A code review bot triggered by new commits - file-watch trigger + git tools + structured output
  • A multi-agent pipeline: inbox watcher > triager > responder - define in compose.yaml, run with one command
  • A personal assistant that remembers everything - persistent memory across sessions, no setup

Quickstart

1. Install

curl -fsSL https://initrunner.ai/install.sh | sh

Or with a package manager:

pip install "initrunner[all]"       # everything included
pip install initrunner              # core only (OpenAI)
uv tool install initrunner          # or with uv

Common extras: anthropic (Claude), ingest (PDF/DOCX), dashboard (web UI), all (everything). See Installation docs for the full extras table and platform notes.

2. Set your API key

export OPENAI_API_KEY=sk-...          # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-...   # Claude

You can also store keys in ~/.initrunner/.env - it's loaded automatically by all commands. Environment variables set in the shell take precedence over .env values.

Or run initrunner setup - it walks through provider, key, and first role interactively, and stores the key in ~/.initrunner/.env for you.

3. Start chatting

initrunner chat                        # zero-config chat with persistent memory
initrunner chat --resume               # resume previous session + auto-recall memories
initrunner chat --ingest ./docs/       # chat with your documents (instant RAG)
initrunner chat --tool-profile all     # chat with all tools enabled
initrunner chat --telegram             # one-command Telegram bot
initrunner chat --telegram --allowed-user-ids 123456789  # restrict access
initrunner run role.yaml -p "Hello!"   # one-shot prompt
initrunner run role.yaml -i            # interactive REPL

Embedding note: --ingest uses OpenAI embeddings by default (text-embedding-3-small). Anthropic and other non-OpenAI users also need OPENAI_API_KEY set, or can switch embedding providers in their role YAML. See RAG Quickstart.

Memory is on by default - the agent remembers facts across sessions. Use --no-memory to disable. See Chat docs for all options, and CLI Reference for the full command list.

From Simple to Powerful

Start with the code-reviewer above. Each step adds one capability - no rewrites, just add a section to your YAML.

1. Add knowledge & memory

Point at your docs for RAG - a search_documents tool is auto-registered. Add memory for persistent recall across sessions:

spec:
  ingest:
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    store_path: ./memory.db
    max_memories: 1000

initrunner ingest role.yaml            # extract | chunk | embed | store
initrunner run role.yaml -i --resume   # search_documents + memory ready

See Ingestion · Memory · RAG Quickstart.
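To give a feel for the chunk step of that pipeline, here is a deliberately naive sketch (fixed-size character windows with overlap; InitRunner's real chunker is configurable and smarter about boundaries):

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap, ready for embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

pieces = chunk("x" * 1200, size=500, overlap=50)
print(len(pieces))   # 3 chunks: chars 0-500, 450-950, 900-1200
```

Overlap keeps a sentence that straddles a boundary retrievable from at least one chunk.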

2. Add skills

Compose reusable bundles of tools and prompts. Each skill is a SKILL.md file - reference it by path:

spec:
  skills:
    - ../skills/web-researcher
    - ../skills/code-tools.md

The agent inherits each skill's tools and prompt instructions automatically. Run initrunner init --skill my-skill to scaffold one. See Skills.

3. Add triggers

Turn it into a daemon that reacts to events - cron, file watch, webhook, Telegram, or Discord:

spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."

initrunner daemon role.yaml   # runs until stopped

See Triggers · Telegram · Discord.
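The cron schedule above, "0 9 * * 1", fires Mondays at 09:00. A quick stdlib sketch of what that expression matches (illustrative only, not InitRunner's scheduler):

```python
from datetime import datetime

def matches_weekly_report(dt: datetime) -> bool:
    # "0 9 * * 1" = minute 0, hour 9, any day-of-month, any month, weekday 1 (Monday)
    return dt.minute == 0 and dt.hour == 9 and dt.isoweekday() == 1

print(matches_weekly_report(datetime(2024, 1, 1, 9, 0)))   # True (2024-01-01 is a Monday)
print(matches_weekly_report(datetime(2024, 1, 2, 9, 0)))   # False (a Tuesday)
```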

4. Compose agents

Orchestrate multiple agents into a pipeline - one agent's output feeds into the next:

apiVersion: initrunner/v1
kind: Compose
metadata: { name: email-pipeline }
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager: { role: roles/triager.yaml }

Run with initrunner compose up pipeline.yaml. See Compose · Delegation.

5. Serve as an API

Turn any agent into an OpenAI-compatible endpoint - drop-in for Open WebUI, Vercel AI SDK, or any OpenAI client:

initrunner serve support-agent.yaml --port 3000

See Server docs for client examples and Open WebUI integration.
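Because the server speaks the OpenAI wire format, any HTTP client works. A minimal sketch of the request body a client would POST to /v1/chat/completions (the model name and port here are assumptions carried over from the serve command above):

```python
import json

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

payload = chat_payload("support-agent", "How do I reset my password?")
print(json.dumps(payload))
# POST this to http://localhost:3000/v1/chat/completions with any HTTP client,
# or point the official openai package at base_url="http://localhost:3000/v1".
```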

6. Attach files and media

Send images, audio, video, and documents alongside your prompts:

initrunner run role.yaml -p "Describe this image" -A photo.png
initrunner run role.yaml -p "Compare these" -A before.png -A after.png

In the REPL, use /attach to queue files. See Multimodal Input.

7. Get structured output

Force the agent to return validated JSON matching a schema - ideal for pipelines and automation. Add an output section with a JSON schema and the agent's response is validated against it:

initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# => {"status": "approved", "amount": 250.0}

See Structured Output for inline schemas, external schema files, and pipeline integration.
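The payoff is that downstream code can trust the shape of the response. A stdlib-only sketch of the kind of check a pipeline consumer might run (InitRunner itself validates against a full JSON Schema; the field names here are taken from the example output above):

```python
import json

EXPECTED = {"status": str, "amount": float}   # shape implied by the example output

def parse_result(raw: str) -> dict:
    """Parse agent output and fail loudly if a field is missing or mistyped."""
    data = json.loads(raw)
    for field, typ in EXPECTED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"{field!r} must be a {typ.__name__}")
    return data

result = parse_result('{"status": "approved", "amount": 250.0}')
print(result["status"], result["amount"])   # approved 250.0
```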

Community Roles

Browse, install, and run roles shared by the community:

initrunner search "code review"                          # browse the community index
initrunner install code-reviewer                         # download, validate, confirm
initrunner install user/repo:roles/agent.yaml@v1.0       # install from any GitHub repo
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i # run an installed role

Every install shows a security summary and asks for confirmation. See docs/agents/registry.md for details.
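The user/repo:path@ref spec above decomposes predictably. A hypothetical parser sketch (this regex is ours, for illustration; it is not InitRunner's implementation):

```python
import re

# user/repo[:path][@ref] - hypothetical breakdown of the GitHub install spec
SPEC = re.compile(r"^(?P<user>[^/]+)/(?P<repo>[^:@]+)(?::(?P<path>[^@]+))?(?:@(?P<ref>.+))?$")

m = SPEC.match("user/repo:roles/agent.yaml@v1.0")
print(m.group("user"), m.group("repo"), m.group("path"), m.group("ref"))
# user repo roles/agent.yaml v1.0
```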

Docker

Available on GHCR and Docker Hub. The image ships with all extras pre-installed.

# Interactive chat with memory
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest chat

# Chat with cherry-picked tools
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data -v .:/workspace \
    ghcr.io/vladkesler/initrunner:latest \
    chat --tools git --tools filesystem

# Enable all built-in tools at once
#   chat --tool-profile all

# Chat with your documents (instant RAG)
docker run --rm -it -e OPENAI_API_KEY \
    -v initrunner-data:/data -v ./docs:/docs \
    ghcr.io/vladkesler/initrunner:latest chat --ingest /docs

# Ingest documents for a role, then query
docker run --rm -e OPENAI_API_KEY \
    -v ./roles:/roles -v ./docs:/docs -v initrunner-data:/data \
    ghcr.io/vladkesler/initrunner:latest ingest /roles/rag-agent.yaml
docker run --rm -it -e OPENAI_API_KEY \
    -v ./roles:/roles -v initrunner-data:/data \
    ghcr.io/vladkesler/initrunner:latest run /roles/rag-agent.yaml -i

# Telegram bot
docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
    -v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest \
    chat --telegram

# OpenAI-compatible API server on port 8000
docker run -d -e OPENAI_API_KEY -v ./roles:/roles \
    -p 8000:8000 ghcr.io/vladkesler/initrunner:latest \
    serve /roles/my-agent.yaml --host 0.0.0.0

# Web dashboard at http://localhost:8420
docker run -d -e OPENAI_API_KEY -v ./roles:/roles -v initrunner-data:/data \
    -p 8420:8420 ghcr.io/vladkesler/initrunner:latest ui --role-dir /roles

Or use docker compose up with the included docker-compose.yml (copy examples/.env.example to .env first). See hello-world.yaml for a starter role.

User Interfaces

|         | Terminal UI (tui)                    | Web Dashboard (ui)                          |
|---------|--------------------------------------|---------------------------------------------|
| Launch  | initrunner tui                       | initrunner ui                               |
| Install | pip install initrunner[tui]          | pip install initrunner[dashboard]           |
| Roles   | Create from template, edit via forms | Form builder with live preview, AI generate |
| Chat    | Streaming chat with token counts     | SSE streaming with file attachments         |
| Extras  | Audit log, memory, daemon event log  | Audit detail panel, memory, trigger monitor |
| Style   | k9s-style keyboard-driven (Textual)  | Server-rendered HTML (HTMX + DaisyUI)       |

See TUI docs · Dashboard docs · API Server docs

Documentation

| Area               | Key docs |
|--------------------|----------|
| Getting started    | Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Discord Bot · Telegram Bot |
| Agents & tools     | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration      | Compose · Delegation · Autonomy · Triggers · Intent Sensing |
| Interfaces         | Dashboard · TUI · API Server |
| Operations         | Security · Guardrails · Audit · Reports · Doctor · Observability · CI/CD |

See docs/ for the full index.

Examples

initrunner examples list               # see all available examples
initrunner examples copy code-reviewer # copy to current directory

The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines covering code review, support bots, data analysis, web monitoring, and multi-agent orchestration.

Community & Support

If you find InitRunner useful, consider giving it a star - it helps others discover the project.

Contributing

Contributions welcome! See CONTRIBUTING.md for dev setup, PR guidelines, and quality checks. Share your roles by pushing to a public GitHub repo - anyone can install them with initrunner install user/repo. For security vulnerabilities, see SECURITY.md.

License

MIT - see LICENSE for details.
