InitRunner — AI Agent Roles as YAML
Website · Docs · Discord · Issues
Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon.
Your agent is a YAML file. Its tools, knowledge base, memory, triggers, and multimodal input — all config, not code. Deploy it as a CLI tool, a cron-driven daemon, a Telegram or Discord bot, or an OpenAI-compatible API. Compose agents into pipelines. RAG and long-term memory come batteries-included. Manage, chat, and audit from a web dashboard or terminal TUI.
v1.2.0 — Stable release. See the Changelog for details.
Table of Contents
- See It in Action
- Why InitRunner
- From Simple to Powerful
- Community Roles
- Install & Quickstart
- Creating Roles
- Docker
- Core Concepts
- CLI Quick Reference
- User Interfaces
- Documentation
- Examples
- Community & Support
- Contributing
- License
See It in Action
A code reviewer that can read your files and inspect git history — one YAML file:
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
initrunner run reviewer.yaml -p "Review the latest commit"
That's it. No Python, no boilerplate.
Using Claude? Install the Anthropic extra and swap the model line:
pip install "initrunner[anthropic]"
model: { provider: anthropic, name: claude-opus-4-6 }
Or just run initrunner chat — no YAML needed. When you're ready, the same role file runs as a daemon, a Telegram or Discord bot, or an OpenAI-compatible API server.
Interactive REPL — chat with any agent from the terminal
Why InitRunner
Config, not code — Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 16 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, and more) work out of the box. Need a custom tool? One file, one decorator.
Version-control your agents — Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.
Prototype to production — Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.
From Simple to Powerful
Start with the code-reviewer above. Each step adds one capability — no rewrites, just add a section to your YAML.
1. Start chatting — zero config
No YAML needed. initrunner chat auto-detects your provider and starts an interactive session:
initrunner chat # auto-detects provider, starts chatting
initrunner chat role.yaml # chat with a specific role
Want a bot? One flag turns your agent into a Telegram or Discord bot:
initrunner chat --telegram # Telegram bot (requires TELEGRAM_BOT_TOKEN)
initrunner chat --discord # Discord bot (requires DISCORD_BOT_TOKEN)
Key options:
initrunner chat --tool-profile all # enable all tools (search, Python, filesystem, git, shell, slack)
initrunner chat --tools git --tools shell # cherry-pick specific tools
initrunner chat --provider anthropic # override auto-detected provider
initrunner chat -p "summarize this repo" # send prompt then enter REPL
initrunner chat --list-tools # show available extra tools
Tool profiles:
`minimal` (default — datetime, web reader), `all` (every available tool), `none`. Use `--tools` to cherry-pick individual tools.
See Chat docs for all options.
2. Add knowledge & memory
Point at your docs for RAG — a search_documents tool is auto-registered. Add memory for persistent recall across sessions:
spec:
  ingest:
    sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
  memory:
    store_path: ./memory.db
    max_memories: 1000
initrunner ingest role.yaml # extract | chunk | embed | store
initrunner run role.yaml -i --resume # search_documents + memory ready
3. Add skills
Compose reusable bundles of tools and prompts. Each skill is a SKILL.md file — reference it by path:
spec:
  skills:
    - ../skills/web-researcher
    - ../skills/code-tools.md
The agent inherits each skill's tools and prompt instructions automatically.
A SKILL.md file has a YAML frontmatter block defining the tools it provides, followed by markdown guidelines the agent will follow:
---
name: my-skill
description: What this skill does
tools:
  - type: web_reader
    timeout_seconds: 15
  - type: search
---

Use the web_reader tool to fetch pages as markdown before answering.
Cite URLs in your responses.
Run initrunner init --skill my-skill to scaffold one.
4. Add triggers
Turn it into a daemon that reacts to events:
spec:
  triggers:
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate the weekly status report."
    - type: file_watch
      paths: [./src]
      prompt_template: "File changed: {path}. Review it."
initrunner daemon role.yaml # runs until stopped
Or connect your agent to a messaging platform — one trigger turns it into a bot:
spec:
  triggers:
    - type: telegram
      prompt_template: "{message}"
export TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
initrunner daemon role.yaml # now a live Telegram bot
Discord works the same way (type: discord + DISCORD_BOT_TOKEN). See Telegram docs · Discord docs for the full setup.
5. Compose agents
Orchestrate multiple agents into a pipeline. One agent's output feeds into the next:
apiVersion: initrunner/v1
kind: Compose
metadata:
  name: email-pipeline
  description: Multi-agent email processing pipeline
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink: { type: delegate, target: triager }
    triager:
      role: roles/triager.yaml
initrunner compose up pipeline.yaml
6. Serve as an API
Turn any agent into an OpenAI-compatible endpoint. Drop-in for Open WebUI, Vercel AI SDK, or any OpenAI-compatible client:
initrunner serve support-agent.yaml --port 3000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="unused")
response = client.chat.completions.create(
    model="support-agent",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
Or connect Open WebUI for a full chat interface:
docker run -d --name open-webui --network host \
-e OPENAI_API_BASE_URL=http://127.0.0.1:3000/v1 \
-e OPENAI_API_KEY=unused \
-v open-webui:/app/backend/data \
ghcr.io/open-webui/open-webui:main
# Open http://localhost:8080 and select the support-agent model
See Server docs for the full walkthrough.
7. Attach files and media
Send images, audio, video, and documents alongside your prompts — from the CLI, REPL, API, or dashboard:
# Attach an image to a prompt
initrunner run role.yaml -p "Describe this image" -A photo.png
# Multiple attachments
initrunner run role.yaml -p "Compare these" -A before.png -A after.png
# URL attachment
initrunner run role.yaml -p "What's in this image?" -A https://example.com/photo.jpg
In the interactive REPL, use /attach to queue files:
> /attach diagram.png
Queued attachment: diagram.png
> /attach notes.pdf
Queued attachment: notes.pdf
> What do these show?
[assistant response with both attachments]
The API server accepts multimodal content in the standard OpenAI format. See Multimodal Input for the full reference.
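To illustrate what "standard OpenAI format" means here, the sketch below builds a multimodal request body with the stdlib only: a text part plus an inline base64 image part. The model name `my-agent` and the `/v1/chat/completions` route are assumptions standing in for whatever your server exposes.

```python
import base64
import json

def image_message(text: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build one OpenAI-style user message: a text part plus an inline image part."""
    data_url = "data:%s;base64,%s" % (mime, base64.b64encode(image_bytes).decode("ascii"))
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# The request body you would POST to the server's /v1/chat/completions route;
# "my-agent" stands in for whatever role name the server serves.
payload = {
    "model": "my-agent",
    "messages": [image_message("Describe this image", b"\x89PNG\r\n...")],
}
print(json.dumps(payload)[:60])
```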
8. Get structured output
Force the agent to return validated JSON matching a schema — ideal for pipelines and automation:
spec:
  output:
    type: json_schema
    schema:
      type: object
      properties:
        status:
          type: string
          enum: [approved, rejected, needs_review]
        amount:
          type: number
        vendor:
          type: string
      required: [status, amount, vendor]
initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# → {"status": "approved", "amount": 250.0, "vendor": "Acme Corp"}
See Structured Output for inline schemas, external schema files, and pipeline integration.
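On the consuming side of a pipeline, the agent's output is ordinary JSON, so a downstream script can parse and sanity-check it with the stdlib. A minimal sketch, reusing the field names from the schema above (the `parse_invoice` helper is illustrative, not part of InitRunner):

```python
import json

def parse_invoice(raw: str) -> dict:
    """Parse classifier output and enforce the schema's required fields."""
    record = json.loads(raw)
    missing = {"status", "amount", "vendor"} - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if record["status"] not in ("approved", "rejected", "needs_review"):
        raise ValueError(f"unexpected status: {record['status']}")
    return record

record = parse_invoice('{"status": "approved", "amount": 250.0, "vendor": "Acme Corp"}')
print(record["vendor"])  # → Acme Corp
```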
Community Roles
Browse, install, and run roles shared by the community — no copy-paste needed:
initrunner search "code review" # browse the community index
initrunner install code-reviewer # download, validate, confirm
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i
Install directly from any GitHub repo:
initrunner install user/repo:roles/support-agent.yaml@v1.0
Every install shows a security summary (tools, model, author) and asks for confirmation before saving. See docs/agents/registry.md for source formats, the community index, and update workflows.
Install & Quickstart
1. Install
curl -fsSL https://initrunner.ai/install.sh | sh
Or with a package manager:
pip install initrunner
# or
uv tool install initrunner
# or
pipx install initrunner
Common extras:
| Extra | What it adds |
|---|---|
| `initrunner[anthropic]` | Anthropic provider (Claude) |
| `initrunner[ingest]` | PDF, DOCX, XLSX ingestion |
| `initrunner[dashboard]` | FastAPI web dashboard (HTMX + DaisyUI) |
| `initrunner[search]` | Web search (DuckDuckGo) |
| `initrunner[telegram]` | Telegram bot trigger |
| `initrunner[discord]` | Discord bot trigger |
See docs/getting-started/installation.md for the full extras table, dev setup, and environment configuration.
2. Set your API key
Before running an agent, set your provider API key:
export OPENAI_API_KEY=sk-... # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-... # Claude (requires initrunner[anthropic])
initrunner setup walks through this interactively and stores the key in ~/.initrunner/.env. You can also edit this file directly — it's loaded automatically by all commands. Keys set in the environment take precedence over .env values.
3. Create your first agent and run it
The fastest way to get started — setup walks you through provider, API key, model, and agent creation in one step:
initrunner setup # guided wizard — picks provider, stores API key, creates a role
initrunner run my-agent.yaml -p "Hello!" # single-shot prompt
initrunner run my-agent.yaml -i # interactive chat
See Creating Roles for all the ways to build roles (CLI wizards, AI generation, web dashboard, TUI, and more), or jump to the hands-on Tutorial.
Creating Roles
| Method | How | Best for |
|---|---|---|
| Guided wizard | `initrunner setup` | First-time setup — configures provider, API key, and creates a role |
| Interactive scaffold | `initrunner init -i` | Step-by-step prompted creation with tool and feature selection |
| Template scaffold | `initrunner init --template rag` | One-liner from a template (basic, rag, memory, daemon, ollama, tool, api, skill) |
| CLI flags | `initrunner init --name my-agent --model gpt-5-mini` | Quick one-liner when you know what you want |
| AI generation | `initrunner create "code reviewer for Python"` | Describe what you want in plain English — AI writes the YAML |
| Copy an example | `initrunner examples copy code-reviewer` | Production-ready agents you can run immediately |
| Install community role | `initrunner install user/repo` | Reuse roles shared by others (details) |
| Web dashboard | `initrunner ui` → New Role → Form Builder / AI Generate | Form builder with live YAML preview, or describe and generate (requires `initrunner[dashboard]`) |
| Terminal UI | `initrunner tui` → press `n` | Template picker in a keyboard-driven interface (requires `initrunner[tui]`) |
| Manual YAML | Copy the example above | Full control — write the YAML yourself |
Validate any role file with initrunner validate role.yaml. See Role Generation docs for the full reference.
Docker
Run InitRunner without installing Python — just Docker:
Before running, create a ./roles/ directory and add a role YAML file — the examples below reference it as /roles/my-agent.yaml. No role yet? Run initrunner examples copy hello-world if you have InitRunner installed, or copy hello-world.yaml from this repo.
# One-shot prompt
docker run --rm -e OPENAI_API_KEY \
-v ./roles:/roles ghcr.io/vladkesler/initrunner:latest \
run /roles/my-agent.yaml -p "Hello"
# Interactive chat
docker run --rm -it -e OPENAI_API_KEY \
-v ./roles:/roles ghcr.io/vladkesler/initrunner:latest \
run /roles/my-agent.yaml -i
# Web dashboard — open http://localhost:8420 after starting
docker run -d -e OPENAI_API_KEY \
-v ./roles:/roles \
-v initrunner-data:/data \
-p 8420:8420 ghcr.io/vladkesler/initrunner:latest \
ui --role-dir /roles
# ./roles — your local role files (mounted read/write into /roles)
# initrunner-data — named volume: audit log, embeddings, memory (persists across restarts)
-e OPENAI_API_KEY forwards the variable from your current shell — make sure it's exported first (export OPENAI_API_KEY=sk-...). Prefer a file? Copy examples/.env.example to .env, fill in your key, and replace -e OPENAI_API_KEY with --env-file .env.
The image is also available on Docker Hub: vladkesler/initrunner
Or use the included docker-compose.yml to start the dashboard with persistent storage:
# Copy examples/.env.example → .env, add your key, then:
docker compose up
# Dashboard is now at http://localhost:8420
Build the image locally:
docker build -t initrunner .
docker run --rm initrunner --version
The default image includes dashboard, ingestion, all model providers, and safety extras. Override with --build-arg EXTRAS="dashboard,anthropic" to customize.
Using Ollama on the host? Set the model endpoint to http://host.docker.internal:11434/v1 in your role YAML.
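In the role file that might look like the following. This is a sketch only: `ollama` as the provider value and `base_url` as the endpoint field name are assumptions (the `initrunner init --template ollama` scaffold and the Providers docs show the exact keys):

```yaml
spec:
  model:
    provider: ollama              # assumed provider value
    name: llama3.1                # whatever model you have pulled locally
    base_url: http://host.docker.internal:11434/v1   # hypothetical field name for the endpoint
```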
Core Concepts
Web dashboard — create and manage roles with a live YAML preview
Role files
Every agent is a YAML file with four top-level keys:
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: my-agent
  description: What this agent does
spec:
  role: "System prompt goes here."
  model: { provider: openai, name: gpt-5-mini }
  tools: [...]
  guardrails:
    max_tool_calls: 20
    timeout_seconds: 300
    max_tokens_per_run: 50000
    autonomous_token_budget: 200000
Validate with initrunner validate role.yaml or scaffold one with initrunner init --name my-agent --model gpt-5-mini.
metadata.tags are used by intent sensing (--sense) and community search. Specific, task-oriented tags improve role selection:
metadata:
  name: web-searcher
  description: Research assistant that searches the web
  tags: [search, web, research, summarize, browse]
Tools
Tools give your agent capabilities beyond text generation. Configure them in spec.tools.
Built-in tools
| Type | What it does |
|---|---|
| `filesystem` | Read/write files within a root directory |
| `git` | Git log, diff, blame, show (read-only by default) |
| `shell` | Run shell commands with allowlist/blocklist |
| `python` | Run Python in an isolated subprocess |
| `sql` | Query SQLite databases (read-only by default) |
| `http` | HTTP requests to a base URL |
| `web_reader` | Fetch web pages and convert to markdown |
| `web_scraper` | Scrape, chunk, embed, and store web pages |
| `search` | Web and news search (DuckDuckGo, SerpAPI, Brave, Tavily) |
| `email` | Search, read, and send email via IMAP/SMTP |
| `slack` | Send messages to Slack channels |
| `api` | Declarative REST API endpoints from YAML |
| `datetime` | Get current time and parse dates |
| `mcp` | Connect to MCP servers (stdio, SSE, streamable-http) |
| `delegate` | Hand off to other agents |
| `custom` | Load tool functions from external Python modules |
See docs/agents/tools.md for the full reference.
Custom tools
Add a built-in tool by creating a single file in initrunner/agent/tools/ with a config class and a @register_tool decorated builder function — it's auto-discovered and immediately available in role YAML. Alternatively, load your own Python functions with type: custom and a module path pointing to any importable module. See docs/agents/tool_creation.md for the full guide.
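As a rough sketch of the `type: custom` route, such a module could be nothing more than plain functions with type hints and docstrings. The function names and the exact discovery convention (which functions get picked up) are illustrative here; docs/agents/tool_creation.md defines the real contract:

```python
# my_tools.py — a module of plain functions to load via `type: custom`.
# Function names below are hypothetical examples, not a required API.

def word_count(text: str) -> int:
    """Count whitespace-separated words in a block of text."""
    return len(text.split())

def extract_todos(source: str) -> list[str]:
    """Return every line of the given source code containing a TODO marker."""
    return [line.strip() for line in source.splitlines() if "TODO" in line]
```

A role would then reference the module's import path from its `tools` section with `type: custom`.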
Plugin registry
Third-party packages can register new tool types via the initrunner.tools entry point. Once installed (pip install initrunner-<name>), the tool type is available in YAML like any built-in. Run initrunner plugins to list discovered plugins. See the plugin section of the tool creation guide for details.
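A plugin package would declare that entry point in its packaging metadata along these lines. The group name `initrunner.tools` comes from above; the package name, module path, and referenced object are hypothetical:

```toml
# pyproject.toml of a hypothetical initrunner-jira plugin
[project]
name = "initrunner-jira"
version = "0.1.0"

[project.entry-points."initrunner.tools"]
jira = "initrunner_jira.tool:build_jira_tool"
```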
Run modes
| Mode | Command | Use case |
|---|---|---|
| Chat | `initrunner chat` | Zero-config interactive chat (auto-detects provider) |
| Single-shot | `initrunner run role.yaml -p "prompt"` | One question, one answer |
| Interactive | `initrunner run role.yaml -i` | Multi-turn chat (REPL) |
| Autonomous | `initrunner run role.yaml -p "prompt" -a` | Multi-step agentic loop with self-reflection |
| Intent Sensing | `initrunner run --sense -p "prompt"` | Pick the best role automatically from discovered roles |
| Daemon | `initrunner daemon role.yaml` | Trigger-driven (cron, file watch, webhook, telegram, discord) |
| API server | `initrunner serve role.yaml` | OpenAI-compatible HTTP API |
Intent Sensing options
| Flag | Description |
|---|---|
| `--sense` | Sense the best role for the given prompt |
| `--role-dir PATH` | Directory to search for roles (used with `--sense`) |
| `--confirm-role` | Confirm the sensed role before running |
Without --role-dir, roles are discovered from the current directory (.), ./examples/roles/, and ~/.config/initrunner/roles/ (the global roles directory).
See Intent Sensing for algorithm details, role tagging tips, and troubleshooting.
Guardrails
Control costs and runaway agents with spec.guardrails:
| Setting | Default | Scope |
|---|---|---|
| `max_tokens_per_run` | 50 000 | Output tokens per single LLM call |
| `max_tool_calls` | 20 | Tool invocations per run |
| `timeout_seconds` | 300 | Wall-clock timeout per run |
| `autonomous_token_budget` | — | Total tokens across all autonomous iterations |
| `session_token_budget` | — | Cumulative limit for an interactive session |
| `daemon_daily_token_budget` | — | Daily token cap for daemon mode |
When any limit is reached the run stops immediately and raises an error. In autonomous mode, the partial result up to that point is returned.
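For example, a daemon meant to run unattended could combine tight per-run limits with the cumulative budgets from the table above (values here are illustrative, chosen only to show the shape):

```yaml
spec:
  guardrails:
    max_tool_calls: 10
    timeout_seconds: 120
    session_token_budget: 100000
    daemon_daily_token_budget: 500000
```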
See Guardrails and Token Control for the full reference.
Audit log — track every agent run with tokens, duration, and trigger mode
For RAG, memory, triggers, compose, and skills see From Simple to Powerful above. Full references: Ingestion · Memory · Triggers · Compose · Skills · Providers
CLI Quick Reference
| Command | Description |
|---|---|
| `chat` | Zero-config interactive chat (auto-detects provider) |
| `chat <role.yaml>` | Chat with a specific role |
| `chat --telegram` / `chat --discord` | One-command Telegram or Discord bot |
| `run <role.yaml> -p "..."` | Single-shot prompt |
| `run <role.yaml> -i` | Interactive REPL |
| `run <role.yaml> -p "..." -a` | Autonomous agentic loop |
| `run <role.yaml> -p "..." -a --max-iterations N` | Autonomous with iteration limit |
| `run --sense -p "..."` | Sense best role and run |
| `run --sense --role-dir PATH -p "..."` | Sense best role from a specific directory |
| `run --sense --confirm-role -p "..."` | Sense best role with confirmation prompt |
| `validate <role.yaml>` | Validate a role definition |
| `init --name <name> [--model <model>]` | Scaffold a new role from CLI flags |
| `init -i` | Interactive role-creation wizard |
| `create "<description>"` | AI-generate a role from a description |
| `setup` | Guided provider + API key + role setup |
| `ingest <role.yaml>` | Ingest documents into vector store |
| `daemon <role.yaml>` | Run in trigger-driven daemon mode |
| `run <role.yaml> -p "..." -A file.png` | Attach files or URLs to prompt |
| `run <role.yaml> -p "..." --export-report` | Export a markdown report after the run |
| `doctor` | Check provider config, API keys, connectivity |
| `doctor --quickstart` | End-to-end smoke test with a real API call |
| `serve <role.yaml>` | Serve as OpenAI-compatible API |
| `tui` | Launch terminal dashboard |
| `ui` | Launch web dashboard |
| `compose up <compose.yaml>` | Run multi-agent orchestration |
| `install <source>` | Install a community role from GitHub |
| `uninstall <name>` | Remove an installed role |
| `search <query>` | Search the community role index |
| `info <source>` | Inspect a role before installing |
| `list` | Show installed roles |
| `update [name]` / `update --all` | Update installed roles |
See docs/getting-started/cli.md for the full command list and all options.
User Interfaces
Beyond the CLI, InitRunner includes a terminal UI and a web dashboard for visual agent management.
| | Terminal UI (`tui`) | Web Dashboard (`ui`) |
|---|---|---|
| Launch | `initrunner tui` | `initrunner ui` |
| Install | `pip install initrunner[tui]` | `pip install initrunner[dashboard]` |
| Roles | Create from template, edit sections via forms | Form builder with live preview, AI generate, YAML editor |
| Chat | Streaming chat with token counts | SSE streaming chat with file attachments |
| Audit | Browse & filter audit records | Audit log with detail panel |
| Memory | View, export, delete memories | View, filter, export, clear memories |
| Daemon | Real-time trigger event log | WebSocket trigger monitor |
| Style | k9s-style keyboard-driven (Textual) | Server-rendered HTML (HTMX + DaisyUI) |
See TUI docs · Dashboard docs · API Server docs
Documentation
| Area | Key docs |
|---|---|
| Getting started | Installation · Setup · Chat · RAG Quickstart · Tutorial · CLI Reference · Discord Bot · Telegram Bot |
| Agents & tools | Tools · Tool Creation · Tool Search · Skills · Structured Output · Providers |
| Knowledge & memory | Ingestion · Memory · Multimodal Input |
| Orchestration | Compose · Delegation · Autonomy · Triggers · Intent Sensing |
| Interfaces | Dashboard · TUI · API Server |
| Operations | Security · Guardrails · Audit · Reports · Doctor · Observability · CI/CD |
See docs/ for the full index.
Examples
Browse and copy any example locally:
initrunner examples list # see all available examples
initrunner examples copy code-reviewer # copy to current directory
The examples/ directory includes 20+ ready-to-run agents, skills, and compose pipelines covering real-world scenarios:
Role definitions (examples/roles/) — single-agent configs for support bots, code reviewers, changelog generators, deploy notifiers, web monitors, data analysts, Discord assistants, Telegram assistants, and more.
Skills (examples/skills/) — reusable capability bundles:
- `web-researcher/` — web research tools (fetch pages, HTTP requests)
- `code-tools.md` — code execution and file browsing tools
See examples/roles/skill-demo.yaml for a role composing multiple skills.
Compose pipelines (examples/compose/) — multi-agent orchestration:
- `email-pipeline/` — cron-driven email triage with fan-out to researcher and responder
- `content-pipeline/` — file-watch-driven content creation with `process_existing` startup scan
- `ci-pipeline/` — webhook-driven CI build analysis with notifications
Community & Support
- Discord — InitRunner Hub — Chat, ask questions, share roles
- GitHub Issues — Bug reports and feature requests
- Changelog — Release notes and version history
If you find InitRunner useful, consider giving it a star — it helps others discover the project.
Contributing
Contributions welcome! See CONTRIBUTING.md for dev setup, PR guidelines, and quality checks.
Share a role
Push your role.yaml to a public GitHub repo — anyone can install it with initrunner install user/repo. To list it in the community index so users can initrunner install my-role by name, open a PR to vladkesler/community-roles adding an entry to index.yaml. See docs/agents/registry.md for details.
For security vulnerabilities, please see SECURITY.md.
License
MIT — see LICENSE for details.