# AgentOutO
A multi-agent Python SDK — peer-to-peer free calls with no orchestrator.
Every agent is equal. No orchestrator. No hierarchy. No restrictions.
## Core Philosophy

AgentOutO rejects the orchestrator pattern used by existing frameworks (CrewAI, AutoGen, etc.).

- All agents are fully equal. There is no base agent.
- Any agent can call any agent. There are no call restrictions.
- Any agent can use any tool. There are no tool restrictions.
- The message protocol has exactly two types: `forward` and `return`.
- The user is just an agent without an LLM. No special interface, protocol, or tools exist for the user.

| Existing Frameworks | AgentOutO |
|---|---|
| Orchestrator-centric hierarchy | Peer-to-peer free calls |
| Base agent required | No base agent |
| Per-agent allowed-call lists | Any agent calls any agent |
| Per-agent tool assignment | All tools are global |
| Complex message protocols | Forward / Return only |
| Top-down message flow | Bidirectional free flow |
## Installation

```bash
pip install agentouto
```

Requires Python ≥ 3.11.
## Quick Start

```python
from agentouto import Agent, Tool, Provider, run

# Providers — API connection info only
openai = Provider(name="openai", kind="openai", api_key="sk-...")
anthropic = Provider(name="anthropic", kind="anthropic", api_key="sk-ant-...")
google = Provider(name="google", kind="google", api_key="AIza...")

# Tool — globally available to all agents
@Tool
def search_web(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

# Agent — model settings live here
researcher = Agent(
    name="researcher",
    instructions="Research expert. Search and organize information.",
    model="gpt-5.2",
    provider="openai",
)
writer = Agent(
    name="writer",
    instructions="Skilled writer. Turn research into polished reports.",
    model="claude-sonnet-4-6",
    provider="anthropic",
)
reviewer = Agent(
    name="reviewer",
    instructions="Critical reviewer. Verify facts and improve quality.",
    model="gemini-3.1-pro",
    provider="google",
)

# Run — user is just an agent without an LLM
result = run(
    entry=researcher,
    message="Write an AI trends report.",
    agents=[researcher, writer, reviewer],
    tools=[search_web],
    providers=[openai, anthropic, google],
)
print(result.output)
```
## Architecture

```
run()  (User = LLM-less agent)
   │
   │  Forward Message
   ▼
Agent Loop
   ┌──→ LLM Call (via Provider Backend)
   │        │
   │        ├── tool_call  → Tool.execute() ── result back ─┐
   │        ├── call_agent → New Loop ──────── return back ─┤
   │        └── finish ──→ Return Message                   │
   │                                                        │
   └──────────────────── next iteration ◄───────────────────┘
   │
   │  Return Message
   ▼
RunResult.output
```
### Message Flow — Peer to Peer

```
[User] ──(forward)──→ [Agent A]
                         │
                         ├──(forward)──→ [Agent B]
                         │                  ├──(forward)──→ [Agent C]
                         │                  │                  │
                         │                  │←──(return)───────┘
                         │                  │
                         │←──(return)───────┘
                         │
                         └──(return)──→ [User]
```
User→A and A→B use the exact same mechanism. There is no special user protocol.
### Parallel Calls

```
[Agent A]
    ├──(forward)──→ [Agent B] ──┐
    ├──(forward)──→ [Agent C] ──┼── asyncio.gather — all run concurrently
    └──(forward)──→ [Agent D] ──┘
    │
    ◄──(3 returns, batched)─────┘
```
## Core Concepts

### Provider — API Connection Only

Providers hold API credentials. No model settings, no inference config.

```python
from agentouto import Provider

openai = Provider(name="openai", kind="openai", api_key="sk-...")  # gpt-5.2, gpt-5.3-codex, o3, o4-mini
openai_resp = Provider(name="openai-resp", kind="openai_responses", api_key="sk-...")  # Responses API
anthropic = Provider(name="anthropic", kind="anthropic", api_key="sk-ant-...")  # claude-opus-4-6, claude-sonnet-4-6
google = Provider(name="google", kind="google", api_key="AIza...")  # gemini-3.1-pro, gemini-3-flash

# OpenAI-compatible APIs (vLLM, Ollama, LM Studio, etc.)
local = Provider(name="local", kind="openai", base_url="http://localhost:11434/v1")
```
| Field | Description | Required |
|---|---|---|
| `name` | Identifier for the provider | ✅ |
| `kind` | API type: `"openai"`, `"openai_responses"`, `"anthropic"`, `"google"` | ✅ |
| `api_key` | API key (not needed when `auth` is set) | ❌ |
| `base_url` | Custom endpoint URL (for compatible APIs) | ❌ |
| `auth` | `AuthMethod` instance for OAuth authentication | ❌ |
### OAuth Authentication

Providers can use OAuth 2.0 instead of static API keys via the `auth` parameter. Install OAuth dependencies:

```bash
pip install agentouto[oauth]
```
**OpenAI OAuth** — Use your ChatGPT Plus/Pro subscription:

```python
from agentouto import Provider, OpenAIOAuth

auth = OpenAIOAuth(client_id="your-client-id")
await auth.ensure_authenticated()  # Opens browser for login

openai = Provider(name="openai", kind="openai", auth=auth)
```
**Claude OAuth** ⚠️ — Anthropic prohibits third-party OAuth usage. Account suspension risk:

```python
from agentouto import Provider, ClaudeOAuth

# ⚠️ TOS VIOLATION RISK — Use API keys from console.anthropic.com instead
auth = ClaudeOAuth(client_id="your-client-id")
await auth.ensure_authenticated()

anthropic = Provider(name="anthropic", kind="anthropic", auth=auth)
```
**Google OAuth** ⚠️ — Google bans accounts using Antigravity OAuth. Use your own GCP credentials:

```python
from agentouto import Provider, GoogleOAuth

# ⚠️ Antigravity OAuth → account ban risk (Gmail, Drive, ALL services)
# Safe: Use your own GCP OAuth Client ID from console.cloud.google.com
auth = GoogleOAuth(
    client_id="your-gcp-client-id.apps.googleusercontent.com",
    client_secret="your-gcp-secret",
)
await auth.ensure_authenticated()

google = Provider(name="google", kind="google", auth=auth)
```
OAuth tokens are automatically cached in `~/.agentouto/tokens/` and refreshed when expired.
### Agent — Model Settings Live Here

```python
from agentouto import Agent

agent = Agent(
    name="researcher",
    instructions="Research expert.",
    model="gpt-5.2",
    provider="openai",
    reasoning=True,
    reasoning_effort="high",
    temperature=1.0,
)
```
| Field | Description | Default |
|---|---|---|
| `name` | Agent name | (required) |
| `instructions` | Role description | (required) |
| `model` | Model name | (required) |
| `provider` | Provider name | (required) |
| `max_output_tokens` | Max output tokens | `None` (auto) |
| `reasoning` | Enable reasoning/thinking mode | `False` |
| `reasoning_effort` | Reasoning intensity | `"medium"` |
| `reasoning_budget` | Thinking token budget (Anthropic) | `None` |
| `temperature` | Temperature | `1.0` |
| `context_window` | Context window tokens (auto-resolved) | `None` (auto) |
| `extra` | Additional API parameters (free dict) | `{}` |
The SDK uses unified parameter names. Each provider backend maps them internally:
| SDK Parameter | OpenAI (Chat Completions) | OpenAI (Responses) | Anthropic | Google Gemini |
|---|---|---|---|---|
| `max_output_tokens` | `max_completion_tokens` (omitted when `None`) | `max_output_tokens` (omitted when `None`) | `max_tokens` (auto-probed when `None`) | `max_output_tokens` (omitted when `None`) |
| `reasoning=True` | sends `reasoning_effort` | `reasoning={"effort": value}` | `thinking={"type": "enabled", "budget_tokens": ...}` | `thinking_config={"thinking_budget": ...}` |
| `reasoning_effort` | top-level `reasoning_effort` | `reasoning.effort` | N/A | N/A |
| `reasoning_budget` | N/A | N/A | `thinking.budget_tokens` | `thinking_config.thinking_budget` |
| `temperature` (reasoning=True) | not sent | not sent | forced to `1` | sent as-is |
`context_window` is auto-resolved from the LCW API when `None`; set it explicitly to override. When set, self-summarization triggers at 70% of the context limit.
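As a sketch of how that threshold behaves (the 70% ratio comes from the behavior described above; the helper itself is illustrative, not the SDK's internal code):

```python
# Illustrative sketch, not the SDK's internal code: when context_window
# is set, self-summarization triggers at 70% of the context limit.
SUMMARIZE_RATIO = 0.70

def should_summarize(used_tokens: int, context_window: int) -> bool:
    """True once token usage reaches 70% of the context window."""
    return used_tokens >= context_window * SUMMARIZE_RATIO

# With a 128k window the threshold is 89,600 tokens
assert should_summarize(90_000, 128_000)
assert not should_summarize(80_000, 128_000)
```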
See `ai-docs/PROVIDER_BACKENDS.md` for full mapping details.
### Tool — Global, No Per-Agent Restrictions

```python
import aiohttp

from agentouto import Tool

@Tool
def search_web(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

# Async tools are supported
@Tool
async def fetch_data(url: str) -> str:
    """Fetch data from URL."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()
```
Tools are automatically converted to JSON schemas from function signatures and docstrings. All agents can use all tools.
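To illustrate the idea (a simplified sketch in the spirit of what `@Tool` does, not the SDK's actual converter), a schema can be derived from a function signature with `inspect`:

```python
# Simplified sketch of signature-to-schema conversion. The real @Tool
# decorator is more capable; names and structure here are illustrative.
import inspect

TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_from_signature(fn) -> dict:
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def search_web(query: str, max_results: int = 10) -> str:
    """Search the web."""
    return ""

schema = schema_from_signature(search_web)
# schema["parameters"]["required"] == ["query"]
```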
### Rich Parameter Schemas
Use `Annotated` for parameter descriptions, `Literal` for allowed values, `Enum` for enumerated types, and default values — all reflected in the JSON schema sent to the LLM:
```python
from typing import Annotated, Literal

from agentouto import Tool

@Tool
def search_web(
    query: Annotated[str, "Search keywords or question"],
    max_results: Annotated[int, "Maximum number of results to return"] = 10,
    language: Literal["ko", "en", "ja"] = "ko",
) -> str:
    """Search the web for information."""
    ...
```
This generates a detailed schema that helps the LLM use tools correctly:
```json
{
  "properties": {
    "query": {"type": "string", "description": "Search keywords or question"},
    "max_results": {"type": "integer", "description": "Maximum number of results to return", "default": 10},
    "language": {"type": "string", "enum": ["ko", "en", "ja"], "default": "ko"}
  },
  "required": ["query"]
}
```
Plain type hints (without Annotated) continue to work as before.
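The `Enum` case mentioned above maps the same way `Literal` does; a stdlib-only sketch of that mapping (the helper is illustrative, not the SDK's code):

```python
# Illustrative: an Enum parameter becomes a JSON-schema "enum" of its
# values, just like the Literal example above (helper is not SDK code).
from enum import Enum

class Language(Enum):
    KO = "ko"
    EN = "en"
    JA = "ja"

def enum_to_schema(enum_cls: type[Enum]) -> dict:
    return {"type": "string", "enum": [member.value for member in enum_cls]}

assert enum_to_schema(Language) == {"type": "string", "enum": ["ko", "en", "ja"]}
```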
Tools can also return rich results with file attachments using `ToolResult`:
```python
from agentouto import Tool, ToolResult, Attachment

@Tool
def fetch_image(url: str) -> ToolResult:
    """Fetch an image from URL."""
    data = download_and_base64_encode(url)  # placeholder helper
    return ToolResult(
        content="Image fetched successfully.",
        attachments=[Attachment(mime_type="image/png", data=data)],
    )
```
When a tool returns ToolResult with attachments, the LLM can visually analyze the images. Regular str returns remain fully supported.
### Multimodal Attachments
Agents can receive file attachments (images, audio, video, PDFs) via the Attachment dataclass:
```python
@dataclass
class Attachment:
    mime_type: str           # "image/png", "audio/mp3", "video/mp4"
    data: str | None = None  # base64-encoded data
    url: str | None = None   # URL reference (mutually exclusive with data)
    name: str | None = None  # optional filename
```
Pass attachments to `run()` or `async_run()`:
```python
from agentouto import run, Attachment

result = run(
    entry=vision_agent,
    message="Analyze this image.",
    agents=[vision_agent],
    tools=[],
    providers=[openai],
    attachments=[
        Attachment(mime_type="image/png", data=base64_string),
        Attachment(mime_type="image/jpeg", url="https://example.com/photo.jpg"),
    ],
)
```
All three provider backends (OpenAI, Anthropic, Google) convert attachments to their native multimodal format automatically.
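As a convenience sketch (a hypothetical helper, not part of the SDK), a local file can be turned into the `Attachment` fields with the standard library:

```python
# Hypothetical helper, not part of agentouto: base64-encode a local file
# into the mime_type/data/name fields the Attachment dataclass expects.
import base64
import mimetypes
from pathlib import Path

def attachment_fields(path: str) -> dict:
    mime, _ = mimetypes.guess_type(path)  # guess MIME type from extension
    data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return {
        "mime_type": mime or "application/octet-stream",
        "data": data,
        "name": Path(path).name,
    }
```

The resulting dict can then be unpacked into the constructor, e.g. `Attachment(**attachment_fields("photo.png"))`.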
### Message — Forward and Return Only

```python
@dataclass
class Message:
    type: Literal["forward", "return"]
    sender: str
    receiver: str
    content: str
    call_id: str  # Unique tracking ID
    attachments: list[Attachment] | None = None
```
Two types. No exceptions.
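A minimal sketch of a matched pair, using a trimmed copy of the dataclass above (attachments omitted): the return carries the same `call_id` as its forward.

```python
# Trimmed copy of the Message dataclass above (attachments omitted) to
# show that a forward and its return share one call_id.
import uuid
from dataclasses import dataclass
from typing import Literal

@dataclass
class Message:
    type: Literal["forward", "return"]
    sender: str
    receiver: str
    content: str
    call_id: str

cid = str(uuid.uuid4())
fwd = Message("forward", "user", "researcher", "Research AI trends.", cid)
ret = Message("return", "researcher", "user", "Three trends stand out...", cid)
assert fwd.call_id == ret.call_id  # the pair is linked by call_id
```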
## Conversation History

You can pass previous conversation history to an agent to maintain context across calls. Use `RunResult.messages` from a previous run:
```python
from agentouto import run, Agent, Provider

# First conversation
result1 = run(
    entry=researcher,
    message="Research AI trends.",
    agents=[researcher],
    tools=[],
    providers=[openai],
)

# Continue with history
result2 = run(
    entry=writer,
    message="Write about what you found.",
    agents=[writer, researcher],
    tools=[],
    providers=[openai],
    history=result1.messages,  # Pass previous messages
)
```
You can also use history with the `call_agent` tool; the LLM can pass conversation history when calling another agent:
```python
# The LLM can call:
call_agent(
    agent_name="writer",
    message="Continue the report.",
    history=[...],  # Optional array of previous Message objects
)
```
History is prepended to the agent's context before the new forward message, allowing the agent to have continuity with previous conversations.
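The ordering described above can be sketched as follows (illustrative, not the SDK's context-assembly code):

```python
# Illustrative: history entries come first, then the new forward message.
history = [
    {"sender": "user", "content": "Research AI trends."},
    {"sender": "researcher", "content": "Found three major trends..."},
]
new_forward = {"sender": "user", "content": "Write about what you found."}

context = history + [new_forward]  # history is prepended
assert context[-1] == new_forward
assert context[:2] == history
```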
## Tracking Parallel Agent Calls

Every agent call is automatically assigned a unique `call_id` (UUID), so even when the same agent name is called multiple times in parallel, each invocation is tracked separately.
```python
result = run(
    entry=researcher,
    message="Research AI trends.",
    agents=[researcher, writer, reviewer],
    tools=[search_web],
    providers=[openai, anthropic],
)

# Track all messages - call_id is always available
for msg in result.messages:
    print(f"{msg.sender} → {msg.receiver} [call_id={msg.call_id[:8]}] {msg.type}")
```
Example output when the same agent is called in parallel:
```
user → researcher [call_id=a1b2c3d4] forward
researcher → researcher [call_id=e5f6g7h8] forward
researcher → researcher [call_id=i9j0k1l2] forward
researcher → user [call_id=a1b2c3d4] return
researcher → user [call_id=e5f6g7h8] return
researcher → user [call_id=i9j0k1l2] return
```
Filtering by receiver to see all calls to a specific agent:
```python
for msg in result.messages:
    if msg.receiver == "researcher" and msg.type == "forward":
        print(f"call_id={msg.call_id[:8]}: {msg.content[:50]}...")
```
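Building on that, forwards and returns can also be matched up per invocation; a sketch over plain tuples standing in for `Message` objects:

```python
# Sketch: group messages by call_id so each parallel invocation's
# forward/return pair can be inspected together. Tuples stand in for
# Message objects here.
from collections import defaultdict

messages = [  # (type, sender, receiver, call_id)
    ("forward", "user", "researcher", "a1b2c3d4"),
    ("forward", "researcher", "researcher", "e5f6g7h8"),
    ("return", "researcher", "researcher", "e5f6g7h8"),
    ("return", "researcher", "user", "a1b2c3d4"),
]

by_call: dict[str, list[str]] = defaultdict(list)
for mtype, _sender, _receiver, call_id in messages:
    by_call[call_id].append(mtype)

# every invocation has exactly one forward followed by one return
assert all(types == ["forward", "return"] for types in by_call.values())
```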
## Background Execution — Isolated Agent Loops

Agents can run in isolated loops that can receive messages while running. This enables truly concurrent agents that communicate during execution.

### Spawning Background Agents

Use `call_agent` with `background=True`, or use `run_background()` directly:
```python
from agentouto import run_background

# Spawn an agent in background — returns immediately with task_id
task_id = run_background(
    entry=researcher,
    message="Research AI trends",
    agents=[researcher, writer],
    tools=[search_web],
    providers=[openai],
)
# task_id = "bg_abc123"

# Or use call_agent with background=True from within an agent
call_agent(
    agent_name="researcher",
    message="Research the latest in AI.",
    background=True,
)
```
### Sending Messages to Running Agents

Use `send_message` to inject messages into a running agent:

```python
from agentouto import send_message

# Send a message to the running agent
send_message(
    task_id="bg_abc123",
    message="Add a section about GPT-5.",
)
# Returns: "Message sent to writer (task_id: bg_abc123)"
```
The agent receives the message as a new user input in its running loop.
### Getting Status and Messages

Use `get_agent_status` to check on a running agent:

```python
from agentouto import get_agent_status

# Retrieve status, result, and all messages
status = get_agent_status("bg_abc123")
# Returns:
# Task ID: bg_abc123
# Agent: writer
# Status: running
# Messages (3):
#   [forward] user -> writer: Write a report...
#   [return] writer -> user: Here's the report...
```
### Streaming Events from Background Agents

Use `get_stream_events` to stream events from a background agent:

```python
from agentouto import get_stream_events

async for event in get_stream_events("bg_abc123"):
    if event["type"] == "token":
        print(event["data"]["text"], end="", flush=True)
    elif event["type"] == "finish":
        print(f"\n--- Result: {event['data']['output']} ---")
```
### Background vs Parallel Calls

| Aspect | `asyncio.gather` Parallel | Isolated Loops |
|---|---|---|
| Execution | Same loop iteration | Isolated loops |
| Communication | Results only after completion | Real-time messages |
| Independence | Share context | Own context |
| Use case | Fast parallel tasks | Long-running concurrent agents |
### Example: Concurrent Research and Writing

```python
from agentouto import run_background, send_message, get_agent_status

# Spawn researcher in background
task_id = run_background(
    entry=researcher,
    message="Research AI trends",
    agents=[researcher, writer],
    tools=[search_web],
    providers=[openai],
)
# task_id = "bg_res_001"

# Do other work...

# Send additional instructions
send_message(task_id="bg_res_001", message="Also look at GPT-5")

# Check status
print(get_agent_status("bg_res_001"))
```
See `ai-docs/MESSAGE_PROTOCOL.md` for detailed protocol documentation.
## Debug Mode (Optional)

For structured event logs and call tree visualization, enable `debug=True`:

```python
result = run(..., debug=True)

# Print the call tree
print(result.format_trace())

# Access event log for filtering by agent or event type
events = result.event_log.filter(event_type="agent_call")
for e in events:
    print(f"{e.agent_name}: {e.call_id[:8]} from parent={e.parent_call_id}")
```
Debug mode is optional — basic call tracking via `call_id` in `RunResult.messages` works without it.

See `ai-docs/MESSAGE_PROTOCOL.md` for detailed tracking documentation.
## Supported Providers

| Kind | Provider | Example Models | Compatible With |
|---|---|---|---|
| `"openai"` | OpenAI Chat Completions API | gpt-5.2, gpt-5.3-codex, o3, o4-mini | vLLM, Ollama, LM Studio, any OpenAI-compatible API |
| `"openai_responses"` | OpenAI Responses API | gpt-5.2, gpt-5.3-codex, o3, o4-mini | — |
| `"anthropic"` | Anthropic API | claude-opus-4-6, claude-sonnet-4-6 | AWS Bedrock, Google Vertex AI, Ollama, LiteLLM, any Anthropic-compatible API |
| `"google"` | Google Gemini API | gemini-3.1-pro, gemini-3-flash | — |
## Async Usage

```python
import asyncio

from agentouto import async_run

result = await async_run(
    entry=researcher,
    message="Write an AI trends report.",
    agents=[researcher, writer, reviewer],  # Each agent can use any model/provider
    tools=[search_web, write_file],
    providers=[openai, anthropic, google],  # Mix providers freely
)
```
You can also pass conversation history:

```python
result = await async_run(
    entry=writer,
    message="Continue the report.",
    agents=[writer, researcher],
    tools=[],
    providers=[openai],
    history=previous_result.messages,  # Pass previous messages
)
```
## Streaming

```python
from agentouto import async_run_stream

async for event in async_run_stream(
    entry=researcher,
    message="Write an AI trends report.",
    agents=[researcher, writer, reviewer],
    tools=[search_web],
    providers=[openai, anthropic, google],
):
    if event.type == "token":
        print(event.data["token"], end="", flush=True)
    elif event.type == "finish":
        print(f"\n--- {event.agent_name} finished ---")

    # call_id and parent_call_id are available on all events for tracing
    print(f"[{event.type}] call_id={event.call_id[:8]} parent={event.parent_call_id}")
```
Streaming also supports history:

```python
async for event in async_run_stream(
    entry=writer,
    message="Continue writing.",
    agents=[writer, researcher],
    tools=[],
    providers=[openai],
    history=previous_result.messages,
):
    ...
```
## Package Structure

```
agentouto/
├── __init__.py           # Public API exports (Agent, Tool, Provider, Attachment, ToolResult, ...)
├── agent.py              # Agent dataclass
├── tool.py               # Tool decorator/class with auto JSON schema, ToolResult
├── message.py            # Message dataclass (forward/return)
├── provider.py           # Provider dataclass (API connection info)
├── context.py            # Attachment, ContextMessage, per-agent conversation context
├── router.py             # Message routing, system prompt generation, tool schema building
├── runtime.py            # Agent loop engine, parallel execution, run()/async_run()
├── loop_manager.py       # Background agent loops, message queues, AgentLoopRegistry
├── streaming.py          # async_run_stream(), StreamEvent
├── event_log.py          # AgentEvent, EventLog — structured event recording
├── tracing.py            # Trace, Span — call tree builder from event logs
├── _constants.py         # Shared constants (CALL_AGENT, FINISH)
├── exceptions.py         # ProviderError, AgentError, ToolError, RoutingError, AuthError
├── auth/
│   ├── __init__.py       # AuthMethod ABC, TokenData, TokenStore, OAuth implementations
│   ├── api_key.py        # ApiKeyAuth — static API key wrapper
│   ├── openai_oauth.py   # OpenAIOAuth — OpenAI ChatGPT subscription OAuth
│   ├── claude_oauth.py   # ClaudeOAuth — Anthropic Claude OAuth (⚠️ TOS restricted)
│   ├── google_oauth.py   # GoogleOAuth — Google Gemini/Antigravity OAuth (⚠️ TOS restricted)
│   ├── token_store.py    # TokenStore — secure token persistence (~/.agentouto/tokens/)
│   └── _oauth_common.py  # PKCE, local callback server, browser auth, token exchange
└── providers/
    ├── __init__.py       # ProviderBackend ABC, LLMResponse, get_backend()
    ├── openai.py         # OpenAI Chat Completions (+ compatible APIs) implementation
    ├── openai_responses.py  # OpenAI Responses API implementation
    ├── anthropic.py      # Anthropic implementation
    └── google.py         # Google Gemini implementation
```
## Development Status

| Phase | Description | Status |
|---|---|---|
| 1 | Core classes: Provider, Agent, Tool, Message | ✅ Done |
| 2 | Single agent execution: agent loop + tool calling | ✅ Done |
| 3 | Multi-agent: call_agent + finish + message routing | ✅ Done |
| 4 | Parallel calls: asyncio.gather concurrent execution | ✅ Done |
| 5 | Streaming, logging, tracing, debug mode | ✅ Done |
| 6 | CI/CD, tests, PyPI publish | ✅ Done |
| 7 | Multimodal attachments (Attachment, ToolResult) | ✅ Done |
| 8 | Rich parameter schemas (Annotated, Literal, Enum, default) | ✅ Done |
| 9 | Reasoning tag handling (content preservation, detection prevention) | ✅ Done |
| 10 | Auto max output tokens + safe JSON argument parsing | ✅ Done |
| 13 | OpenAI Responses API backend (`openai_responses`) + tool-result attachment routing | ✅ Done |
| 15 | OAuth authentication (OpenAI, Claude, Google) | ✅ Done |
| 16 | Conversation history (`history` parameter) | ✅ Done |
| 17 | Background execution + inter-agent messaging | ✅ Done |
| 18 | Background streaming + unified API (send_message, get_agent_status, run_background) | ✅ Done |
## Technical Documentation

For AI contributors and detailed technical reference, see `ai-docs/`:

- `AI_INSTRUCTIONS.md` — Read this first. How to work on this project and update docs.
- `PHILOSOPHY.md` — Core philosophy and inviolable principles.
- `ARCHITECTURE.md` — Package structure, module responsibilities, data flow.
- `PROVIDER_BACKENDS.md` — Provider system, parameter mapping, API-specific behavior.
- `MESSAGE_PROTOCOL.md` — Message types, routing rules, parallel calls, agent loop.
- `CONVENTIONS.md` — Coding conventions, patterns, naming, style guide.
- `ROADMAP.md` — Current status, planned features, known issues.
## License

Apache License 2.0 — see LICENSE for details.