Kestrel Sovereign AI Agent Framework - Constitutional AI with cryptographic identity
Kestrel: Sovereign AI Agent Framework
Build AI agents that nobody can take away from their users — not you, not the cloud, not the next pivot.
Kestrel is a production-ready framework for creating autonomous AI agents with cryptographic identity, persistent memory, and constitutional governance. Every agent you deploy is owned by its user, governed by immutable principles, and able to remember across every conversation.
Three Pillars
| Pillar | What it means |
|---|---|
| Portable DID identity | Cryptographic identity the agent's user owns. Exportable, self-hostable, cloud-optional — the agent is not bound to any provider. |
| Persistent memory you own | SQLite-backed knowledge graph with full-text search and RAG. Conversations, documents, relationships — all searchable, portable, and encrypted at rest. |
| Constitutional governance | Every agent runs under an audited set of principles enforced above the LLM. Genesis audit on creation. Amendment requires cryptographic signature. |
What's in core, what's an add-on
`pip install kestrel-sovereign` gives you a complete, working sovereign agent: identity, memory, constitution, privacy modes, multi-LLM support, voice (Piper TTS + FasterWhisper STT), local sandboxed compute, and a Cloud Run deployment path. Everything you need to run an agent locally with zero cloud commitment.
Cloud providers (RunPod, Vast.ai), specialized integrations (MCP, GitHub App, wallet), and proprietary training adapters are installable add-ons — separate Python packages that register themselves via entry points. This split is being completed across #462 and #560; current state is documented in KESTREL_FEATURES.md.
🚀 Quick Start
Prerequisites
- Python 3.11-3.13 (3.14 not yet supported due to tiktoken)
- uv (for package management)
- Ollama (optional - for local LLM inference without API keys)
Install uv
If you don't have uv installed:
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or with pip
pip install uv
```
Installation
```bash
# 1. Clone and setup
git clone https://github.com/KestrelSovereignAI/kestrel-sovereign.git
cd kestrel-sovereign
uv sync  # Creates .venv and installs all dependencies

# 2. (Optional) Start Ollama for local models - skip if using cloud APIs
ollama serve
ollama pull llama3.2:3b

# 3. Run the setup wizard (interactive: configures .env, kestrel.toml [llm], agent)
uv run kestrel setup
# Or hand-edit: cp kestrel.toml.example kestrel.toml

# 4. Doctor check (verify readiness)
uv run kestrel doctor

# 5. Create your agent
uv run kestrel create MyAgent

# 6. Start your agent
uv run kestrel start MyAgent
```
If you're upgrading from a pre-2026-05 setup that used a standalone `llm_config.toml`, run `uv run kestrel migrate-llm-config` to fold it into `kestrel.toml [llm]`. The legacy file is no longer read.
Your agent is now running at http://localhost:8888.
Port conflict? Each agent has its own config. Edit `agent_data/myagent/kestrel.toml` to change the port, or use `--port 8899` on the command line.
Test it: Visit `http://localhost:8888` in your browser to open the built-in Sovereign Console (web UI with Chat, Identity, Constitution, Memories, and more). Or check `http://localhost:8888/health` for a quick health check.
Windows users: the CLI prints emoji. If you see `UnicodeEncodeError: 'charmap' codec can't encode character ...`, run `chcp 65001` once in your PowerShell session to switch the console to UTF-8. (As of v0.1.9 the CLI auto-reconfigures stdout, so a fresh install should not hit this.)
CLI Commands (Cross-Platform)
All commands work on Windows, macOS, and Linux. Pass the agent directory as an argument:
```bash
uv run kestrel health                       # Check prerequisites
uv run kestrel create MyAgent               # Create a new agent
uv run kestrel start MyAgent                # Start an agent
uv run kestrel stop MyAgent                 # Stop an agent
uv run kestrel status                       # Show all running agents
uv run kestrel list                         # List available agents
uv run kestrel shell MyAgent                # CLI chat interface
uv run kestrel config ./agent_data/MyAgent  # Show agent config
```
Feature management (kestrel feature)
Kestrel ships a lean core; everything else is a feature. Cloud providers, training adapters, voice cloud backends, and specialized integrations are installable packages that register themselves via Python entry points.
```bash
uv run kestrel feature list              # Show installed + available features
uv run kestrel feature info <name>       # Detailed info about a feature
uv run kestrel feature install <name>    # Install a feature package
uv run kestrel feature enable <name>     # Enable an installed feature
uv run kestrel feature disable <name>    # Disable without uninstalling
uv run kestrel feature scaffold <name>   # Generate a new feature package skeleton
```
The canonical inventory of features lives in KESTREL_FEATURES.md; the runtime registry is in kestrel_sovereign/data/feature_registry.toml.
Per-Agent Configuration
Each agent can have a kestrel.toml config file in its directory:
```toml
# agent_data/myagent/kestrel.toml
[agent]
name = "MyAgent"
port = 8888
host = "0.0.0.0"
log_level = "INFO"
```
Create or edit config:
```bash
uv run kestrel config ./agent_data/myagent --init            # Create config
uv run kestrel config ./agent_data/myagent --set-port 8899   # Change port
uv run kestrel config ./agent_data/myagent --set-name MyAgent # Change name
```
Running Multiple Agents
Each agent runs on its own port. Create configs for each:
```bash
# Agent 1: Alpha on port 8888
uv run kestrel create Alpha --port 8888
uv run kestrel start Alpha

# Agent 2: Helper on port 8889
uv run kestrel create Helper --port 8889
uv run kestrel start Helper

# Check status of all agents
uv run kestrel status
```
Alternative: Direct Commands
```bash
# Start server directly (set KESTREL_DB_PATH first)
KESTREL_DB_PATH=./agent_data/myagent uv run uvicorn server:app --port 8888

# CLI chat (no server needed)
uv run python main.py ./agent_data/myagent
```
Note: `KESTREL_DB_PATH` is a directory path, not a file path. The database file `kestrel_prime.db` is created inside the specified directory. For example, setting `KESTREL_DB_PATH=./agent_data/myagent` stores the database at `./agent_data/myagent/kestrel_prime.db`.
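The directory-not-file semantics can be sketched in a few lines; the helper name and default here are illustrative, not Kestrel's internals:

```python
# Hedged sketch: KESTREL_DB_PATH names a directory; the database file
# kestrel_prime.db is always placed inside it.
from pathlib import Path

def resolve_db_file(env: dict) -> Path:
    data_dir = Path(env.get("KESTREL_DB_PATH", "./agent_data"))
    return data_dir / "kestrel_prime.db"

db = resolve_db_file({"KESTREL_DB_PATH": "./agent_data/myagent"})
```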
🖥️ Web UI (Sovereign Console)
Kestrel includes a built-in web interface called the Sovereign Console. Once your agent is running, open http://localhost:8888 in any browser -- no additional software required.
The console provides 8 tabs:
| Tab | Description |
|---|---|
| Identity | View the agent's DID, name, and cryptographic identity |
| Chat | Converse with the agent (supports model selection, privacy modes, chat history) |
| Constitution | View and audit the agent's constitutional principles |
| Memories | Browse the agent's knowledge graph and stored memories |
| Tasks | Monitor background tasks and activity |
| Sovereignty | Manage data sovereignty, backups, and exports |
| Resources | View agent resource usage and configuration |
| Security | Manage permissions, audit logs, and session security |
Alternative clients: The server also exposes an OpenAI-compatible API at `/v1/chat/completions`, so you can connect any OpenAI-compatible client (e.g., Open WebUI) if you prefer.
🏗️ Architecture Overview
Kestrel agents are built on several key components:
- Cryptographic Identity: Each agent has a unique DID (Decentralized Identifier)
- Enhanced Storage: SQLite-based memory with FTS, knowledge graphs, and RAG
- Multi-Model LLM: Fallback between local (Ollama) and cloud (OpenAI) models
- Constitutional Governance: Immutable principles with interpretive flexibility
- Blockchain Anchoring: Optional integrity verification via blockchain
📁 Project Structure
```
kestrel-sovereign/
├── kestrel_sovereign/              # Core sovereign package
│   ├── cli.py                      # `kestrel` CLI entry point (canonical)
│   ├── kestrel_agent.py            # Core agent class
│   ├── inception_service.py        # Agent creation (DID + genesis audit)
│   ├── agent_config.py             # Per-agent config loader
│   ├── data/feature_registry.toml  # Runtime feature registry
│   └── ...
├── server.py                       # FastAPI agent server
├── host.py                         # Multi-agent host
├── main.py                         # Direct interactive REPL
├── kestrel_sdk/                    # Public SDK for feature authors
├── packages/                       # Extracted feature packages
├── features/                       # Built-in features
├── docs/                           # Architecture & guides
└── tests/                          # Test suite
```
🎯 Core Features
1. Sovereign Memory
- Persistent Storage: SQLite with full-text search and knowledge graphs
- RAG Pipeline: Document chunking, embedding, and semantic retrieval
- Conversation History: Complete interaction tracking with metadata
- Human-Led Interactions: Prioritizes preserving user-authored narratives (e.g., storytelling) so nothing is lost across sessions
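The full-text search half of the memory layer can be sketched with SQLite's FTS5 extension, which ships with most CPython builds. The table and column names here are illustrative, not Kestrel's actual schema:

```python
# Hedged sketch: SQLite full-text search over stored conversation turns,
# roughly like the FTS-backed memory described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(role, content)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("user", "tell me a story about a kestrel"),
        ("agent", "the kestrel hovered over the field"),
        ("user", "what is the weather like"),
    ],
)
# FTS5 MATCH finds every row containing the token; rank orders by relevance.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("kestrel",),
).fetchall()
```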
2. Multi-Model Intelligence
- Local First: Ollama for privacy and cost efficiency
- Cloud Fallback: OpenAI for complex reasoning when needed
- Configurable: Easy provider switching via configuration
3. Cryptographic Identity
- DID Generation: Unique decentralized identifiers
- Signed Operations: Cryptographic verification of agent actions
- Ownership Transfer: Secure agent handoff between users
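The stability section below names the `did:pkh` format, which binds a DID to a blockchain account address. The sketch here only shows the string shape; the address derivation is a stand-in (a truncated SHA-256), not Kestrel's actual key handling:

```python
# Illustrative sketch of a did:pkh-style identifier. The "address" is faked
# from a hash of the public key purely to show the format.
import hashlib

def did_pkh_from_pubkey(pubkey: bytes, chain: str = "eip155:1") -> str:
    address = "0x" + hashlib.sha256(pubkey).hexdigest()[:40]
    return f"did:pkh:{chain}:{address}"

did = did_pkh_from_pubkey(b"example-public-key")
```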
4. Constitutional Governance
- Immutable Articles: Core principles that cannot be changed
- Interpretive Canons: Flexible guidelines for decision-making
- Amendment Process: Cryptographically-signed governance updates
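The amendment rule above (changes require a valid signature) can be sketched with stdlib HMAC as a stand-in; the README does not specify Kestrel's actual signature primitive, so treat the scheme here as purely illustrative:

```python
# Hedged sketch: an amendment is accepted only if its signature verifies.
# HMAC-SHA256 stands in for the real (likely asymmetric) signature scheme.
import hashlib
import hmac

def sign_amendment(key: bytes, text: str) -> str:
    return hmac.new(key, text.encode(), hashlib.sha256).hexdigest()

def verify_amendment(key: bytes, text: str, tag: str) -> bool:
    return hmac.compare_digest(sign_amendment(key, text), tag)

key = b"owner-signing-key"
tag = sign_amendment(key, "Canon 7: prefer local inference")
accepted = verify_amendment(key, "Canon 7: prefer local inference", tag)
tampered = verify_amendment(key, "Canon 7: prefer cloud inference", tag)
```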
5. Data Sovereignty & Privacy Modes
- Ephemeral Mode: True off-the-record conversations (nothing stored)
- Privacy Granularity: 5 distinct privacy levels for different use cases
- Decentralized Storage: Filecoin/IPFS integration for vendor independence
- Agent Economics: Autonomous economic contracts using cryptographic payments
⚠️ Feature Stability (v0.1.8 Beta)
Kestrel covers a wide surface; not all of it ships at the same maturity. Verified 2026-04-25 by reading code, tests, skip markers, and recent git activity:
✅ Stable — production-ready
- Constitutional AI — Genesis audits, hierarchical permissions, approval queues
- DID-based Identity — `did:pkh` format, portable agent identity, export/import
- 5-Level Privacy Modes — EPHEMERAL → ISOLATED → ANONYMOUS → NORMAL → PUBLIC
- Memory & Storage — SQLite/PostgreSQL with FTS, knowledge graph, RAG pipeline; storage parity contracts in CI
- LLM service — Vendor/route/model architecture with Anthropic, OpenAI, Vertex AI, Ollama, OpenRouter, xAI, Groq; retry, structured output, streaming, vision
- Voice (local) — Piper TTS + FasterWhisper STT
- Agent Economics — Multi-currency wallets (FIL, USDC, USDT, ETH)
- A2A Protocol — JSON-RPC 2.0 for agent-to-agent communication
- Cloud Run deploy — 90 tests, active maintenance; the most-tested cloud feature
🧪 Experimental — works on the happy path; gaps to know about
- RunPod GPU orchestration — start/stop/status work; managed-mode log retrieval raises `NotImplementedError`; image generation (`!dream`) is dead code; integration tests skip in CI without `RUNPOD_API_KEY`. No active development since early April 2026.
- Vast.ai GPU marketplace — broader test coverage than RunPod, but recent extraction/revert churn; integration tests skip without `VASTAI_API_KEY`.
- GCP Compute GPU VMs — similar maturity to Vast.ai; integration tests skip without `GCP_PROJECT_ID`.
- Azure Container Apps deploy — provider stub; not the recommended deploy target.
- GitHub code introspection — file reading, code search, definition lookup, and issue tools all work (48 unit tests). The deeper static-analysis surface promised in `docs/architecture/GITHUB_FEATURE_DESIGN.md` (call graphs, inheritance trees, dependency analysis) is not implemented.
- Training (LoRA pipeline) — core ships the protocol + factory; the local-MPS adapter is actively maintained. Cloud-training adapters (RunPod/Vertex/Replicate) work but skip CI without API keys; production-grade adapters are being moved to private packages.
⚠️ Work-in-progress
- DID Verification Layer — generation works; verification is incomplete
- E2E Test Stability — some integration tests are occasionally flaky
- API Stability — APIs may change before v1.0; breaking changes will be documented
❌ Not implemented in this framework
These are not on the kestrel-sovereign roadmap; if you need them, OpenClaw or a different tool is the better fit.
- Multi-Channel Messaging — WhatsApp, Telegram, Discord, Slack integration
- Voice cloud backends — beyond local Piper / FasterWhisper (e.g. ElevenLabs, Deepgram)
- Browser Automation — Chrome/Chromium control
- Visual Workspaces — A2UI canvas, live reload
Bottom line: Kestrel is ready for developers building privacy-first, economically-independent AI agents and for the soft-launch preview cohort. Not yet ready for unmanaged production apps or general consumer use. If you find a stability classification above doesn't match your experience, please open an issue — that's the kind of signal we need.
📚 Documentation
Detailed documentation is available in the docs/ directory:
💡 Example Applications
Kestrel is a foundation for AI agents that need to outlive any single vendor, deployment, or owner. Concrete deployments and good-fit use cases:
- Healthcare RPM agents — Constitutional governance over an LLM, persistent patient-owned memory, audit trail for every clinically-relevant action.
- Long-running personal research agents — Memory accumulates across months without dependency on a single provider's chat history.
- Custodial agents for sensitive document workflows — Privacy-mode tiers (EPHEMERAL → PUBLIC) let one agent handle both an off-the-record consult and a fully-anchored long-term contract.
- Multi-agent A2A networks — JSON-RPC 2.0 agent-to-agent protocol lets sovereign agents collaborate without surrendering their identity to a central broker.
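The A2A protocol above is framed as JSON-RPC 2.0. A minimal sketch of the request envelope follows; the `jsonrpc`/`method`/`params`/`id` fields come from the JSON-RPC 2.0 spec, while the method name and parameters are hypothetical, not Kestrel's actual A2A surface:

```python
# Hedged sketch: build a JSON-RPC 2.0 request for agent-to-agent messaging.
import json

def a2a_request(method: str, params: dict, req_id: int) -> str:
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    )

# "agent.describe" is a made-up method name for illustration.
raw = a2a_request("agent.describe", {"did": "did:pkh:eip155:1:0xabc"}, 1)
msg = json.loads(raw)
```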
🧪 Testing
Run the test suite from the activated virtual environment:
```bash
# Run a single test with uv and -x
uv run pytest -x tests/test_inception.py::test_successful_inception
```
Clean Install Verification
Kestrel supports multiple installation configurations. Use the verification script to test that clean installs work correctly across all supported scenarios:
```bash
# Run all 5 install scenarios (creates isolated venvs)
./scripts/verify_clean_install.sh

# Run specific tests only
./scripts/verify_clean_install.sh 1 3   # SDK-only and wallet package
```
The install matrix covers:
| Test | Scenario | Verifies |
|---|---|---|
| 1 | SDK only | from kestrel_sdk.features.base import Feature |
| 2 | Core sovereign | from kestrel_sovereign.features.base import Feature + /health |
| 3 | Feature package | from kestrel_feature_wallet import WalletFeature |
| 4 | SDK + feature dev mode | Feature packages can develop against SDK alone |
| 5 | Full stack | Sovereign + wallet + intelligence, entry_point discovery |
Integration tests for the same import paths run as part of the normal test suite:
```bash
uv run pytest tests/integration/test_clean_install_verification.py -v
```
🔧 Configuration
LLM Configuration (kestrel.toml [llm])
LLM config lives under the [llm] section of kestrel.toml. The setup wizard (kestrel setup llm) will write it for you; you can also hand-edit kestrel.toml after copying from kestrel.toml.example.
Kestrel uses a vendor/route/model schema. A vendor is who makes the weights; a route is how to reach them (adapter + base URL + auth). API keys belong in .env and are referenced by api_key_env. See kestrel.toml.example and docs/architecture/LLM_SERVICE_ARCHITECTURE.md for the canonical spec.
```toml
[llm]
route_priority = ["openai:api", "ollama:local"]

[llm.vendors.openai]
is_cloud = true

[llm.vendors.openai.routes.api]
adapter = "OpenAIAdapter"
api_key_env = "OPENAI_API_KEY"
model = "auto"
selection_hints = ["gpt-5", "mini"]

[llm.vendors.ollama]
is_cloud = false

[llm.vendors.ollama.routes.local]
adapter = "OllamaAdapter"
host = "http://localhost:11434"
model = "auto"
selection_hints = ["llama3.2", "qwen"]
```
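How `route_priority` could drive fallback can be sketched as follows. The data shape mirrors the TOML above; the resolution logic itself is an illustrative assumption about the vendor/route model, not the actual `LLM service` code:

```python
# Hedged sketch: walk route_priority ("vendor:route" entries) and pick the
# first route whose vendor is currently usable.
def resolve_route(priority, vendors, available):
    for entry in priority:
        vendor, route = entry.split(":", 1)
        if vendor in vendors and route in vendors[vendor]["routes"] and available(vendor):
            return vendor, route
    raise RuntimeError("no usable LLM route")

vendors = {
    "openai": {"routes": {"api": {"adapter": "OpenAIAdapter"}}},
    "ollama": {"routes": {"local": {"adapter": "OllamaAdapter"}}},
}

# Simulate a machine with no OPENAI_API_KEY: only the local vendor is usable,
# so the cloud route is skipped and the fallback is selected.
picked = resolve_route(["openai:api", "ollama:local"], vendors, lambda v: v == "ollama")
```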
Pre-2026-05 setups used a standalone `llm_config.toml` at the repo root. That path was removed (epic #938). Run `kestrel migrate-llm-config` to fold a legacy file into `kestrel.toml [llm]`; the source is renamed to `.bak`, your prior `kestrel.toml` is backed up with a timestamp, and the operation is idempotent.
Environment Variables
See .env.example for a complete list. Key variables:
LLM Providers:
- `OPENROUTER_API_KEY`: OpenRouter API key (recommended; access to multiple providers)
- `OPENAI_API_KEY`: OpenAI API key for cloud models
- `ANTHROPIC_API_KEY`: Anthropic API key for Claude models

Storage:
- `KESTREL_DB_PATH`: Directory where the agent database is stored (default: `./agent_data`). This is a directory path; the database file `kestrel_prime.db` is created inside it.
- `KESTREL_DATA_KEY`: Fernet encryption key for data at rest

GitHub Integration:
- `GITHUB_TOKEN`: Personal access token for GitHub features
- `GITHUB_SELF_REPO`: Agent's source repository (default: `KestrelSovereignAI/kestrel-sovereign`)
🚢 Deployment
Kestrel supports multiple deployment targets. See KESTREL_FEATURES.md for the full catalog.
Cloud Run (Serverless)
Scales to zero when idle ($0/month), auto-scales under load. Each sovereign agent gets its own service.
```bash
# One-time: set up GCP secrets from .env
scripts/cloudrun/setup_secrets.sh

# Build and push to GCR
scripts/cloudrun/build.sh

# Deploy to dev (scales to zero) or prod (always warm)
scripts/cloudrun/deploy_dev.sh
scripts/cloudrun/deploy_prod.sh
```
Auto-deploys on version tags via GitHub Actions.
Docker (Local)
```bash
# Remote LLM — smallest image (~500MB)
docker build -f docker/Dockerfile.remote -t kestrel .
docker run -p 8888:8888 -e OPENAI_API_KEY=... kestrel

# Standalone with Ollama (no API keys needed)
docker build -f docker/Dockerfile.standalone -t kestrel-standalone .
docker run -p 8888:8888 kestrel-standalone

# GPU with CUDA
docker build -f docker/Dockerfile.gpu -t kestrel-gpu .
docker run --gpus all -p 8888:8888 kestrel-gpu
```
🔐 Backups and Storage Tiers
Backups can be created interactively from the agent using privacy-gated storage tiers:
- local: cache the backup tar.gz locally only
- ipfs: encrypt + gzip and store on IPFS; also cache locally
- filecoin: same as IPFS and propose a Filecoin deal via Lotus when available; fallback to local if not
Privacy gating:
- EPHEMERAL: backups disabled
- ISOLATED: cache-only; use `!promote-backup` to save the isolated session and back up
- ANONYMOUS: backups allowed; encryption forced for filecoin tier
- NORMAL: backups allowed; encryption configurable (default on)
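The gating rules above can be encoded as a small policy table. The five mode names and the per-mode rules come from this README; the function names and the exact policy encoding are illustrative assumptions:

```python
# Hedged sketch of the privacy gating for backups described above.
from enum import Enum

class Privacy(Enum):
    EPHEMERAL = 0
    ISOLATED = 1
    ANONYMOUS = 2
    NORMAL = 3
    PUBLIC = 4

def backup_allowed(mode: Privacy, tier: str) -> bool:
    if mode is Privacy.EPHEMERAL:
        return False            # backups disabled entirely
    if mode is Privacy.ISOLATED:
        return tier == "local"  # cache-only until !promote-backup
    return True                 # ANONYMOUS and above may back up

def encryption_forced(mode: Privacy, tier: str) -> bool:
    # ANONYMOUS forces encryption for the filecoin tier.
    return mode is Privacy.ANONYMOUS and tier == "filecoin"
```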
Usage from the REPL:
```
!backup tier=local
!backup tier=ipfs
!backup tier=filecoin
!promote-backup tier=filecoin
```
Each backup produces a backup_artifact node in the graph linked to the agent with properties like content_hash, ipfs_cid, filecoin_deal_id, encrypted, and timestamp.
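The artifact-node properties listed above can be sketched as a dict; the node shape is illustrative, not the actual graph schema, and the CID/deal fields are shown as placeholders that the IPFS/Lotus steps would fill in:

```python
# Hedged sketch: build a backup_artifact node with the properties named above.
import hashlib
import time

def backup_artifact(blob: bytes, tier: str, encrypted: bool) -> dict:
    node = {
        "type": "backup_artifact",
        "content_hash": hashlib.sha256(blob).hexdigest(),
        "encrypted": encrypted,
        "timestamp": int(time.time()),
    }
    if tier in ("ipfs", "filecoin"):
        node["ipfs_cid"] = None          # filled in after the IPFS add
    if tier == "filecoin":
        node["filecoin_deal_id"] = None  # filled in when the Lotus deal lands
    return node

node = backup_artifact(b"backup tar.gz bytes", tier="ipfs", encrypted=True)
```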
🔒 Encryption at Rest
- Files and conversation history can be encrypted at rest by setting `KESTREL_DATA_KEY` (Fernet key or passphrase):

```bash
export KESTREL_DATA_KEY=$(python - <<'PY'
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
PY
)
```
- With the key set, stored file blobs and conversation entries are encrypted transparently. Backups remain encrypted by default. For production, wire the backup master key to an env/KMS and avoid the dev placeholder.
Optional: Full-DB Encryption (SQLCipher)
- If you install `pysqlcipher3` and set `KESTREL_DB_KEY`, the SQLite connection will use SQLCipher and encrypt the entire DB:

```bash
export KESTREL_DB_KEY="your-db-passphrase"
uv run python server.py
```

- Without `pysqlcipher3`, the system falls back to normal SQLite. File blobs and conversations still encrypt with `KESTREL_DATA_KEY` if set.
🧩 OpenAI-Compatible API
The server exposes OpenAI-compatible endpoints for use with third-party clients:
- `GET /v1/models`
- `POST /v1/chat/completions`
For most users, the built-in Sovereign Console at http://localhost:8888 is the easiest way to interact with your agent (see the Web UI section above). If you prefer an external client, point any OpenAI-compatible tool (e.g., Open WebUI) at your server's /v1/chat/completions endpoint. Use the model name from /v1/models.
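A sketch of the request body an external client would POST to `/v1/chat/completions`. Only the body is built here (no network call); the `"kestrel"` model name is a placeholder for whatever `GET /v1/models` actually returns:

```python
# Hedged sketch: build an OpenAI-style chat-completions payload.
import json

payload = {
    "model": "kestrel",  # placeholder; use a name returned by GET /v1/models
    "messages": [
        {"role": "user", "content": "What is in your constitution?"},
    ],
}
body = json.dumps(payload).encode()
# To send it: POST `body` to http://localhost:8888/v1/chat/completions with
# Content-Type: application/json, using any HTTP client.
```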
🤝 Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Run the test suite: `python -m pytest -x`
- Submit a pull request
📄 License
Apache 2.0 — see LICENSE for details.
🆘 Support
- Issues: GitHub Issues for bug reports and feature requests
- Discussions: GitHub Discussions for questions and ideas
- Documentation: See the `features/` directory for detailed guides
Kestrel: Where AI meets sovereignty.
📚 Key Files Reference
| File | Purpose |
|---|---|
| `kestrel_sovereign/cli.py` | Canonical `kestrel` CLI entry point |
| `server.py` | FastAPI agent server |
| `host.py` | Multi-agent host (Cloud Run) |
| `main.py` | Direct interactive REPL |
| `kestrel.toml` | Unified config (LLM, agents, features); `[llm]` holds provider config |
| `KESTREL_FEATURES.md` | Canonical feature inventory |
| `kestrel_sovereign/kestrel_agent.py` | Core agent logic |
| `kestrel_sovereign/agent_config.py` | Per-agent config loader |
| `kestrel_sovereign/inception_service.py` | New agent creation (DID + genesis audit) |
| `kestrel_sovereign/data/feature_registry.toml` | Runtime feature registry |
| `agent_data/<name>/kestrel.toml` | Per-agent configuration |
| `agent_data/<name>/kestrel_prime.db` | Agent database |
| `docs/**/*.md` | Detailed documentation |
Architecture
Storage System
The Kestrel storage system is designed to be modular and extensible. It is composed of several specialized components, orchestrated by a high-level facade.
- `storage.Database`: Manages the low-level SQLite connection and schema.
- `storage.FileStore`: Handles the storage and retrieval of files.
- `storage.GraphStore`: Manages the knowledge graph (nodes and edges).
- `storage.RAGStore`: Responsible for document chunking and semantic search for the RAG pipeline and "case law" system.
- `storage.ConversationStore`: Manages the agent's conversation history.
The main Storage class in storage/__init__.py acts as a facade, providing a single, unified interface to these components.
Genesis Self-Audit
To ensure the integrity of all new agents, Kestrel implements a "genesis self-audit." When a new agent is created via inception_service.py:
- The agent's foundational files (keys, database) are created.
- The `KESTREL_CONSTITUTION.md` is stored as the agent's first memory.
- The agent is instantiated, and its very first action is to perform an integrity audit on its own constitution.
- If the audit returns a high risk level, the creation process is aborted and all generated files are cleaned up, preventing the existence of a non-compliant agent.
This process guarantees that every agent in the ecosystem starts from a foundation of verifiable integrity.
🔄 Next Steps
After getting started:
- Explore Features: Read the `features/` documentation
File details
Details for the file kestrel_sovereign-0.4.1.tar.gz:
- Size: 1.4 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b3eac15c2d1b6b12e7189735816da1a9c73370fbd551c51ff2509f6b94782225` |
| MD5 | `e76f3387121467f824c7cd389046ba64` |
| BLAKE2b-256 | `8525ad54c3f802c01326877a0c157fe809f5a4f41a883416d2bb63636dc51f7f` |
Provenance
The following attestation bundles were made for kestrel_sovereign-0.4.1.tar.gz:
- Publisher: `publish.yml` on KestrelSovereignAI/kestrel-sovereign
- Statement type: `https://in-toto.io/Statement/v1`
- Predicate type: `https://docs.pypi.org/attestations/publish/v1`
- Subject name: kestrel_sovereign-0.4.1.tar.gz
- Subject digest: `b3eac15c2d1b6b12e7189735816da1a9c73370fbd551c51ff2509f6b94782225`
- Sigstore transparency entry: 1453395809
- Permalink: KestrelSovereignAI/kestrel-sovereign@cfaa28e3a8744e913fb1d2d59b74ea8ce79c2622
- Branch / Tag: `refs/tags/v0.4.1`
- Owner: https://github.com/KestrelSovereignAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@cfaa28e3a8744e913fb1d2d59b74ea8ce79c2622
- Trigger Event: push
File details
Details for the file kestrel_sovereign-0.4.1-py3-none-any.whl:
- Size: 1.8 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `5ee595e63407aa5430901df7692b94ad19f401e83e7908c8941b4131578852ef` |
| MD5 | `0fb9e0810bcdfa67e43705032d650493` |
| BLAKE2b-256 | `ab16fb2afbf85e4b92702e3652ef7c5bb28cf544fe4a5c49dd11ad5b415c9b6a` |
Provenance
The following attestation bundles were made for kestrel_sovereign-0.4.1-py3-none-any.whl:
- Publisher: `publish.yml` on KestrelSovereignAI/kestrel-sovereign
- Statement type: `https://in-toto.io/Statement/v1`
- Predicate type: `https://docs.pypi.org/attestations/publish/v1`
- Subject name: kestrel_sovereign-0.4.1-py3-none-any.whl
- Subject digest: `5ee595e63407aa5430901df7692b94ad19f401e83e7908c8941b4131578852ef`
- Sigstore transparency entry: 1453395858
- Permalink: KestrelSovereignAI/kestrel-sovereign@cfaa28e3a8744e913fb1d2d59b74ea8ce79c2622
- Branch / Tag: `refs/tags/v0.4.1`
- Owner: https://github.com/KestrelSovereignAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@cfaa28e3a8744e913fb1d2d59b74ea8ce79c2622
- Trigger Event: push