Agent Kernel - Unified AI Agents Runtime
Agent Kernel is a lightweight runtime and adapter layer for building and running AI agents from multiple frameworks within a unified execution environment. Migrate your existing agents to Agent Kernel and instantly gain pre-built execution and testing capabilities.
Features
- Unified API: Common abstractions (Agent, Runner, Session, Module, Runtime) across frameworks
- Multi-Framework Support: OpenAI Agents SDK, CrewAI, LangGraph, Google ADK
- Session Management: Built-in session abstraction for conversational state
- Flexible Deployment: Interactive CLI for local development and testing, AWS Lambda handler for serverless deployment, AWS ECS Fargate deployment
- Pluggable Architecture: Easy to extend with custom framework adapters
- MCP Server: Built-in Model Context Protocol server for exposing agents as MCP tools and exposing any custom tool
- A2A Server: Built-in Agent-to-Agent communication server for exposing agents with a simple configuration change
- REST API: Built-in REST API server for agent interaction
- Test Automation: Built-in test suite for testing agents
Installation
pip install agentkernel
Requirements:
- Python 3.12+
Quick Start
Basic Concepts
- Agent: Framework-specific agent wrapped by an Agent Kernel adapter
- Runner: Framework-specific execution strategy
- Session: Shared state across conversation turns
- Module: Container that registers agents with the Runtime
- Runtime: Global registry and orchestrator for agents
CrewAI Example
from crewai import Agent as CrewAgent
from agentkernel.cli import CLI
from agentkernel.crewai import CrewAIModule
general_agent = CrewAgent(
    role="general",
    goal="Agent for general questions",
    backstory="You provide assistance with general queries. Give direct and short answers",
    verbose=False,
)

math_agent = CrewAgent(
    role="math",
    goal="Specialist agent for math questions",
    backstory=(
        "You provide help with math problems. Explain your reasoning at each step and include examples. "
        "If prompted for anything else you refuse to answer."
    ),
    verbose=False,
)

# Register agents with Agent Kernel
CrewAIModule([general_agent, math_agent])

if __name__ == "__main__":
    CLI.main()
LangGraph Example
from langgraph.graph import StateGraph
from agentkernel.cli import CLI
from agentkernel.langgraph import LangGraphModule
# Build and compile your graph
sg = StateGraph(...)
compiled = sg.compile()
compiled.name = "assistant"
LangGraphModule([compiled])
if __name__ == "__main__":
    CLI.main()
OpenAI Agents SDK Example
from agents import Agent as OpenAIAgent
from agentkernel.cli import CLI
from agentkernel.openai import OpenAIModule
general_agent = OpenAIAgent(
    name="general",
    handoff_description="Agent for general questions",
    instructions="You provide assistance with general queries. Give short and direct answers.",
)

OpenAIModule([general_agent])

if __name__ == "__main__":
    CLI.main()
Google ADK Example
from google.adk.agents import Agent
from agentkernel.cli import CLI
from agentkernel.adk import GoogleADKModule
from google.adk.models.lite_llm import LiteLlm
# Create Google ADK agents
math_agent = Agent(
    name="math",
    model=LiteLlm(model="openai/gpt-4o-mini"),
    description="Specialist agent for math questions",
    instruction="""
    You provide help with math problems.
    Explain your reasoning at each step and include examples.
    If prompted for anything else you refuse to answer.
    """,
)

GoogleADKModule([math_agent])

if __name__ == "__main__":
    CLI.main()
Interactive CLI
Agent Kernel includes an interactive CLI for local development and testing.
Available Commands:
- `!h`, `!help`: Show help
- `!ld`, `!load <module_name>`: Load a Python module containing agents
- `!ls`, `!list`: List registered agents
- `!s`, `!select <agent_name>`: Select an agent
- `!n`, `!new`: Start a new session
- `!q`, `!quit`: Exit
Usage:
python demo.py
Then interact with your agents:
(assistant) >> !load my_agents
(assistant) >> !select researcher
(researcher) >> What is the latest news on AI?
AWS Lambda Deployment
Deploy your agents as serverless functions using the built-in Lambda handler.
from openai import OpenAI
from agents import Agent as OpenAIAgent
from agentkernel.aws import Lambda
from agentkernel.openai import OpenAIModule
client = OpenAI()
assistant = OpenAIAgent(name="assistant")
OpenAIModule([assistant])
handler = Lambda.handler
Request Format:
{
  "prompt": "Hello agent",
  "agent": "assistant"
}
Response Format:
{
  "result": "Agent response here"
}
Status Codes:
- `200`: Success
- `400`: No agent available
- `500`: Unexpected error
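The contract above can be exercised locally before deploying. The sketch below is not the agentkernel handler; `handler_contract` and `mock_agent` are hypothetical names, and the status-code wrapping is only an illustration of the documented request/response shapes:

```python
import json

def mock_agent(prompt: str) -> str:
    """Stand-in for a registered agent."""
    return f"echo: {prompt}"

def handler_contract(event: dict) -> dict:
    """Illustrative sketch of the Lambda contract described above.

    NOT the agentkernel implementation; it only mirrors the documented
    request format, response format, and status codes.
    """
    if event.get("agent") != "assistant":  # no such agent registered
        return {"statusCode": 400, "body": json.dumps({"error": "No agent available"})}
    try:
        result = mock_agent(event["prompt"])
        return {"statusCode": 200, "body": json.dumps({"result": result})}
    except Exception:
        return {"statusCode": 500, "body": json.dumps({"error": "Unexpected error"})}

resp = handler_contract({"prompt": "Hello agent", "agent": "assistant"})
print(resp["statusCode"])  # 200
```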
Configuration
Agent Kernel can be configured via environment variables, .env files, or YAML/JSON configuration files.
Configuration Precedence
Values are loaded in the following order (highest precedence first):
- Environment variables (including variables from a `.env` file)
- Configuration file (YAML or JSON)
- Built-in defaults
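The precedence rule can be sketched with a minimal resolver; this is an illustration only (the `resolve` helper is invented here), not Agent Kernel's actual loader:

```python
import os

def resolve(env_name, file_value, default):
    """Illustrative precedence: env var > config file > built-in default.

    Empty environment variables are ignored, matching the documented
    behavior. Hypothetical sketch, not Agent Kernel's loader.
    """
    env = os.environ.get(env_name)
    if env not in (None, ""):
        return env
    if file_value is not None:
        return file_value
    return default

os.environ["AK_API__PORT"] = "9000"          # env var wins over config file
print(resolve("AK_API__PORT", 8080, 8000))   # 9000
```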
Configuration File
By default, Agent Kernel looks for ./config.yaml in the current working directory.
Override the config file path:
export AK_CONFIG_PATH_OVERRIDE=config.json
# or
export AK_CONFIG_PATH_OVERRIDE=conf/agent-kernel.yaml
Supported formats: .yaml, .yml, .json
Configuration Options
Debug Mode
- Field: `debug`
- Type: boolean
- Default: `false`
- Description: Enable debug mode across the library
- Environment Variable: `AK_DEBUG`
Session Store
Configure where agent sessions are stored.
- Field: `session.type`
- Type: string
- Options: `in_memory`, `redis`
- Default: `in_memory`
- Environment Variable: `AK_SESSION__TYPE`
Redis Configuration
Required when `session.type` is `redis`:

URL
- Field: `session.redis.url`
- Default: `redis://localhost:6379`
- Description: Redis connection URL. Use `rediss://` for SSL
- Environment Variable: `AK_SESSION__REDIS__URL`

TTL (Time to Live)
- Field: `session.redis.ttl`
- Default: `604800` (7 days)
- Description: Session TTL in seconds
- Environment Variable: `AK_SESSION__REDIS__TTL`

Key Prefix
- Field: `session.redis.prefix`
- Default: `ak:sessions:`
- Description: Key prefix for session storage
- Environment Variable: `AK_SESSION__REDIS__PREFIX`
API Configuration
Configure the REST API server (if using the API module).
Host
- Field: `api.host`
- Default: `0.0.0.0`
- Environment Variable: `AK_API__HOST`

Port
- Field: `api.port`
- Default: `8000`
- Environment Variable: `AK_API__PORT`

Custom Router Prefix
- Field: `api.custom_router_prefix`
- Default: `/custom`
- Environment Variable: `AK_API__CUSTOM_ROUTER_PREFIX`

Enabled Routes
- Field: `api.enabled_routes.agents`
- Default: `true`
- Description: Enable agent interaction routes
- Environment Variable: `AK_API__ENABLED_ROUTES__AGENTS`
A2A (Agent-to-Agent) Configuration
Enabled
- Field: `a2a.enabled`
- Default: `false`
- Environment Variable: `AK_A2A__ENABLED`

Agents
- Field: `a2a.agents`
- Default: `["*"]`
- Description: List of agent names to enable for A2A (use `["*"]` for all)
- Environment Variable: `AK_A2A__AGENTS` (comma-separated)

URL
- Field: `a2a.url`
- Default: `http://localhost:8000/a2a`
- Environment Variable: `AK_A2A__URL`

Task Store Type
- Field: `a2a.task_store_type`
- Options: `in_memory`, `redis`
- Default: `in_memory`
- Environment Variable: `AK_A2A__TASK_STORE_TYPE`
MCP (Model Context Protocol) Configuration
Enabled
- Field: `mcp.enabled`
- Default: `false`
- Environment Variable: `AK_MCP__ENABLED`

Expose Agents
- Field: `mcp.expose_agents`
- Default: `false`
- Description: Expose agents as MCP tools
- Environment Variable: `AK_MCP__EXPOSE_AGENTS`

Agents
- Field: `mcp.agents`
- Default: `["*"]`
- Description: List of agent names to expose as MCP tools
- Environment Variable: `AK_MCP__AGENTS` (comma-separated)

URL
- Field: `mcp.url`
- Default: `http://localhost:8000/mcp`
- Environment Variable: `AK_MCP__URL`
Trace (Observability) Configuration
Configure tracing and observability for monitoring agent execution.
Enabled
- Field: `trace.enabled`
- Default: `false`
- Description: Enable tracing/observability
- Environment Variable: `AK_TRACE__ENABLED`

Type
- Field: `trace.type`
- Options: `langfuse`, `openllmetry`
- Default: `langfuse`
- Description: Type of tracing provider to use
- Environment Variable: `AK_TRACE__TYPE`
Langfuse Setup:
To use Langfuse for tracing, install the langfuse extra:
pip install agentkernel[langfuse]
Configure Langfuse credentials via environment variables:
export LANGFUSE_PUBLIC_KEY=pk-lf-...
export LANGFUSE_SECRET_KEY=sk-lf-...
export LANGFUSE_HOST=https://cloud.langfuse.com # or your self-hosted instance
Enable tracing in your configuration:
trace:
  enabled: true
  type: langfuse
OpenLLMetry (Traceloop) Setup:
To use OpenLLMetry for tracing, install the openllmetry extra:
pip install agentkernel[openllmetry]
Configure Traceloop credentials via environment variables:
export TRACELOOP_API_KEY=your-api-key
export TRACELOOP_BASE_URL=https://api.traceloop.com # Optional: for self-hosted
Enable tracing in your configuration:
trace:
  enabled: true
  type: openllmetry
Test Configuration
Configure test comparison modes for automated testing.
Mode
- Field: `test.mode`
- Options: `fuzzy`, `judge`, `fallback`
- Default: `fallback`
- Description: Test comparison mode
- Environment Variable: `AK_TEST__MODE`

Judge Model
- Field: `test.judge.model`
- Default: `gpt-4o-mini`
- Description: LLM model for judge evaluation
- Environment Variable: `AK_TEST__JUDGE__MODEL`

Judge Provider
- Field: `test.judge.provider`
- Default: `openai`
- Description: LLM provider for judge evaluation
- Environment Variable: `AK_TEST__JUDGE__PROVIDER`

Judge Embedding Model
- Field: `test.judge.embedding_model`
- Default: `text-embedding-3-small`
- Description: Embedding model for similarity evaluation
- Environment Variable: `AK_TEST__JUDGE__EMBEDDING_MODEL`
Test Modes:
- `fuzzy`: Uses fuzzy string matching (RapidFuzz)
- `judge`: Uses LLM-based evaluation (Ragas) for semantic similarity
- `fallback`: Tries fuzzy first, falls back to judge if fuzzy fails
test:
  mode: fallback
  judge:
    model: gpt-4o-mini
    provider: openai
    embedding_model: text-embedding-3-small
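The `fallback` mode can be sketched as follows. This is an illustrative stand-in only: it uses the stdlib `difflib` in place of RapidFuzz, leaves the LLM judge as a placeholder, and does not reflect Agent Kernel's actual thresholds or implementation:

```python
from difflib import SequenceMatcher

def fuzzy_match(expected: str, actual: str, threshold: float = 0.8) -> bool:
    """Cheap lexical comparison (stdlib stand-in for RapidFuzz)."""
    return SequenceMatcher(None, expected.lower(), actual.lower()).ratio() >= threshold

def judge_match(expected: str, actual: str) -> bool:
    """Placeholder for the LLM-based (Ragas) judge evaluation."""
    raise NotImplementedError("call your LLM judge here")

def fallback_compare(expected: str, actual: str) -> bool:
    """Sketch of `fallback` mode: try fuzzy first, escalate to the
    (slower, token-costing) judge only if fuzzy fails."""
    if fuzzy_match(expected, actual):
        return True
    return judge_match(expected, actual)

print(fuzzy_match("Paris is the capital of France", "paris is the capital of france"))  # True
```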
Guardrails Configuration
Configure input and output guardrails to validate agent requests and responses for safety and compliance.
Input Guardrails

Enabled
- Field: `guardrail.input.enabled`
- Default: `false`
- Description: Enable input validation guardrails
- Environment Variable: `AK_GUARDRAIL__INPUT__ENABLED`

Type
- Field: `guardrail.input.type`
- Default: `openai`
- Options: `openai`, `bedrock`
- Description: Guardrail provider type
- Environment Variable: `AK_GUARDRAIL__INPUT__TYPE`

Config Path
- Field: `guardrail.input.config_path`
- Default: `None`
- Description: Path to guardrail configuration JSON file (OpenAI only)
- Environment Variable: `AK_GUARDRAIL__INPUT__CONFIG_PATH`

Model
- Field: `guardrail.input.model`
- Default: `gpt-4o-mini`
- Description: LLM model to use for guardrail validation (OpenAI only)
- Environment Variable: `AK_GUARDRAIL__INPUT__MODEL`

ID
- Field: `guardrail.input.id`
- Default: `None`
- Description: AWS Bedrock guardrail ID (Bedrock only)
- Environment Variable: `AK_GUARDRAIL__INPUT__ID`

Version
- Field: `guardrail.input.version`
- Default: `DRAFT`
- Description: AWS Bedrock guardrail version (Bedrock only)
- Environment Variable: `AK_GUARDRAIL__INPUT__VERSION`

Output Guardrails

Enabled
- Field: `guardrail.output.enabled`
- Default: `false`
- Description: Enable output validation guardrails
- Environment Variable: `AK_GUARDRAIL__OUTPUT__ENABLED`

Type
- Field: `guardrail.output.type`
- Default: `openai`
- Options: `openai`, `bedrock`
- Description: Guardrail provider type
- Environment Variable: `AK_GUARDRAIL__OUTPUT__TYPE`

Config Path
- Field: `guardrail.output.config_path`
- Default: `None`
- Description: Path to guardrail configuration JSON file (OpenAI only)
- Environment Variable: `AK_GUARDRAIL__OUTPUT__CONFIG_PATH`

Model
- Field: `guardrail.output.model`
- Default: `gpt-4o-mini`
- Description: LLM model to use for guardrail validation (OpenAI only)
- Environment Variable: `AK_GUARDRAIL__OUTPUT__MODEL`

ID
- Field: `guardrail.output.id`
- Default: `None`
- Description: AWS Bedrock guardrail ID (Bedrock only)
- Environment Variable: `AK_GUARDRAIL__OUTPUT__ID`

Version
- Field: `guardrail.output.version`
- Default: `DRAFT`
- Description: AWS Bedrock guardrail version (Bedrock only)
- Environment Variable: `AK_GUARDRAIL__OUTPUT__VERSION`
Guardrail Setup:
To use OpenAI guardrails, install the `openai` extra:
pip install agentkernel[openai]
To use AWS Bedrock guardrails, install the `aws` extra:
pip install agentkernel[aws]
Create guardrail configuration:
For OpenAI: Create configuration files following the OpenAI Guardrails format.
For Bedrock: Create a guardrail in AWS Bedrock and note the guardrail ID and version.
Configure guardrails in your configuration:
OpenAI Example:
guardrail:
  input:
    enabled: true
    type: openai
    model: gpt-4o-mini
    config_path: /path/to/guardrails_input.json
  output:
    enabled: true
    type: openai
    model: gpt-4o-mini
    config_path: /path/to/guardrails_output.json
Bedrock Example:
guardrail:
  input:
    enabled: true
    type: bedrock
    id: your-guardrail-id
    version: "1" # or "DRAFT"
  output:
    enabled: true
    type: bedrock
    id: your-guardrail-id
    version: "1"
Messaging Platform Integrations
Configure integrations with messaging platforms.
Slack
Agent
- Field: `slack.agent`
- Default: `""`
- Description: Default agent for Slack interactions
- Environment Variable: `AK_SLACK__AGENT`

Agent Acknowledgement
- Field: `slack.agent_acknowledgement`
- Default: `""`
- Description: Acknowledgement message sent when a Slack message is received
- Environment Variable: `AK_SLACK__AGENT_ACKNOWLEDGEMENT`
WhatsApp

Agent
- Field: `whatsapp.agent`
- Default: `""`
- Description: Default agent for WhatsApp interactions
- Environment Variable: `AK_WHATSAPP__AGENT`

Verify Token, Access Token, App Secret, Phone Number ID, API Version
- Environment Variables: `AK_WHATSAPP__VERIFY_TOKEN`, `AK_WHATSAPP__ACCESS_TOKEN`, `AK_WHATSAPP__APP_SECRET`, `AK_WHATSAPP__PHONE_NUMBER_ID`, `AK_WHATSAPP__API_VERSION`
Facebook Messenger
Agent
- Field: `messenger.agent`
- Default: `""`
- Description: Default agent for Facebook Messenger interactions
- Environment Variable: `AK_MESSENGER__AGENT`

Verify Token, Access Token, App Secret, API Version
- Environment Variables: `AK_MESSENGER__VERIFY_TOKEN`, `AK_MESSENGER__ACCESS_TOKEN`, `AK_MESSENGER__APP_SECRET`, `AK_MESSENGER__API_VERSION`
Instagram

Agent
- Field: `instagram.agent`
- Default: `""`
- Description: Default agent for Instagram interactions
- Environment Variable: `AK_INSTAGRAM__AGENT`

Instagram Account ID, Verify Token, Access Token, App Secret, API Version
- Environment Variables: `AK_INSTAGRAM__INSTAGRAM_ACCOUNT_ID`, `AK_INSTAGRAM__VERIFY_TOKEN`, `AK_INSTAGRAM__ACCESS_TOKEN`, `AK_INSTAGRAM__APP_SECRET`, `AK_INSTAGRAM__API_VERSION`
Telegram
Agent
- Field: `telegram.agent`
- Default: `""`
- Description: Default agent for Telegram interactions
- Environment Variable: `AK_TELEGRAM__AGENT`

Bot Token, Webhook Secret, API Version
- Environment Variables: `AK_TELEGRAM__BOT_TOKEN`, `AK_TELEGRAM__WEBHOOK_SECRET`, `AK_TELEGRAM__API_VERSION`
Gmail
Agent
- Field: `gmail.agent`
- Default: `"general"`
- Description: Default agent for Gmail interactions
- Environment Variable: `AK_GMAIL__AGENT`

Client ID, Client Secret, Token File, Poll Interval, Label Filter
- Environment Variables: `AK_GMAIL__CLIENT_ID`, `AK_GMAIL__CLIENT_SECRET`, `AK_GMAIL__TOKEN_FILE`, `AK_GMAIL__POLL_INTERVAL`, `AK_GMAIL__LABEL_FILTER`
Configuration Examples
Environment Variables
Use the `AK_` prefix and double underscores (`__`) for nested fields:
export AK_DEBUG=true
export AK_SESSION__TYPE=redis
export AK_SESSION__REDIS__URL=redis://localhost:6379
export AK_SESSION__REDIS__TTL=604800
export AK_SESSION__REDIS__PREFIX=ak:sessions:
export AK_API__HOST=0.0.0.0
export AK_API__PORT=8000
export AK_A2A__ENABLED=true
export AK_MCP__ENABLED=false
export AK_TRACE__ENABLED=true
export AK_TRACE__TYPE=langfuse # or openllmetry
# For Langfuse:
# export LANGFUSE_PUBLIC_KEY=pk-lf-...
# export LANGFUSE_SECRET_KEY=sk-lf-...
# export LANGFUSE_HOST=https://cloud.langfuse.com
# For OpenLLMetry:
# export TRACELOOP_API_KEY=your-api-key
export AK_TEST__MODE=fallback # Options: fuzzy, judge, fallback
export AK_TEST__JUDGE__MODEL=gpt-4o-mini
export AK_TEST__JUDGE__PROVIDER=openai
export AK_TEST__JUDGE__EMBEDDING_MODEL=text-embedding-3-small
# Guardrails configuration
export AK_GUARDRAIL__INPUT__ENABLED=false
export AK_GUARDRAIL__INPUT__TYPE=openai
export AK_GUARDRAIL__INPUT__MODEL=gpt-4o-mini
export AK_GUARDRAIL__INPUT__CONFIG_PATH=/path/to/guardrails_input.json
export AK_GUARDRAIL__OUTPUT__ENABLED=false
export AK_GUARDRAIL__OUTPUT__TYPE=openai
export AK_GUARDRAIL__OUTPUT__MODEL=gpt-4o-mini
export AK_GUARDRAIL__OUTPUT__CONFIG_PATH=/path/to/guardrails_output.json
# Messaging platforms (optional)
export AK_SLACK__AGENT=my-agent
export AK_WHATSAPP__AGENT=my-agent
export AK_MESSENGER__AGENT=my-agent
export AK_INSTAGRAM__AGENT=my-agent
export AK_TELEGRAM__AGENT=my-agent
export AK_GMAIL__AGENT=my-agent
export AK_GMAIL__CLIENT_ID=your-google-client-id
export AK_GMAIL__CLIENT_SECRET=your-google-client-secret
.env File
Create a .env file in your working directory:
AK_DEBUG=false
AK_SESSION__TYPE=redis
AK_SESSION__REDIS__URL=rediss://my-redis:6379
AK_SESSION__REDIS__TTL=1209600
AK_SESSION__REDIS__PREFIX=ak:prod:sessions:
AK_API__HOST=0.0.0.0
AK_API__PORT=8080
AK_A2A__ENABLED=true
AK_A2A__URL=http://localhost:8080/a2a
AK_TRACE__ENABLED=true
AK_TRACE__TYPE=langfuse # or openllmetry
# Langfuse credentials (if using langfuse):
# LANGFUSE_PUBLIC_KEY=pk-lf-...
# LANGFUSE_SECRET_KEY=sk-lf-...
# LANGFUSE_HOST=https://cloud.langfuse.com
# OpenLLMetry credentials (if using openllmetry):
# TRACELOOP_API_KEY=your-api-key
config.yaml
debug: false
session:
  type: redis
  redis:
    url: redis://localhost:6379
    ttl: 604800
    prefix: "ak:sessions:"
api:
  host: 0.0.0.0
  port: 8000
  enabled_routes:
    agents: true
a2a:
  enabled: true
  agents: ["*"]
  url: http://localhost:8000/a2a
  task_store_type: in_memory
mcp:
  enabled: false
  expose_agents: false
  agents: ["*"]
  url: http://localhost:8000/mcp
trace:
  enabled: true
  type: langfuse
test:
  mode: fallback
  judge:
    model: gpt-4o-mini
    provider: openai
    embedding_model: text-embedding-3-small
guardrail:
  input:
    enabled: false
    type: openai
    model: gpt-4o-mini
    config_path: /path/to/guardrails_input.json
  output:
    enabled: false
    type: openai
    model: gpt-4o-mini
    config_path: /path/to/guardrails_output.json
slack:
  agent: my-agent
  agent_acknowledgement: "Processing your request..."
whatsapp:
  agent: my-agent
  agent_acknowledgement: "Processing..."
messenger:
  agent: my-agent
instagram:
  agent: my-agent
telegram:
  agent: my-agent
gmail:
  agent: my-agent
  poll_interval: 30
  label_filter: "INBOX"
config.json
{
  "debug": false,
  "session": {
    "type": "redis",
    "redis": {
      "url": "redis://localhost:6379",
      "ttl": 604800,
      "prefix": "ak:sessions:"
    }
  },
  "api": {
    "host": "0.0.0.0",
    "port": 8000,
    "enabled_routes": {
      "agents": true
    }
  },
  "a2a": {
    "enabled": true,
    "agents": ["*"],
    "url": "http://localhost:8000/a2a",
    "task_store_type": "in_memory"
  },
  "mcp": {
    "enabled": false,
    "expose_agents": false,
    "agents": ["*"],
    "url": "http://localhost:8000/mcp"
  },
  "trace": {
    "enabled": true,
    "type": "langfuse"
  },
  "test": {
    "mode": "fallback",
    "judge": {
      "model": "gpt-4o-mini",
      "provider": "openai",
      "embedding_model": "text-embedding-3-small"
    }
  },
  "guardrail": {
    "input": {
      "enabled": false,
      "type": "openai",
      "model": "gpt-4o-mini",
      "config_path": "/path/to/guardrails_input.json"
    },
    "output": {
      "enabled": false,
      "type": "openai",
      "model": "gpt-4o-mini",
      "config_path": "/path/to/guardrails_output.json"
    }
  },
  "slack": {
    "agent": "my-agent",
    "agent_acknowledgement": "Processing your request..."
  },
  "whatsapp": {
    "agent": "my-agent",
    "agent_acknowledgement": "Processing..."
  },
  "messenger": {
    "agent": "my-agent"
  },
  "instagram": {
    "agent": "my-agent"
  },
  "telegram": {
    "agent": "my-agent"
  },
  "gmail": {
    "agent": "my-agent",
    "poll_interval": 30,
    "label_filter": "INBOX"
  }
}
Configuration Notes
- Empty environment variables are ignored
- Unknown fields in files or environment variables are ignored
- Environment variables override configuration file values
- Configuration file values override built-in defaults
- Nested fields use a double-underscore (`__`) delimiter in environment variables
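The mapping from environment variable names to nested configuration fields can be sketched as follows; this is an illustration only (the `env_key_to_path` helper is invented here), not the library's actual parser:

```python
def env_key_to_path(name: str, prefix: str = "AK_") -> list[str]:
    """Convert an AK_ environment variable name to a nested config path.

    Illustrative sketch: strip the prefix, split on the double-underscore
    delimiter, and lowercase each segment.
    """
    assert name.startswith(prefix)
    return [part.lower() for part in name[len(prefix):].split("__")]

print(env_key_to_path("AK_SESSION__REDIS__URL"))  # ['session', 'redis', 'url']
```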
Extensibility
Custom Framework Adapters
To add support for a new framework:
- Implement a `Runner` class for your framework
- Create an `Agent` wrapper class
- Create a `Module` class that registers agents with the Runtime
Example structure:
from agentkernel.core import Agent, Runner, Module
class MyFrameworkRunner(Runner):
    def run(self, agent, prompt, session):
        # Implement framework-specific execution
        pass

class MyFrameworkAgent(Agent):
    def __init__(self, native_agent):
        self.native_agent = native_agent
        self.runner = MyFrameworkRunner()

class MyFrameworkModule(Module):
    def __init__(self, agents):
        super().__init__()
        for agent in agents:
            wrapped = MyFrameworkAgent(agent)
            self.register(wrapped)
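The Runner/Agent/Module wiring can be exercised without any framework installed. The toy below uses plain classes standing in for agentkernel's base classes (all `Toy*` names are invented here), so it only demonstrates the registration pattern, not the real API:

```python
class ToyRunner:
    def run(self, agent, prompt, session):
        # Framework-specific execution would go here.
        return f"[{agent.name}] {prompt}"

class ToyAgent:
    """Wraps a native agent (here just a name string)."""
    def __init__(self, name):
        self.name = name
        self.runner = ToyRunner()

class ToyModule:
    """Stand-in for a Module: wraps native agents and registers them."""
    def __init__(self, agents):
        self.registry = {}
        for agent in agents:
            wrapped = ToyAgent(agent)
            self.registry[wrapped.name] = wrapped

module = ToyModule(["math", "general"])
agent = module.registry["math"]
print(agent.runner.run(agent, "2 + 2?", session=None))  # [math] 2 + 2?
```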
Session Management
Sessions maintain state across agent interactions. Framework adapters manage their own session storage within the Session object using namespaced keys:
- `"crewai"`: CrewAI session data
- `"langgraph"`: LangGraph session data
- `"openai"`: OpenAI Agents SDK session data
- `"adk"`: Google ADK session data
Access the session in your runner:
def run(self, agent, prompt, session):
    # Get framework-specific data
    my_data = session.get("my_framework", {})
    # Process and update data
    my_data["last_prompt"] = prompt
    # Update session
    session.set("my_framework", my_data)
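To see the namespacing pattern in isolation, here is a toy session with the same `get`/`set` shape as used above. `ToySession` is a hypothetical stand-in; the real `Session` class lives in agentkernel and may differ:

```python
class ToySession:
    """Minimal stand-in illustrating per-framework namespaced storage."""
    def __init__(self):
        self._data = {}

    def get(self, namespace, default=None):
        return self._data.get(namespace, default)

    def set(self, namespace, value):
        self._data[namespace] = value

session = ToySession()
state = session.get("my_framework", {})
state["last_prompt"] = "What is 2 + 2?"
session.set("my_framework", state)
print(session.get("my_framework"))  # {'last_prompt': 'What is 2 + 2?'}
```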
Development
Requirements:
- Python 3.12+
- uv 0.8.0+ (recommended) or pip
Setup:
git clone https://github.com/yaalalabs/agent-kernel.git
cd agent-kernel/ak-py
uv sync # or: pip install -e ".[dev]"
Run Tests:
uv run pytest
# or: pytest
Code Quality:
The project uses:
- `black`: Code formatting
- `isort`: Import sorting
- `mypy`: Type checking
License
MIT License - see LICENSE file for details.
Support
- Issues: GitHub Issues
- Documentation: Full Documentation
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.