# KubeAgentic v2 🤖

Build powerful AI agents from YAML configuration with an OpenAI-compatible REST API.

## 🌟 Overview

KubeAgentic v2 is a Python library that simplifies building AI agents with LangGraph. Define your agents declaratively in YAML and access them through an OpenAI-compatible REST API. No complex code required!
### Key Features

- ✨ **Declarative Configuration** - Define agents in simple YAML files
- 🔌 **OpenAI-Compatible API** - Drop-in replacement for OpenAI endpoints
- 🚀 **Multiple LLM Providers** - OpenAI, Anthropic, Ollama, Hugging Face, and more
- 🔧 **Flexible Tool System** - Built-in and custom tool support
- 📊 **Production-Ready** - Logging, monitoring, rate limiting, and more
- 🔒 **Secure by Default** - API key auth, CORS, rate limiting
- 💾 **Session Management** - Persistent conversation history
- ⚡ **Streaming Support** - Real-time token streaming
- 📈 **Cost Tracking** - Monitor token usage and costs
## 🚀 Quick Start

### Installation

```bash
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install KubeAgentic
pip install kubeagentic

# Or install from source
git clone https://github.com/KubeAgentic-Community/kubeagenticpkg.git
cd kubeagenticpkg
pip install -e ".[dev]"
```
### Create Your First Agent

1. Create a configuration file (`my_agent.yaml`):

```yaml
version: "1.0"
agent:
  name: "customer_support_agent"
  description: "A helpful customer support agent"
  llm:
    provider: "openai"
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
  system_prompt: |
    You are a helpful customer support agent.
    Be friendly, professional, and concise.
  tools:
    - name: "search_knowledge_base"
      description: "Search the company knowledge base"
    - name: "create_ticket"
      description: "Create a support ticket"
  logging:
    level: "info"
```
2. Start the API server:

```bash
# Using CLI
kubeagentic serve --config my_agent.yaml --port 8000
```

```python
# Or using Python
from kubeagentic import AgentServer

server = AgentServer.from_yaml("my_agent.yaml")
server.run(host="0.0.0.0", port=8000)
```
3. Use the API:

```python
import openai

# Point to your KubeAgentic server
client = openai.OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:8000/v1",
)

# Chat with your agent
response = client.chat.completions.create(
    model="customer_support_agent",
    messages=[
        {"role": "user", "content": "How do I reset my password?"}
    ],
)

print(response.choices[0].message.content)
```
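Because the endpoint speaks plain JSON over HTTP, the same call works without the OpenAI SDK. A stdlib-only sketch (the `/v1/chat/completions` path and the bearer-token header follow the OpenAI convention; the helper name is mine):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Assemble an OpenAI-style chat completion HTTP request."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000/v1",
    "your-api-key",
    "customer_support_agent",
    [{"role": "user", "content": "How do I reset my password?"}],
)
# With the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```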
## 📖 Documentation

### Configuration Schema

Full YAML configuration options:

```yaml
version: "1.0"
agent:
  name: "my_agent"
  description: "Agent description"

  # LLM Configuration
  llm:
    provider: "openai"  # openai, anthropic, ollama, huggingface
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
    top_p: 1.0
    frequency_penalty: 0.0
    presence_penalty: 0.0

  # Alternative: multiple LLMs with fallback
  llms:
    - provider: "openai"
      model: "gpt-4"
      priority: 1
    - provider: "anthropic"
      model: "claude-3-opus-20240229"
      priority: 2

  # System prompt
  system_prompt: "Your system prompt here"

  # Tools configuration
  tools:
    - name: "tool_name"
      description: "Tool description"
      parameters:
        type: "object"
        properties:
          param1:
            type: "string"
            description: "Parameter description"

  # Memory & session configuration
  memory:
    type: "buffer"  # buffer, summary, conversation
    max_messages: 10

  # Logging configuration
  logging:
    level: "info"  # debug, info, warning, error
    format: "json"
    output: "console"  # console, file

  # Cost & rate limits
  limits:
    max_tokens_per_request: 4000
    max_requests_per_minute: 60
    daily_token_budget: 1000000
```
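A quick structural check on a parsed config can catch typos before the server starts. A stdlib-only sketch (required fields mirror the schema above; the validation rules themselves are illustrative, not the library's):

```python
def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems found in a parsed agent config dict."""
    problems = []
    agent = cfg.get("agent", {})
    if not agent.get("name"):
        problems.append("agent.name is required")
    llm = agent.get("llm") or {}
    llms = agent.get("llms") or []
    if not llm and not llms:
        problems.append("one of agent.llm or agent.llms is required")
    providers = {"openai", "anthropic", "ollama", "huggingface"}
    for entry in ([llm] if llm else []) + list(llms):
        if entry.get("provider") not in providers:
            problems.append(f"unknown provider: {entry.get('provider')}")
        temperature = entry.get("temperature", 0.7)
        if not 0.0 <= temperature <= 2.0:
            problems.append("temperature must be between 0.0 and 2.0")
    return problems
```

An empty list means the config passed every check; otherwise each entry names one problem.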
### API Endpoints

KubeAgentic provides OpenAI-compatible endpoints:

#### Chat Completions

`POST /v1/chat/completions`

```json
{
  "model": "customer_support_agent",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "stream": false
}
```
#### Streaming

```python
response = client.chat.completions.create(
    model="my_agent",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in response:
    # The final chunk carries no content, so guard against None
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
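Under the hood, `stream=True` responses arrive as Server-Sent Events, one JSON chunk per `data:` line. A sketch of parsing such a stream by hand (the chunk layout assumed here follows the OpenAI wire format):

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # server signals end of stream
        delta = json.loads(payload)["choices"][0]["delta"]
        if delta.get("content"):
            yield delta["content"]

sample = [
    'data: {"choices": [{"delta": {"content": "Once"}}]}',
    'data: {"choices": [{"delta": {"content": " upon"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # Once upon
```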
#### Health & Monitoring

```
GET /health    # Liveness check
GET /ready     # Readiness check
GET /metrics   # Prometheus metrics
```
## 🔧 Advanced Features

### Custom Tools

Create custom tools for your agents:

```python
from kubeagentic.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(..., description="Search query")

class SearchTool(BaseTool):
    name = "search"
    description = "Search the web"
    args_schema = SearchInput

    async def _arun(self, query: str) -> str:
        # Your search implementation goes here
        results = await search_web(query)
        return results
```
Register in YAML:

```yaml
tools:
  - type: "custom"
    class: "my_module.SearchTool"
```
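Resolving a dotted `class:` path like `my_module.SearchTool` is typically done with `importlib`. A stdlib sketch of how such a loader might work (this is not KubeAgentic's actual implementation):

```python
import importlib

def load_class(dotted_path: str) -> type:
    """Import a class given its 'package.module.ClassName' path."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Works for any importable class, e.g. a stdlib one:
decoder_cls = load_class("json.JSONDecoder")
```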
### Multiple Agents

Run multiple agents simultaneously:

```python
from kubeagentic import AgentManager

manager = AgentManager()
manager.load_agent("agent1.yaml")
manager.load_agent("agent2.yaml")

# Access different agents
response = await manager.chat(
    agent_name="agent1",
    message="Hello!",
)
```
### Session Management

Maintain conversation context:

```python
# Create a session
session_id = await manager.create_session(
    agent_name="my_agent",
    user_id="user123",
)

# Continue the conversation
response = await manager.chat(
    agent_name="my_agent",
    message="What did we discuss earlier?",
    session_id=session_id,
)
```
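Conceptually, a session is just an id mapped to a message history that gets replayed into the prompt on each turn. A minimal in-memory sketch of the idea (KubeAgentic persists sessions server-side; this store and its method names are illustrative):

```python
import uuid

class SessionStore:
    """Toy in-memory session store keyed by session id."""

    def __init__(self):
        self._sessions = {}

    def create(self, user_id: str) -> str:
        session_id = str(uuid.uuid4())
        self._sessions[session_id] = {"user_id": user_id, "messages": []}
        return session_id

    def append(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id]["messages"].append(
            {"role": role, "content": content}
        )

    def history(self, session_id: str) -> list[dict]:
        return list(self._sessions[session_id]["messages"])

store = SessionStore()
sid = store.create("user123")
store.append(sid, "user", "Hello!")
store.append(sid, "assistant", "Hi, how can I help?")
```

On the next request, `history(sid)` would be prepended to the new user message before calling the LLM.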
## 🔒 Security

### API Key Authentication

```bash
# Generate API key
kubeagentic apikey create --name "my-app"

# Use in requests
curl -H "Authorization: Bearer YOUR_API_KEY" \
  http://localhost:8000/v1/chat/completions
```
### Environment Variables

```bash
# .env file
KUBEAGENTIC_API_KEYS=key1,key2,key3
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
DATABASE_URL=postgresql://user:pass@localhost/db
REDIS_URL=redis://localhost:6379
```
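A server reading `KUBEAGENTIC_API_KEYS` would typically split the comma-separated value and check incoming bearer tokens against it. A stdlib sketch (the variable name comes from the example above; the checking logic is illustrative, not the library's):

```python
def load_api_keys(env: dict) -> set[str]:
    """Parse a comma-separated key list, e.g. from os.environ."""
    raw = env.get("KUBEAGENTIC_API_KEYS", "")
    return {key.strip() for key in raw.split(",") if key.strip()}

def is_authorized(auth_header: str, valid_keys: set[str]) -> bool:
    """Accept 'Bearer <key>' headers whose key is in the valid set."""
    scheme, _, token = auth_header.partition(" ")
    return scheme == "Bearer" and token in valid_keys

keys = load_api_keys({"KUBEAGENTIC_API_KEYS": "key1,key2,key3"})
```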
## 📊 Monitoring & Observability

### Prometheus Metrics

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'kubeagentic'
    static_configs:
      - targets: ['localhost:8000']
```
Available metrics:

- `kubeagentic_requests_total` - Total requests
- `kubeagentic_request_duration_seconds` - Request latency
- `kubeagentic_tokens_used_total` - Token usage
- `kubeagentic_costs_total` - Total costs
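Hypothetical PromQL queries built on these metric names (the assumption that the duration metric is a Prometheus histogram is mine, not documented by the project):

```promql
# Request rate over the last 5 minutes
rate(kubeagentic_requests_total[5m])

# 95th-percentile request latency (assumes a histogram metric)
histogram_quantile(0.95,
  rate(kubeagentic_request_duration_seconds_bucket[5m]))

# Tokens consumed over the last hour
increase(kubeagentic_tokens_used_total[1h])
```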
### Structured Logging

```python
# Configure logging
from kubeagentic.logging import setup_logging

setup_logging(
    level="INFO",
    format="json",
    output="console",
)
```
## 🐳 Docker Deployment

### Docker

```bash
# Build image
docker build -t kubeagentic:latest .

# Run container
docker run -d \
  -p 8000:8000 \
  -v $(pwd)/config:/app/config \
  -e OPENAI_API_KEY=sk-... \
  kubeagentic:latest
```
### Docker Compose

```yaml
version: '3.8'

services:
  kubeagentic:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./config:/app/config
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - DATABASE_URL=postgresql://postgres:password@db:5432/kubeagentic
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=kubeagentic
      - POSTGRES_PASSWORD=password

  redis:
    image: redis:7-alpine
```
## ☸️ Kubernetes Deployment

```bash
# Apply manifests
kubectl apply -f k8s/

# Or use Helm
helm install kubeagentic ./helm/kubeagentic
```
## 🧪 Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=kubeagentic --cov-report=html

# Run a specific test
pytest tests/test_agent.py -v

# Run load tests
locust -f tests/load/locustfile.py
```
## 🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

## 📮 Support

- 🌐 Website: https://kubeagentic.com
- 📧 Email: contact@kubeagentic.com
- 🐛 Issues: GitHub Issues
- 📚 Docs: Documentation
- 💬 Discussions: GitHub Discussions

Built with ❤️ by the KubeAgentic Team
## File details

Details for the file `kubeagentic-0.2.1.tar.gz`.

### File metadata

- Download URL: kubeagentic-0.2.1.tar.gz
- Upload date:
- Size: 29.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e0524612fddef2b27fe83a6494b81c3a8c244cabb56218cbc9b20fb969bab50c` |
| MD5 | `d9d174327b6d71da0472051e29d212a6` |
| BLAKE2b-256 | `b0060a78ec729b5ddcf8b13ce5d946e0c361efd0bb91a666851c55593f6c40a8` |
## File details

Details for the file `kubeagentic-0.2.1-py3-none-any.whl`.

### File metadata

- Download URL: kubeagentic-0.2.1-py3-none-any.whl
- Upload date:
- Size: 9.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d217788df4aa3850ff33075326f77ad03bf19f9d6b5c4de0f777249a49b9aa54` |
| MD5 | `d007603a0fd5427cfc7707363dd1cf2c` |
| BLAKE2b-256 | `66a943f675681d36c4c680afad2f143fc0bb91995ce2bb13327abe8365a36d4f` |