
KubeAgentic v2 🤖

Build powerful AI agents from YAML configuration with OpenAI-compatible REST API

Python 3.10+ License: MIT Code style: black


🌟 Overview

KubeAgentic v2 is a powerful Python library that simplifies building AI agents with LangGraph. Define your agents declaratively in YAML and access them through an OpenAI-compatible REST API. No complex code required!

Key Features

📝 Declarative Configuration - Define agents in simple YAML files
🔌 OpenAI-Compatible API - Drop-in replacement for OpenAI endpoints
🚀 Multiple LLM Providers - OpenAI, Anthropic, Ollama, Hugging Face, and more
🔧 Flexible Tool System - Built-in and custom tool support
📊 Production-Ready - Logging, monitoring, rate limiting, and more
🔒 Secure by Default - API key auth, CORS, rate limiting
💾 Session Management - Persistent conversation history
⚡ Streaming Support - Real-time token streaming
📈 Cost Tracking - Monitor token usage and costs


🚀 Quick Start

Installation

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install KubeAgentic
pip install kubeagentic

# Or install from source
git clone https://github.com/yourusername/kubeagentic.git
cd kubeagentic
pip install -e ".[dev]"

Create Your First Agent

1. Create a configuration file (my_agent.yaml):

version: "1.0"
agent:
  name: "customer_support_agent"
  description: "A helpful customer support agent"
  
  llm:
    provider: "openai"
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
  
  system_prompt: |
    You are a helpful customer support agent.
    Be friendly, professional, and concise.
  
  tools:
    - name: "search_knowledge_base"
      description: "Search the company knowledge base"
    - name: "create_ticket"
      description: "Create a support ticket"
  
  logging:
    level: "info"

2. Start the API server:

# Using CLI
kubeagentic serve --config my_agent.yaml --port 8000

# Or using Python
from kubeagentic import AgentServer

server = AgentServer.from_yaml("my_agent.yaml")
server.run(host="0.0.0.0", port=8000)

3. Use the API:

import openai

# Point to your KubeAgentic server
client = openai.OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:8000/v1"
)

# Chat with your agent
response = client.chat.completions.create(
    model="customer_support_agent",
    messages=[
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

print(response.choices[0].message.content)
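Before serving a config, it can help to sanity-check its structure. Below is a minimal, stdlib-only sketch of such a check — an illustration, not KubeAgentic's own validator, which is stricter. The config is written as a Python dict mirroring the YAML above:

```python
# Illustrative structural check for an agent config.
# (KubeAgentic performs its own, stricter validation internally.)
config = {
    "version": "1.0",
    "agent": {
        "name": "customer_support_agent",
        "llm": {"provider": "openai", "model": "gpt-4", "temperature": 0.7},
        "system_prompt": "You are a helpful customer support agent.",
    },
}

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks sane."""
    problems = []
    if cfg.get("version") != "1.0":
        problems.append("version must be '1.0'")
    agent = cfg.get("agent") or {}
    if not agent.get("name"):
        problems.append("agent.name is required")
    llm = agent.get("llm") or {}
    for key in ("provider", "model"):
        if not llm.get(key):
            problems.append(f"agent.llm.{key} is required")
    return problems

print(validate_config(config))  # []
```

Catching a missing `agent.name` or `llm.provider` this way is cheaper than discovering it when the server refuses to start.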

📖 Documentation

Configuration Schema

Full YAML configuration options:

version: "1.0"

agent:
  name: "my_agent"
  description: "Agent description"
  
  # LLM Configuration
  llm:
    provider: "openai"  # openai, anthropic, ollama, huggingface
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
    top_p: 1.0
    frequency_penalty: 0.0
    presence_penalty: 0.0
    
  # Alternative: Multiple LLMs with fallback
  llms:
    - provider: "openai"
      model: "gpt-4"
      priority: 1
    - provider: "anthropic"
      model: "claude-3-opus-20240229"
      priority: 2
      
  # System prompt
  system_prompt: "Your system prompt here"
  
  # Tools configuration
  tools:
    - name: "tool_name"
      description: "Tool description"
      parameters:
        type: "object"
        properties:
          param1:
            type: "string"
            description: "Parameter description"
      
  # Memory & session configuration
  memory:
    type: "buffer"  # buffer, summary, conversation
    max_messages: 10
    
  # Logging configuration
  logging:
    level: "info"  # debug, info, warning, error
    format: "json"
    output: "console"  # console, file
    
  # Cost & rate limits
  limits:
    max_tokens_per_request: 4000
    max_requests_per_minute: 60
    daily_token_budget: 1000000

API Endpoints

KubeAgentic provides OpenAI-compatible endpoints:

Chat Completions

POST /v1/chat/completions
{
  "model": "customer_support_agent",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "stream": false
}
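The endpoint answers in the standard OpenAI chat-completion shape, so the reply lives at `choices[0].message.content`. A sketch that parses an illustrative response payload (the field names follow the OpenAI format; the values here are made up):

```python
import json

# Illustrative response payload in the OpenAI chat-completion format.
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "customer_support_agent",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 7, "total_tokens": 19},
})

response = json.loads(raw)
reply = response["choices"][0]["message"]["content"]
print(reply)                              # Hello! How can I help?
print(response["usage"]["total_tokens"])  # 19
```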

Streaming

response = client.chat.completions.create(
    model="my_agent",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in response:
    # The final chunk's delta has no content, so guard against None.
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Health & Monitoring

GET /health          # Liveness check
GET /ready           # Readiness check
GET /metrics         # Prometheus metrics
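A liveness probe is just an HTTP GET that expects a 200. The sketch below is runnable without a KubeAgentic server: it stands up a stub `/health` endpoint with the stdlib, then checks it — the `is_healthy` helper is an illustration, not part of the library.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for a real server's /health endpoint.
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def is_healthy(base_url: str) -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

print(is_healthy(f"http://127.0.0.1:{server.server_port}"))  # True
server.shutdown()
```

Point the same helper at `http://localhost:8000` to probe a running KubeAgentic instance.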

🔧 Advanced Features

Custom Tools

Create custom tools for your agents:

from kubeagentic.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(..., description="Search query")

class SearchTool(BaseTool):
    name = "search"
    description = "Search the web"
    args_schema = SearchInput
    
    async def _arun(self, query: str) -> str:
        # Your search implementation
        results = await search_web(query)
        return results

Register in YAML:

tools:
  - type: "custom"
    class: "my_module.SearchTool"
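Resolving a dotted path like `my_module.SearchTool` is typically done with `importlib`. Here is an illustrative sketch (not necessarily how KubeAgentic loads tools internally); it builds a synthetic `my_module` so the example runs standalone:

```python
import importlib
import sys
import types

# Build a synthetic 'my_module' so the example is self-contained.
my_module = types.ModuleType("my_module")
exec("class SearchTool:\n    name = 'search'", my_module.__dict__)
sys.modules["my_module"] = my_module

def load_class(dotted_path: str):
    """Split 'pkg.mod.ClassName' into module path and class name, then import."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

tool_cls = load_class("my_module.SearchTool")
print(tool_cls.name)  # search
```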

Multiple Agents

Run multiple agents simultaneously:

import asyncio

from kubeagentic import AgentManager

manager = AgentManager()
manager.load_agent("agent1.yaml")
manager.load_agent("agent2.yaml")

# manager.chat is a coroutine, so run it in an event loop
async def main():
    response = await manager.chat(
        agent_name="agent1",
        message="Hello!"
    )

asyncio.run(main())

Session Management

Maintain conversation context:

import asyncio

# 'manager' is the AgentManager from the previous example
async def main():
    # Create a session
    session_id = await manager.create_session(
        agent_name="my_agent",
        user_id="user123"
    )

    # Continue conversation
    response = await manager.chat(
        agent_name="my_agent",
        message="What did we discuss earlier?",
        session_id=session_id
    )

asyncio.run(main())
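Conceptually, a session is a keyed message history. The in-memory sketch below only illustrates that idea — KubeAgentic persists history in real storage rather than a dict, and this class is not part of its API:

```python
import uuid

class InMemorySessionStore:
    """Dict-backed stand-in for persistent session storage (illustration only)."""

    def __init__(self):
        self._sessions: dict[str, list[dict]] = {}

    def create_session(self) -> str:
        session_id = str(uuid.uuid4())
        self._sessions[session_id] = []
        return session_id

    def append(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list[dict]:
        return list(self._sessions[session_id])

store = InMemorySessionStore()
sid = store.create_session()
store.append(sid, "user", "How do I reset my password?")
store.append(sid, "assistant", "Click 'Forgot password' on the login page.")
print(len(store.history(sid)))  # 2
```

On each turn, the stored history is prepended to the new message so the agent can answer "what did we discuss earlier?".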

🔒 Security

API Key Authentication

# Generate API key
kubeagentic apikey create --name "my-app"

# Use in requests
curl -H "Authorization: Bearer YOUR_API_KEY" \
     http://localhost:8000/v1/chat/completions

Environment Variables

# .env file
KUBEAGENTIC_API_KEYS=key1,key2,key3
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
DATABASE_URL=postgresql://user:pass@localhost/db
REDIS_URL=redis://localhost:6379
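Checking a bearer token against the comma-separated `KUBEAGENTIC_API_KEYS` allow-list amounts to parsing the variable and doing a constant-time comparison. An illustrative stdlib sketch (not the library's internal auth code):

```python
import hmac
import os

# KUBEAGENTIC_API_KEYS holds a comma-separated allow-list, as in the .env above.
os.environ["KUBEAGENTIC_API_KEYS"] = "key1,key2,key3"  # normally set outside the code

def allowed_keys() -> set[str]:
    raw = os.environ.get("KUBEAGENTIC_API_KEYS", "")
    return {k.strip() for k in raw.split(",") if k.strip()}

def check_bearer(header: str) -> bool:
    """Validate an 'Authorization: Bearer <key>' header value."""
    prefix = "Bearer "
    if not header.startswith(prefix):
        return False
    token = header[len(prefix):]
    # compare_digest avoids leaking key content via timing differences.
    return any(hmac.compare_digest(token, k) for k in allowed_keys())

print(check_bearer("Bearer key2"))   # True
print(check_bearer("Bearer wrong"))  # False
```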

📊 Monitoring & Observability

Prometheus Metrics

# prometheus.yml
scrape_configs:
  - job_name: 'kubeagentic'
    static_configs:
      - targets: ['localhost:8000']

Available metrics:

  • kubeagentic_requests_total - Total requests
  • kubeagentic_request_duration_seconds - Request latency
  • kubeagentic_tokens_used_total - Token usage
  • kubeagentic_costs_total - Total costs
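The `/metrics` endpoint serves these counters in the Prometheus text exposition format, which is easy to parse by hand. A sketch over a few illustrative lines (the metric names come from the list above; the `agent` label is an assumption for the example):

```python
# A few lines in the Prometheus text exposition format, as /metrics would serve.
sample = """\
# HELP kubeagentic_requests_total Total requests
# TYPE kubeagentic_requests_total counter
kubeagentic_requests_total{agent="customer_support_agent"} 42
kubeagentic_tokens_used_total{agent="customer_support_agent"} 18250
"""

def parse_metrics(text: str) -> dict[str, float]:
    """Map 'name{labels}' -> value, skipping comment lines."""
    out = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        out[name] = float(value)
    return out

metrics = parse_metrics(sample)
print(metrics['kubeagentic_requests_total{agent="customer_support_agent"}'])  # 42.0
```

In production you would let Prometheus scrape the endpoint instead, as in the `prometheus.yml` above.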

Structured Logging

# Configure logging
import logging
from kubeagentic.logging import setup_logging

setup_logging(
    level="INFO",
    format="json",
    output="console"
)
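`setup_logging` wires this up for you; for reference, the stdlib equivalent of JSON console logging looks roughly like the sketch below (an illustration, not KubeAgentic's formatter):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line (stdlib-only illustration)."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("kubeagentic.example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("agent loaded")
# emits: {"level": "INFO", "logger": "kubeagentic.example", "message": "agent loaded"}
```

One JSON object per line keeps the output machine-parseable by log shippers.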

🐳 Docker Deployment

Docker

# Build image
docker build -t kubeagentic:latest .

# Run container
docker run -d \
  -p 8000:8000 \
  -v $(pwd)/config:/app/config \
  -e OPENAI_API_KEY=sk-... \
  kubeagentic:latest

Docker Compose

version: '3.8'
services:
  kubeagentic:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./config:/app/config
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - DATABASE_URL=postgresql://postgres:password@db:5432/kubeagentic
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
      
  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=kubeagentic
      - POSTGRES_PASSWORD=password
      
  redis:
    image: redis:7-alpine

☸️ Kubernetes Deployment

# Apply manifests
kubectl apply -f k8s/

# Or use Helm
helm install kubeagentic ./helm/kubeagentic

🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=kubeagentic --cov-report=html

# Run specific test
pytest tests/test_agent.py -v

# Run load tests
locust -f tests/load/locustfile.py

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📝 License

This project is licensed under the MIT License - see LICENSE file for details.


🙏 Acknowledgments


📮 Support


Built with ❤️ by the KubeAgentic Team
