
Project description

KubeAgentic v2 🤖

Build powerful AI agents from YAML configuration with OpenAI-compatible REST API

Python 3.10+ | License: MIT | Code style: black | PyPI version

🌐 Website | 📚 Documentation | 🚀 Quick Start | 💬 Discussions


🌟 Overview

KubeAgentic v2 is a powerful Python library that simplifies building AI agents with LangGraph. Define your agents declaratively in YAML and access them through an OpenAI-compatible REST API. No complex code required!

Key Features

Declarative Configuration - Define agents in simple YAML files
🔌 OpenAI-Compatible API - Drop-in replacement for OpenAI endpoints
🚀 Multiple LLM Providers - OpenAI, Anthropic, Ollama, Hugging Face, and more
🔧 Flexible Tool System - Built-in and custom tool support
📊 Production-Ready - Logging, monitoring, rate limiting, and more
🔒 Secure by Default - API key auth, CORS, rate limiting
💾 Session Management - Persistent conversation history
Streaming Support - Real-time token streaming
📈 Cost Tracking - Monitor token usage and costs


🚀 Quick Start

Installation

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install KubeAgentic
pip install kubeagentic

# Or install from source
git clone https://github.com/KubeAgentic-Community/kubeagenticpkg.git
cd kubeagenticpkg
pip install -e ".[dev]"

Create Your First Agent

1. Create a configuration file (my_agent.yaml):

version: "1.0"
agent:
  name: "customer_support_agent"
  description: "A helpful customer support agent"
  
  llm:
    provider: "openai"
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
  
  system_prompt: |
    You are a helpful customer support agent.
    Be friendly, professional, and concise.
  
  tools:
    - name: "search_knowledge_base"
      description: "Search the company knowledge base"
    - name: "create_ticket"
      description: "Create a support ticket"
  
  logging:
    level: "info"

2. Start the API server:

# Using CLI
kubeagentic serve --config my_agent.yaml --port 8000

# Or using Python
from kubeagentic import AgentServer

server = AgentServer.from_yaml("my_agent.yaml")
server.run(host="0.0.0.0", port=8000)

3. Use the API:

import openai

# Point to your KubeAgentic server
client = openai.OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:8000/v1"
)

# Chat with your agent
response = client.chat.completions.create(
    model="customer_support_agent",
    messages=[
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

print(response.choices[0].message.content)

📖 Documentation

Configuration Schema

Full YAML configuration options:

version: "1.0"

agent:
  name: "my_agent"
  description: "Agent description"
  
  # LLM Configuration
  llm:
    provider: "openai"  # openai, anthropic, ollama, huggingface
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 1000
    top_p: 1.0
    frequency_penalty: 0.0
    presence_penalty: 0.0
    
  # Alternative: Multiple LLMs with fallback
  llms:
    - provider: "openai"
      model: "gpt-4"
      priority: 1
    - provider: "anthropic"
      model: "claude-3-opus-20240229"
      priority: 2
      
  # System prompt
  system_prompt: "Your system prompt here"
  
  # Tools configuration
  tools:
    - name: "tool_name"
      description: "Tool description"
      parameters:
        type: "object"
        properties:
          param1:
            type: "string"
            description: "Parameter description"
      
  # Memory & session configuration
  memory:
    type: "buffer"  # buffer, summary, conversation
    max_messages: 10
    
  # Logging configuration
  logging:
    level: "info"  # debug, info, warning, error
    format: "json"
    output: "console"  # console, file
    
  # Cost & rate limits
  limits:
    max_tokens_per_request: 4000
    max_requests_per_minute: 60
    daily_token_budget: 1000000

API Endpoints

KubeAgentic provides OpenAI-compatible endpoints:

Chat Completions

POST /v1/chat/completions
{
  "model": "customer_support_agent",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "stream": false
}
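The request body above can also be built programmatically. This sketch only constructs the payload; the model name and prompt strings are taken from the example, and the result can be sent to `/v1/chat/completions` with any HTTP client.

```python
import json

# Sketch: build the chat-completions request body shown above.
def build_chat_request(model, user_message, system_prompt=None, stream=False):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages,
            "temperature": 0.7, "stream": stream}

body = build_chat_request("customer_support_agent", "Hello!",
                          system_prompt="You are a helpful assistant.")
print(json.dumps(body, indent=2))
```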

Streaming

response = client.chat.completions.create(
    model="my_agent",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
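Under the hood, streaming responses arrive as server-sent events. Assuming the server emits OpenAI-style SSE lines (`data: <json>` terminated by `data: [DONE]`), the deltas can be reassembled like this; the sample lines below are illustrative, not captured output.

```python
import json

# Sketch: reassemble token deltas from OpenAI-style SSE lines.
def collect_stream(sse_lines):
    """Join the content deltas from a sequence of SSE lines."""
    tokens = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if delta.get("content"):
            tokens.append(delta["content"])
    return "".join(tokens)

sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Once"}}]}',
    'data: {"choices": [{"delta": {"content": " upon"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Once upon
```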

Health & Monitoring

GET /health          # Liveness check
GET /ready           # Readiness check
GET /metrics         # Prometheus metrics

🔧 Advanced Features

Custom Tools

Create custom tools for your agents:

from kubeagentic.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(..., description="Search query")

class SearchTool(BaseTool):
    name = "search"
    description = "Search the web"
    args_schema = SearchInput
    
    async def _arun(self, query: str) -> str:
        # Your search implementation
        results = await search_web(query)
        return results

Register in YAML:

tools:
  - type: "custom"
    class: "my_module.SearchTool"

Multiple Agents

Run multiple agents simultaneously:

from kubeagentic import AgentManager

manager = AgentManager()
manager.load_agent("agent1.yaml")
manager.load_agent("agent2.yaml")

# Access different agents (manager.chat is async, so call it
# from inside an async function)
response = await manager.chat(
    agent_name="agent1",
    message="Hello!"
)

Session Management

Maintain conversation context:

# Create a session (inside an async function)
session_id = await manager.create_session(
    agent_name="my_agent",
    user_id="user123"
)

# Continue conversation
response = await manager.chat(
    agent_name="my_agent",
    message="What did we discuss earlier?",
    session_id=session_id
)

🔒 Security

API Key Authentication

# Generate API key
kubeagentic apikey create --name "my-app"

# Use in requests
curl -H "Authorization: Bearer YOUR_API_KEY" \
     http://localhost:8000/v1/chat/completions

Environment Variables

# .env file
KUBEAGENTIC_API_KEYS=key1,key2,key3
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
DATABASE_URL=postgresql://user:pass@localhost/db
REDIS_URL=redis://localhost:6379
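Assuming the comma-separated `KUBEAGENTIC_API_KEYS` format shown above, a server might load the allowed keys like this; `load_api_keys` is an illustrative helper, not part of the library API.

```python
import os

# Sketch: read the comma-separated key list from the environment,
# assuming the KUBEAGENTIC_API_KEYS format shown above.
def load_api_keys(env=os.environ):
    raw = env.get("KUBEAGENTIC_API_KEYS", "")
    return {key.strip() for key in raw.split(",") if key.strip()}

keys = load_api_keys({"KUBEAGENTIC_API_KEYS": "key1, key2,key3"})
print(sorted(keys))  # ['key1', 'key2', 'key3']
```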

📊 Monitoring & Observability

Prometheus Metrics

# prometheus.yml
scrape_configs:
  - job_name: 'kubeagentic'
    static_configs:
      - targets: ['localhost:8000']

Available metrics:

  • kubeagentic_requests_total - Total requests
  • kubeagentic_request_duration_seconds - Request latency
  • kubeagentic_tokens_used_total - Token usage
  • kubeagentic_costs_total - Total costs

Structured Logging

# Configure structured logging
from kubeagentic.logging import setup_logging

setup_logging(
    level="INFO",
    format="json",
    output="console"
)

🐳 Docker Deployment

Docker

# Build image
docker build -t kubeagentic:latest .

# Run container
docker run -d \
  -p 8000:8000 \
  -v $(pwd)/config:/app/config \
  -e OPENAI_API_KEY=sk-... \
  kubeagentic:latest

Docker Compose

version: '3.8'
services:
  kubeagentic:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./config:/app/config
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - DATABASE_URL=postgresql://postgres:password@db:5432/kubeagentic
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
      
  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=kubeagentic
      - POSTGRES_PASSWORD=password
      
  redis:
    image: redis:7-alpine

☸️ Kubernetes Deployment

# Apply manifests
kubectl apply -f k8s/

# Or use Helm
helm install kubeagentic ./helm/kubeagentic

🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=kubeagentic --cov-report=html

# Run specific test
pytest tests/test_agent.py -v

# Run load tests
locust -f tests/load/locustfile.py

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📝 License

This project is licensed under the MIT License - see LICENSE file for details.


Built with ❤️ by the KubeAgentic Team

Download files

Download the file for your platform.

Source Distribution

kubeagentic-0.2.2.tar.gz (54.3 kB)

Built Distribution


kubeagentic-0.2.2-py3-none-any.whl (43.2 kB)

File details

Details for the file kubeagentic-0.2.2.tar.gz.

File metadata

  • Download URL: kubeagentic-0.2.2.tar.gz
  • Size: 54.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for kubeagentic-0.2.2.tar.gz
Algorithm Hash digest
SHA256 e4d58ee5199bc0c569bee397bf5d4d4685a39d1ed615b888ec54414df2dfa245
MD5 83bb5f955b3c09ee4645937cfa1fe1c2
BLAKE2b-256 8d2cad4e9f66b4ef4b4614638c56106f48111c32bc00124163c03629cb95d061


File details

Details for the file kubeagentic-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: kubeagentic-0.2.2-py3-none-any.whl
  • Size: 43.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for kubeagentic-0.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 cd393dd306bf2ff2c4fb21ceaad0110f5391a5b2681d7a40c6a1560ae3de78f5
MD5 37179fb6dc1b8389e02afbb75a4b7619
BLAKE2b-256 7a5f996ab51d410689656a6f63fa3836dc59baa1eedae66eb7af4e1ae59756b0

