
AI Agent Framework for building intelligent agents with multiple LLM providers

Project description

Demiurg SDK

A powerful AI agent framework for building production-ready conversational agents with support for multiple LLM providers and external tool integrations.

Features

  • 🚀 Clean API - Simple, intuitive agent initialization
  • 🔌 Multi-Provider Support - OpenAI with more providers coming soon
  • 💰 Flexible Billing - Choose who pays for API calls (builder or end-user)
  • 🛠️ Composio Integration - Connect to 150+ external services with OAuth
  • 📬 Built-in Messaging - Queue management and conversation history
  • 📁 Multimodal Support - Handle images, audio, text, and files
  • 🎨 OpenAI Tools - Image generation (DALL-E 3), TTS, transcription
  • ⏳ Progress Indicators - Real-time feedback for long operations
  • 🏗️ Production Ready - Error handling, logging, and scalability

Installation

pip install demiurg

Quick Start

Simple Agent

from demiurg import Agent, OpenAIProvider

# Create an agent with OpenAI
agent = Agent(OpenAIProvider())

# Or with user-based billing
agent = Agent(OpenAIProvider(), billing="user")

Agent with External Tools (Composio)

from demiurg import Agent, OpenAIProvider, Composio

# Create agent with Twitter and GitHub access
agent = Agent(
    OpenAIProvider(),
    Composio("TWITTER", "GITHUB"),
    billing="user"
)

Custom Configuration

from demiurg import Agent, OpenAIProvider, Config

config = Config(
    name="My Assistant",
    description="A helpful AI assistant",
    model="gpt-4o",
    temperature=0.7,
    show_progress_indicators=True
)

agent = Agent(OpenAIProvider(), config=config)

Core Concepts

Billing Modes

The SDK supports two billing modes:

  • "builder" (default) - API calls are charged to the agent builder's account
  • "user" - API calls are charged to the end user's account

# Builder pays for all API calls
agent = Agent(OpenAIProvider(), billing="builder")

# End users pay for their own API calls
agent = Agent(OpenAIProvider(), billing="user")

Composio Integration

Connect your agents to external services like Twitter, GitHub, Gmail, and 150+ more:

# Configure Composio tools
agent = Agent(
    OpenAIProvider(),
    Composio("TWITTER", "GITHUB", "GMAIL"),
    billing="user"
)

# Check if user has connected their account
status = await agent.check_composio_connection("TWITTER", user_id)

# Handle OAuth flow in conversation
if not status["connected"]:
    await agent.handle_composio_auth_in_conversation(message, "TWITTER")

Create a composio-tools.txt file in your project root:

TWITTER=ac_your_twitter_config_id
GITHUB=ac_your_github_config_id
GMAIL=ac_your_gmail_config_id
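The SDK presumably reads this file to map toolkit names to auth config IDs. A minimal parser sketch for the `KEY=value` format (an illustration, not the SDK's actual loader):

```python
# Illustrative parser for composio-tools.txt (KEY=value per line);
# not the SDK's actual implementation.
def parse_tools_file(text: str) -> dict:
    configs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        configs[key.strip()] = value.strip()
    return configs

sample = "TWITTER=ac_your_twitter_config_id\nGITHUB=ac_your_github_config_id\n"
print(parse_tools_file(sample))
# {'TWITTER': 'ac_your_twitter_config_id', 'GITHUB': 'ac_your_github_config_id'}
```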

Progress Indicators

Long operations automatically show progress messages:

config = Config(show_progress_indicators=True)  # Enabled by default

# Users will see:
# "🎨 Creating your image... This may take a moment."
# "🎵 Transcribing audio... This may take a moment."

Message Handling

Sending Messages

from demiurg import send_text, send_file

# Send text message
await send_text(conversation_id, "Hello from my agent!")

# Send file with caption
await send_file(
    conversation_id, 
    "/path/to/image.png", 
    caption="Here's your generated image!"
)

Processing Messages

from demiurg import Message

# Process user message
message = Message(
    content="Generate an image of a sunset",
    user_id="user123",
    conversation_id="conv456"
)

response = await agent.process_message(message)

Conversation History

from demiurg import get_conversation_history

# Get formatted history for LLM context
messages = await get_conversation_history(
    conversation_id,
    limit=50,
    provider="openai"  # Formats for specific provider
)
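The `provider` parameter formats stored records into that provider's chat-message shape. A rough sketch of OpenAI-style formatting, where the record fields (`from_agent`, `content`) are hypothetical:

```python
# Illustrative sketch: convert stored history records into OpenAI-style
# chat messages. The record fields ("from_agent", "content") are
# assumptions, not the SDK's actual storage schema.
def format_for_openai(records: list, limit: int = 50) -> list:
    messages = []
    for record in records[-limit:]:  # keep only the most recent entries
        role = "assistant" if record["from_agent"] else "user"
        messages.append({"role": role, "content": record["content"]})
    return messages

history = [
    {"from_agent": False, "content": "Hi!"},
    {"from_agent": True, "content": "Hello, how can I help?"},
]
print(format_for_openai(history))
```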

Built-in OpenAI Tools

When using OpenAI provider with tools enabled:

config = Config(use_tools=True)
agent = Agent(OpenAIProvider(), config=config)

Available tools:

  • generate_image - Create images with DALL-E 3
  • text_to_speech - Convert text to natural speech
  • transcribe_audio - Transcribe audio files
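With tools enabled, these are exposed to the model in OpenAI's function-calling format. A hypothetical sketch of the schemas (only the tool names come from the list above; the parameter shapes are assumptions):

```python
# Hypothetical schemas for the built-in tools in OpenAI's
# function-calling format; only the names come from the SDK docs,
# the parameter shapes here are illustrative assumptions.
builtin_tools = [
    {"type": "function", "function": {
        "name": "generate_image",
        "description": "Create an image with DALL-E 3",
        "parameters": {"type": "object",
                       "properties": {"prompt": {"type": "string"}},
                       "required": ["prompt"]}}},
    {"type": "function", "function": {
        "name": "text_to_speech",
        "description": "Convert text to natural speech",
        "parameters": {"type": "object",
                       "properties": {"text": {"type": "string"}},
                       "required": ["text"]}}},
    {"type": "function", "function": {
        "name": "transcribe_audio",
        "description": "Transcribe an audio file",
        "parameters": {"type": "object",
                       "properties": {"file_path": {"type": "string"}},
                       "required": ["file_path"]}}},
]

print([t["function"]["name"] for t in builtin_tools])
# ['generate_image', 'text_to_speech', 'transcribe_audio']
```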

Custom Agents

Basic Custom Agent

from demiurg import Agent, OpenAIProvider, Message

class MyCustomAgent(Agent):
    def __init__(self):
        super().__init__(
            OpenAIProvider(),
            billing="user"
        )
    
    async def process_message(self, message: Message, content=None) -> str:
        # Add custom preprocessing
        if "urgent" in message.content.lower():
            return await self.handle_urgent_request(message)
        
        # Use standard processing
        return await super().process_message(message, content)

Agent with Custom Tools

class ToolAgent(Agent):
    def __init__(self):
        config = Config(use_tools=True)
        super().__init__(OpenAIProvider(), config=config)
        
        # Register custom tool
        self.register_tool(
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get current weather",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {"type": "string"}
                        },
                        "required": ["location"]
                    }
                }
            },
            self.get_weather
        )
    
    async def get_weather(self, location: str) -> str:
        # Implement weather fetching
        return f"Weather in {location}: Sunny, 72°F"
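The schema above follows OpenAI's function-calling format. A generic sketch of how a registry-plus-dispatch pattern like `register_tool` can work (independent of the SDK's internals):

```python
import asyncio
import json

# Generic sketch of tool registration and dispatch (not the SDK's
# internals): map tool names from an OpenAI-style function call to
# registered async handlers.
registry = {}

def register_tool(schema: dict, handler) -> None:
    registry[schema["function"]["name"]] = handler

async def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny, 72°F"

register_tool(
    {"type": "function",
     "function": {"name": "get_weather",
                  "description": "Get current weather",
                  "parameters": {"type": "object",
                                 "properties": {"location": {"type": "string"}},
                                 "required": ["location"]}}},
    get_weather,
)

async def dispatch(tool_call: dict) -> str:
    # Arguments arrive as a JSON string in OpenAI's tool-call payloads
    handler = registry[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return await handler(**args)

result = asyncio.run(dispatch({"name": "get_weather",
                               "arguments": '{"location": "Austin"}'}))
print(result)  # Weather in Austin: Sunny, 72°F
```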

File Handling

The SDK automatically handles various file types:

# Images are analyzed with vision models
# Audio files are automatically transcribed
# Text files have their content extracted

# File size limit: 10MB
# Supported image formats: PNG, JPEG, WEBP, GIF
# Supported audio formats: MP3, WAV, M4A, and more

Error Handling

from demiurg.exceptions import (
    DemiurgError,        # Base exception
    ConfigurationError,  # Configuration issues
    MessagingError,      # Messaging failures
    ProviderError,       # LLM provider errors
    FileError,           # File operation failures
    ToolError            # Tool execution errors
)

import logging

logger = logging.getLogger(__name__)

try:
    response = await agent.process_message(message)
except ProviderError as e:
    # Handle LLM provider issues
    logger.error(f"Provider error: {e}")
except DemiurgError as e:
    # Handle other Demiurg errors
    logger.error(f"Agent error: {e}")

Environment Variables

The SDK is configured through environment variables:

# Core Configuration
DEMIURG_BACKEND_URL=http://backend:3000  # Backend API URL
DEMIURG_AGENT_TOKEN=your_token           # Authentication token
DEMIURG_AGENT_ID=your_agent_id           # Unique agent identifier

# Provider Keys
OPENAI_API_KEY=your_openai_key           # For OpenAI provider

# Composio Integration (optional)
COMPOSIO_API_KEY=your_composio_key       # For external tools
COMPOSIO_TOOLS=TWITTER,GITHUB,GMAIL      # Comma-separated toolkits

# Advanced Settings
DEMIURG_USER_ID=builder_user_id          # Builder's user ID (for billing)
TOOL_PROVIDER=composio                   # Tool provider selection
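A start-up check along these lines can fail fast when required variables are missing (an illustrative helper, not part of the SDK):

```python
import os

# Illustrative start-up check (not part of the SDK): verify required
# variables and parse the optional comma-separated toolkit list.
REQUIRED = ["DEMIURG_BACKEND_URL", "DEMIURG_AGENT_TOKEN",
            "DEMIURG_AGENT_ID", "OPENAI_API_KEY"]

def check_env(env=os.environ) -> list:
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    # COMPOSIO_TOOLS is optional, e.g. "TWITTER,GITHUB"
    return [t for t in env.get("COMPOSIO_TOOLS", "").split(",") if t]

tools = check_env({
    "DEMIURG_BACKEND_URL": "http://backend:3000",
    "DEMIURG_AGENT_TOKEN": "token",
    "DEMIURG_AGENT_ID": "agent-1",
    "OPENAI_API_KEY": "key",
    "COMPOSIO_TOOLS": "TWITTER,GITHUB",
})
print(tools)  # ['TWITTER', 'GITHUB']
```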

Advanced Features

Message Queue System

The SDK includes automatic message queuing to prevent race conditions:

# Messages are automatically queued per conversation
# Prevents issues when multiple messages arrive simultaneously
# No additional configuration needed - it just works!
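Per-conversation serialization can be pictured as one lock (or queue) per `conversation_id`. An asyncio sketch of the idea (not the SDK's actual internals):

```python
import asyncio
from collections import defaultdict

# Sketch of per-conversation serialization (not the SDK's internals):
# one asyncio.Lock per conversation_id, so concurrent messages in the
# same conversation are handled in arrival order.
locks = defaultdict(asyncio.Lock)
processed = []

async def handle(conversation_id: str, text: str) -> None:
    async with locks[conversation_id]:
        await asyncio.sleep(0)  # simulate async LLM work
        processed.append((conversation_id, text))

async def main() -> None:
    # Two messages race on conv1; the lock keeps them ordered.
    await asyncio.gather(
        handle("conv1", "first"),
        handle("conv1", "second"),
        handle("conv2", "other"),
    )

asyncio.run(main())
print(processed)
```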

Multimodal Capabilities

# Images are automatically analyzed with a vision model (GPT-4V)
# Audio messages are automatically transcribed with Whisper
# Text files have their content extracted and provided to the LLM

Production Deployment

# FastAPI shown as an example web framework
from fastapi import FastAPI

app = FastAPI()

# Health check endpoint
@app.get("/health")
async def health_check():
    return await agent.health_check()

# Queue status monitoring
@app.get("/queue-status")
async def queue_status():
    return await agent.get_queue_status()

Architecture

The SDK follows a modular architecture:

  • Agent: Core class that orchestrates everything
  • Providers: LLM integrations (OpenAI, etc.)
  • Tools: External service integrations (Composio)
  • Messaging: Communication with Demiurg platform
  • Utils: File handling, audio processing, etc.

Best Practices

  1. Always use async/await - The SDK is built for async operations
  2. Handle errors gracefully - Use try/except blocks with specific exceptions
  3. Configure billing appropriately - Choose who pays for API calls
  4. Set up Composio auth configs - Store in composio-tools.txt
  5. Enable progress indicators - Better UX for long operations
  6. Use appropriate models - GPT-4o for complex tasks, GPT-3.5 for simple ones

Migration Guide

From v0.1.10 to v0.1.11

# Old way
from demiurg import Agent, Config

config = Config(name="My Agent")
agent = Agent(config)

# New way (backward compatible)
from demiurg import Agent, OpenAIProvider

agent = Agent(OpenAIProvider())


License

Copyright © 2024 Demiurg AI. All rights reserved.

This is proprietary software. See LICENSE file for details.

Project details


Download files

Download the file for your platform.

Source Distribution

demiurg-0.1.17.tar.gz (33.9 kB)


Built Distribution


demiurg-0.1.17-py3-none-any.whl (37.9 kB)


File details

Details for the file demiurg-0.1.17.tar.gz.

File metadata

  • Download URL: demiurg-0.1.17.tar.gz
  • Upload date:
  • Size: 33.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.2

File hashes

Hashes for demiurg-0.1.17.tar.gz
Algorithm Hash digest
SHA256 32b73425903d323b8e257621127040ea0dff0efcddcb794848c9339a299356ef
MD5 298e95cbac2f2168e4a3c58bbff78b8f
BLAKE2b-256 54eb4166044358171714a080182d51ff03a0a4124b15bef147b60cb731ed19aa


File details

Details for the file demiurg-0.1.17-py3-none-any.whl.

File metadata

  • Download URL: demiurg-0.1.17-py3-none-any.whl
  • Upload date:
  • Size: 37.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.2

File hashes

Hashes for demiurg-0.1.17-py3-none-any.whl
Algorithm Hash digest
SHA256 f3c62f74a9d3839c242b63558551e86c0a6c2bc02d8cd54759d0f8c14345817c
MD5 59f926189874c386d933d5a43f483073
BLAKE2b-256 00bbbc6595e9a25e6f3f6f46892698e256d8961df74be38b16d673cd435d7067

