Agents Client Library

A client library for interacting with the Agents API

Overview

The Agents Client Library provides a simple interface for interacting with the Agents API. It handles authentication, request management, and provides convenient methods for managing chatbots and agents.

Installation

From PyPI

pip install agents-client

From Source

git clone https://github.com/Levangie-Laboratories/agents-client.git
cd agents-client
pip install -r requirements.txt

Configuration

The client library uses a config.json file for API settings. You can either use the default configuration or provide your own:

from agents.client import AgentClient

# Using default configuration
client = AgentClient()

# Using custom configuration file
client = AgentClient(config_path='path/to/config.json')

# Override configuration programmatically
client = AgentClient(base_url='https://api.example.com', api_version='v2')

Configuration Options

  • base_url: API base URL
  • version: API version
  • timeout: Request timeout in seconds
  • retry_attempts: Number of retry attempts
  • retry_delay: Delay between retries in seconds

See config.json for all available options.
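
For reference, a config.json using the options above might look like this (illustrative values only):

{
    "base_url": "http://localhost:8000",
    "version": "v1",
    "timeout": 30,
    "retry_attempts": 3,
    "retry_delay": 2
}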

Quick Start

Basic Usage

from agents.client import AgentClient

# Initialize client
client = AgentClient("http://localhost:8000")

# Get API key
api_key_data = client.get_quick_api_key()
print(f"API Key: {api_key_data['api_key']}")

# Create a chatbot
config = {
    "behavior": "friendly",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 500
}
chatbot = client.create_chatbot(name="MyBot", model="gpt-4", config=config)

# Make an inference
response = client.infer_chatbot(chatbot["id"], "Hello, how are you?")

Async Streaming Example

from agents.client import AgentClient
import asyncio

async def main():
    # Initialize client with async context manager
    async with AgentClient("http://localhost:8000") as client:
        client.set_api_key("your-api-key")

        # Create an agent with API execution mode
        config = {
            "behavior": "task-focused",
            "model": "gpt-4",
            "api_mode": True  # Enable API execution mode
        }
        agent = await client.create_agent_with_tools(
            name="FileManager",
            model="gpt-4",
            tools=FileTools(),  # Your tool class instance
            config=config
        )

        # Stream interactions with the agent
        async for event in client.process_agent_request(agent["id"], "Update debug mode in config.json"):
            if event["type"] == "function_call":
                print(f"Executing function: {event['data']['function']}")
                # Function is automatically executed by the client
            elif event["type"] == "execution_status":
                print(f"Execution result: {event['data']}")
            elif event["type"] == "completion":
                print(f"Task completed: {event['data']}")
            elif event["type"] == "error":
                print(f"Error: {event['data']}")

# Run the async client
asyncio.run(main())

State Management Example

async with AgentClient("http://localhost:8000") as client:
    # State is automatically synchronized
    async for event in client.process_agent_request(agent_id, message):
        if event["type"] == "state_update":
            print(f"Agent state updated: {event['data']}")
        elif event["type"] == "function_call":
            # State is preserved across function calls
            result = await client.execute_function(event["data"])
            # State is automatically updated with function results
            await client.submit_result(agent_id, event["data"]["sequence_id"], result)

Authentication

The client supports two authentication methods:

  1. Quick API key generation
  2. Manual API key setting

# Method 1: Quick API key
api_key_data = client.get_quick_api_key()

# Method 2: Manual setting
client.set_api_key("your-api-key")
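
In line with the best practices later in this document, the key itself is best read from the environment rather than hard-coded (a minimal sketch; AGENTS_API_KEY is an illustrative variable name, not one required by the library):

import os

# Read the API key from an environment variable instead of hard-coding it
client.set_api_key(os.environ["AGENTS_API_KEY"])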

Chatbot Operations

Creating a Chatbot

config = {
    "behavior": "friendly",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 500,
    "provider": "openai"
}

chatbot = client.create_chatbot(
    name="MyAssistant",
    model="gpt-4",
    config=config
)

Listing Chatbots

chatbots = client.list_chatbots()
for bot in chatbots:
    print(f"Bot: {bot['name']} (ID: {bot['id']})")

Making Inferences

response = client.infer_chatbot(
    chatbot_id=123,
    message="What's the weather like?"
)
print(response["response"])

Updating Chatbots

updated_config = {
    "temperature": 0.8,
    "max_tokens": 1000
}

updated_bot = client.update_chatbot(
    chatbot_id=123,
    name="UpdatedBot",
    model="gpt-4",
    config=updated_config
)

Deleting Chatbots

result = client.delete_chatbot(chatbot_id=123)

Agent Operations

Creating an Agent

config = {
    "tool_config": {...},
    "behavior": "task-focused"
}

agent = client.create_agent(
    name="TaskAgent",
    model="gpt-4",
    class_instance="MyAgentClass",
    config=config
)

Listing Agents

agents = client.list_agents()
for agent in agents:
    print(f"Agent: {agent['name']} (ID: {agent['id']})")

Command Execution System

The client now includes an automatic command execution system using the ClientInterpreter:

from agents.client import AgentClient
from client.command_handler import ToolConfigGenerator

# Define your tools
class FileTools:
    def read_file(self, file_path: str) -> str:
        """Read content from a file"""
        with open(file_path, 'r') as f:
            return f.read()

    def write_file(self, file_path: str, content: str) -> str:
        """Write content to a file"""
        with open(file_path, 'w') as f:
            f.write(content)
        return f"Successfully wrote to {file_path}"

# Initialize client and tools
client = AgentClient()
tools = FileTools()

# Register tools with the interpreter
tool_config = ToolConfigGenerator.extract_command_config(tools)
client.interpreter.register_command_instance(tools, tool_config)

# Interact with agent - commands are executed automatically
response = client.interact(
    agent_id,
    "Update the config file"
)

# The interpreter automatically:
# 1. Executes any commands in the response
# 2. Collects the results
# 3. Sends them back to the agent
# 4. Returns the final response

The new system streamlines command execution. Key features:

  • Automatic command execution and result handling
  • Built-in command validation and safety checks
  • Simplified tool registration using decorators
  • Automatic result mapping in responses
  • Support for both synchronous and asynchronous operations (see the example below)
  • Comprehensive error handling and reporting
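
Both entry points shown in this document apply here: the synchronous client.interact call and the asynchronous client.interact_stream generator. A sketch combining the two styles (it assumes agent_id refers to an existing agent and that an API key has been set):

import asyncio
from agents.client import AgentClient

# Synchronous style: commands in the response are executed automatically
client = AgentClient("http://localhost:8000")
client.set_api_key("your-api-key")
response = client.interact(agent_id, "Summarize config.json")

# Asynchronous style: stream events and handle them as they arrive
async def stream_example():
    async with AgentClient("http://localhost:8000") as async_client:
        async_client.set_api_key("your-api-key")
        async for event in async_client.interact_stream(agent_id, "Summarize config.json"):
            if event["type"] == "completion":
                print(event["data"])

asyncio.run(stream_example())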

Supported Commands

The client can execute various commands locally:

# File operations
commands = [
    {"view_file": {"file_path": "config.json"}},
    {"smart_replace": {
        "file_path": "config.json",
        "old_text": "debug: false",
        "new_text": "debug: true"
    }},
    {"create_file": {
        "file_path": "new_file.txt",
        "content": "Hello, world!"
    }}
]

# Execute commands with safety checks
results = client.execute_commands(commands, context={})

Command Execution Safety

  • File path validation
  • Comprehensive error handling
  • Safe text replacement
  • Automatic retries for network issues

# Example with error handling
try:
    results = client.execute_commands(commands, context={})
    if any(r["status"] == "error" for r in results["command_results"]):
        print("Some commands failed to execute")
        for result in results["command_results"]:
            if result["status"] == "error":
                print(f"Error: {result['error']}")
except Exception as e:
    print(f"Execution failed: {str(e)}")

Streaming Operations

Basic Streaming

async with AgentClient("http://localhost:8000") as client:
    # Stream responses from agent
    async for event in client.interact_stream(agent_id, message):
        if event["type"] == "function_call":
            # Handle function execution
            result = await client.execute_function(event["data"])
            await client.submit_result(agent_id, event["data"]["sequence_id"], result)
        elif event["type"] == "completion":
            print(f"Completed: {event['data']}")

Concurrent Command Execution

async def process_commands(client, commands, instance_id):
    # Commands are executed concurrently
    results = await client.execute_commands(commands, instance_id)
    return results
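
Several independent batches can be dispatched at once with asyncio.gather (a sketch; it assumes execute_commands is awaitable as in the snippet above and that each batch targets its own instance_id):

import asyncio

async def process_batches(client, batches):
    # Run each (commands, instance_id) batch concurrently and collect all results
    tasks = [
        client.execute_commands(commands, instance_id)
        for commands, instance_id in batches
    ]
    return await asyncio.gather(*tasks)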

Error Handling

The client includes comprehensive error handling with streaming support:

Streaming Error Handling

async with AgentClient("http://localhost:8000") as client:
    try:
        async for event in client.interact_stream(agent_id, message):
            if event["type"] == "error":
                print(f"Error occurred: {event['data']}")
                break
            elif event["type"] == "function_call":
                try:
                    result = await client.execute_function(event["data"])
                    await client.submit_result(
                        agent_id,
                        event["data"]["sequence_id"],
                        result
                    )
                except Exception as e:
                    print(f"Function execution error: {e}")
    except Exception as e:
        print(f"Stream error: {e}")

Command Execution Errors

try:
    results = client.execute_commands(commands, context)
    for result in results['command_results']:
        if result['status'] == 'error':
            print(f"Command {result['command']} failed: {result['error']}")
except client.CommandExecutionError as e:
    print(f"Execution error: {str(e)}")

API Errors

try:
    chatbot = client.get_chatbot(999)
except Exception as e:
    print(f"API error: {str(e)}")

Best Practices

  1. Always handle API errors in production code
  2. Store API keys securely
  3. Use appropriate timeouts for API calls
  4. Monitor rate limits
  5. Implement proper error handling
  6. Validate file paths before operations
  7. Use context information for better error tracking
  8. Implement proper retry strategies (see the sketch below)
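
A simple client-side retry strategy, complementary to the library's retry_attempts and retry_delay settings, might look like this (a sketch; the attempt count and delay are illustrative):

import time

def infer_with_retries(client, chatbot_id, message, attempts=3, delay=2):
    """Retry an inference a few times with a fixed delay between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return client.infer_chatbot(chatbot_id, message)
        except Exception as exc:
            if attempt == attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}), retrying in {delay}s...")
            time.sleep(delay)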

Error Handling Best Practices

# Comprehensive error handling example
try:
    # Initial interaction
    response = client.interact_with_agent(agent_id, message)
    
    if response['status'] == 'pending_execution':
        try:
            # Execute commands with safety checks
            results = client.execute_commands(
                response['commands'],
                response.get('context', {})
            )
            
            # Check individual command results
            failed_commands = [
                r for r in results['command_results']
                if r['status'] == 'error'
            ]
            
            if failed_commands:
                print("Some commands failed:")
                for cmd in failed_commands:
                    print(f"- {cmd['command']}: {cmd['error']}")
            
            # Continue interaction with results
            final_response = client.interact_with_agent(
                agent_id,
                message,
                execution_results=results
            )
            
        except client.CommandExecutionError as e:
            print(f"Command execution failed: {e}")
            # Handle command execution failure
            
except Exception as e:
    print(f"Interaction failed: {e}")
    # Handle interaction failure

Advanced Usage

Custom Headers

client = AgentClient(
    base_url="http://localhost:8000",
    headers={"Custom-Header": "value"}
)

Batch Operations

# Create multiple chatbots
configs = [
    {"name": "Bot1", "model": "gpt-4", "config": {...}},
    {"name": "Bot2", "model": "gpt-4", "config": {...}}
]

chatbots = []
for config in configs:
    bot = client.create_chatbot(**config)
    chatbots.append(bot)
