AgentSteam 🤖

A fast and powerful framework for building Docker-wrapped AI Agents with multi-LLM support, local tools, and MCP integration.

✨ Features

  • 🤖 Multi-LLM Support: Easy adapter system for Claude, GPT, Gemini, and more
  • 🔧 Local Tools: Simple decorator-based tool system for custom functionality
  • 🌐 MCP Client: Connect to go-backend and other MCP servers seamlessly
  • 📦 Docker Ready: One-command PyInstaller-based packaging for production
  • 📝 Rich Logging: Comprehensive logs and outputs for debugging
  • ⚡ Fast Setup: Get an agent running in minutes
  • 🎯 Flexible Entry Points: Support for custom entrypoint functions
  • 🔄 Pre/Post Processing: Built-in hooks for input/output processing
  • 🌐 Global Variables: Share data between tools and components

🚀 Quick Start

Installation

pip install agent-steam

Basic Agent

Create a simple agent with custom tools:

# agent.py
import asyncio
from agent_steam import AgentSteam, LocalTool

@AgentSteam.Tool
class CalculatorTool(LocalTool):
    name = "calculator"
    description = "Perform basic mathematical calculations"
    
    async def execute(self, expression: str) -> str:
        try:
            result = eval(expression)  # NOTE: eval() runs arbitrary code; sandbox or validate input in production
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {e}"

@AgentSteam.entrypoint
async def main():
    agent = AgentSteam(
        system_prompt="You are a helpful calculator assistant."
    )
    return await agent.run()

if __name__ == "__main__":
    asyncio.run(main())
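The example above passes tool input straight to `eval()`, which will execute arbitrary Python. If the agent may receive untrusted expressions, a restricted evaluator built on the standard-library `ast` module is safer. The `safe_eval` helper below is a sketch of that approach, not part of AgentSteam:

```python
import ast
import operator

# Binary operators the evaluator is willing to apply
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.Mod: operator.mod,
}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without calling eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        # Anything else (names, calls, attributes, ...) is rejected
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))
```

Swapping this in for `eval()` inside `CalculatorTool.execute` keeps the tool useful for arithmetic while rejecting function calls, attribute access, and imports.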

Package as Docker Container

# Package the agent
python -m agent_steam package --source-dir . --agent-name my-agent

# Build Docker container  
cd docker-build
docker build -t my-agent .

# Run the agent
docker run -v "$(pwd)/input":/input -v "$(pwd)/outputs":/outputs my-agent my-agent-cli run

📚 Core Concepts

1. AgentSteam Class Decorators

AgentSteam provides several class-level decorators for different purposes:

@AgentSteam.Tool - Custom Tools

Register custom tools that the agent can use:

@AgentSteam.Tool
class FileProcessorTool(LocalTool):
    name = "process_file"
    description = "Process a file and return analysis"
    
    async def execute(self, file_path: str, operation: str = "analyze") -> str:
        # Your tool logic here
        with open(file_path, 'r') as f:
            content = f.read()
        
        if operation == "analyze":
            return f"File contains {len(content)} characters"
        elif operation == "summary":
            return f"Summary of {file_path}: {content[:100]}..."
        return f"Unknown operation: {operation}"

@AgentSteam.entrypoint - Docker Entry Point

Mark a function as the main entry point for Docker containers:

@AgentSteam.entrypoint
async def main():
    # This function will be called when the Docker container runs
    agent = AgentSteam(system_prompt="Your prompt here")
    return await agent.run()

@AgentSteam.preProcess - Input Processing

Process input data before the agent runs:

from pathlib import Path

@AgentSteam.preProcess
def preprocess_input(input_dir: str) -> str:
    """Process the input folder and return the initial user message"""
    input_path = Path(input_dir)
    
    # Find and process input files
    files = list(input_path.glob("*.txt"))
    if files:
        content = files[0].read_text()
        return f"Please analyze this content: {content}"
    
    return "No input files found, please provide data."

@AgentSteam.postProcess - Output Processing

Process or export outputs after the agent completes:

from pathlib import Path

@AgentSteam.postProcess
def postprocess_output(outputs_dir: str) -> None:
    """Export additional outputs to the folder"""
    outputs_path = Path(outputs_dir)
    
    # Create summary report
    summary_file = outputs_path / "analysis_summary.txt"
    with open(summary_file, "w") as f:
        f.write("Analysis completed successfully\n")
        f.write(f"Global variables: {AgentSteam._global_variables}\n")

2. Global Variables

Share data between tools and components:

# Set global variables
AgentSteam.GlobalVariable(
    api_key="your_key_here",
    config={"model": "advanced", "threshold": 0.8}
)

# Update individual variables
AgentSteam.setGlobalVariable("processing_mode", "batch")

# Access in tools
@AgentSteam.Tool
class ConfigurableTool(LocalTool):
    async def execute(self, data: str) -> str:
        api_key = AgentSteam.getGlobalVariable("api_key", "default")
        config = AgentSteam.getGlobalVariable("config", {})
        # Use variables in your logic
        return f"Processed with config: {config}"

3. Predefined Tools

AgentSteam includes built-in tools. You can select which ones to include:

agent = AgentSteam(
    system_prompt="Your prompt",
    predefined_tools=["read", "write", "bash", "edit"]  # Only these tools
)

Available predefined tools:

  • read - Read file contents
  • write - Write files
  • edit - Edit files
  • bash - Execute bash commands
  • ls - List directory contents
  • glob - Find files by pattern
  • grep - Search file contents
  • web_fetch - Fetch web content
  • duckduckgo_search - Web search
  • ask_for_clarification - Ask user for input
  • summary - Summarize content
  • file_tree - Show directory structure

4. Local Tool Development

Create custom tools by extending LocalTool:

@AgentSteam.Tool
class DatabaseTool(LocalTool):
    name = "db_query"
    description = "Query the database for information"
    
    async def execute(self, query: str, table: str = "users") -> str:
        """
        Execute a database query
        
        @param query: SQL query to execute
        @param table: Target table name
        """
        # Access logger if needed
        if self.logger:
            self.logger.info(f"Executing query on {table}: {query}")
        
        # Your database logic here
        # result = execute_query(query, table)
        
        return f"Query executed on {table}: {query}"

๐Ÿณ Docker Packaging

Packaging Command

python -m agent_steam package [OPTIONS]

Options:

  • --source-dir - Directory containing agent code (default: .)
  • --output-dir - Output directory for package (default: ./docker-build)
  • --agent-name - Name for the agent executable (default: my-agent)
  • --base-image - Docker base image (default: ubuntu:22.04)
  • --additional-packages - Extra Python packages to include

Example Packaging

# Basic packaging
python -m agent_steam package

# Advanced packaging
python -m agent_steam package \
  --source-dir ./my-agent \
  --output-dir ./packaged-agent \
  --agent-name data-processor \
  --additional-packages pandas numpy requests

Generated Structure

packaged-agent/
├── dist/data-processor/      # PyInstaller binary
│   ├── data-processor        # Main executable
│   └── _internal/            # Dependencies
├── Dockerfile                # Container definition
├── docker-compose.yml        # Easy deployment
├── data-processor.spec       # PyInstaller spec
└── build-requirements.txt    # Build dependencies

Docker Commands

# Build container
cd packaged-agent
docker build -t data-processor .

# Run with help
docker run data-processor

# Run agent with mounted directories
docker run \
  -v "$(pwd)/input":/input \
  -v "$(pwd)/outputs":/outputs \
  -v "$(pwd)/logs":/logs \
  data-processor data-processor-cli run

# Use docker-compose
docker-compose up
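The bind mounts above assume the input, outputs, and logs directories already exist on the host; on Linux, Docker creates missing bind-mount sources owned by root, which can cause permission trouble later. A small host-side helper (hypothetical, not part of AgentSteam) can prepare them first:

```python
from pathlib import Path

def prepare_mounts(base: str = ".") -> list[Path]:
    """Create the host-side directories that will be bind-mounted into the container."""
    dirs = [Path(base) / name for name in ("input", "outputs", "logs")]
    for d in dirs:
        d.mkdir(parents=True, exist_ok=True)  # no-op if the directory already exists
    return dirs
```

Run it (or an equivalent `mkdir -p input outputs logs`) before the first `docker run`.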

🔧 Configuration

Environment Variables

# LLM Configuration
export AGENT_STEAM_LLM_PROVIDER=claude  # claude, gpt, gemini
export ANTHROPIC_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here

# MCP Configuration (optional)
export MCP_SERVER_URL=http://localhost:8080
export MCP_AUTH_TOKEN=your_token_here

# Agent Configuration
export SYSTEM_PROMPT="Custom system prompt"
export AGENT_STEAM_ROOT_DIR=/custom/root  # Change default paths
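Inside the agent process, settings like these are read from the environment. A minimal sketch of the lookup pattern (variable names are from the list above; the defaults shown are assumptions, so check AgentSteam's source for its actual behavior):

```python
import os

def load_config() -> dict:
    """Collect AgentSteam-related settings from the environment."""
    return {
        "llm_provider": os.environ.get("AGENT_STEAM_LLM_PROVIDER", "claude"),  # default is an assumption
        "anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY"),
        "openai_api_key": os.environ.get("OPENAI_API_KEY"),
        "mcp_server_url": os.environ.get("MCP_SERVER_URL"),    # optional
        "mcp_auth_token": os.environ.get("MCP_AUTH_TOKEN"),    # optional
        "system_prompt": os.environ.get("SYSTEM_PROMPT", ""),
        "root_dir": os.environ.get("AGENT_STEAM_ROOT_DIR", "/"),
    }
```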

Directory Structure

Default directory structure in containers:

/input/          # Input files (read-only)
/outputs/        # Agent outputs
/logs/           # Log files

You can customize paths when creating the agent:

agent = AgentSteam(
    system_prompt="Your prompt",
    input_dir="/custom/input",
    output_dir="/custom/outputs", 
    logs_dir="/custom/logs",
    root_dir="/app"  # Changes default root for all paths
)

📖 Complete Examples

Simple Calculator Agent

# examples/simple-agent/agent.py
import asyncio
from agent_steam import AgentSteam, LocalTool

@AgentSteam.Tool
class CalculatorTool(LocalTool):
    name = "calculator"
    description = "Perform mathematical calculations"
    
    async def execute(self, expression: str) -> str:
        try:
            result = eval(expression)  # NOTE: eval() is unsafe on untrusted input
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {e}"

@AgentSteam.entrypoint
async def main():
    agent = AgentSteam(
        system_prompt="You are a helpful calculator. Use the calculator tool for math."
    )
    return await agent.run()

if __name__ == "__main__":
    asyncio.run(main())

Advanced Data Processing Agent

# examples/advanced-agent/agent.py
import json
from pathlib import Path
from agent_steam import AgentSteam, LocalTool

@AgentSteam.Tool
class DataAnalyzer(LocalTool):
    name = "analyze_data"
    description = "Analyze data using global configuration"
    
    async def execute(self, data: str) -> str:
        # Use logger
        if self.logger:
            self.logger.info(f"Analyzing data: {data[:50]}...")
        
        # Get global variables
        config = AgentSteam.getGlobalVariable("analysis_config", {})
        
        # Perform analysis
        result = f"Analysis completed with config: {config}"
        return result

@AgentSteam.preProcess
def preprocess_input(input_dir: str) -> str:
    """Process input files and return initial message"""
    input_path = Path(input_dir)
    
    json_files = list(input_path.glob("*.json"))
    if json_files:
        with open(json_files[0]) as f:
            data = json.load(f)
        return f"Please analyze this data: {json.dumps(data, indent=2)}"
    
    return "No JSON files found for analysis."

@AgentSteam.postProcess
def postprocess_output(outputs_dir: str) -> None:
    """Create summary report"""
    outputs_path = Path(outputs_dir)
    
    summary_file = outputs_path / "analysis_summary.txt"
    with open(summary_file, "w") as f:
        f.write("Data analysis completed\n")
        f.write(f"Configuration used: {AgentSteam._global_variables}\n")

@AgentSteam.entrypoint
async def main():
    # Set global configuration
    AgentSteam.GlobalVariable(
        analysis_config={
            "model": "advanced",
            "threshold": 0.8,
            "output_format": "detailed"
        }
    )
    
    # Create agent with selected tools
    agent = AgentSteam(
        system_prompt="You are a data analysis expert. Use the analyze_data tool.",
        predefined_tools=["read", "write"]  # Only basic file operations
    )
    
    return await agent.run()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

🔗 MCP Integration

Connect to MCP servers (like go-backend) for remote tools:

# Environment setup
export MCP_SERVER_URL=http://localhost:8080
export MCP_AUTH_TOKEN=your_token

# Agent automatically connects to MCP server
agent = AgentSteam(
    system_prompt="You have access to both local and remote MCP tools.",
    mcp_server_url="http://localhost:8080",  # Optional: override env
    mcp_auth_token="your_token"             # Optional: override env
)

๐Ÿ› ๏ธ Development

Setup Development Environment

git clone <repository>
cd agent-steam
pip install -e ".[dev]"

Project Structure

agent_steam/
├── __init__.py
├── core.py              # Main AgentSteam class
├── cli.py               # Command line interface
├── adapters/            # LLM adapters
│   ├── base.py          # Base adapter interface
│   ├── claude.py        # Claude integration
│   └── registry.py      # Adapter registry
├── tools/               # Tool system
│   ├── base.py          # Base tool classes
│   ├── registry.py      # Tool registry
│   └── predefined/      # Built-in tools
├── mcp/                 # MCP client
├── io/                  # Input/output management
├── docker/              # Docker packaging
└── utils/               # Utilities

Running Tests

pytest tests/

📄 License

MIT License - see LICENSE file for details.


AgentSteam - Build powerful AI agents, fast. 🚀

Download files

Source distribution: agent_steam-0.1.1.tar.gz (42.4 kB)

  • SHA256: d22bd80a2253f1020db215a950ab31c4e5753a269978cea91cf1cd4938cd3924
  • MD5: c1e1da48664d2ea83ffe9c146b8b6121
  • BLAKE2b-256: 74f13875a34915068cf8af2738d356a864498dcd4109d1f5e76ea99f537c35c7

Built distribution: agent_steam-0.1.1-py3-none-any.whl (49.4 kB, Python 3)

  • SHA256: d1494648a663cfb2656e7f4e7a5ac0f35c689fcc2ed940c6ec233772b90b6f60
  • MD5: ed025558da83642eacbe15156caeb18f
  • BLAKE2b-256: f545c135c1fdb48371bb2cb880fe168ca085fa663041e57f948455234485ab69

Both files were uploaded via twine/6.1.0 (CPython 3.12.9), without Trusted Publishing.
