Framework for building automation-focused AI agents with observability

agent-workshop

Cost-effective framework for building automation-focused AI agents with full observability.


📦 Install as a Package: Users should install agent-workshop via PyPI (uv add agent-workshop or pip install agent-workshop), not clone this repository. This repo is for framework development only. See Quick Start for the correct workflow.

Features

🚀 Dual-Provider Architecture

  • Development: Claude Agent SDK ($20/month flat rate)
  • Production: Anthropic API (pay-per-token)
  • Automatic switching based on environment

📊 Full Observability

  • Langfuse integration out of the box
  • Automatic tracing of all LLM calls
  • Cost tracking and token estimation
  • Performance metrics
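
The token-estimation and cost-tracking idea can be sketched in a few lines. This is a hypothetical illustration, not agent-workshop's actual API: the helper names and the ~4-characters-per-token heuristic are assumptions, while the per-million-token prices are the Anthropic API rates quoted in the cost comparison later in this README.

```python
# Hypothetical sketch of token estimation and cost tracking.
# Helper names and the 4-chars/token heuristic are assumptions,
# not agent-workshop's real internals.

INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens (Anthropic list rate)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single completion call."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

prompt = "Validate this deliverable:\n\n" + "content " * 100
tokens_in = estimate_tokens(prompt)
print(tokens_in, round(estimate_cost(tokens_in, 200), 6))
```

A tracing layer like Langfuse records the real token counts returned by the API; an estimator like this is only useful for pre-flight budgeting.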

๐Ÿ•ธ๏ธ LangGraph Support

  • Multi-step agent workflows
  • State management
  • Conditional routing
  • Iterative refinement

⚡ Fast Setup with UV

  • Modern dependency management
  • 10-100x faster than pip/poetry
  • Reproducible environments

Quick Start

Important: Users should install agent-workshop as a package, not clone this repository. This repo is for framework development only.

Installation

# Create your project
mkdir my-research-agents
cd my-research-agents

# Initialize with UV
uv init

# Install agent-workshop from PyPI
uv add agent-workshop

# Or with pip
pip install agent-workshop

Simple Agent (80% use case)

from agent_workshop import Agent, Config

class DeliverableValidator(Agent):
    async def run(self, content: str) -> dict:
        messages = [{
            "role": "user",
            "content": f"Validate this deliverable:\n\n{content}"
        }]
        result = await self.complete(messages)
        return {"validation": result}

# Usage (run from within an async context)
config = Config()  # Auto-detects dev/prod environment
validator = DeliverableValidator(config)
result = await validator.run(report_content)

LangGraph Workflow (15% use case)

from agent_workshop.workflows import LangGraphAgent
from langgraph.graph import StateGraph, END

class ValidationPipeline(LangGraphAgent):
    def build_graph(self):
        workflow = StateGraph(dict)

        workflow.add_node("scan", self.quick_scan)
        workflow.add_node("verify", self.verify)

        workflow.add_edge("scan", "verify")
        workflow.add_edge("verify", END)
        workflow.set_entry_point("scan")

        return workflow.compile()

    async def quick_scan(self, state):
        result = await self.provider.complete([{
            "role": "user",
            "content": f"Quick scan: {state['content']}"
        }])
        return {"scan_result": result, **state}

    async def verify(self, state):
        result = await self.provider.complete([{
            "role": "user",
            "content": f"Verify: {state['scan_result']}"
        }])
        return {"final_result": result}

# Usage (still single invocation!)
pipeline = ValidationPipeline(Config())
result = await pipeline.run({"content": report})
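
The pipeline above uses only fixed edges, but the features list also promises conditional routing and iterative refinement. In LangGraph that is done with a router function passed to `add_conditional_edges`. The router below is a plain-Python sketch (the `needs_retry` key, attempt limit, and function name are hypothetical); the commented-out lines show where it would replace the fixed `verify → END` edge in `build_graph`.

```python
# A router function inspects the state dict and returns the label of the
# next node. The "needs_retry"/"attempts" keys and the limit of 3 are
# hypothetical, for illustration only.

def route_after_verify(state: dict) -> str:
    """Send failed verifications back for another scan, else finish."""
    if state.get("needs_retry") and state.get("attempts", 0) < 3:
        return "retry"
    return "done"

# Inside build_graph(), instead of workflow.add_edge("verify", END):
# workflow.add_conditional_edges(
#     "verify",
#     route_after_verify,
#     {"retry": "scan", "done": END},
# )

print(route_after_verify({"needs_retry": True, "attempts": 1}))  # retry
print(route_after_verify({"needs_retry": False}))                # done
```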

Configuration

Environment Setup

In your project directory, create .env.development or .env.production:

# .env.development
AGENT_WORKSHOP_ENV=development

# Claude Agent SDK (development)
CLAUDE_SDK_ENABLED=true
CLAUDE_MODEL=sonnet  # opus, sonnet, haiku

# Anthropic API (production - optional in dev)
ANTHROPIC_API_KEY=your_api_key_here
ANTHROPIC_MODEL=claude-sonnet-4-20250514

# Langfuse Observability (optional but recommended)
LANGFUSE_ENABLED=true
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com

For a complete example, see the .env.example in the repository.

Provider Switching

The framework automatically switches providers based on environment:

  • Development (AGENT_WORKSHOP_ENV=development): Uses Claude Agent SDK
  • Production (AGENT_WORKSHOP_ENV=production): Uses Anthropic API
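
The switching logic amounts to a lookup on AGENT_WORKSHOP_ENV. The sketch below is illustrative only; the function and provider names are assumptions, not agent-workshop's internals.

```python
import os

# Illustrative sketch of environment-based provider selection.
# select_provider and the provider labels are hypothetical names.

def select_provider(env: dict = os.environ) -> str:
    """Pick a provider based on AGENT_WORKSHOP_ENV, defaulting to development."""
    mode = env.get("AGENT_WORKSHOP_ENV", "development")
    if mode == "production":
        return "anthropic-api"     # pay-per-token Anthropic API
    return "claude-agent-sdk"      # flat-rate Claude Agent SDK
```

Defaulting to development means a missing or misspelled variable falls back to the flat-rate SDK rather than silently accruing per-token charges.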

Design Philosophy

Single-Message Pattern

agent-workshop focuses on single-message automation (input → output), NOT streaming conversations.

Perfect for:

  • ✅ Automated validations
  • ✅ Batch processing
  • ✅ Scheduled jobs
  • ✅ CI/CD pipelines

Not designed for:

  • โŒ ChatGPT-like interfaces
  • โŒ Streaming conversations
  • โŒ Real-time chat

Simple Agent vs LangGraph

| Use Case | Recommended Approach |
|---|---|
| Single validation check | Simple Agent |
| Multi-step validation pipeline | LangGraph |
| Batch processing | Simple Agent |
| Iterative refinement | LangGraph |
| One-shot classification | Simple Agent |
| Multi-agent collaboration | LangGraph |

Example Usage

Complete User Workflow

# 1. Create your project
mkdir my-research-agents
cd my-research-agents

# 2. Initialize and install
uv init
uv add agent-workshop

# 3. Create .env.development file with your keys

# 4. Create your first agent
mkdir agents
cat > agents/validator.py << 'EOF'
from agent_workshop import Agent, Config

class DeliverableValidator(Agent):
    async def run(self, content: str) -> dict:
        messages = [{
            "role": "user",
            "content": f"Validate this deliverable:\n\n{content}"
        }]
        result = await self.complete(messages)
        return {"validation": result}
EOF

# 5. Run your agent
python -c "
import asyncio
from agents.validator import DeliverableValidator
from agent_workshop import Config

async def main():
    validator = DeliverableValidator(Config())
    result = await validator.run('Sample deliverable content')
    print(result)

asyncio.run(main())
"

Reference Examples

For complete examples, see the repository.

Note: These examples are for reference only. Build your agents in your own project, not by cloning the framework repository.

Building Your Own Agents

Users: You build agents in your own project by installing agent-workshop as a dependency. See the Complete User Workflow above.

Your project structure should look like:

my-research-agents/              # Your project
├── pyproject.toml               # dependencies = ["agent-workshop"]
├── .env.development
├── .env.production
├── agents/
│   ├── __init__.py
│   ├── deliverable_validator.py
│   └── analysis_checker.py
└── main.py

For detailed guidance, see the Building Agents Guide.

Contributing to the Framework

This section is for contributors who want to improve the agent-workshop framework itself.

Development Setup

# Clone the framework repository (contributors only)
git clone https://github.com/trentleslie/agent-workshop.git
cd agent-workshop

# Install with dev dependencies
uv sync --all-extras

# Run tests
uv run pytest

# Format code
uv run ruff format

# Type check
uv run mypy src/

Cost Comparison

| Environment | Provider | Cost Model | Best For |
|---|---|---|---|
| Development | Claude Agent SDK | $20/month flat | Unlimited experimentation |
| Production | Anthropic API | $3/1M input tokens, $15/1M output tokens | Production workloads |

Example: 1,000 validations/day with ~500 tokens each

  • Development: $20/month (unlimited)
  • Production: ~$30-50/month (depending on response length)
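
The production figure can be reproduced with back-of-the-envelope arithmetic, assuming the ~500 tokens per validation are mostly input and responses stay short:

```python
# Rough arithmetic behind the production estimate above, assuming
# ~500 mostly-input tokens per validation over a 30-day month.

calls_per_day = 1_000
tokens_per_call = 500
days = 30

monthly_input_tokens = calls_per_day * tokens_per_call * days   # 15,000,000
input_cost = monthly_input_tokens / 1_000_000 * 3.00            # $45.00
print(monthly_input_tokens, input_cost)
```

Longer responses push the total higher, since output tokens bill at $15/1M rather than $3/1M.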

Architecture

User's Project (your own repo)
├── pyproject.toml
│   └── dependencies: ["agent-workshop"]  ← Install as package
├── agents/
│   ├── deliverable_validator.py
│   └── analysis_checker.py
└── .env.development

        ↓ imports from

agent-workshop Package (from PyPI)
├── Agent (simple agents)
├── LangGraphAgent (workflows)
├── Providers (Claude SDK, Anthropic API)
└── Langfuse Integration

        ↓ traces to

Langfuse Dashboard
├── Traces
├── Metrics
├── Costs
└── Performance

Key Point: Users install agent-workshop via uv add agent-workshop or pip install agent-workshop; they do NOT clone the repository.

Documentation

Full documentation is available in the repository.

Contributing

Contributions welcome! Please see CONTRIBUTING.md for guidelines.

To contribute to the framework itself, see the Contributing to the Framework section above.

License

MIT License - see LICENSE for details.

Acknowledgments

Built with the Claude Agent SDK, the Anthropic API, LangGraph, Langfuse, and UV.
