# AgentWerkstatt 🤖
A minimalistic agentic framework for building AI agents with tool-calling capabilities. Why another agentic framework? Existing frameworks felt too complex and carried too many dependencies. AgentWerkstatt aims to be easy to understand, easy to use, and easy to extend. The focus is not (yet) on production scenarios but on understanding and prototyping agentic systems; hence the name, a play on "Agent" and the German word "Werkstatt" (workshop).
## Overview
AgentWerkstatt is a lightweight, extensible framework for creating AI agents. It is currently powered by Claude (Anthropic), and its modular architecture of pluggable LLM providers and tools makes it easy to build conversational agents with access to external capabilities such as web search. Support for additional LLMs is planned.
## Features
- 🧠 Modular LLM Support - Built with extensible LLM abstraction (currently supports Claude)
- 🔧 Tool System - Pluggable tool architecture with automatic tool discovery
- 💬 Conversation Management - Built-in conversation history and context management
- 🧮 Persistent Memory - Optional mem0 integration for long-term memory and context retention
- 🔍 Web Search - Integrated Tavily API for real-time web information retrieval
- 📊 Observability - Optional Langfuse integration for comprehensive tracing and analytics
- 🖥️ CLI Interface - Ready-to-use command-line interface
- 🐳 3rd Party Services - Docker Compose stack with PostgreSQL, Neo4j, and other services
- ⚡ Lightweight - Minimal dependencies and clean architecture
## Quick Start

### Prerequisites
- Python 3.10 or higher
- An Anthropic API key for Claude
- (Optional) A Tavily API key for web search
- (Optional) An OpenAI API key for mem0 memory system
### Installation

1. Clone the repository:

```shell
git clone https://github.com/hanneshapke/agentwerkstatt.git
cd agentwerkstatt
```

2. Install dependencies:

```shell
# Basic installation
uv sync

# With optional features
uv sync --extra tracing   # Langfuse tracing support
uv sync --extra memory    # mem0 memory support
uv sync --all-extras      # All optional features
```

3. Set up environment variables:

```shell
# Create a .env file
echo "ANTHROPIC_API_KEY=your_anthropic_api_key_here" >> .env
echo "TAVILY_API_KEY=your_tavily_api_key_here" >> .env    # Optional, for web search
echo "OPENAI_API_KEY=your_openai_api_key_here" >> .env    # Optional, for mem0 memory
```
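Because all keys are read from the environment, a small startup check can catch missing configuration before the agent makes its first API call. A minimal sketch of the idea (the `check_env` helper is hypothetical, not part of AgentWerkstatt):

```python
import os

# ANTHROPIC_API_KEY is the only required key; TAVILY_API_KEY and
# OPENAI_API_KEY are optional (web search and mem0 memory).
REQUIRED_KEYS = ["ANTHROPIC_API_KEY"]


def check_env(env: dict[str, str]) -> list[str]:
    """Return the names of required keys that are missing or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]


missing = check_env(dict(os.environ))
if missing:
    print(f"Missing required environment variables: {', '.join(missing)}")
```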
### 3rd Party Services (Optional)
AgentWerkstatt includes a Docker Compose stack with integrated services:
- mem0 - AI memory management system for persistent context
- Langfuse - Observability and tracing platform
- PostgreSQL - Database with pgvector for embeddings
- Neo4j - Graph database for memory relationships
- Redis - Caching and session storage
- MinIO - S3-compatible object storage
To start the services:

```shell
# Start all services
docker compose -f 3rd_party/docker-compose.yaml up -d

# Or start specific services
docker compose -f 3rd_party/docker-compose.yaml up -d mem0 neo4j postgres
```
For detailed setup instructions, see:
- MEM0_SETUP.md - Memory system setup
- LANGFUSE_SETUP.md - Observability setup
- LANGFUSE_INTEGRATION.md - Integration guide
## API Keys Setup

### Anthropic API Key (Required)

1. Sign up at console.anthropic.com
2. Generate an API key
3. Add it to your `.env` file as `ANTHROPIC_API_KEY`

### Tavily API Key (Optional, for web search)

1. Sign up at app.tavily.com
2. Get your API key (1,000 free searches/month)
3. Add it to your `.env` file as `TAVILY_API_KEY`

### OpenAI API Key (Optional, for mem0 memory)

1. Sign up at platform.openai.com
2. Generate an API key
3. Add it to your `.env` file as `OPENAI_API_KEY`
## Usage

### Command Line Interface

Run the interactive CLI:

```shell
# Using default configuration (agent_config.yaml)
python agent.py

# Using a custom configuration file
python agent.py --config /path/to/your/config.yaml
```
Example conversation:

```text
🤖 AgentWerkstatt
==================================================
Loading config from: agent_config.yaml
I'm an example AgentWerkstatt assistant with web search capabilities!
Ask me to search the web for information.
Commands: 'quit'/'exit' to quit, 'clear' to reset, 'status' to check conversation state.

You: What's the latest news about AI developments?
🤔 Agent is thinking...
🤖 Agent: I'll search for the latest AI developments for you.
[Search results and AI summary will be displayed here]

You: clear          # Clears conversation history
🧹 Conversation history cleared!

You: quit
👋 Goodbye!
```
### Programmatic Usage

```python
from agentwerkstatt import Agent, AgentConfig

# Initialize with default config
config = AgentConfig.from_yaml("agent_config.yaml")
agent = Agent(config)

# Or customize the configuration
config = AgentConfig(
    model="claude-sonnet-4-20250514",
    tools_dir="./tools",
    verbose=True,
    agent_objective="You are a helpful assistant with web search capabilities.",
)
agent = Agent(config)

# Process a request
response = agent.process_request("Search for recent Python releases")
print(response)

# Clear conversation history
agent.llm.clear_history()
```
### Command Line Options

The CLI supports the following command-line arguments:

- `--config` - Path to the agent configuration file (default: `agent_config.yaml`)
- `--help` - Show help message and available options

Examples:

```shell
# Use default configuration
python agent.py

# Use custom configuration file
python agent.py --config my_custom_config.yaml

# Show help
python agent.py --help
```
## Architecture

### Core Components

```text
AgentWerkstatt/
├── agent.py                     # Main agent implementation and CLI
├── agent_config.yaml            # Default configuration
├── llms/                        # LLM provider modules
│   ├── base.py                  # Base LLM abstraction
│   ├── claude.py                # Claude implementation
│   └── __init__.py
├── tools/                       # Tool modules
│   ├── base.py                  # Base tool abstraction
│   ├── discovery.py             # Automatic tool discovery
│   ├── websearch.py             # Tavily web search tool
│   └── __init__.py
├── 3rd_party/                   # Third-party service integrations
│   ├── docker-compose.yaml      # Service orchestration
│   ├── Dockerfile.mem0          # Custom mem0 build
│   ├── mem0-config.yaml         # Memory system config
│   ├── MEM0_SETUP.md            # Memory setup guide
│   ├── LANGFUSE_SETUP.md        # Observability setup
│   └── LANGFUSE_INTEGRATION.md  # Integration guide
└── pyproject.toml               # Project configuration
```
### LLM Providers

The framework defines a `BaseLLM` class that can be extended for different providers:

- Claude (Anthropic) - Full support with tool calling
- Future providers - Easy to add by extending `BaseLLM`
### Tools

Tools are modular components that extend agent capabilities:

- Web Search - Tavily API integration for real-time information retrieval
- Automatic Discovery - Tools are automatically discovered from the tools directory
- Extensible - Add new tools by implementing `BaseTool`
### Memory System
Optional mem0 integration provides:
- Persistent Context - Long-term memory across conversations
- Semantic Search - Vector-based memory retrieval
- Graph Relationships - Knowledge graph storage in Neo4j
- REST API - Direct access to memory operations
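To illustrate the REST access, a request to an add-memories endpoint might be assembled as follows. This is a sketch only: the `/memories` path and payload fields are assumptions for illustration, not the documented mem0 API; consult MEM0_SETUP.md for the actual endpoints.

```python
import json


def build_add_memory_request(server_url: str, user_id: str, text: str) -> tuple[str, bytes]:
    """Assemble the URL and JSON body for storing a memory.

    Endpoint path and field names are hypothetical.
    """
    url = f"{server_url.rstrip('/')}/memories"
    body = json.dumps({
        "user_id": user_id,
        "messages": [{"role": "user", "content": text}],
    }).encode("utf-8")
    return url, body


url, body = build_add_memory_request("http://localhost:8000", "alice", "Prefers metric units")
```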
### Agent System

The `Agent` class orchestrates:
- LLM interactions
- Tool execution and discovery
- Conversation management
- Response generation
- Memory persistence (when enabled)
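The heart of this orchestration is a dispatch step: inspect the model's reply, and either execute the requested tool or return the text. A simplified sketch of the idea (not the actual `Agent` internals; the reply dict is a toy stand-in for a provider response):

```python
def run_turn(llm_reply: dict, tools: dict) -> str:
    """Dispatch one model reply: execute a requested tool or return the text.

    `llm_reply` is a simplified stand-in for a provider response, e.g.
    {"tool_use": {"name": "web_search", "input": {...}}} or {"text": "..."}.
    """
    call = llm_reply.get("tool_use")
    if call:
        tool = tools[call["name"]]       # look up the tool by name
        result = tool(**call["input"])   # execute it with the model's arguments
        return str(result)               # in practice, fed back to the LLM as a tool result
    return llm_reply["text"]


# Toy usage with a fake tool registry
tools = {"echo": lambda message: f"echo: {message}"}
print(run_turn({"tool_use": {"name": "echo", "input": {"message": "hi"}}}, tools))
```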
## Configuration

### Environment Variables

Core:

- `ANTHROPIC_API_KEY` - Required for Claude API access
- `TAVILY_API_KEY` - Optional, for web search functionality

Memory (mem0):

- `OPENAI_API_KEY` - Required for the mem0 memory system (LLM and embeddings)

Observability (Langfuse):

- `LANGFUSE_PUBLIC_KEY` - Optional, for Langfuse tracing integration
- `LANGFUSE_SECRET_KEY` - Optional, for Langfuse tracing integration
- `LANGFUSE_HOST` - Optional, Langfuse host URL (defaults to cloud.langfuse.com)
### Configuration File

Default configuration in `agent_config.yaml`:

```yaml
# LLM Model Configuration
model: "claude-sonnet-4-20250514"

# Tools Configuration
tools_dir: "./tools"

# Logging Configuration
verbose: true

# Memory Configuration (Optional)
memory:
  enabled: false                       # Set to true to enable mem0 integration
  model_name: "gpt-4o-mini"            # Model for memory processing
  server_url: "http://localhost:8000"  # mem0 server endpoint

# Langfuse Configuration (Optional)
langfuse:
  enabled: true                        # Set to false to disable tracing
  project_name: "agentwerkstatt"

# Agent Objective/System Prompt
agent_objective: |
  You are a helpful assistant with web search capabilities.
  You can search the web for current information and provide accurate, helpful responses.
  Always be conversational and helpful in your responses.
```
### Memory Configuration

To enable persistent memory with mem0:

1. Install memory dependencies: `uv sync --extra memory`
2. Start the mem0 service: `docker compose -f 3rd_party/docker-compose.yaml up -d mem0`
3. Set your OpenAI API key for memory operations
4. Enable memory in your configuration:

```yaml
memory:
  enabled: true
  model_name: "gpt-4o-mini"
  server_url: "http://localhost:8000"
```
### Model Configuration

To use a different model programmatically:

```python
config = AgentConfig(model="claude-sonnet-4-20250514")
agent = Agent(config)
```
## Observability with Langfuse
AgentWerkstatt includes optional integration with Langfuse for comprehensive observability:
- Automatic Tracing: All agent interactions, LLM calls, and tool executions are automatically traced
- Performance Monitoring: Track costs, latency, and token usage
- Debugging: Detailed execution flow for troubleshooting
- Analytics: Historical data and performance insights
To enable Langfuse tracing:

1. Install the tracing dependencies: `uv sync --extra tracing`
2. Set up your Langfuse credentials (see Environment Variables)
3. Enable tracing in your configuration:

```yaml
langfuse:
  enabled: true
  project_name: "your-project-name"
```
Note: Langfuse is completely optional. AgentWerkstatt works perfectly without it.
For detailed setup instructions, see LANGFUSE_INTEGRATION.md.
## Development

### Adding a New LLM Provider

1. Create a new file in `llms/` (e.g., `openai.py`)
2. Implement the `BaseLLM` interface:

```python
import os

from .base import BaseLLM


class OpenAILLM(BaseLLM):
    def __init__(self, model_name: str, tools: list, agent_objective: str = ""):
        super().__init__(model_name, tools, agent_objective)
        self.api_key = os.getenv("OPENAI_API_KEY")
        # Set other provider-specific configurations

    def make_api_request(self, messages: list[dict]) -> dict:
        # Implement API request logic
        ...

    def process_request(self, messages: list[dict]) -> tuple[list[dict], list[dict]]:
        # Implement request processing
        ...
```

3. Update `llms/__init__.py` to export the new provider
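The export step might look like the following sketch (the `ClaudeLLM` class name is an assumption based on `llms/claude.py`; match whatever the module actually defines):

```python
# llms/__init__.py (sketch)
from .base import BaseLLM
from .claude import ClaudeLLM  # assumed class name
from .openai import OpenAILLM  # the new provider

__all__ = ["BaseLLM", "ClaudeLLM", "OpenAILLM"]
```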
### Adding a New Tool

1. Create a new file in `tools/` (e.g., `weather.py`)
2. Implement the `BaseTool` interface:

```python
from typing import Any

from .base import BaseTool


class WeatherTool(BaseTool):
    def _get_name(self) -> str:
        return "Weather Tool"

    def _get_description(self) -> str:
        return "Get weather information for a location"

    def get_schema(self) -> dict[str, Any]:
        return {
            "name": self.get_name(),
            "description": self.get_description(),
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or location name",
                    }
                },
                "required": ["location"],
            },
        }

    def execute(self, **kwargs) -> dict[str, Any]:
        # Implement tool logic
        location = kwargs.get("location")
        # Your weather API logic here
        return {"weather": f"Sunny in {location}"}
```

3. The tool will be automatically discovered by the `ToolRegistry` - no manual registration needed!
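Discovery of this kind is typically implemented by collecting subclasses of a base class. A self-contained sketch of the idea (simplified; the real `ToolRegistry` in `tools/discovery.py` may work differently, e.g. by importing modules from the tools directory first):

```python
class BaseTool:
    """Minimal stand-in for the framework's base class."""

    def execute(self, **kwargs):
        raise NotImplementedError


class EchoTool(BaseTool):
    """Example tool: returns its arguments unchanged."""

    def execute(self, **kwargs):
        return kwargs


def discover_tools(base: type = BaseTool) -> dict:
    """Instantiate every direct subclass of the base class, keyed by class name."""
    return {cls.__name__: cls() for cls in base.__subclasses__()}


registry = discover_tools()
```

Because `__subclasses__()` only sees classes that have been defined, the real registry would first import every module in `tools/` so their classes are registered.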
### Development Setup

```shell
# Clone and setup
git clone https://github.com/hanneshapke/agentwerkstatt.git
cd agentwerkstatt
uv sync --dev

# Code formatting and linting
uv run ruff check --fix
uv run ruff format

# Type checking
uv run mypy .

# Run tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=agentwerkstatt --cov-report=html --cov-report=term
```
### Quality Assurance
The project uses modern Python development tools:
- Ruff - Fast Python linter and formatter (replaces black, flake8, isort)
- MyPy - Static type checking
- Pytest - Testing framework
- Pre-commit - Git hooks for code quality
## Dependencies

Core dependencies:

- `httpx` - Modern HTTP client for API requests
- `python-dotenv` - Environment variable management
- `absl-py` - Google's Python common libraries
- `PyYAML` - YAML configuration file support

Optional dependencies:

- `langfuse` - Observability and tracing (with `--extra tracing`)
- `mem0ai` - Memory system integration (with `--extra memory`)
## Roadmap

Check out our ROADMAP.md to see what's planned for future releases, including:

- 🧠 Multi-LLM Support - OpenAI, Google AI, and local model integration
- ✅ Memory & Persistence - mem0 integration (completed)
- ✅ 3rd Party Integrations - Observability tools and database services (completed)
- 🛠️ Advanced Tools - API discovery, file operations, and code execution
- 🤖 Agent Intelligence - Self-reflection, planning, and reasoning capabilities
We welcome feedback and contributions to help shape the project's direction!
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run the quality checks:

```shell
uv run ruff check --fix
uv run ruff format
uv run mypy .
uv run pytest
```

5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request

See CONTRIBUTING.md for detailed guidelines.
## License
The license is still under development.
## Acknowledgments
- Anthropic for the Claude API
- Tavily for web search capabilities
- mem0 for AI memory management
- Langfuse for observability and tracing
- The open-source community for inspiration and tools
## Support

- 📖 Documentation
- 🐛 Bug Reports
- 💬 Discussions
- 🧠 MEM0 Setup Guide
- 📊 Langfuse Integration Guide
*AgentWerkstatt - Building intelligent agents, one tool at a time.* 🚀
## File details

Details for the file `agentwerkstatt-0.1.6.tar.gz`.

### File metadata

- Download URL: agentwerkstatt-0.1.6.tar.gz
- Upload date:
- Size: 154.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.13

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3f62db2adbd213ea3be39fdcc12f9e097601acfe8b1d3d6564674ee5983019e2` |
| MD5 | `dabeb43d387d3e8a42ca97989ac27639` |
| BLAKE2b-256 | `b01dc3623d21c18e149f23c68ba2bb58ccf92349889c7251a86dcfbb7a663673` |
## File details

Details for the file `agentwerkstatt-0.1.6-py3-none-any.whl`.

### File metadata

- Download URL: agentwerkstatt-0.1.6-py3-none-any.whl
- Upload date:
- Size: 39.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.13

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f9ccc54296b48513d5a577b85d11f841fc9932fb3d8fb34d5f4a404ba2175aa9` |
| MD5 | `1af6edb03a547d41cec5e6057dbc399e` |
| BLAKE2b-256 | `70c853e5bb2b8b8ca66ba47518144a2b3eddb70b8975764c2e4915ab48140759` |