Modular AI agent framework — build, configure, and run LLM agents
AgentsFlowCompiler
A modular Python framework for building, configuring, and running LLM-powered AI agents. Define agents in YAML, equip them with tools, and run them with a single function call.
Installation
# Core (production runtime only)
pip install AgentsFlowCompiler-lib
# With dev tools (agent management, monitoring, CRUD API)
pip install AgentsFlowCompiler-lib[dev]
# With specific LLM providers
pip install AgentsFlowCompiler-lib[openai]
pip install AgentsFlowCompiler-lib[anthropic]
pip install AgentsFlowCompiler-lib[google]
pip install AgentsFlowCompiler-lib[ollama]
# Everything
pip install AgentsFlowCompiler-lib[all]
Quick Start
Production — Load & Run
from agentsflow import load_agents
# Load all agents from a directory
agents = load_agents("/path/to/my_project")
# Run an agent
result = agents["analyzer"].run("What is the GDP of France?")
print(result)
# With a .env file for API keys
agents = load_agents("/path/to/my_project", env_path="/path/to/.env")
Development — Create & Manage
from agentsflow.dev import create_project, create_agent, add_tool
# Create a project
create_project("My AI Project", dev_path="/path/to/dev", prod_path="/path/to/prod")
# Create an agent
create_agent("/path/to/dev", "researcher",
model="gpt-4o",
description="Research assistant that finds and summarizes information",
instruction_prompt="You are a research assistant. Be thorough and cite sources.",
think_prompt="Break complex questions into sub-questions before answering.",
temperature=0.3)
# Add a tool to the agent
add_tool("/path/to/dev", "researcher", "web_search",
tool_path="tools/search.py",
function_name="search",
description="Search the web for current information",
parameters={
"query": {"type": "string", "description": "Search query", "required": True},
"max_results": {"type": "number", "description": "Max results", "required": False, "default": 5}
})
Core Concepts
What is an Agent?
An agent is an LLM-powered unit that:
- Receives a user prompt
- Optionally preprocesses the input (custom Python function)
- Sends a system prompt + user message to an LLM
- Can use tools — the LLM decides when to call them, executes them, and feeds results back
- Optionally postprocesses the output (custom Python function)
- Returns the final answer
User Input → [Preprocess] → LLM ⇄ Tools → [Postprocess] → Final Output
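A minimal sketch of this pipeline, assuming nothing about the library's internals (`run_agent` and the stubbed callables below are illustrative, not the package API; the tool loop is omitted here and covered later):

```python
from typing import Callable, Optional

def run_agent(user_input: str,
              call_llm: Callable[[str], str],
              preprocess: Optional[Callable[[str], str]] = None,
              postprocess: Optional[Callable[[str], str]] = None) -> str:
    """Toy pipeline: [Preprocess] -> LLM -> [Postprocess]."""
    if preprocess:
        user_input = preprocess(user_input)
    output = call_llm(user_input)  # the LLM call (tool loop omitted)
    if postprocess:
        output = postprocess(output)
    return output

# Stubbed usage: wrap input, uppercase "LLM", append "!"
result = run_agent("hello",
                   call_llm=lambda p: p.upper(),
                   preprocess=lambda s: f"Q: {s}",
                   postprocess=lambda s: s + "!")
# result == "Q: HELLO!"
```

Both processing steps are optional, so the same runner covers plain agents and fully wrapped ones.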
Project Structure
A typical project looks like this:
my_project/
├── project.json # Project metadata & env config
├── config/
│ └── agents.yaml # Agent manifest (lists all agents)
├── agents/
│ ├── researcher/
│ │ ├── config.yaml # Agent configuration
│ │ ├── instruction.md # System instruction prompt
│ │ ├── think.md # Thinking guidelines (optional)
│ │ ├── return.md # Output format instructions (optional)
│ │ ├── example.md # Few-shot examples (optional)
│ │ ├── custom_tools.py # Custom tool functions (optional)
│ │ └── logs/
│ │ ├── input_output/ # Daily I/O logs (JSON)
│ │ └── audit/ # Audit trail
│ └── writer/
│ ├── config.yaml
│ ├── instruction.md
│ └── logs/
└── tools/ # Shared built-in tools
├── calculator/
│ ├── tool.yaml
│ └── tool.py
└── web_search/
├── tool.yaml
└── tool.py
Agent Configuration (YAML)
Each agent is defined by a config.yaml file:
researcher:
# Identity
agent_id: "node_001"
description: "Research assistant"
# Model
model: gpt-4o # or claude-sonnet-4-20250514, gemini-pro, llama3, etc.
provider: openai # auto-detected if not set
temperature: 0.3
max_tokens: 4096
# Prompts (relative paths to .md files)
instruction_path: instruction.md
think_path: think.md # optional: thinking guidelines
return_path: return.md # optional: output format rules
example_path: example.md # optional: few-shot examples
# Pre/Post Processing (optional)
preprocess_path: preprocess.py
preprocess_function_name: preprocess
postprocess_path: postprocess.py
postprocess_function_name: postprocess
# Output Format
return_format: text # text | json | json_object | markdown
json_schema_path: schema.json # optional: for structured JSON output
# Tools
tools:
- name: calculator
custom: false
- name: company_lookup
custom: true
description: "Look up company info"
path: custom_tools.py
function_name: lookup
parameters:
query:
type: string
description: "Company name or ticker"
required: true
System Prompt Assembly
The agent's system prompt is assembled from multiple files in this order:
┌─────────────────┐
│ instruction.md │ ← Main system instruction (required)
├─────────────────┤
│ think.md │ ← How the agent should reason (optional)
├─────────────────┤
│ return.md │ ← Output format guidelines (optional)
├─────────────────┤
│ example.md │ ← Few-shot examples (optional)
└─────────────────┘
↓
Combined System Prompt → sent to LLM
This modular approach lets you reuse and swap prompt sections independently.
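The assembly step can be sketched as a small function (`assemble_system_prompt` is hypothetical, not the library's actual code; it assumes the four section files live directly in the agent directory):

```python
from pathlib import Path

# Fixed assembly order: instruction -> think -> return -> example
SECTIONS = ["instruction.md", "think.md", "return.md", "example.md"]

def assemble_system_prompt(agent_dir: str) -> str:
    """Concatenate the prompt sections that exist, skipping missing optional files."""
    parts = []
    for name in SECTIONS:
        path = Path(agent_dir) / name
        if path.exists():
            parts.append(path.read_text().strip())
    if not parts:
        raise FileNotFoundError("instruction.md is required")
    return "\n\n".join(parts)
```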
Tools
Tools give agents the ability to perform actions — search the web, calculate math, call APIs, read files, and anything else you can write in Python.
Built-in Tools
Built-in tools are shared across all agents. Each is a folder inside tools/:
| Tool | Category | Description |
|---|---|---|
| calculator | math | Evaluate math expressions safely (sqrt, log, sin, +, -, etc.) |
| web_search | search | Search the web for current information |
Using a built-in tool:
tools:
- name: calculator
custom: false
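Each built-in tool folder pairs a manifest with its implementation. A plausible tool.yaml for the calculator might look like this (field names inferred from the custom-tool schema shown below, not verified against the package):

```yaml
calculator:
  description: "Evaluate math expressions safely"
  category: math
  path: tool.py
  function_name: calculate
  parameters:
    expression:
      type: string
      description: "Math expression, e.g. sqrt(16) + 2"
      required: true
```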
Custom Tools
Custom tools are Python functions specific to an agent.
Step 1: Write the function:
# agents/researcher/custom_tools.py
def lookup_company(query: str) -> dict:
"""Look up company information."""
# your logic here
return {"name": "Apple", "sector": "Technology", "market_cap": "3.4T"}
Step 2: Define in YAML:
tools:
- name: company_lookup
custom: true
description: "Look up company information by name or ticker"
category: finance
path: custom_tools.py
function_name: lookup_company
parameters:
query:
type: string
description: "Company name or stock ticker"
required: true
returns:
type: object
description: "Company info with name, sector, market_cap"
Tool Parameter Fields
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | ✅ | string, number, boolean, array, object |
| description | string | ✅ | What this parameter does (the LLM reads this) |
| required | boolean | ❌ | Default: false |
| default | any | ❌ | Default value if not provided |
| enum | array | ❌ | List of allowed values |
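These fields map naturally onto the JSON-Schema-style tool definitions that LLM APIs expect. A sketch of that conversion (`to_json_schema` is illustrative, not part of the package):

```python
def to_json_schema(params: dict) -> dict:
    """Convert the parameter-field table into a JSON-Schema 'parameters' object."""
    properties, required = {}, []
    for name, spec in params.items():
        prop = {"type": spec["type"], "description": spec["description"]}
        for optional in ("enum", "default"):  # optional fields pass through
            if optional in spec:
                prop[optional] = spec[optional]
        properties[name] = prop
        if spec.get("required", False):  # 'required' defaults to false
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}
```

Feeding it the web_search parameters from the Quick Start yields a schema with `query` in `required` and `max_results` carrying its default of 5.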
How Tool Calling Works at Runtime
1. Agent loads → ToolRegistry reads YAML, imports Python functions
2. Agent.run() called → LLM receives tool schemas in API request
3. LLM wants a tool → Returns tool_calls: [{name, arguments}]
4. Agent executes → ToolRegistry.execute(name, args) → runs Python function
5. Result sent back → Added to messages as role: "tool"
6. LLM sees result → Calls another tool or returns final answer
7. Loop limit → Max 10 rounds (prevents infinite loops)
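The loop above can be sketched as follows (`run_with_tools` and the stubbed LLM are illustrative; the real Agent class manages messages, schemas, and logging internally):

```python
def run_with_tools(prompt, call_llm, tools, max_rounds=10):
    """Feed tool results back to the LLM until it returns a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_rounds):
        reply = call_llm(messages)  # dict with either "content" or "tool_calls"
        if "tool_calls" not in reply:
            return reply["content"]  # final answer
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    raise RuntimeError("exceeded max tool rounds")

# Stubbed LLM: asks for one tool call, then answers from its result
def fake_llm(messages):
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if tool_msgs:
        return {"content": "The result is " + tool_msgs[-1]["content"]}
    return {"tool_calls": [{"name": "calc", "arguments": {"expr": "2+2"}}]}
```

The round cap is what enforces step 7: a model that keeps requesting tools eventually hits `max_rounds` instead of looping forever.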
Pre/Post Processing
Preprocess
A Python function that transforms user input before it reaches the LLM:
# preprocess.py
def preprocess(user_input: str) -> str:
    """Add context, clean input, augment with RAG results, etc."""
    context = fetch_relevant_docs(user_input)  # your own retrieval helper
    return f"Context:\n{context}\n\nQuestion: {user_input}"
Postprocess
A Python function that transforms LLM output after the response:
# postprocess.py
import json

def postprocess(llm_output: str) -> dict:
    """Parse, validate, save to DB, trigger notifications, etc."""
    data = json.loads(llm_output)
    save_to_database(data)  # your own persistence helper
    return data
LLM Providers
The framework auto-detects the provider based on model name. You can also set it explicitly via provider in the config.
| Provider | Models | API Key Env Var |
|---|---|---|
| OpenAI | gpt-4, gpt-4o, gpt-4o-mini, o1, o3 | OPENAI_API_KEY |
| Anthropic | claude-sonnet-4-20250514, claude-3-haiku, claude-3-opus | ANTHROPIC_API_KEY |
| Google | gemini-pro, gemini-1.5-flash, gemini-2.0 | GOOGLE_API_KEY |
| Ollama | llama3, mistral, phi, qwen, deepseek, codellama | No key needed (local) |
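Auto-detection can be as simple as a prefix match on the model name. A sketch of the idea (`detect_provider` is illustrative; the library's actual heuristic may differ):

```python
def detect_provider(model: str) -> str:
    """Heuristic provider detection from the model-name prefix."""
    prefix_map = {
        ("gpt-", "o1", "o3"): "openai",
        ("claude",): "anthropic",
        ("gemini",): "google",
    }
    for prefixes, provider in prefix_map.items():
        if any(model.startswith(p) for p in prefixes):
            return provider
    return "ollama"  # local models (llama3, mistral, ...) fall through
```

An explicit `provider:` in the config would override the guess, which matters for ambiguous or self-hosted model names.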
API Reference
PROD API
Available with base install:
from agentsflow import load_agents
| Function | Description |
|---|---|
| load_agents(agents_dir, env_path=None) | Load all agents → dict[str, Agent] |
DEV API
Available with pip install AgentsFlowCompiler-lib[dev]:
from agentsflow.dev import create_agent, add_tool, ...
Project Management
| Function | Description |
|---|---|
| create_project(name, dev_path, prod_path, ...) | Create project.json |
| edit_project(config_path, **updates) | Edit project fields (deep merge) |
| get_project(config_path) | Read project config |
Agent CRUD
| Function | Description |
|---|---|
| create_agent(base_dir, name, model, ...) | Create agent directory + config + prompts |
| edit_agent_config(base_dir, name, **updates) | Edit any config field |
| delete_agent(base_dir, name) | Delete agent + remove from manifest |
| duplicate_agent(base_dir, source, new_name) | Deep copy with new name |
| validate_agent(base_dir, name) | Check files exist, YAML valid, model recognized |
| get_agent_config(base_dir, name) | Get full config as dict |
| get_all_agents(project_path) | List all agents (name, description, model) |
Tool Management
| Function | Description |
|---|---|
| add_tool(base_dir, agent, name, path, ...) | Add custom tool to agent |
| edit_tool(base_dir, agent, name, **updates) | Edit tool fields |
| remove_tool(base_dir, agent, name) | Remove tool from agent |
| get_custom_tools(base_dir, agent) | List custom tools |
| get_agent_builtin_tools(base_dir, agent) | List built-in tools in use |
| get_all_builtin_tools() | List all available built-in tools |
Pre/Post Processing
| Function | Description |
|---|---|
| add_preprocess(base_dir, agent, path, func) | Attach preprocess function |
| edit_preprocess(base_dir, agent, ...) | Update preprocess config |
| remove_preprocess(base_dir, agent) | Remove preprocess |
| add_postprocess(base_dir, agent, path, func) | Attach postprocess function |
| edit_postprocess(base_dir, agent, ...) | Update postprocess config |
| remove_postprocess(base_dir, agent) | Remove postprocess |
Monitoring & History
| Function | Description |
|---|---|
| get_prompt_history(base_dir, agent) | History of system prompt changes |
| get_run_history(base_dir, agent, from_date, to_date) | I/O logs with date filtering |
| get_token_usage(base_dir, agent, from_date, to_date) | Aggregated token stats |
Architecture
AgentsFlowCompiler-lib
├── agentsflow/ Python package (import name)
│ ├── __init__.py PROD entry: load_agents()
│ ├── _prod.py Production loader + .env support
│ │
│ ├── agent/ Core agent runtime
│ │ ├── agent.py Agent class (run loop)
│ │ ├── config.py Path resolution
│ │ ├── prompts.py Prompt assembly & pre/post process
│ │ ├── tools.py Tool execution wrapper
│ │ ├── stats.py Logging & token tracking
│ │ └── _utils.py Shared helpers
│ │
│ ├── llm/ LLM provider abstraction
│ │ ├── base.py LLMClient abstract interface
│ │ ├── openai_client.py OpenAI implementation
│ │ ├── anthropic_client.py Anthropic implementation
│ │ ├── google_client.py Google Gemini implementation
│ │ ├── ollama_client.py Ollama (local) implementation
│ │ └── factory.py Auto-detect & create client
│ │
│ ├── schema/ Pydantic data models
│ │ ├── agent_config_schema.py AgentConfig
│ │ ├── tool_config_schema.py ToolConfig
│ │ └── tool_schema.py ToolParameterConfig
│ │
│ ├── tools/ Tool registry
│ │ └── registry.py Load, register, execute tools
│ │
│ ├── builder/ Agent construction
│ │ └── agents_builder.py YAML manifest → Agent instances
│ │
│ └── dev/ DEV API (25 functions)
│ ├── project_api.py
│ ├── agent_api.py
│ ├── tool_api.py
│ ├── processing_api.py
│ └── monitoring_api.py
│
└── tests/ 101 tests
├── micro/ Unit tests (99)
│ ├── agent/
│ └── api/
└── macro/ Integration tests (2)
Design Principles (SOLID)
| Principle | How It's Applied |
|---|---|
| Single Responsibility | Each file does one thing (one LLM provider per file, one schema per file) |
| Open/Closed | Add a new LLM provider = new file + 2 lines in factory.py, no existing code changes |
| Liskov Substitution | All providers implement LLMClient ABC and are fully interchangeable |
| Interface Segregation | LLMClient has a minimal interface: chat() + provider_name |
| Dependency Inversion | Agent depends on the LLMClient abstraction, never on concrete providers |
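The substitution and interface points can be illustrated with a minimal version of the LLMClient ABC (names follow the table above; `EchoClient` is a toy stand-in for a real provider, not part of the package):

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Minimal provider interface: chat() plus provider_name."""

    @property
    @abstractmethod
    def provider_name(self) -> str: ...

    @abstractmethod
    def chat(self, messages: list[dict]) -> str:
        """Send a message list, return the model's text reply."""

class EchoClient(LLMClient):
    """Toy provider: echoes the last message back."""
    provider_name = "echo"

    def chat(self, messages: list[dict]) -> str:
        return messages[-1]["content"]
```

Because the Agent only ever holds an `LLMClient`, any conforming implementation, real or toy, can be dropped in without touching agent code.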
Testing
# All tests (101)
pytest tests/
# Unit tests only
pytest tests/micro/
# Integration tests only
pytest tests/macro/
# Specific module
pytest tests/micro/agent/
pytest tests/micro/api/
Full Example
from agentsflow.dev import (
create_project, create_agent, add_tool,
add_preprocess, validate_agent, get_all_agents,
)
# 1. Set up project
create_project("Financial Analysis", dev_path="/home/user/fin_project")
# 2. Create agent
create_agent("/home/user/fin_project", "analyst",
model="gpt-4o",
description="Financial research and analysis agent",
instruction_prompt="""You are a financial analyst.
Use available tools to research companies and calculate metrics.
Always show your reasoning.""",
think_prompt="Break analysis into: data gathering → calculation → conclusion",
temperature=0.2)
# 3. Add tools
add_tool("/home/user/fin_project", "analyst", "stock_lookup",
tool_path="tools/finance.py", function_name="get_stock_data",
description="Get current stock price and basic metrics",
parameters={
"ticker": {"type": "string", "description": "Stock ticker (AAPL, MSFT)", "required": True}
})
# 4. Add preprocessing
add_preprocess("/home/user/fin_project", "analyst",
function_path="preprocess.py", function_name="add_market_context")
# 5. Validate
result = validate_agent("/home/user/fin_project", "analyst")
print(result) # {"valid": True, "errors": [], "warnings": []}
# 6. List all agents
agents = get_all_agents("/home/user/fin_project")
# [{"name": "analyst", "description": "Financial research...", "model": "gpt-4o"}]
Then in production:
from agentsflow import load_agents
agents = load_agents("/home/user/fin_project", env_path="/home/user/.env")
report = agents["analyst"].run("Analyze Apple's Q4 earnings and compare to Microsoft")
print(report)
File details
Details for the file agentsflowcompiler_lib-0.1.2.tar.gz.
File metadata
- Download URL: agentsflowcompiler_lib-0.1.2.tar.gz
- Upload date:
- Size: 50.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 08caedb384744146664a40f997f772298e1f8d285d25f0c2a4ce4f8a9ac8546e |
| MD5 | 062dd91f099a945257779bb53d018447 |
| BLAKE2b-256 | bab5eb2bac44ec873e24d81830c067a6dd28da83e6cfc1b35f89ba898176097e |
File details
Details for the file agentsflowcompiler_lib-0.1.2-py3-none-any.whl.
File metadata
- Download URL: agentsflowcompiler_lib-0.1.2-py3-none-any.whl
- Upload date:
- Size: 70.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8dca171b432d7b0f554a098e737ab6158748fc070712c453903b01984a8f3827 |
| MD5 | ecbd82073e22869b5bca0ea605f15114 |
| BLAKE2b-256 | ce6c816fa56bc438f483bb63ce6e6567bc2dbaaed80078146969708a1f94d112 |