# AgentsFlowCompiler

Modular AI agent framework — build, configure, and run LLM agents.

A modular Python framework for building, configuring, and running LLM-powered AI agents. Define agents in YAML, equip them with tools, and run them with a single function call.
## Installation

```bash
# Core (production runtime only)
pip install AgentsFlowCompiler-lib

# With dev tools (agent management, monitoring, CRUD API)
pip install AgentsFlowCompiler-lib[dev]

# With specific LLM providers
pip install AgentsFlowCompiler-lib[openai]
pip install AgentsFlowCompiler-lib[anthropic]
pip install AgentsFlowCompiler-lib[google]
pip install AgentsFlowCompiler-lib[ollama]

# Everything
pip install AgentsFlowCompiler-lib[all]
```
## Quick Start

### Production — Load & Run

```python
from agentsflow import load_agents, AgentsFlowConfig

# Load all agents from a directory
agents = load_agents("/path/to/my_project")

# Run an agent
agent = agents["analyzer"]
result = agent.run("What is the GDP of France?")
print(result.output)                            # The answer
print(result.token_input, result.token_output)  # Token metadata

# Agent metadata properties
print(agent.name)      # "analyzer"
print(agent.model)     # "gpt-4o"
print(agent.provider)  # "openai"

# With a .env file for API keys
agents = load_agents("/path/to/my_project", env_path="/path/to/.env")

# With SDK config (log level, network silencing, structured logging)
config = AgentsFlowConfig(log_level="INFO", silence_network_loggers=True, log_format="json")
agents = load_agents("/path/to/my_project", config=config)
```
### Development — Create & Manage

```python
from agentsflow.dev import create_project, create_agent, add_tool
from agentsflow import AgentModelConfig, Prompt, ToolIdentityConfig, ToolConfig

# Create a project
create_project("My AI Project", dev_path="/path/to/dev", prod_path="/path/to/prod")

# Create an agent
create_agent(
    base_dir="/path/to/dev",
    agent_name="researcher",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.3),
    description="Research assistant that finds and summarizes information",
    prompts=Prompt(
        instruction="You are a research assistant. Be thorough and cite sources.",
        think="Break complex questions into sub-questions before answering.",
    ),
)

# Add a tool (script stored at tools/custom_tools/web_search/tool.py)
add_tool(
    base_dir="/path/to/dev",
    script="def search(query: str, max_results: int = 5):\n    return []",
    identity=ToolIdentityConfig(
        name="web_search",
        description="Search the web for current information",
        category="search",
    ),
    config=ToolConfig(
        function_name="search",
        parameters={
            "query": {"type": "string", "description": "Search query", "required": True},
            "max_results": {"type": "number", "description": "Max results", "required": False, "default": 5},
        },
        returns={"type": "array", "description": "Search results"},
    ),
    agent_name="researcher",
)
```
## Core Concepts

### What is an Agent?

An agent is an LLM-powered unit that:

- Receives a user prompt
- Optionally preprocesses the input (custom Python function)
- Sends a system prompt + user message to an LLM
- Can use tools — the LLM decides when to call them; the agent executes them and feeds the results back
- Optionally postprocesses the output (custom Python function)
- Returns a `RunResult` object containing the output and metadata

```
User Input → [Preprocess] → LLM ⇄ Tools → [Postprocess] → Final Output
```
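The flow above can be sketched in a few lines of plain Python. This is illustrative only: `run_agent` and its callbacks are hypothetical stand-ins, not the framework's API.

```python
# Sketch of the pipeline: [Preprocess] -> LLM (+ tools) -> [Postprocess].
# run_agent and call_llm are illustrative stand-ins, not framework API.
def run_agent(user_input, call_llm, preprocess=None, postprocess=None):
    if preprocess:
        user_input = preprocess(user_input)  # e.g. clean or augment the input
    output = call_llm(user_input)            # LLM round(s), possibly with tools
    if postprocess:
        output = postprocess(output)         # e.g. parse or validate the output
    return output

# A fake LLM plus trivial pre/post steps:
result = run_agent(
    "  hello  ",
    call_llm=lambda prompt: f"answer to: {prompt}",
    preprocess=str.strip,
    postprocess=str.upper,
)
# result == "ANSWER TO: HELLO"
```

Both processing hooks are optional; when omitted, the input and output pass through unchanged.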
### Project Structure

A typical project looks like this:

```
my_project/
├── MyProject.afproj            # Optional project metadata & env config
├── config/
│   └── agents.yaml             # Agent manifest (lists all agents)
├── agents/
│   ├── researcher/
│   │   ├── config.yaml         # Agent configuration
│   │   ├── instruction.md      # System instruction prompt
│   │   ├── think.md            # Thinking guidelines (optional)
│   │   ├── return.md           # Output format instructions (optional)
│   │   ├── example.md          # Few-shot examples (optional)
│   │   ├── custom_tools.py     # Custom tool functions (optional)
│   │   └── logs/
│   │       ├── run_history/    # Run logs
│   │       ├── audit_logs/     # Audit trail
│   │       ├── prompt_history/ # Prompt history
│   │       └── token_usage/    # Token usage logs
│   └── writer/
│       ├── config.yaml
│       ├── instruction.md
│       └── logs/
└── tools/                      # Shared built-in tools
    ├── calculator/
    │   ├── tool.yaml
    │   └── tool.py
    └── web_search/
        ├── tool.yaml
        └── tool.py
```
### Agent Configuration (YAML)

Each agent is defined by a `config.yaml` file:

```yaml
researcher:
  # Identity
  agent_id: "node_001"
  description: "Research assistant"

  # Model
  model: gpt-4o      # or claude-sonnet-4-20250514, gemini-pro, llama3, etc.
  provider: openai   # auto-detected if not set
  temperature: 0.3
  max_tokens: 4096

  # Prompts (relative paths to .md files)
  instruction_path: instruction.md
  think_path: think.md       # optional: thinking guidelines
  return_path: return.md     # optional: output format rules
  example_path: example.md   # optional: few-shot examples

  # Pre/Post Processing (optional)
  preprocess_path: preprocess.py
  preprocess_function_name: preprocess
  postprocess_path: postprocess.py
  postprocess_function_name: postprocess

  # Output Format
  return_format: text            # text | json | json_object | markdown
  json_schema_path: schema.json  # optional: for structured JSON output

  # Tools
  tools:
    - name: calculator
      custom: false
    - name: company_lookup
      custom: true
      description: "Look up company info"
      path: custom_tools.py
      function_name: lookup
      parameters:
        query:
          type: string
          description: "Company name or ticker"
          required: true
```
### System Prompt Assembly

The agent's system prompt is assembled from multiple files in this order:

```
┌─────────────────┐
│ instruction.md  │ ← Main system instruction (required)
├─────────────────┤
│ think.md        │ ← How the agent should reason (optional)
├─────────────────┤
│ return.md       │ ← Output format guidelines (optional)
├─────────────────┤
│ example.md      │ ← Few-shot examples (optional)
└─────────────────┘
        ↓
Combined System Prompt → sent to LLM
```

This modular approach lets you reuse and swap prompt sections independently.
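The assembly order above amounts to concatenating whichever section files exist. The helper below is a hypothetical sketch of that logic; the framework's real prompt loader lives in `agentsflow/agent/prompts.py` and may differ.

```python
# Hypothetical sketch of the prompt-assembly order described above.
from pathlib import Path

def assemble_system_prompt(agent_dir):
    # instruction.md first, then the optional sections in documented order.
    sections = ["instruction.md", "think.md", "return.md", "example.md"]
    parts = []
    for name in sections:
        path = Path(agent_dir) / name
        if path.exists():  # only instruction.md is required
            parts.append(path.read_text().strip())
    if not parts:
        raise FileNotFoundError("instruction.md is required")
    return "\n\n".join(parts)
```

Swapping a section (say, a different `return.md` for JSON output) changes only that slice of the combined prompt.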
## Tools

Tools give agents the ability to perform actions — search the web, calculate math, call APIs, read files, and anything else you can write in Python.

### Built-in Tools

Built-in tools are shared across all agents. Each is a folder inside `tools/`:

| Tool | Category | Description |
|---|---|---|
| `calculator` | math | Safely evaluate math expressions (sqrt, log, sin, +, -, etc.) |
| `web_search` | search | Search the web for current information |

Using a built-in tool:

```yaml
tools:
  - name: calculator
    custom: false
```
### Custom Tools

Custom tools are Python functions specific to an agent.

Step 1: Write the function:

```python
# agents/researcher/custom_tools.py
def lookup_company(query: str) -> dict:
    """Look up company information."""
    # your logic here
    return {"name": "Apple", "sector": "Technology", "market_cap": "3.4T"}
```

Step 2: Define it in YAML:

```yaml
tools:
  - name: company_lookup
    custom: true
    description: "Look up company information by name or ticker"
    category: finance
    path: custom_tools.py
    function_name: lookup_company
    parameters:
      query:
        type: string
        description: "Company name or stock ticker"
        required: true
    returns:
      type: object
      description: "Company info with name, sector, market_cap"
```
### Tool Parameter Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | ✅ | One of `string`, `number`, `boolean`, `array`, `object` |
| `description` | string | ✅ | What this parameter does (the LLM reads this) |
| `required` | boolean | ❌ | Default: `false` |
| `default` | any | ❌ | Default value if not provided |
| `enum` | array | ❌ | List of allowed values |
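These fields line up with the JSON-Schema-style tool specs that LLM APIs accept. A rough sketch of that mapping follows; `to_json_schema` is illustrative, and the framework's exact wire format is not shown here.

```python
# Illustrative mapping from the parameter fields above to a
# JSON-Schema-style "parameters" object (not the framework's real code).
def to_json_schema(params: dict) -> dict:
    properties, required = {}, []
    for name, spec in params.items():
        prop = {"type": spec["type"], "description": spec["description"]}
        if "enum" in spec:
            prop["enum"] = spec["enum"]
        if "default" in spec:
            prop["default"] = spec["default"]
        properties[name] = prop
        if spec.get("required", False):  # defaults to false, per the table
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}

schema = to_json_schema({
    "query": {"type": "string", "description": "Search query", "required": True},
    "max_results": {"type": "number", "description": "Max results", "default": 5},
})
# schema["required"] == ["query"]
```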
### How Tool Calling Works at Runtime

1. Agent loads → `ToolRegistry` reads the YAML and imports the Python functions
2. `Agent.run()` is called → the LLM receives the tool schemas in the API request
3. The LLM wants a tool → it returns `tool_calls: [{name, arguments}]`
4. The agent executes → `ToolRegistry.execute(name, args)` runs the Python function
5. The result is sent back → appended to the messages as `role: "tool"`
6. The LLM sees the result → it calls another tool or returns the final answer
7. Loop limit → at most 10 rounds (prevents infinite loops)
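The loop above can be sketched with a fake LLM and a tiny registry. Names like `tool_loop` and `fake_llm` are hypothetical; this is not the framework's implementation.

```python
# Sketch of the runtime tool-calling loop (illustrative, not framework code).
def tool_loop(messages, llm, registry, max_rounds=10):
    for _ in range(max_rounds):
        reply = llm(messages)
        calls = reply.get("tool_calls")
        if not calls:
            return reply["content"]  # final answer, no more tool calls
        for call in calls:
            result = registry[call["name"]](**call["arguments"])
            # Feed the result back as a role: "tool" message
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("exceeded max tool rounds")

# Fake LLM: asks for the calculator once, then answers.
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": f"The result is {messages[-1]['content']}"}
    return {"tool_calls": [{"name": "calculator", "arguments": {"expr": "6*7"}}]}

# eval() is fine for this demo registry; never eval untrusted input.
registry = {"calculator": lambda expr: str(eval(expr))}
answer = tool_loop([{"role": "user", "content": "What is 6*7?"}], fake_llm, registry)
# answer == "The result is 42"
```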
## Pre/Post Processing

### Preprocess

A Python function that transforms the user input before it reaches the LLM:

```python
# preprocess.py
def preprocess(user_input: str) -> str:
    """Add context, clean input, augment with RAG results, etc."""
    context = fetch_relevant_docs(user_input)  # your retrieval logic
    return f"Context:\n{context}\n\nQuestion: {user_input}"
```

### Postprocess

A Python function that transforms the LLM output after the response:

```python
# postprocess.py
import json

def postprocess(llm_output: str) -> dict:
    """Parse, validate, save to DB, trigger notifications, etc."""
    data = json.loads(llm_output)
    save_to_database(data)  # your persistence logic
    return data
```
## LLM Providers

The framework auto-detects the provider from the model name. You can also set it explicitly via `provider` in the config.

| Provider | Models | API Key Env Var |
|---|---|---|
| OpenAI | gpt-4, gpt-4o, gpt-4o-mini, o1, o3 | `OPENAI_API_KEY` |
| Anthropic | claude-sonnet-4-20250514, claude-3-haiku, claude-3-opus | `ANTHROPIC_API_KEY` |
| Google | gemini-pro, gemini-1.5-flash, gemini-2.0 | `GOOGLE_API_KEY` |
| Ollama | llama3, mistral, phi, qwen, deepseek, codellama | No key needed (local) |
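Prefix matching on the model name is one plausible way to implement the auto-detection described above. The sketch below is a heuristic illustration; the framework's actual detection logic (in `llm/factory.py`) may differ.

```python
# Heuristic provider detection by model-name prefix (illustrative only).
def detect_provider(model: str) -> str:
    prefixes = {
        ("gpt-", "o1", "o3"): "openai",
        ("claude",): "anthropic",
        ("gemini",): "google",
    }
    for keys, provider in prefixes.items():
        if model.startswith(keys):
            return provider
    return "ollama"  # local models (llama3, mistral, ...) need no API key

detect_provider("gpt-4o")         # -> "openai"
detect_provider("claude-3-opus")  # -> "anthropic"
detect_provider("llama3")         # -> "ollama"
```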
## API Reference

See `helper/API_REFERENCE.md` for the complete reference. Summary:

### PROD API

```python
from agentsflow import load_agents, AgentsFlowConfig
```

| Function | Description |
|---|---|
| `load_agents(agents_dir, env_path=None, config=None)` | Load all agents → `dict[str, Agent]`. `config`: optional `AgentsFlowConfig` |
| `AgentsFlowConfig` | SDK-wide config: `log_level`, `silence_network_loggers`, `log_format` |

### DEV API

Available with `pip install AgentsFlowCompiler-lib[dev]`. Project metadata lives in an optional `.afproj` file; the agent/tool/processing/monitoring APIs operate directly on the DEV `base_dir`.

Key data classes (`AgentConfig`, `RunResult`, `Prompt`, `Tool`, etc.), monitoring types (`AuditLogEntry`, `RunHistoryEntry`), and all errors can be imported directly from the top-level package:

```python
from agentsflow import (
    AgentConfig, RunResult, Prompt, Tool,
    AuditLogEntry, RunHistoryEntry,
    LLMRateLimitError, MaxToolRoundsError,
)
```
#### Project

| Function | Description |
|---|---|
| `create_project(project_name, dev_path, prod_path, ...)` | Create optional `.afproj` metadata file |
| `edit_project(project_config_path, **updates)` | Edit project fields |
| `get_project(project_config_path)` | Read project config |
#### Agent

| Function | Description |
|---|---|
| `create_agent(base_dir, agent_name, model_config, ...)` | Create agent directory + config + prompts |
| `edit_agent(base_dir, agent_id/name, ...)` | Edit config fields (pass only what changes) |
| `delete_agent(base_dir, agent_id/name)` | Delete agent + remove from manifest |
| `duplicate_agent(base_dir, new_name, source_name/id)` | Deep copy with new name |
| `validate_agent(base_dir, agent_id/name)` | Raise `AgentsFlowConfigError` on invalid config |
| `get_agent_config(base_dir, agent_id/name)` | Get full `AgentConfig` |
| `get_all_agents(base_dir)` | List all registered agents |
#### Tools

| Function | Description |
|---|---|
| `add_tool(base_dir, script, identity, config, agent_name/id)` | Add custom tool (`ToolIdentityConfig` + `ToolConfig`) |
| `edit_tool(base_dir, tool_name/id, agent_name/id, ...)` | Edit tool fields |
| `remove_tool(base_dir, tool_name/id, agent_name/id)` | Remove tool from agent |
| `get_custom_tools(base_dir, agent_name/id)` | List custom tools on agent |
| `get_agent_builtin_tools(base_dir, agent_name/id)` | List built-in tools used by agent |
| `get_all_builtin_tools(tools_dir)` | List all available built-in tools |
| `get_full_script_tool(base_dir, agent_name/id, tool_name/id)` | Read tool Python script from disk |
#### Processing

| Function | Description |
|---|---|
| `add_preprocess(base_dir, agent_name/id)` | Add preprocess (creates default script) |
| `edit_preprocess(base_dir, agent_name/id, script)` | Replace preprocess script |
| `remove_preprocess(base_dir, agent_name/id)` | Remove preprocess |
| `add_postprocess(base_dir, agent_name/id)` | Add postprocess (creates default script) |
| `edit_postprocess(base_dir, agent_name/id, script)` | Replace postprocess script |
| `remove_postprocess(base_dir, agent_name/id)` | Remove postprocess |
| `get_preprocess_script(base_dir, agent_name/id)` | Read preprocess script from disk |
| `get_postprocess_script(base_dir, agent_name/id)` | Read postprocess script from disk |
#### Monitoring

| Function | Description |
|---|---|
| `get_prompt_history(base_dir, agent_name)` | History of system prompt changes |
| `get_prompt_from_hash(base_dir, hash, agent_name/id)` | Read stored prompt by hash |
| `get_run_history(base_dir, agent_name, from_date, to_date)` | I/O logs with date filtering |
| `get_run_details(base_dir, rid, agent_name/id)` | Single run by run ID |
| `get_token_usage(base_dir, agent_name, from_date, to_date)` | Aggregate token stats |
| `get_audit_logs(base_dir, agent_name/id)` | Audit log entries |
| `get_audit_log_from_timestamp(base_dir, timestamp, agent_name/id)` | Single audit entry by timestamp |
## Architecture

```
AgentsFlowCompiler-lib
├── agentsflow/                     Python package (import name)
│   ├── __init__.py                 PROD entry: load_agents()
│   ├── _prod.py                    Production loader + .env support
│   │
│   ├── agent/                      Core agent runtime
│   │   ├── agent.py                Agent class (run loop)
│   │   ├── config.py               Path resolution
│   │   ├── prompts.py              Prompt assembly & pre/post process
│   │   ├── tools.py                Tool execution wrapper
│   │   ├── stats.py                Logging & token tracking
│   │   └── _utils.py               Shared helpers
│   │
│   ├── llm/                        LLM provider abstraction
│   │   ├── base.py                 LLMClient abstract interface
│   │   ├── openai_client.py        OpenAI implementation
│   │   ├── anthropic_client.py     Anthropic implementation
│   │   ├── google_client.py        Google Gemini implementation
│   │   ├── ollama_client.py        Ollama (local) implementation
│   │   └── factory.py              Auto-detect & create client
│   │
│   ├── schema/                     Pydantic data models
│   │   ├── agent_config_schema.py  AgentConfig
│   │   ├── tool_config_schema.py   ToolConfig
│   │   └── tool_schema.py          ToolParameterConfig
│   │
│   ├── tools/                      Tool registry
│   │   └── registry.py             Load, register, execute tools
│   │
│   ├── builder/                    Agent construction
│   │   └── agents_builder.py       YAML manifest → Agent instances
│   │
│   └── dev/                        DEV API (25 functions)
│       ├── project_api.py
│       ├── agent_api.py
│       ├── tool_api.py
│       ├── processing_api.py
│       └── monitoring_api.py
│
└── tests/                          101 tests
    ├── micro/                      Unit tests (99)
    │   ├── agent/
    │   └── api/
    └── macro/                      Integration tests (2)
```
## Design Principles (SOLID)
| Principle | How It's Applied |
|---|---|
| Single Responsibility | Each file does one thing (one LLM provider per file, one schema per file) |
| Open/Closed | Add a new LLM provider = new file + 2 lines in factory.py, no existing code changes |
| Liskov Substitution | All providers implement LLMClient ABC and are fully interchangeable |
| Interface Segregation | LLMClient has a minimal interface: chat() + provider_name |
| Dependency Inversion | Agent depends on the LLMClient abstraction, never on concrete providers |
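The `LLMClient` abstraction named in the table might look like the sketch below. The method names come from the table; the signatures and the toy `EchoClient` are assumptions, not the library's real ABC.

```python
# Sketch of the LLMClient abstraction (signatures are assumptions).
from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def chat(self, messages: list[dict]) -> str: ...

    @property
    @abstractmethod
    def provider_name(self) -> str: ...

class EchoClient(LLMClient):
    """Toy provider: any class implementing the ABC is interchangeable."""
    def chat(self, messages):
        return messages[-1]["content"]

    @property
    def provider_name(self):
        return "echo"

client = EchoClient()
client.chat([{"role": "user", "content": "hi"}])  # -> "hi"
```

Because the `Agent` depends only on this interface, adding a provider means writing one new subclass and registering it in the factory.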
## Testing

```bash
# All tests (101)
pytest tests/

# Unit tests only
pytest tests/micro/

# Integration tests only
pytest tests/macro/

# Specific module
pytest tests/micro/agent/
pytest tests/micro/api/
```
## Full Example

```python
from agentsflow.dev import (
    create_project, create_agent, add_tool,
    add_preprocess, edit_preprocess, validate_agent, get_all_agents,
)
from agentsflow import (
    AgentModelConfig, Prompt,
    ToolIdentityConfig, ToolConfig,
)

# 1. Set up project
create_project("Financial Analysis", dev_path="/home/user/fin_project", dev_env_strategy="project_local")

# 2. Create agent
create_agent(
    base_dir="/home/user/fin_project",
    agent_name="analyst",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.2),
    description="Financial research and analysis agent",
    prompts=Prompt(
        instruction="""You are a financial analyst.
Use available tools to research companies and calculate metrics.
Always show your reasoning.""",
        think="Break analysis into: data gathering → calculation → conclusion",
    ),
)

# 3. Add tool (script stored at tools/custom_tools/<name>/tool.py)
add_tool(
    base_dir="/home/user/fin_project",
    script="def get_stock_data(ticker: str):\n    return {}",
    identity=ToolIdentityConfig(name="stock_lookup", description="Get stock price and metrics", category="finance"),
    config=ToolConfig(
        function_name="get_stock_data",
        parameters={"ticker": {"type": "string", "description": "Stock ticker (AAPL, MSFT)", "required": True}},
        returns={"type": "object", "description": "Stock data"},
    ),
    agent_name="analyst",
)

# 4. Add preprocess (creates default script), then edit its content
add_preprocess("/home/user/fin_project", agent_name="analyst")
edit_preprocess("/home/user/fin_project", agent_name="analyst", script="def preprocess(prompt: str):\n    return prompt.strip()")

# 5. Validate
validate_agent("/home/user/fin_project", agent_name="analyst")

# 6. List all agents
agents = get_all_agents("/home/user/fin_project")
# [AgentConfig(...)]
```

Then in production:

```python
from agentsflow import load_agents

agents = load_agents("/home/user/fin_project", env_path="/home/user/.env")
result = agents["analyst"].run("Analyze Apple's Q4 earnings and compare to Microsoft")
print(result.output)
```
## Download files
### Source distribution: `agentsflowcompiler_lib-0.1.7.tar.gz`

- Size: 98.7 kB
- Tags: Source
- Uploaded via: twine/6.2.0, CPython/3.10.12
- Uploaded using Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `da9a1a31d78648237003a9480e909efedd871655da3313d53a493bd50c6c427d` |
| MD5 | `363b11cd66b07579e2c1c6e902bb2976` |
| BLAKE2b-256 | `56da5d87d78b1bb7a03a4bf0a35dd5ea6438e0307da0226e5751bb95adf7835b` |
### Built distribution: `agentsflowcompiler_lib-0.1.7-py3-none-any.whl`

- Size: 137.0 kB
- Tags: Python 3
- Uploaded via: twine/6.2.0, CPython/3.10.12
- Uploaded using Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `67286e4ea23d160962457a935938d1162b6e0b8cb677b772e7022da2653fc1ab` |
| MD5 | `5c271eb433b4d8746a083ce03b117229` |
| BLAKE2b-256 | `fc37cd52957d0ca4ab7a3e34058413804dd5c44b12087663926c629b701e3a38` |