
Modular AI agent framework — build, configure, and run LLM agents


AgentsFlowCompiler

A modular Python framework for building, configuring, and running LLM-powered AI agents. Define agents in YAML, equip them with tools, and run them with a single function call.


Installation

# Core (production runtime only)
pip install AgentsFlowCompiler-lib

# With dev tools (agent management, monitoring, CRUD API)
pip install AgentsFlowCompiler-lib[dev]

# With specific LLM providers
pip install AgentsFlowCompiler-lib[openai]
pip install AgentsFlowCompiler-lib[anthropic]
pip install AgentsFlowCompiler-lib[google]
pip install AgentsFlowCompiler-lib[ollama]

# Everything
pip install AgentsFlowCompiler-lib[all]

Quick Start

Production — Load & Run

from agentsflow import load_agents, AgentsFlowConfig

# Load all agents from a directory
agents = load_agents("/path/to/my_project")

# Run an agent
result = agents["analyzer"].run("What is the GDP of France?")
print(result.output)  # The answer
print(result.token_input, result.token_output)  # Token metadata

# Agent metadata properties
agent = agents["analyzer"]
print(agent.name)      # "analyzer"
print(agent.model)     # "gpt-4o"
print(agent.provider)  # "openai"

# With a .env file for API keys
agents = load_agents("/path/to/my_project", env_path="/path/to/.env")

# With SDK config (log level, network silencing, structured logging)
config = AgentsFlowConfig(log_level="INFO", silence_network_loggers=True, log_format="json")
agents = load_agents("/path/to/my_project", config=config)

Development — Create & Manage

from agentsflow.dev import create_project, create_agent, add_tool
from agentsflow import AgentModelConfig, Prompt, ToolIdentityConfig, ToolConfig

# Create a project
create_project("My AI Project", dev_path="/path/to/dev", prod_path="/path/to/prod")

# Create an agent
create_agent(
    base_dir="/path/to/dev",
    agent_name="researcher",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.3),
    description="Research assistant that finds and summarizes information",
    prompts=Prompt(
        instruction="You are a research assistant. Be thorough and cite sources.",
        think="Break complex questions into sub-questions before answering.",
    ),
)

# Add a tool (script stored at tools/custom_tools/web_search/tool.py)
add_tool(
    base_dir="/path/to/dev",
    script="def search(query: str, max_results: int = 5):\n    return []",
    identity=ToolIdentityConfig(
        name="web_search",
        description="Search the web for current information",
        category="search",
    ),
    config=ToolConfig(
        function_name="search",
        parameters={
            "query": {"type": "string", "description": "Search query", "required": True},
            "max_results": {"type": "number", "description": "Max results", "required": False, "default": 5},
        },
        returns={"type": "array", "description": "Search results"},
    ),
    agent_name="researcher",
)

Core Concepts

What is an Agent?

An agent is an LLM-powered unit that:

  1. Receives a user prompt
  2. Optionally preprocesses the input (custom Python function)
  3. Sends a system prompt + user message to an LLM
  4. Can use tools — the LLM decides when to call them, executes them, and feeds results back
  5. Optionally postprocesses the output (custom Python function)
  6. Returns a RunResult object containing the output and metadata
User Input → [Preprocess] → LLM ⇄ Tools → [Postprocess] → Final Output
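The pipeline above can be sketched as a plain function (a hypothetical helper for illustration, not the library's actual `Agent` class; the tool-calling loop between the LLM and tools is omitted here):

```python
# Minimal sketch of the agent pipeline described above (hypothetical names).
from typing import Callable, Optional

def run_agent(
    user_input: str,
    call_llm: Callable[[str], str],
    preprocess: Optional[Callable[[str], str]] = None,
    postprocess: Optional[Callable[[str], str]] = None,
) -> str:
    # Steps 1-2: receive the user prompt and optionally preprocess it
    prompt = preprocess(user_input) if preprocess else user_input
    # Steps 3-4: send to the LLM (tool-calling loop omitted for brevity)
    output = call_llm(prompt)
    # Steps 5-6: optionally postprocess, then return the final output
    return postprocess(output) if postprocess else output
```

In the real framework the return value is a `RunResult` carrying the output plus token metadata; this sketch only shows the control flow.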

Project Structure

A typical project looks like this:

my_project/
├── MyProject.afproj                # Optional project metadata & env config
├── config/
│   └── agents.yaml                 # Agent manifest (lists all agents)
├── agents/
│   ├── researcher/
│   │   ├── config.yaml             # Agent configuration
│   │   ├── instruction.md          # System instruction prompt
│   │   ├── think.md                # Thinking guidelines (optional)
│   │   ├── return.md               # Output format instructions (optional)
│   │   ├── example.md              # Few-shot examples (optional)
│   │   ├── custom_tools.py         # Custom tool functions (optional)
│   │   └── logs/
│   │       ├── run_history/        # Run logs
│   │       ├── audit_logs/         # Audit trail
│   │       ├── prompt_history/     # Prompt history
│   │       └── token_usage/        # Token usage logs
│   └── writer/
│       ├── config.yaml
│       ├── instruction.md
│       └── logs/
└── tools/                          # Shared built-in tools
    ├── calculator/
    │   ├── tool.yaml
    │   └── tool.py
    └── web_search/
        ├── tool.yaml
        └── tool.py

Agent Configuration (YAML)

Each agent is defined by a config.yaml file:

researcher:
  # Identity
  agent_id: "node_001"
  description: "Research assistant"

  # Model
  model: gpt-4o                    # or claude-sonnet-4-20250514, gemini-pro, llama3, etc.
  provider: openai                 # auto-detected if not set
  temperature: 0.3
  max_tokens: 4096

  # Prompts (relative paths to .md files)
  instruction_path: instruction.md
  think_path: think.md             # optional: thinking guidelines
  return_path: return.md           # optional: output format rules
  example_path: example.md         # optional: few-shot examples

  # Pre/Post Processing (optional)
  preprocess_path: preprocess.py
  preprocess_function_name: preprocess
  postprocess_path: postprocess.py
  postprocess_function_name: postprocess

  # Output Format
  return_format: text              # text | json | json_object | markdown
  json_schema_path: schema.json   # optional: for structured JSON output

  # Tools
  tools:
    - name: calculator
      custom: false
    - name: company_lookup
      custom: true
      description: "Look up company info"
      path: custom_tools.py
      function_name: lookup
      parameters:
        query:
          type: string
          description: "Company name or ticker"
          required: true

System Prompt Assembly

The agent's system prompt is assembled from multiple files in this order:

┌─────────────────┐
│  instruction.md  │  ← Main system instruction (required)
├─────────────────┤
│  think.md        │  ← How the agent should reason (optional)
├─────────────────┤
│  return.md       │  ← Output format guidelines (optional)
├─────────────────┤
│  example.md      │  ← Few-shot examples (optional)
└─────────────────┘
        ↓
  Combined System Prompt → sent to LLM

This modular approach lets you reuse and swap prompt sections independently.
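The assembly step can be sketched as follows (assumed behavior: concatenate whichever optional section files exist, in the fixed order shown above):

```python
# Sketch of system-prompt assembly (assumed behavior, not the library's code).
from pathlib import Path

SECTION_ORDER = ["instruction.md", "think.md", "return.md", "example.md"]

def assemble_system_prompt(agent_dir: str) -> str:
    parts = []
    for name in SECTION_ORDER:
        path = Path(agent_dir) / name
        if path.exists():  # only instruction.md is required
            parts.append(path.read_text().strip())
    return "\n\n".join(parts)
```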


Tools

Tools give agents the ability to perform actions — search the web, calculate math, call APIs, read files, and anything else you can write in Python.

Built-in Tools

Built-in tools are shared across all agents. Each is a folder inside tools/:

| Tool | Category | Description |
|---|---|---|
| calculator | math | Evaluate math expressions safely (sqrt, log, sin, +, -, etc.) |
| web_search | search | Search the web for current information |

Using a built-in tool:

tools:
  - name: calculator
    custom: false

Custom Tools

Custom tools are Python functions specific to an agent.

Step 1: Write the function:

# agents/researcher/custom_tools.py
def lookup_company(query: str) -> dict:
    """Look up company information."""
    # your logic here
    return {"name": "Apple", "sector": "Technology", "market_cap": "3.4T"}

Step 2: Define in YAML:

tools:
  - name: company_lookup
    custom: true
    description: "Look up company information by name or ticker"
    category: finance
    path: custom_tools.py
    function_name: lookup_company
    parameters:
      query:
        type: string
        description: "Company name or stock ticker"
        required: true
    returns:
      type: object
      description: "Company info with name, sector, market_cap"

Tool Parameter Fields

| Field | Type | Description |
|---|---|---|
| type | string | One of string, number, boolean, array, object |
| description | string | What this parameter does (the LLM reads this) |
| required | boolean | Whether the parameter is mandatory (default: false) |
| default | any | Default value if not provided |
| enum | array | List of allowed values |
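Putting the fields together, a `parameters` mapping for a hypothetical search tool might look like this (illustrative names only):

```python
# Example parameters mapping using the fields above (hypothetical tool).
parameters = {
    "query": {"type": "string", "description": "Search query", "required": True},
    "max_results": {"type": "number", "description": "Max results",
                    "required": False, "default": 5},
    "region": {"type": "string", "description": "Result region",
               "enum": ["us", "eu", "asia"]},
}
```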

How Tool Calling Works at Runtime

1. Agent loads         → ToolRegistry reads YAML, imports Python functions
2. Agent.run() called  → LLM receives tool schemas in API request
3. LLM wants a tool    → Returns tool_calls: [{name, arguments}]
4. Agent executes      → ToolRegistry.execute(name, args) → runs Python function
5. Result sent back    → Added to messages as role: "tool"
6. LLM sees result     → Calls another tool or returns final answer
7. Loop limit          → Max 10 rounds (prevents infinite loops)
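The numbered steps above amount to a bounded request/execute loop. A minimal sketch (hypothetical message and reply shapes; the real loop lives inside the `Agent` run method):

```python
# Sketch of the bounded tool-calling loop described above (hypothetical shapes).
MAX_TOOL_ROUNDS = 10  # step 7: prevents infinite loops

def run_with_tools(messages, call_llm, execute_tool):
    for _ in range(MAX_TOOL_ROUNDS):
        reply = call_llm(messages)             # tool schemas already sent in request
        tool_calls = reply.get("tool_calls")
        if not tool_calls:                     # step 6: no tool request = final answer
            return reply["content"]
        for call in tool_calls:                # steps 3-4: execute requested tools
            result = execute_tool(call["name"], call["arguments"])
            # step 5: feed the result back as a role: "tool" message
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("exceeded maximum tool-call rounds")
```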

Pre/Post Processing

Preprocess

A Python function that transforms user input before it reaches the LLM:

# preprocess.py
def preprocess(user_input: str) -> str:
    """Add context, clean input, augment with RAG results, etc."""
    context = fetch_relevant_docs(user_input)
    return f"Context:\n{context}\n\nQuestion: {user_input}"

Postprocess

A Python function that transforms LLM output after the response:

# postprocess.py
import json

def postprocess(llm_output):
    """Parse, validate, save to DB, trigger notifications, etc."""
    data = json.loads(llm_output)
    save_to_database(data)  # your persistence logic here
    return data

LLM Providers

The framework auto-detects the provider based on model name. You can also set it explicitly via provider in the config.

| Provider | Models | API Key Env Var |
|---|---|---|
| OpenAI | gpt-4, gpt-4o, gpt-4o-mini, o1, o3 | OPENAI_API_KEY |
| Anthropic | claude-sonnet-4-20250514, claude-3-haiku, claude-3-opus | ANTHROPIC_API_KEY |
| Google | gemini-pro, gemini-1.5-flash, gemini-2.0 | GOOGLE_API_KEY |
| Ollama | llama3, mistral, phi, qwen, deepseek, codellama | No key needed (local) |
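Auto-detection presumably keys off the model-name prefix. A sketch of one plausible heuristic (an assumption for illustration; the framework's actual logic lives in `llm/factory.py`):

```python
# Assumed provider auto-detection by model-name prefix (illustrative only).
def detect_provider(model: str) -> str:
    name = model.lower()
    if name.startswith(("gpt-", "o1", "o3")):
        return "openai"
    if name.startswith("claude"):
        return "anthropic"
    if name.startswith("gemini"):
        return "google"
    return "ollama"  # local models: llama3, mistral, phi, qwen, ...
```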

API Reference

See helper/API_REFERENCE.md for the complete reference. Summary:

PROD API

from agentsflow import load_agents, AgentsFlowConfig

| Function | Description |
|---|---|
| load_agents(agents_dir, env_path=None, config=None) | Load all agents → dict[str, Agent]; config is an optional AgentsFlowConfig |
| AgentsFlowConfig | SDK-wide config: log_level, silence_network_loggers, log_format |

DEV API

Available with pip install AgentsFlowCompiler-lib[dev]. Project metadata lives in an optional .afproj file; agent/tool/processing/monitoring APIs operate directly on the DEV base_dir.

Key data classes (AgentConfig, RunResult, Prompt, Tool, etc.), monitoring types (AuditLogEntry, RunHistoryEntry), and all errors can be imported directly from the top-level package:

from agentsflow import (
    AgentConfig, RunResult, Prompt, Tool, 
    AuditLogEntry, RunHistoryEntry,
    LLMRateLimitError, MaxToolRoundsError
)

Project

| Function | Description |
|---|---|
| create_project(project_name, dev_path, prod_path, ...) | Create optional .afproj metadata file |
| edit_project(project_config_path, **updates) | Edit project fields |
| get_project(project_config_path) | Read project config |

Agent

| Function | Description |
|---|---|
| create_agent(base_dir, agent_name, model_config, ...) | Create agent directory + config + prompts |
| edit_agent(base_dir, agent_id/name, ...) | Edit config fields (pass only what changes) |
| delete_agent(base_dir, agent_id/name) | Delete agent + remove from manifest |
| duplicate_agent(base_dir, new_name, source_name/id) | Deep copy with new name |
| validate_agent(base_dir, agent_id/name) | Raise AgentsFlowConfigError on invalid config |
| get_agent_config(base_dir, agent_id/name) | Get full AgentConfig |
| get_all_agents(base_dir) | List all registered agents |
| get_agent_prompts(base_dir, agent_id/name) | Retrieve full Prompt object from disk |

Tools

| Function | Description |
|---|---|
| add_tool(base_dir, script, identity, config, agent_name/id) | Add custom tool (ToolIdentityConfig + ToolConfig) |
| edit_tool(base_dir, tool_name/id, agent_name/id, ...) | Edit tool fields |
| remove_tool(base_dir, tool_name/id, agent_name/id) | Remove tool from agent |
| get_custom_tools(base_dir, agent_name/id) | List custom tools on agent |
| get_agent_builtin_tools(base_dir, agent_name/id) | List built-in tools used by agent |
| get_all_builtin_tools(tools_dir) | List all available built-in tools |
| get_full_script_tool(base_dir, agent_name/id, tool_name/id) | Read tool Python script from disk |

Processing

| Function | Description |
|---|---|
| add_preprocess(base_dir, agent_name/id) | Add preprocess (creates default script) |
| edit_preprocess(base_dir, agent_name/id, script) | Replace preprocess script |
| remove_preprocess(base_dir, agent_name/id) | Remove preprocess |
| add_postprocess(base_dir, agent_name/id) | Add postprocess (creates default script) |
| edit_postprocess(base_dir, agent_name/id, script) | Replace postprocess script |
| remove_postprocess(base_dir, agent_name/id) | Remove postprocess |
| get_preprocess_script(base_dir, agent_name/id) | Read preprocess script from disk |
| get_postprocess_script(base_dir, agent_name/id) | Read postprocess script from disk |

Monitoring

| Function | Description |
|---|---|
| get_prompt_history(base_dir, agent_name) | History of system prompt changes |
| get_prompt_from_hash(base_dir, hash, agent_name/id) | Read stored prompt by hash |
| get_run_history(base_dir, agent_name, from_date, to_date) | I/O logs with date filtering |
| get_run_details(base_dir, rid, agent_name/id) | Single run by run ID |
| get_token_usage(base_dir, agent_name, from_date, to_date) | Aggregated token stats |
| get_audit_logs(base_dir, agent_name/id) | Audit log entries |
| get_audit_log_from_timestamp(base_dir, timestamp, agent_name/id) | Single audit entry by timestamp |

Architecture

AgentsFlowCompiler-lib
├── agentsflow/             Python package (import name)
│   ├── __init__.py         PROD entry: load_agents()
│   ├── _prod.py            Production loader + .env support
│   │
│   ├── agent/              Core agent runtime
│   │   ├── agent.py            Agent class (run loop)
│   │   ├── config.py           Path resolution
│   │   ├── prompts.py          Prompt assembly & pre/post process
│   │   ├── tools.py            Tool execution wrapper
│   │   ├── stats.py            Logging & token tracking
│   │   └── _utils.py           Shared helpers
│   │
│   ├── llm/                LLM provider abstraction
│   │   ├── base.py             LLMClient abstract interface
│   │   ├── openai_client.py    OpenAI implementation
│   │   ├── anthropic_client.py Anthropic implementation
│   │   ├── google_client.py    Google Gemini implementation
│   │   ├── ollama_client.py    Ollama (local) implementation
│   │   └── factory.py          Auto-detect & create client
│   │
│   ├── schema/             Pydantic data models
│   │   ├── agent_config_schema.py   AgentConfig
│   │   ├── tool_config_schema.py    ToolConfig
│   │   └── tool_schema.py          ToolParameterConfig
│   │
│   ├── tools/              Tool registry
│   │   └── registry.py         Load, register, execute tools
│   │
│   ├── builder/            Agent construction
│   │   └── agents_builder.py   YAML manifest → Agent instances
│   │
│   └── dev/                DEV API (25 functions)
│       ├── project_api.py
│       ├── agent_api.py
│       ├── tool_api.py
│       ├── processing_api.py
│       └── monitoring_api.py
│
└── tests/                  101 tests
    ├── micro/                  Unit tests (99)
    │   ├── agent/
    │   └── api/
    └── macro/                  Integration tests (2)

Design Principles (SOLID)

| Principle | How It's Applied |
|---|---|
| Single Responsibility | Each file does one thing (one LLM provider per file, one schema per file) |
| Open/Closed | Adding a new LLM provider means a new file plus 2 lines in factory.py; no existing code changes |
| Liskov Substitution | All providers implement the LLMClient ABC and are fully interchangeable |
| Interface Segregation | LLMClient has a minimal interface: chat() + provider_name |
| Dependency Inversion | Agent depends on the LLMClient abstraction, never on concrete providers |
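The Liskov and Interface Segregation rows translate into a small abstract base class. A sketch of what such an interface could look like (hypothetical signatures; the real ABC lives in `agentsflow/llm/base.py`):

```python
# Hypothetical sketch of a minimal LLMClient abstraction (illustrative only).
from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def chat(self, messages: list[dict]) -> str:
        """Send messages to the provider and return the reply text."""

    @property
    @abstractmethod
    def provider_name(self) -> str:
        """Provider identifier, e.g. 'openai' or 'ollama'."""

class EchoClient(LLMClient):
    """Toy provider: any subclass is a drop-in replacement (Liskov)."""
    def chat(self, messages: list[dict]) -> str:
        return messages[-1]["content"]

    @property
    def provider_name(self) -> str:
        return "echo"
```

Because the `Agent` depends only on this abstraction, swapping OpenAI for Ollama is a configuration change rather than a code change.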

Testing

# All tests (101)
pytest tests/

# Unit tests only
pytest tests/micro/

# Integration tests only
pytest tests/macro/

# Specific module
pytest tests/micro/agent/
pytest tests/micro/api/

Full Example

from agentsflow.dev import (
    create_project, create_agent, add_tool,
    add_preprocess, edit_preprocess, validate_agent, get_all_agents,
)
# 1. Set up project
create_project("Financial Analysis", dev_path="/home/user/fin_project", dev_env_strategy="project_local")

# 2. Create agent
from agentsflow import (
    AgentModelConfig, Prompt, 
    ToolIdentityConfig, ToolConfig
)

create_agent(
    base_dir="/home/user/fin_project",
    agent_name="analyst",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.2),
    description="Financial research and analysis agent",
    prompts=Prompt(
        instruction="""You are a financial analyst.
    Use available tools to research companies and calculate metrics.
    Always show your reasoning.""",
        think="Break analysis into: data gathering → calculation → conclusion",
    ),
)

# 3. Add tool (script stored at tools/custom_tools/<name>/tool.py)
add_tool(
    base_dir="/home/user/fin_project",
    script="def get_stock_data(ticker: str):\n    return {}",
    identity=ToolIdentityConfig(name="stock_lookup", description="Get stock price and metrics", category="finance"),
    config=ToolConfig(
        function_name="get_stock_data",
        parameters={"ticker": {"type": "string", "description": "Stock ticker (AAPL, MSFT)", "required": True}},
        returns={"type": "object", "description": "Stock data"},
    ),
    agent_name="analyst",
)

# 4. Add preprocess (creates default script), then edit its content
add_preprocess("/home/user/fin_project", agent_name="analyst")
edit_preprocess("/home/user/fin_project", agent_name="analyst", script="def preprocess(prompt: str):\n    return prompt.strip()")

# 5. Validate
validate_agent("/home/user/fin_project", agent_name="analyst")

# 6. List all agents
agents = get_all_agents("/home/user/fin_project")
# [AgentConfig(...)]

Then in production:

from agentsflow import load_agents

agents = load_agents("/home/user/fin_project", env_path="/home/user/.env")
result = agents["analyst"].run("Analyze Apple's Q4 earnings and compare to Microsoft")
print(result.output)



Download files

Download the file for your platform.

Source Distribution

agentsflowcompiler_lib-0.2.0.tar.gz (119.8 kB)

Uploaded Source

Built Distribution


agentsflowcompiler_lib-0.2.0-py3-none-any.whl (187.3 kB)

Uploaded Python 3

File details

Details for the file agentsflowcompiler_lib-0.2.0.tar.gz.

File metadata

  • Download URL: agentsflowcompiler_lib-0.2.0.tar.gz
  • Size: 119.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes

Hashes for agentsflowcompiler_lib-0.2.0.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3caef875329593bd7c015243a8a1916e7db21f4ce2abff2c6f62a55bb43dc459 |
| MD5 | 9ed1525bc6fa4520c4fc7778d66344b1 |
| BLAKE2b-256 | cdd1edfb3110f14c39283bd45efb1cc4127fa9aa2b035f5c8ae75fea221e80a3 |


File details

Details for the file agentsflowcompiler_lib-0.2.0-py3-none-any.whl.


File hashes

Hashes for agentsflowcompiler_lib-0.2.0-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | adc5d825702236c8a7e27128724f92323dde61e07ffbc11a6e7373803120e251 |
| MD5 | 7809eda5ba7c63066fc6641c4293499a |
| BLAKE2b-256 | d78e1b9c9f2a84c5f02105dc11dde9d5b275c8253493b87119fab667f073d238 |

