# 🔀 AgentsFlow
Visual Agent Workflow Engine — Run AI agent pipelines from YAML configuration.
Define your agents, connect them in a graph, and let AgentsFlow handle the rest. Works with OpenAI, Anthropic, Google Gemini, and local models via Ollama.
## Installation

```bash
pip install agentsflow
```
With additional providers:
```bash
pip install agentsflow[anthropic]   # + Claude support
pip install agentsflow[google]      # + Gemini support
pip install agentsflow[all]         # All providers
```
## Quick Start

### 1. Define your agent in YAML

```yaml
# agents/analyzer/config.yaml
analyzer:
  agent_id: node_001
  description: Analyzes input text for key themes
  model: gpt-4
  temperature: 0.0
  max_tokens: 4096
  instruction_path: instruction.md
  think_path: think.md
  return_format: json
  json_schema_path: output_schema.json
```
### 2. Create your prompt files

```markdown
<!-- agents/analyzer/instruction.md -->
You are an expert text analyzer.
Given input text, identify the key themes and return structured analysis.
```
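The config above also references `think.md` and `output_schema.json`; their contents are project-specific. A hypothetical `think.md` for this analyzer might look like:

```markdown
<!-- agents/analyzer/think.md (hypothetical example) -->
Before answering, consider:
- What are the dominant themes in the text?
- Which passages support each theme?
- Is anything ambiguous enough to flag in the output?
```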
### 3. Run it

```python
from pathlib import Path

import yaml

from agentsflow.agent import Agent
from agentsflow.schema import AgentConfig

# Load config
with open("agents/analyzer/config.yaml") as f:
    raw = yaml.safe_load(f)
config = AgentConfig.from_yaml("analyzer", raw["analyzer"])

# Create and run agent
agent = Agent(
    base_agents_dir=Path("."),
    config=config,
    api_key="sk-...",
)

result = agent.run("Analyze the impact of remote work on productivity")
print(result)
```
## Multi-Provider Support
AgentsFlow auto-detects the provider from the model name:
```yaml
# OpenAI
model: gpt-4
model: gpt-4o

# Anthropic
model: claude-sonnet-4-20250514

# Google
model: gemini-2.0-flash

# Local (Ollama)
model: llama3.1
model: deepseek-coder
```
Or force a provider explicitly:
```yaml
model: my-custom-model
provider: openai
```
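Detection is purely name-based. A rough sketch of how such prefix routing could work, illustrative only and not AgentsFlow's actual implementation:

```python
# Illustrative prefix-based routing; the library's real detection logic may differ.
def detect_provider(model: str) -> str:
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "gemini-": "google",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    # Anything unrecognized (llama3.1, deepseek-coder, ...) is treated as a
    # local Ollama model in this sketch.
    return "ollama"

assert detect_provider("gpt-4o") == "openai"
assert detect_provider("llama3.1") == "ollama"
```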
## Environment Variables
Set the API key for your provider:
```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AI...
# Ollama needs no key
```
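If you build agents in code, one option is to read the key from the environment yourself and pass it to the constructor shown in the Quick Start (a minimal sketch reusing that setup):

```python
import os
from pathlib import Path

import yaml

from agentsflow.agent import Agent
from agentsflow.schema import AgentConfig

with open("agents/analyzer/config.yaml") as f:
    raw = yaml.safe_load(f)
config = AgentConfig.from_yaml("analyzer", raw["analyzer"])

# Read the key from the environment instead of hard-coding it
agent = Agent(
    base_agents_dir=Path("."),
    config=config,
    api_key=os.environ["OPENAI_API_KEY"],
)
```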
## Agent Pipeline

Each agent supports a full processing pipeline:

```
input → preprocess() → LLM → postprocess() → output
```
Configure preprocessing and postprocessing in YAML:
```yaml
analyzer:
  agent_id: node_001
  model: gpt-4
  instruction_path: instruction.md
  preprocess_path: preprocess.py
  preprocess_function_name: clean_input
  postprocess_path: postprocess.py
  postprocess_function_name: format_output
```

```python
# agents/analyzer/preprocess.py
def clean_input(raw_input: str) -> str:
    return raw_input.strip().lower()
```

```python
# agents/analyzer/postprocess.py
def format_output(llm_output: dict) -> dict:
    llm_output["processed"] = True
    return llm_output
```
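With these hooks configured, a single `run()` call passes through both functions. A sketch, assuming an agent built from this config as in the Quick Start:

```python
# clean_input() strips and lowercases the raw prompt before it reaches the
# model; format_output() then adds the "processed" flag to the model's output.
result = agent.run("  Analyze the Impact of Remote Work on Productivity  ")
print(result)  # e.g. {"analysis": "...", "processed": True}
```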
## Logging

AgentsFlow automatically logs everything per agent:

```
agents/
  analyzer/
    logs/
      input_output/
        2026-02-09.json    # Full pipeline IO per day
      audit/
        2026-02-09.log     # Token usage per day
      system_prompt.json   # Prompt version history (hash-based)
```
### IO Log Format (`input_output/2026-02-09.json`)

```json
[
  {
    "date": "2026-02-09 14:30:00",
    "system_prompt_hash": "a3f2b8c1...",
    "input": "raw user input",
    "preprocess_output": "cleaned input",
    "llm_output": {"analysis": "..."},
    "postprocess_output": {"analysis": "...", "processed": true},
    "token_input": 342,
    "token_output": 128
  }
]
```
The fields `preprocess_output` and `postprocess_output` appear only when those functions are configured.
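Because the logs are plain JSON, they are easy to analyze offline. For example, a small script to total a day's token usage (assuming the layout shown above):

```python
import json
from pathlib import Path

log_path = Path("agents/analyzer/logs/input_output/2026-02-09.json")
entries = json.loads(log_path.read_text())

total_in = sum(e["token_input"] for e in entries)
total_out = sum(e["token_output"] for e in entries)
print(f"{len(entries)} calls, {total_in} input tokens, {total_out} output tokens")
```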
### System Prompt Log (`system_prompt.json`)

Tracks prompt changes over time. A new entry is written only when the prompt hash changes:

```json
[
  {
    "date": "2026-02-09 14:00:00",
    "system_prompt_hash": "a3f2b8c1...",
    "instruction_prompt": "You are an expert...",
    "think_prompt": "Consider the following...",
    "return_prompt": "Return JSON with...",
    "next_model_rule_prompt": null,
    "example_prompt": "Input: ... Output: ..."
  }
]
```
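The hash-based deduplication can be pictured roughly as follows (an illustrative sketch; the hash algorithm and exact comparison are assumptions, not AgentsFlow's actual code):

```python
import hashlib
import json
from pathlib import Path

def prompt_changed(log_path: Path, full_prompt: str) -> bool:
    """Return True if the prompt differs from the latest logged entry."""
    new_hash = hashlib.sha256(full_prompt.encode("utf-8")).hexdigest()  # assumed SHA-256
    if not log_path.exists():
        return True
    entries = json.loads(log_path.read_text())
    return not entries or entries[-1]["system_prompt_hash"] != new_hash
```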
## Building Multiple Agents

Use `agents_builder` to load all agents from a directory:

```python
from pathlib import Path

from agentsflow.agents_builder import agents_configuration

# Expects: agents_dir/config/agents.yaml listing all agents
agents = agents_configuration(Path("./my_project"))

# Run a specific agent
result = agents["analyzer"].run("Hello world")
```
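Because every loaded agent exposes the same `run()` interface, chaining them is plain function composition. A sketch using the `analyzer` and `optimizer` agents from the directory layout below:

```python
# Both agents come from the dict returned by agents_configuration();
# the analyzer's output becomes the optimizer's input.
analysis = agents["analyzer"].run("Analyze the impact of remote work on productivity")
optimized = agents["optimizer"].run(str(analysis))
print(optimized)
```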
## Directory Structure

```
my_project/
  config/
    agents.yaml            # Manifest listing all agents
  agents/
    analyzer/
      config.yaml          # Agent config
      instruction.md       # System prompt
      think.md             # Thinking guidelines
      output_schema.json   # JSON schema for output
      logs/                # Auto-created
    optimizer/
      config.yaml
      instruction.md
      preprocess.py
      postprocess.py
      logs/
```
### Manifest (`config/agents.yaml`)

```yaml
agents:
  - name: analyzer
    config_path: agents/analyzer/config.yaml
  - name: optimizer
    config_path: agents/optimizer/config.yaml
```
## API Reference

### AgentConfig

Pydantic model for agent configuration. Key fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Agent name |
| `agent_id` | `str` | `""` | Unique node ID |
| `model` | `str` | `"gpt-4"` | LLM model identifier |
| `provider` | `str?` | auto-detect | Force provider |
| `temperature` | `float` | `0.0` | Sampling temperature (0-2) |
| `max_tokens` | `int?` | `None` | Max output tokens |
| `instruction_path` | `Path?` | `None` | System prompt file |
| `think_path` | `Path?` | `None` | Thinking guidelines |
| `return_path` | `Path?` | `None` | Output format instructions |
| `example_path` | `Path?` | `None` | Few-shot examples |
| `preprocess_path` | `Path?` | `None` | Preprocess module |
| `postprocess_path` | `Path?` | `None` | Postprocess module |
| `return_format` | `str` | `"text"` | `text` / `json_object` / `json` / `markdown` |
| `json_schema_path` | `Path?` | `None` | Structured output schema |
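Since `AgentConfig` is a Pydantic model, it can also be built in code rather than loaded from YAML. A minimal sketch using only the fields documented above (keyword names follow the table; defaults cover the rest):

```python
from pathlib import Path

from agentsflow.schema import AgentConfig

config = AgentConfig(
    name="analyzer",
    agent_id="node_001",
    model="gpt-4",
    temperature=0.0,
    max_tokens=4096,
    instruction_path=Path("instruction.md"),
    return_format="json",
    json_schema_path=Path("output_schema.json"),
)
```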
### Agent

```python
agent = Agent(base_agents_dir, config, api_key, base_url=None)
result = agent.run(user_prompt)
summary = agent.get_tokens_summary()
```
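The `base_url` parameter presumably points the underlying client at a custom endpoint, for example a local Ollama server; that reading is an assumption based on the constructor signature, shown here with Ollama's default address:

```python
from pathlib import Path

from agentsflow.agent import Agent

# Assumption: base_url overrides the API endpoint (here, Ollama's default).
# `config` would be an AgentConfig with e.g. model: llama3.1.
local_agent = Agent(
    base_agents_dir=Path("."),
    config=config,
    api_key="ollama",  # placeholder value; Ollama needs no real key
    base_url="http://localhost:11434",
)
```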
### create_llm_client

```python
from agentsflow.llm_client import create_llm_client

client = create_llm_client(model="claude-sonnet-4-20250514", api_key="sk-ant-...")
response = client.chat(messages=[...], temperature=0.0)
```
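The shape of `messages` is not spelled out above; assuming the familiar role/content dictionaries used by most chat APIs, a call might look like this:

```python
# Assumes OpenAI-style role/content message dicts; the exact message format
# expected by client.chat() is an assumption, not documented above.
response = client.chat(
    messages=[
        {"role": "system", "content": "You are an expert text analyzer."},
        {"role": "user", "content": "Summarize the key themes of this paragraph."},
    ],
    temperature=0.0,
)
print(response)
```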
## License
MIT