An easy way to create structured AI agents
Flo AI
Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support
GitHub
•
Website
•
Documentation
What is Flo AI?
Flo AI is a Python framework that makes building production-ready AI agents and teams as easy as writing YAML. Think "Kubernetes for AI Agents" - compose complex AI architectures using pre-built components while maintaining the flexibility to create your own.
Key Features
- Truly Composable: Build complex AI systems by combining smaller, reusable components
- Production-Ready: Built-in best practices and optimizations for production deployments
- YAML-First: Define your entire agent architecture in simple YAML
- LLM-Powered Routing: Intelligent routing decisions made by LLMs, no code required
- Flexible: Use pre-built components or create your own
- Team-Oriented: Create and manage teams of AI agents working together
- OpenTelemetry Integration: Built-in observability with automatic instrumentation
Table of Contents
- Quick Start
- Flo AI Studio - Visual Workflow Designer
- Core Features
- Agent Orchestration with Arium
- OpenTelemetry Integration
- Examples & Documentation
- Why Flo AI?
- Contributing
Quick Start
Installation
```bash
pip install flo-ai

# or using poetry
poetry add flo-ai

# or using uv
uv add flo-ai
```
Your First Agent (30 seconds)
```python
import asyncio

from flo_ai.agent import AgentBuilder
from flo_ai.llm import OpenAI


async def main():
    # Create a simple conversational agent
    agent = (
        AgentBuilder()
        .with_name('Math Tutor')
        .with_prompt('You are a helpful math tutor.')
        .with_llm(OpenAI(model='gpt-4o-mini'))
        .build()
    )

    response = await agent.run('What is the formula for the area of a circle?')
    print(f'Response: {response}')


asyncio.run(main())
```
Tool-Using Agent
```python
import asyncio

from flo_ai.agent import AgentBuilder
from flo_ai.tool import flo_tool
from flo_ai.llm import Anthropic


@flo_tool(description="Perform mathematical calculations")
async def calculate(operation: str, x: float, y: float) -> float:
    """Calculate mathematical operations between two numbers."""
    operations = {
        'add': lambda: x + y,
        'subtract': lambda: x - y,
        'multiply': lambda: x * y,
        'divide': lambda: x / y if y != 0 else 0,
    }
    return operations.get(operation, lambda: 0)()


async def main():
    agent = (
        AgentBuilder()
        .with_name('Calculator Assistant')
        .with_prompt('You are a math assistant that can perform calculations.')
        .with_llm(Anthropic(model='claude-3-5-sonnet-20240620'))
        .with_tools([calculate.tool])
        .build()
    )

    response = await agent.run('Calculate 5 plus 3')
    print(f'Response: {response}')


asyncio.run(main())
```
Structured Output Agent
```python
import asyncio

from pydantic import BaseModel, Field
from flo_ai.agent import AgentBuilder
from flo_ai.llm import OpenAI


class MathSolution(BaseModel):
    solution: str = Field(description="Step-by-step solution")
    answer: str = Field(description="Final answer")
    confidence: float = Field(description="Confidence level (0-1)")


async def main():
    agent = (
        AgentBuilder()
        .with_name('Math Solver')
        .with_llm(OpenAI(model='gpt-4o'))
        .with_output_schema(MathSolution)
        .build()
    )

    response = await agent.run('Solve: 2x + 5 = 15')
    print(f'Structured Response: {response}')


asyncio.run(main())
```
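To illustrate what an output schema buys you, here is the same `MathSolution` model validated with plain Pydantic, independent of Flo AI. The JSON string is a hypothetical model reply, not real output:

```python
from pydantic import BaseModel, Field, ValidationError


class MathSolution(BaseModel):
    solution: str = Field(description='Step-by-step solution')
    answer: str = Field(description='Final answer')
    confidence: float = Field(description='Confidence level (0-1)')


# A well-formed reply parses into a typed object...
raw = '{"solution": "Subtract 5, then divide by 2.", "answer": "x = 5", "confidence": 0.95}'
result = MathSolution.model_validate_json(raw)
print(result.answer)  # x = 5

# ...while a malformed reply fails loudly instead of silently.
try:
    MathSolution.model_validate_json('{"solution": "incomplete"}')
except ValidationError as e:
    print(f'{len(e.errors())} missing fields')  # 2 missing fields
```

Validating at the boundary means downstream code can rely on typed fields rather than re-parsing free-form text.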
Core Features
LLM Providers
Flo AI supports multiple LLM providers with consistent interfaces:
```python
# OpenAI
from flo_ai.llm import OpenAI
llm = OpenAI(model='gpt-4o', temperature=0.7)

# Anthropic Claude
from flo_ai.llm import Anthropic
llm = Anthropic(model='claude-3-5-sonnet-20240620', temperature=0.7)

# Google Gemini
from flo_ai.llm import Gemini
llm = Gemini(model='gemini-2.5-flash', temperature=0.7)

# Google VertexAI
from flo_ai.llm import VertexAI
llm = VertexAI(model='gemini-2.5-flash', project='your-project')

# Ollama (local)
from flo_ai.llm import Ollama
llm = Ollama(model='llama2', base_url='http://localhost:11434')
```
Tools & @flo_tool Decorator
Create custom tools easily with the @flo_tool decorator:
```python
from typing import Optional

from flo_ai.agent import AgentBuilder
from flo_ai.llm import OpenAI
from flo_ai.tool import flo_tool


@flo_tool(description="Get current weather for a city")
async def get_weather(city: str, country: Optional[str] = None) -> str:
    """Get weather information for a specific city."""
    # Your weather API implementation
    return f"Weather in {city}: sunny, 25°C"


# Use in agent
agent = (
    AgentBuilder()
    .with_name('Weather Assistant')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .with_tools([get_weather.tool])
    .build()
)
```
Variables System
Dynamic variable resolution in agent prompts using <variable_name> syntax:
```python
# Create agent with variables
agent = (
    AgentBuilder()
    .with_name('Data Analyst')
    .with_prompt('Analyze <dataset_path> and focus on <key_metric>. Generate insights for <target_audience>.')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .build()
)

# Define variables at runtime
variables = {
    'dataset_path': '/data/sales_q4_2024.csv',
    'key_metric': 'revenue growth',
    'target_audience': 'executive team',
}

result = await agent.run(
    'Please provide a comprehensive analysis with actionable recommendations.',
    variables=variables,
)
```
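Conceptually, variable resolution amounts to substituting each `<variable_name>` placeholder with its runtime value before the prompt reaches the LLM. A minimal illustrative sketch of that substitution (not Flo AI's actual implementation):

```python
import re


def resolve_variables(prompt: str, variables: dict) -> str:
    """Replace each <variable_name> placeholder with its runtime value.

    Placeholders with no matching variable are left untouched.
    """
    return re.sub(
        r'<(\w+)>',
        lambda m: str(variables.get(m.group(1), m.group(0))),
        prompt,
    )


prompt = 'Analyze <dataset_path> and focus on <key_metric>.'
print(resolve_variables(prompt, {'dataset_path': '/data/sales.csv', 'key_metric': 'revenue growth'}))
# Analyze /data/sales.csv and focus on revenue growth.
```

Because substitution happens at run time, one agent definition can be reused across datasets, metrics, and audiences.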
Document Processing
Process PDF and TXT documents with AI agents:
```python
from flo_ai.models.document import DocumentMessage, DocumentType

# Create document message
document = DocumentMessage(
    document_type=DocumentType.PDF,
    document_file_path='business_report.pdf',
)

# Process with agent
agent = (
    AgentBuilder()
    .with_name('Document Analyzer')
    .with_prompt('Analyze the provided document and extract key insights.')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .build()
)

result = await agent.run([document])
```
Output Formatting
Use Pydantic models for structured outputs:
```python
from pydantic import BaseModel, Field


class AnalysisResult(BaseModel):
    summary: str = Field(description="Executive summary")
    key_findings: list = Field(description="List of key findings")
    recommendations: list = Field(description="Actionable recommendations")


agent = (
    AgentBuilder()
    .with_name('Business Analyst')
    .with_llm(OpenAI(model='gpt-4o'))
    .with_output_schema(AnalysisResult)
    .build()
)
```
Error Handling
Built-in retry mechanisms and error recovery:
```python
agent = (
    AgentBuilder()
    .with_name('Robust Agent')
    .with_llm(OpenAI(model='gpt-4o'))
    .with_retries(3)  # Retry up to 3 times on failure
    .build()
)
```
Agent Orchestration with Arium
Arium is Flo AI's powerful workflow orchestration engine for creating complex multi-agent workflows.
Simple Agent Chains
```python
from flo_ai.arium import AriumBuilder
from flo_ai.agent import Agent
from flo_ai.llm import OpenAI


async def simple_chain():
    llm = OpenAI(model='gpt-4o-mini')

    # Create agents
    analyst = Agent(
        name='content_analyst',
        system_prompt='Analyze the input and extract key insights.',
        llm=llm,
    )
    summarizer = Agent(
        name='summarizer',
        system_prompt='Create a concise summary based on the analysis.',
        llm=llm,
    )

    # Build and run workflow
    result = await (
        AriumBuilder()
        .add_agents([analyst, summarizer])
        .start_with(analyst)
        .connect(analyst, summarizer)
        .end_with(summarizer)
        .build_and_run(["Analyze this complex business report..."])
    )
    return result
```
Conditional Routing
```python
from flo_ai.arium.memory import BaseMemory


def route_by_type(memory: BaseMemory) -> str:
    """Route based on classification result."""
    messages = memory.get()
    last_message = str(messages[-1]) if messages else ""
    if "technical" in last_message.lower():
        return "tech_specialist"
    else:
        return "business_specialist"


# Build workflow with conditional routing
result = await (
    AriumBuilder()
    .add_agents([classifier, tech_specialist, business_specialist, final_agent])
    .start_with(classifier)
    .add_edge(classifier, [tech_specialist, business_specialist], route_by_type)
    .connect(tech_specialist, final_agent)
    .connect(business_specialist, final_agent)
    .end_with(final_agent)
    .build_and_run(["How can we optimize our database performance?"])
)
```
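Because a router is a plain function over workflow memory, it can be unit-tested in isolation with a stub memory object. A minimal sketch, where `StubMemory` is a stand-in for illustration rather than a Flo AI class:

```python
class StubMemory:
    """Stand-in for Arium's memory: just returns canned messages."""

    def __init__(self, messages):
        self._messages = messages

    def get(self):
        return self._messages


def route_by_type(memory) -> str:
    """Route based on classification result (same logic as above)."""
    messages = memory.get()
    last_message = str(messages[-1]) if messages else ''
    if 'technical' in last_message.lower():
        return 'tech_specialist'
    return 'business_specialist'


print(route_by_type(StubMemory(['This is a Technical question about indexes'])))
# tech_specialist
print(route_by_type(StubMemory(['What is our pricing strategy?'])))
# business_specialist
```

Keeping routing logic in small pure functions like this makes workflow behavior testable without touching an LLM.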
YAML-Based Workflows
Define entire workflows in YAML:
```yaml
metadata:
  name: "content-analysis-workflow"
  version: "1.0.0"
  description: "Multi-agent content analysis pipeline"

arium:
  agents:
    - name: "analyzer"
      role: "Content Analyst"
      job: "Analyze the input content and extract key insights."
      model:
        provider: "openai"
        name: "gpt-4o-mini"
    - name: "summarizer"
      role: "Content Summarizer"
      job: "Create a concise summary based on the analysis."
      model:
        provider: "anthropic"
        name: "claude-3-5-sonnet-20240620"

  workflow:
    start: "analyzer"
    edges:
      - from: "analyzer"
        to: ["summarizer"]
    end: ["summarizer"]
```
```python
# Run YAML workflow
result = await (
    AriumBuilder()
    .from_yaml(yaml_str=workflow_yaml)
    .build_and_run(["Analyze this quarterly business report..."])
)
```
LLM-Powered Routers
Define intelligent routing logic directly in YAML:
```yaml
routers:
  - name: "content_type_router"
    type: "smart"  # Uses an LLM for intelligent routing
    routing_options:
      technical_writer: "Technical content, documentation, tutorials"
      creative_writer: "Creative writing, storytelling, fiction"
      marketing_writer: "Marketing copy, sales content, campaigns"
    model:
      provider: "openai"
      name: "gpt-4o-mini"
```
ReflectionRouter & PlanExecuteRouter
ReflectionRouter for A→B→A→C feedback patterns:
```yaml
routers:
  - name: "reflection_router"
    type: "reflection"
    flow_pattern: [writer, critic, writer]  # A → B → A pattern
    model:
      provider: "openai"
      name: "gpt-4o-mini"
```
PlanExecuteRouter for Cursor-style plan-and-execute workflows:
```yaml
routers:
  - name: "plan_router"
    type: "plan_execute"
    agents:
      planner: "Creates detailed execution plans"
      developer: "Implements features according to plan"
      tester: "Tests implementations and validates functionality"
      reviewer: "Reviews and approves completed work"
    settings:
      planner_agent: planner
      executor_agent: developer
      reviewer_agent: reviewer
```
OpenTelemetry Integration
Built-in observability for production monitoring:
```python
from flo_ai import configure_telemetry, shutdown_telemetry

# Configure at startup
configure_telemetry(
    service_name="my_ai_app",
    service_version="1.0.0",
    console_export=True,  # For debugging
)

# Your application code here...

# Shut down to flush data
shutdown_telemetry()
```
Complete Telemetry Guide →
Examples & Documentation
Examples Directory
Check out the examples/ directory for comprehensive examples:
- `agent_builder_usage.py` - Basic agent creation patterns
- `yaml_agent_example.py` - YAML-based agent configuration
- `output_formatter.py` - Structured output examples
- `multi_tool_example.py` - Multi-tool agent examples
- `document_processing_example.py` - Document processing with PDF and TXT files
Documentation
Visit our website to learn more.
Additional Resources:
- `@flo_tool` Decorator Guide - Complete guide to the `@flo_tool` decorator
- Examples Directory - Ready-to-run code examples
- Contributing Guide - How to contribute to Flo AI
Why Flo AI?
For Developers
- Simple Setup: Get started in minutes with minimal configuration
- Flexible: Use YAML or code-based configuration
- Production Ready: Built-in error handling and retry mechanisms
- Multi-LLM: Switch between providers easily
For Teams
- Maintainable: YAML-first approach makes configurations versionable
- Testable: Each component can be tested independently
- Scalable: From simple agents to complex multi-tool systems
Use Cases
- Customer Service Automation
- Data Analysis and Processing
- Content Generation and Summarization
- Research and Information Retrieval
- Task-Specific AI Assistants
- Email Analysis and Classification
Contributing
We love your input! Check out our Contributing Guide to get started. Ways to contribute:
- Report bugs
- Propose new features
- Improve documentation
- Submit PRs
License
Flo AI is MIT Licensed.