A framework for orchestrating flow-based agentic workflows using LLMs.
Fluxion
Fluxion is a Python library for building and managing agentic workflows. Designed with modularity, scalability, and extensibility in mind, Fluxion simplifies the integration of locally or remotely hosted language models (LLMs) powered by Ollama. By leveraging its robust architecture, developers can create intelligent systems capable of natural conversation, contextual reasoning, tool invocation, and autonomous decision-making, enabling seamless orchestration of complex, dynamic workflows.
Table of Contents
- Features
- Installation
- Setting Up Ollama
- Usage
- Architecture Overview
- Examples
- Contributing
- Roadmap
- License
Features
Fluxion provides a powerful suite of tools and functionalities to enable the development of intelligent, flow-based workflows with ease and flexibility:
🧩 Modular and Extensible Design
- Build dynamic workflows by combining modular components that can easily be extended or customized for specific use cases.
- Integrate with various language models (LLMs), such as those powered by Ollama, while maintaining flexibility for custom implementations.
🤖 Agent-Based Framework
- Create and manage intelligent agents with capabilities for:
- Conversational Interactions: Enable rich dialogue-based workflows.
- Decision-Making: Empower agents to reason and make autonomous decisions.
- Delegation: Assign tasks to specialized agents or fall back to a generic agent when required.
⚙️ Tool Calling
- Seamlessly invoke tools or external functions directly from LLM responses, including:
- Dynamic tool selection based on task requirements.
- Robust input validation and schema enforcement for safe and reliable tool usage.
📚 Plan-Based Execution
- Generate structured task plans with support for:
- High-Level Task Planning: Automatically create step-by-step plans for complex workflows.
- Step Interpretation and Execution: Execute tasks step-by-step, handling dependencies and failures gracefully.
🧠 Contextual Reasoning
- Leverage contextual information to improve decision-making and task execution:
- Pass relevant task context to agents and tools.
- Enhance LLM interactions with additional metadata, agent schemas, and task history.
🚦 Agent Coordination and Delegation
- Orchestrate multi-agent workflows:
- Use the CoordinationAgent to dynamically assign tasks to appropriate agents.
- Employ the DelegationAgent to delegate tasks intelligently based on task descriptions, available agents, and fallback strategies.
🌐 LLM Integration
- Connect seamlessly with locally or remotely hosted LLMs using:
- High-performance APIs powered by Ollama.
- Support for advanced LLM capabilities, including tool calls, chat, and query modules.
🛠️ Built for Developers
- Comprehensive and well-documented API with support for:
- Easy-to-understand abstractions for workflows and agents.
- Fine-grained control over execution flows and error handling.
- A growing suite of examples and pre-built agents to accelerate development.
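The tool-calling features above depend on exposing Python functions to an LLM through a machine-readable schema. As a generic illustration of that idea (plain Python, not Fluxion's actual implementation; `build_tool_schema` is a hypothetical helper name):

```python
import inspect
from typing import get_type_hints

def build_tool_schema(func):
    """Derive a minimal JSON-schema-like description from a function signature.

    Generic sketch of how tool-calling frameworks describe Python functions
    to an LLM; Fluxion's real registry adds validation and richer metadata.
    """
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}
    hints = get_type_hints(func)
    params = {
        name: {"type": type_map.get(hints.get(name), "string")}
        for name in inspect.signature(func).parameters
    }
    return {"name": func.__name__, "parameters": params}

def get_weather(city_name: str) -> dict:
    return {"temperature": 20, "description": "sunny"}

print(build_tool_schema(get_weather))
# {'name': 'get_weather', 'parameters': {'city_name': {'type': 'string'}}}
```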
Installation
Prerequisites
- Python 3.11+
- Anaconda or a virtual environment
- Ollama installed for hosting LLMs
- System Dependencies:
  - Ubuntu/Debian: `sudo apt install portaudio19-dev graphviz`
  - macOS: `brew install portaudio graphviz`
Steps
GitHub Installation

1. Clone the repository and install:

```bash
git clone https://github.com/ymitiku/fluxion.git
cd fluxion
pip install -e .
```

2. Set up the environment:

```bash
bash scripts/setup_env.sh
```

3. Run unit tests:

```bash
bash scripts/run_tests.sh
```

4. Build documentation (optional):

```bash
bash scripts/build_docs.sh
```

PyPI Installation

```bash
pip install fluxion-ai-python
```
Setting Up Ollama
Fluxion uses Ollama to connect to locally hosted LLMs. Here's how to set it up:
1. Install Ollama: Follow the Ollama Installation Guide.

2. Download a model:

```bash
ollama pull llama3.2
```

3. Start the Ollama server:

```bash
ollama serve
```

4. Verify the server: confirm it is running at http://localhost:11434.
Usage
Calling Locally Hosted LLMs
Use Fluxion to interact with locally hosted LLMs. For example:
Using the generation endpoint:

```python
from fluxion_ai.core.modules.llm_modules import LLMQueryModule

# Initialize the LLMQueryModule
llm_query = LLMQueryModule(endpoint="http://localhost:11434/api/generate", model="llama3.2")

response = llm_query.execute(prompt="What is the capital of France?")
print("Query Response:", response)
# Query Response: The capital of France is Paris.
```
Using the chat endpoint:

```python
from fluxion_ai.core.modules.llm_modules import LLMChatModule

# Initialize the LLMChatModule
llm_chat = LLMChatModule(endpoint="http://localhost:11434/api/chat", model="llama3.2")

messages = [
    {"role": "user", "content": "Hello!"},
]
response = llm_chat.execute(messages=messages)
print("Chat Response:", response)
# Chat Response: {'role': 'assistant', 'content': 'How can I assist you today?'}
```
Using Agents for Chat and Tool Calls
Agents can perform tool calls dynamically:
```python
from fluxion_ai.core.agents.llm_agent import LLMChatAgent
from fluxion_ai.core.modules.llm_modules import LLMChatModule
from fluxion_ai.models.message_model import Message, MessageHistory

# Define a tool function
def get_weather(city_name: str) -> dict:
    return {"temperature": 20, "description": "sunny"}

# Initialize the LLMChatModule
llm_module = LLMChatModule(endpoint="http://localhost:11434/api/chat", model="llama3.2")

# Initialize the agent and register the tool
llm_agent = LLMChatAgent(name="WeatherAgent", llm_module=llm_module)
llm_agent.register_tool(get_weather)

# Execute a conversation
messages = MessageHistory(messages=[Message(role="user", content="What's the weather in Paris?")])
response = llm_agent.execute(messages=messages)
print("Chat with Tool Call Response:", response)
```
Using call_agent with LLMChatAgent
call_agent allows dynamic invocation of agents, enabling seamless communication between agents in a modular workflow. It supports retries, backoff strategies, and fallback logic for robust execution.
Here’s how to integrate it with an LLMChatAgent:
```python
from fluxion_ai.core.registry.tool_registry import call_agent
from fluxion_ai.core.agents.llm_agent import LLMChatAgent
from fluxion_ai.core.modules.llm_modules import LLMChatModule

# Define a tool function
def get_weather(city_name: str) -> dict:
    return {"temperature": 20, "description": "sunny"}

# Initialize the LLMChatModule
llm_module = LLMChatModule(endpoint="http://localhost:11434/api/chat", model="llama3.2")

# Initialize the agent and register the tool
llm_agent = LLMChatAgent(name="WeatherAgent", llm_module=llm_module)
llm_agent.register_tool(get_weather)

# Register call_agent for dynamic agent invocation
llm_agent.register_tool(call_agent)
```
Advanced Features

- Retries:
  - Specify the maximum number of retries for an agent call.
  - Use `retry_backoff` to introduce a delay between retries.
- Fallback Logic:
  - Define a fallback function to handle cases where retries are exhausted.

Example: Using Retries and Fallback
```python
def fallback_logic(inputs):
    return {"message": "Unable to complete request. Please try again later."}

messages = MessageHistory(messages=[Message(role="user", content="What's the capital of Ireland?")])

try:
    result = call_agent(
        agent_name="mock_agent",
        messages=messages,
        max_retries=3,
        retry_backoff=0.5,
        fallback=fallback_logic,
    )
    print("Agent Result:", result)
except RuntimeError as e:
    print("Error calling agent:", e)
```
What This Does:
- Retries the agent call up to three times.
- Uses a backoff delay of 0.5 seconds between retries.
- Executes the fallback logic if all retries fail.
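The same retry-with-backoff pattern can be sketched in plain Python (a generic illustration, not Fluxion's internal implementation; `call_with_retries` and `flaky` are hypothetical names):

```python
import time

def call_with_retries(func, max_retries=3, retry_backoff=0.5, fallback=None):
    """Generic retry loop with a fixed backoff delay and optional fallback."""
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(retry_backoff)  # wait before the next attempt
    if fallback is not None:
        return fallback()
    raise RuntimeError("All retries exhausted and no fallback provided.")

# A flaky function that succeeds on the third attempt
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ValueError("transient failure")
    return "ok"

print(call_with_retries(flaky, max_retries=3, retry_backoff=0.01))  # ok

# Fallback kicks in when every attempt fails
print(call_with_retries(lambda: 1 / 0, max_retries=2, retry_backoff=0.01,
                        fallback=lambda: "fallback result"))  # fallback result
```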
When to Use call_agent
- Inter-Agent Communication:
- When agents need to invoke other agents dynamically in workflows.
- Error Recovery:
- To handle transient failures with retries or provide fallback results for irrecoverable errors.
- Modular Workflow Design:
- Enables complex workflows with minimal coupling between agents.
Building Workflows
Fluxion supports creating workflows using agent nodes:
```python
from fluxion_ai.core.agents.llm_agent import LLMChatAgent
from fluxion_ai.core.modules.llm_modules import LLMChatModule
from fluxion_ai.workflows.agent_node import AgentNode
from fluxion_ai.workflows.abstract_workflow import AbstractWorkflow
from fluxion_ai.models.message_model import Message, MessageHistory

class CustomWorkflow(AbstractWorkflow):
    def define_workflow(self):
        module = LLMChatModule(endpoint="http://localhost:11434/api/chat", model="llama3.2")
        node1 = AgentNode(name="Node1", agent=LLMChatAgent("Agent1", llm_module=module))
        node2 = AgentNode(name="Node2", agent=LLMChatAgent("Agent2", llm_module=module))
        self.add_node(node1)
        self.add_node(node2)

workflow = CustomWorkflow(name="ExampleWorkflow")
workflow.define_workflow()

inputs = {"messages": MessageHistory(messages=[Message(role="user", content="Hello!")])}
results = workflow.execute(inputs=inputs)
print("Workflow Results:", results)
```
Architecture Overview
Fluxion is designed with a modular and extensible architecture that emphasizes scalability, interoperability, and flexibility. The following key components form the foundation of Fluxion's ecosystem:
1. Core Components
- Agents
- The primary building blocks for creating intelligent workflows. Each agent is specialized for a particular task or functionality.
- Types of agents include:
- LLMQueryAgent: Executes single-turn tasks and queries using a language model.
- LLMChatAgent: Handles multi-turn conversations with memory and context support.
- PlanningAgent: Generates structured plans and executes workflows step-by-step.
- CoordinationAgent: Dynamically orchestrates tasks by selecting and calling appropriate agents or tools.
- DelegationAgent: Delegates tasks intelligently to other agents or handles them directly using fallback mechanisms.
- Tool Registry
- A centralized registry for managing tools that can be called by agents or workflows.
- Ensures input validation, metadata management, and consistent integration with LLMs.
- Agent Registry
- Tracks all available agents in the system, allowing agents to dynamically discover and collaborate with each other.
- Provides metadata for agents, including input/output schemas and descriptions, to facilitate coordination.
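The registry idea behind both components can be illustrated with a minimal sketch (conceptual only; Fluxion's actual ToolRegistry and AgentRegistry add schema validation and richer metadata on top of this pattern, and `Registry` here is a hypothetical name):

```python
class Registry:
    """Minimal name -> object registry with per-entry metadata."""

    def __init__(self):
        self._entries = {}

    def register(self, name, obj, description=""):
        # Reject duplicate names so lookups stay unambiguous
        if name in self._entries:
            raise ValueError(f"'{name}' is already registered")
        self._entries[name] = {"object": obj, "description": description}

    def get(self, name):
        return self._entries[name]["object"]

    def describe(self):
        # Metadata an orchestrating agent could use for dynamic discovery
        return {name: meta["description"] for name, meta in self._entries.items()}

registry = Registry()
registry.register("get_weather", lambda city: {"temperature": 20}, "Returns weather for a city")
print(registry.describe())
# {'get_weather': 'Returns weather for a city'}
```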
2. Modules
- LLM Modules
- Abstractions for connecting to locally or remotely hosted language models, such as those powered by Ollama.
- Modules include:
- LLMQueryModule: Executes single-turn queries.
- LLMChatModule: Manages multi-turn conversational interactions, tool calls, and delegation logic.
- IR Modules
  - EmbeddingApiModule: Provides an interface to an embedding API, returning embeddings for a given text.
  - IndexingModule: Builds semantic search indexes that can be used to find similar documents.
  - RetrievalModule: Retrieves the most relevant documents from a given index.
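Conceptually, these modules form an embed → index → retrieve pipeline. A toy sketch of that pipeline (plain Python with a trivial letter-frequency stand-in for real embeddings; this is not Fluxion's API, and `embed`, `cosine`, and `retrieve` are hypothetical names):

```python
import math

def embed(text):
    # Stand-in embedding: letter-frequency vector.
    # A real EmbeddingApiModule would call an embedding API instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# IndexingModule analogue: embed every document once, up front
docs = ["Paris is the capital of France", "The weather is sunny", "Berlin is in Germany"]
index = [(doc, embed(doc)) for doc in docs]

# RetrievalModule analogue: rank documents by similarity to the query
def retrieve(query, index, top_k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

print(retrieve("capital of France", index))
# ['Paris is the capital of France']
```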
3. Workflow Execution
Fluxion enables the construction, execution, and monitoring of agent-based workflows. It integrates key components like AgentNodes, AbstractWorkflows, and orchestration adapters for external platforms like Flyte.
- AgentNode: Represents a workflow step with an agent, its dependencies, inputs, and outputs. Ensures proper data flow and validates outputs.
- AbstractWorkflow: A base class for managing workflows. It handles node addition, dependency validation, execution order, and workflow execution, with visualization support for better clarity.
- FlyteWorkflowAdapter: Adapts Fluxion workflows for Flyte, converting AbstractWorkflow into Flyte-compatible workflows and enabling external orchestration.
- WorkflowProgressTracker: Tracks node status and execution times, offering progress updates for workflow monitoring.
Fluxion workflows are validated for dependencies and executed in the correct order. Nodes exchange data seamlessly, enabling smooth execution for both local and orchestrated environments.
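Dependency-ordered execution of this kind can be sketched with the standard library's topological sorter (a conceptual illustration only, not Fluxion's implementation; the node names are made up):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each node maps to the set of nodes it depends on; a node runs only
# after all of its dependencies have produced results.
graph = {
    "Summarize": {"Fetch", "Clean"},  # Summarize needs Fetch and Clean
    "Clean": {"Fetch"},               # Clean needs Fetch
    "Fetch": set(),                   # Fetch has no dependencies
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['Fetch', 'Clean', 'Summarize']

results = {}
for node in order:
    upstream = {dep: results[dep] for dep in graph[node]}  # outputs flowing in
    results[node] = f"output of {node}"  # a real AgentNode would run its agent here
print(results)
```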
4. Extensibility
Fluxion is designed to be easily extendable:
- Add new agents with custom logic and integrate them seamlessly into workflows.
- Register custom tools with the Tool Registry to expand functionality.
- Extend the architecture by creating new modules, agents, or registries for specific domains or applications.
5. Key Design Principles
- Modularity: All components are self-contained and can be used independently or as part of a larger workflow.
- Interoperability: Agents and tools communicate through well-defined interfaces and schemas.
- Flexibility: Supports a wide range of workflows, from simple task execution to complex multi-agent orchestration.
- Scalability: Built to handle diverse tasks and workloads without bottlenecks.
Examples
For complete examples and tutorials, visit the Fluxion Documentation.
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Vision and Goals
Fluxion aims to become the comprehensive framework for building agentic workflows, empowering developers to:
- Design modular, extensible, and scalable intelligent systems.
- Seamlessly integrate advanced LLM capabilities for decision-making and automation.
- Simplify complex workflows through intuitive APIs, dynamic task orchestration, and robust execution monitoring.
Future updates will continue to focus on innovation, developer experience, and real-world applicability.
License
This project is licensed under the Apache License 2.0.