A library for managing nodes and agent workflows
# HLR (Hierarchical LLM Routing)
HLR is a flexible Python library for creating and managing hierarchical workflows driven by nodes and powered by Large Language Models (LLMs).
Each node represents a distinct step or action in your process. It can execute custom Python functions, update a shared context, and intelligently decide the next node to transition to, either programmatically or by leveraging an LLM's understanding. This makes HLR ideal for applications needing sequential task processing with dynamic, context-aware routing.
## Features
- Node-Based Workflows: Structure complex processes into manageable, reusable nodes.
- Hierarchical Flow: Design workflows where nodes can branch and converge based on logic or LLM decisions.
- Non-Linear Dependencies: Support for circular dependencies between nodes for complex workflows.
- Dynamic Routing:
  - Nodes can explicitly return the ID of the next node.
  - Alternatively, let an integrated LLM choose the next node based on node descriptions and the current context.
- Shared Context: Maintain state across nodes using a simple dictionary (`agent.context`). Nodes can read and write data such as logs, intermediate results, or the execution path:
  - `agent.context["info"]`: Commonly used for accumulating logs or data.
  - `agent.context["route"]`: Automatically tracks the sequence of nodes visited.
- LLM Integration: Seamlessly uses Google's Gemini models (`gemini-2.0-flash`, `gemini-1.5-flash`, `gemini-1.5-flash-8b`) and OpenAI's `gpt-4o` for routing decisions.
- Flexible Message Handling: Pass user messages at runtime through the `run()` method for dynamic execution.
- Robust Validation: Ensures required parameters are provided during initialization and runtime, preventing common errors.
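The shared-context pattern can be shown with a plain dict standing in for `agent.context` (the key names `"info"` and `"route"` come from the features above; `log_step` is a hypothetical helper, not part of HLR):

```python
# A plain dict stands in for agent.context; "info" and "route" are the
# conventional keys described above. log_step is a hypothetical helper.
context = {"info": "", "route": ["Input"]}

def log_step(context, node_id, message):
    """Append a log line to "info" and record the visited node in "route"."""
    context["info"] += f"\n[{node_id}] {message}"
    context["route"].append(node_id)

log_step(context, "Database", "Found 3 emails.")
log_step(context, "Mailing", "Sent 3 messages.")
print(context["route"])  # ['Input', 'Database', 'Mailing']
```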
## Installation
Install HLR directly from PyPI:
```shell
pip install hlr-agent
```

(Note: PyPI normalizes names, so `pip install hlr_agent` also works.)
## Usage Example
Here's how to define nodes, functions, and run the agent:
### 1. Define Node Functions

Each function associated with a node receives the agent instance, allowing it to access and modify `agent.context`.
```python
# example.py
from hlr_agent import Node, Agent

# Example context keys used: "info", "route"

def func_database(agent):
    print("Querying database...")
    found_emails = [  # Simulate a database query result
        "user1@example.com",
        "test.user@sample.net",
        "another@domain.org",
    ]
    print(f"Database: Found {len(found_emails)} emails.")
    agent.context["info"] += "\nQuery result:\n" + str(found_emails)

def func_mailing(agent):
    print("Emails have been formatted.")
    redacted_emails = [  # Simulate drafting (redacting) the emails
        ["Subject1", "Message1", "user1@example.com"],
        ["Subject2", "Message2", "test.user@sample.net"],
        ["Subject3", "Message3", "another@domain.org"],
    ]
    agent.context["info"] += "\nRedacted emails:\n" + str(redacted_emails)
    print("Emails have been sent.")

def func_output(agent):
    if "info" in agent.context:
        print("LOGS:\n" + agent.context["info"])
    if "route" in agent.context:
        print(agent.context["route"])
```
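Besides deferring to the LLM, a node function can route explicitly by returning the next node's ID (see "How It Works" below). A minimal sketch, using a stand-in object with just a `context` dict rather than the library's real `Agent` class, and a hypothetical `func_check` node function:

```python
# FakeAgent is a stand-in for the Agent instance HLR passes to node
# functions; only the context dict is needed for this sketch.
class FakeAgent:
    def __init__(self):
        self.context = {"info": "", "route": []}

def func_check(agent):
    """Return a node ID to force the next transition; return None to defer routing."""
    agent.context["info"] += "\nCheck complete."
    if "emails" in agent.context["info"].lower():
        return "Mailing"   # explicit: transition straight to this node
    return None            # implicit: HLR (or the LLM) picks among the children

agent = FakeAgent()
agent.context["info"] = "Query result: 3 emails found"
print(func_check(agent))  # Mailing
```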
### 2. Configure Nodes

Define the structure of your workflow using `Node` objects. Nodes can have circular dependencies for complex workflows:
```python
# example.py (continued)
nodes = [
    Node("Input", children=["Database", "Mailing", "None"]),
    Node("Database", children=["Mailing", "Output"], func=func_database,
         description="Select this if the user wants to use a database."),
    Node("Mailing", children=["Database", "Output"], func=func_mailing,
         description="Select this if the user wants to use a mailing-related functionality."),
    Node("None", children=["Output"],
         description="Select this node if the rest of the nodes are not valid for the request."),
    Node("Output", children=None, func=func_output),
]
```
### 3. Initialize and Run the Agent

Create an `Agent` instance and execute the workflow using `agent.run()`. The user message is now passed directly to the `run()` method:
```python
# example.py (continued)

# --- Agent Initialization ---
agent = Agent(
    nodes=nodes,
    start_node_id="Input",
    end_node_id="Output",
    model="gemini-1.5-flash-8b",  # Available models: gemini-2.0-flash, gemini-1.5-flash, gemini-1.5-flash-8b, gpt-4o
    api_key="YOUR_GEMINI_API_KEY",  # Replace with your actual API key
)

# --- Run the Agent ---
user_request = "Please get the emails from the database and send them a welcome message."

print("--- Starting Agent Run ---")
try:
    agent.run(user_message=user_request)  # Message is passed here, not at init
except ValueError as e:
    print(f"Agent Error: {e}")
print("\n--- Agent Run Finished ---")
```
## How It Works

1. Initialization (`Agent(...)`):
   - Validates all required parameters (`nodes`, `start_node_id`, `end_node_id`, `model`, `api_key`).
   - Checks for duplicate node IDs.
   - Builds an internal dictionary of nodes for quick access.
   - Initializes `agent.context` (e.g., setting the initial `route`).

2. Execution (`agent.run(user_message=...)`):
   - Takes the `user_message` for the current run.
   - Starts at the `start_node_id`.
   - Enters a loop that continues until a terminal node is reached or the step limit is exceeded.
   - In each step:
     - Executes the `func` of the current node (if defined). This function can modify `agent.context`.
     - Determines the `next_node_id`:
       - Explicit: If `func` returns a valid node ID, use that.
       - Implicit/LLM: If `func` returns `None`:
         - If the current node has no children, the flow ends.
         - If it has one valid child (with a description, or being the `end_node_id`), transition to it.
         - If multiple valid children exist, call the LLM (`get_next_node`):
           - The LLM receives the list of valid child IDs and their descriptions, the `user_message` for the run, and `extra_context` (a string combining `agent.context["route"]` and `agent.context["info"]`).
           - The LLM returns the chosen child ID.
     - Updates `agent.context["route"]` by appending the `next_node_id`.
     - Sets the `current_id` to the `next_node_id` for the next iteration.

3. Termination: The loop ends when `current_id` becomes `None` (either explicitly set by a node, reaching a node with no children, or encountering an error such as an invalid explicit return). The `func_output` (or the function of the last node) typically handles final reporting.
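The execution loop described above can be sketched in plain Python. This is an illustrative reconstruction, not the library's actual source: `choose_with_llm` stands in for the real `get_next_node` LLM call (here it just picks the first candidate so the sketch runs offline), nodes are plain dicts, and node functions take the context dict directly.

```python
def choose_with_llm(candidates, user_message, extra_context):
    # Placeholder for the real LLM decision (get_next_node in HLR).
    return candidates[0]

def run_sketch(nodes, start_id, end_id, user_message, max_steps=20):
    context = {"route": [start_id], "info": ""}
    current_id = start_id
    for _ in range(max_steps):
        node = nodes[current_id]
        # 1. Execute the node's function; it may return an explicit next ID.
        explicit = node.get("func", lambda ctx: None)(context)
        children = node.get("children") or []
        if explicit in children:
            next_id = explicit                 # explicit routing
        elif not children:
            break                              # terminal node: flow ends
        elif len(children) == 1:
            next_id = children[0]              # single child: no LLM needed
        else:
            # 2. Multiple candidates: defer to the (stubbed) LLM.
            extra = str(context["route"]) + context["info"]
            next_id = choose_with_llm(children, user_message, extra)
        # 3. Record the transition and advance.
        context["route"].append(next_id)
        current_id = next_id
        if current_id == end_id:
            nodes[current_id].get("func", lambda ctx: None)(context)
            break
    return context
```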
## What's New in v0.3.0

- Runtime Message Passing: User messages are now passed to `agent.run(user_message="...")` instead of during agent initialization, allowing for more flexible and dynamic execution.
- Additional Gemini Models: Added support for `gemini-1.5-flash` and `gemini-1.5-flash-8b` alongside the existing `gemini-2.0-flash` and `gpt-4o`.
- Enhanced Non-Linear Workflows: Improved support for circular dependencies between nodes, enabling more complex workflow patterns where nodes can reference each other in cycles.
- Better Circular Dependency Handling: The routing system now handles bidirectional relationships between nodes more reliably for sophisticated task management.
## Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the issues page or submit a pull request.

## License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

## Contact

David Serrano Díaz – davidsd.2704@gmail.com

Project Link: https://github.com/DavidFraifer/HLR