Orchestrate LLM tasks. Define workflows, run and track progress.
Project description
LLM Workflow Orchestrator
This library provides a framework for creating and executing workflows based on LLM (Large Language Model) prompting templates.
Workflows can consist of chained tasks, each defined by a task template that contains user and system prompts.
Workflows are executed using various LLM strategies, and each task's result is chained into the context of the next task.
Results can be stored, updated, and finalized using a repository for traceability.
Features (alpha)
- Template Creation: Define templates with system and user prompts, including an inference strategy.
- Workflow Creation: Define workflows by chaining tasks based on predefined templates.
- Task Execution: Execute workflows using a flexible executor, leveraging LLM-based prompts.
- Result Handling: Automatically finalize results using a customizable task finalizer.
- Storage Support: Store and manage workflows in a MongoDB repository.
- Modular and Extendable: Easily extend with custom prompt providers, task finalizers, custom executors (context management), and alternate storage repositories.
Key Components
1. WorkflowTaskTemplate
Defines a template for a task, including user and system prompts, strategy (e.g., LLM model), and model configuration.
Attributes:
- description: A short description of the task.
- name: The name of the task.
- priority: The priority of the task (controls execution order).
- user_prompt_default: The default user prompt sent to the model.
- system_prompt_default: The default system prompt.
- strategy: The inference strategy (e.g., ollama).
- model: The specific model to use (e.g., llama3.2).
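Prompt defaults may contain placeholders such as {text}, which are filled from the keyword arguments passed to the executor at run time. A minimal illustration using plain Python string formatting (the library's exact substitution mechanism may differ):

# Hypothetical illustration of how a default user prompt is rendered from kwargs.
user_prompt_default = "Summarize the following: '{text}'"
kwargs = {"text": "The essence of software engineering is similar to the detachment of an analyst"}
rendered_prompt = user_prompt_default.format(**kwargs)
# -> "Summarize the following: 'The essence of software engineering is similar to the detachment of an analyst'"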
2. WorkflowTask
Represents an instance of a task in a workflow. It ties a WorkflowTaskTemplate with a specific result and status.
Attributes:
- result_module_name: The name of the module where the result class is defined.
- result_class_name: The name of the result class.
- template: The associated WorkflowTaskTemplate.
- status: The current status of the task (e.g., PENDING, COMPLETED).
- result: The actual result produced by the task after execution.
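The result_module_name and result_class_name pair allows a stored task result to be mapped back to a concrete result class. A minimal illustration of how such a lookup can be done in plain Python (the library's actual resolution logic may differ):

import importlib

# Hypothetical helper: resolve a result class from its module and class names.
def resolve_result_class(result_module_name: str, result_class_name: str):
    module = importlib.import_module(result_module_name)
    return getattr(module, result_class_name)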
3. Workflow
A collection of WorkflowTask instances that are executed in sequence.
Attributes:
- name: The name of the workflow.
- description: A brief description of the workflow.
- tasks: A list of tasks that make up the workflow.
- status: The current status of the workflow (e.g., PENDING, COMPLETED).
4. WorkflowExecutor
Orchestrates the execution of tasks within a workflow, handling task dependencies, execution order, and result collection.
Attributes:
- llm_config: The configuration for the LLM model.
- prompt_provider: The provider responsible for fetching the prompt templates.
- task_finalizer: The finalizer that handles the results after the workflow is completed.
5. MongoWorkflowRepository
Provides persistence for workflows and task results using MongoDB.
Methods:
- update_workflow(): Saves the workflow in the repository.
- close(): Closes the repository connection.
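A minimal usage sketch, assuming a locally reachable MongoDB instance and a workflow object built elsewhere:

from llm_unified_orchestrator.data_store.repository import MongoWorkflowRepository

repo = MongoWorkflowRepository()   # connects to MongoDB
repo.update_workflow(workflow)     # create or update the stored workflow
repo.close()                       # release the connection when done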
Examples
Install the library
pip install llm-unified-orchestrator
Example - Create templates
- Template to summarize text
- Template to enrich the text based on a summary
Add the templates to a repository (MongoDB).
from pydantic import BaseModel, Field
from llm_unified_orchestrator.data_store.repository import MongoWorkflowRepository
from llm_unified_orchestrator.core.task import WorkflowTask, WorkflowTaskTemplate, TaskStatus
from llm_unified_orchestrator.core.workflow import Workflow, WorkflowStatus
# Summarize text
template_summarize = WorkflowTaskTemplate(
    description="Call Ollama for initial summarization",
    name="llama3_summary",
    user_prompt_id=None,
    system_prompt_id=None,
    user_prompt_default="Summarize the following: '{text}'",
    system_prompt_default="You are a concise summarizer.",
    strategy="ollama",
    model="llama3.2",
    mcp_tools=None,
)

# Enrich the summarized text
template_enrich = WorkflowTaskTemplate(
    description="Call Llama3.2 for enrichment",
    name="llama3_enrich",
    user_prompt_id=None,
    system_prompt_id=None,
    user_prompt_default="Enrich the summary. Original text: {text}",
    system_prompt_default="You are an assistant that expands ideas.",
    strategy="ollama",
    model="llama3.2",
    mcp_tools=None,
)
# Add the templates to the repository
repo = MongoWorkflowRepository()
repo.update_template(template_summarize)
repo.update_template(template_enrich)
Example - Create a Workflow
Create a workflow using the templates.
- Create tasks (attach to a template)
- Define the expected result type for the task
- Assign a priority to each task
A scheduler will execute pending workflows based on the assigned priority and strategy.
def build_example_workflow(template_summarize: WorkflowTaskTemplate, template_enrich: WorkflowTaskTemplate) -> Workflow:
    # Expected summary result
    class ResultSummary(BaseModel):
        summary: str = Field(description="The summary of the text")

    # Expected enrichment result
    class ResultSummaryEnriched(BaseModel):
        summary: str = Field(description="The summary of the text")
        enriched_summary: str = Field(description="The enriched summary")

    # Create a summarize task (based on the template)
    task_summarize = WorkflowTask(
        priority=1,
        result_module_name="__main__",
        result_class_name=ResultSummary.__name__,
        template_name=template_summarize.name,
        template=template_summarize,
        status=TaskStatus.PENDING,
        result=None,
    )

    # Create an enrichment task (based on the template)
    task_enrich = WorkflowTask(
        priority=0,
        result_module_name="__main__",
        result_class_name=ResultSummaryEnriched.__name__,
        template_name=template_enrich.name,
        template=template_enrich,
        status=TaskStatus.PENDING,
        result=None,
    )

    workflow = Workflow(
        name="example_ollama_llama3_workflow",
        description="Example workflow demonstrating Ollama and Llama3.2 tasks",
        tasks=[task_summarize, task_enrich],
        status=WorkflowStatus.PENDING,
    )
    return workflow
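The builder can then be called and the resulting workflow persisted, so that the executor (or a scheduler) can pick it up later. A minimal sketch, reusing the MongoDB repository created earlier:

# Build the workflow from the templates and save it for later execution.
workflow = build_example_workflow(template_summarize, template_enrich)
repo.update_workflow(workflow)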
Example - Start a workflow
from llm_unified_orchestrator.factories.prompt_provider_factory import PromptProviderFactory
from llm_unified_orchestrator.factories.task_finalizer_factory import TaskFinalizerFactory
from llm_unified_orchestrator.data_store.repository import MongoWorkflowRepository
from llm_unified_orchestrator.executors.generic_executor import WorkflowExecutor
from llm_unified_orchestrator.inference_api.llm_config import LlmConfig
from llm_unified_orchestrator.core.task import WorkflowTask, WorkflowTaskTemplate, TaskStatus
from llm_unified_orchestrator.core.workflow import Workflow, WorkflowStatus
# Create or Get the workflow from the database
repo = MongoWorkflowRepository()
workflow = repo.get_workflow("example_ollama_llama3_workflow")
# Dependencies
prompt_factory = PromptProviderFactory(mlflow_uri="http://localhost:5000")
prompt_provider = prompt_factory.create_mlflow_prompt_provider()
finalizer_factory = TaskFinalizerFactory(workflow_repository=repo)
finalizer = finalizer_factory.create()
llm_config = LlmConfig()
# Generic Workflow Executor
executor = WorkflowExecutor(llm_config=llm_config, prompt_provider=prompt_provider, task_finalizer=finalizer)
# The text to summarize and enrich
kwargs = {'text': 'The essence of software engineering is similar to the detachment of an analyst'}
executor.execute_workflow(workflow=workflow, **kwargs)
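After execution, the updated workflow (including each task's status and result) can be read back from the repository for inspection. A minimal sketch, reusing the repository connection from above:

# Reload the stored workflow and inspect each task's outcome.
completed = repo.get_workflow("example_ollama_llama3_workflow")
for task in completed.tasks:
    print(task.template_name, task.status, task.result)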
Result
The stored result contains:
- A snapshot of the template
- The result for each task
- The context assigned to each task
{
  "name": "example_ollama_llama3_workflow",
  "description": "Example workflow demonstrating Ollama and Llama3.2 tasks",
  "priority": 1,
  "tasks": [
    {
      "status": "completed",
      "context": {},
      "result": "{\"summary\":\"This quote highlights the parallel between software engineering and analytical detachment, suggesting that both require objective decision-making.\"}",
      "result_module_name": "__main__",
      "result_class_name": "ResultSummary",
      "template_name": "llama3_summary",
      "template": {
        "name": "llama3_summary",
        "description": "Call Ollama for initial summarization",
        "user_prompt_default": "Summarize the following: '{text}'",
        "system_prompt_default": "You are a concise summarizer.",
        "strategy": "ollama",
        "model": "llama3.2"
      }
    },
    {
      "status": "completed",
      "context": {
        "PreviousTask": "llama3_summary_Call Ollama for initial summarization",
        "PreviousTask_Result": "{\"summary\":\"This quote highlights the parallel between software engineering and analytical detachment, suggesting that both require objective decision-making.\"}"
      },
      "result": "{\"summary\":\"The essence of software engineering is reminiscent of the analytical detachment characteristic of analysts, where both professions require a blend of objective reasoning and detached decision-making. This parallel suggests that, just as an analyst must separate personal biases from data-driven insights to provide unbiased recommendations, a software engineer must similarly detach themselves from emotional attachment to code, focusing on objective problem-solving and evidence-based design principles. By adopting this mindset, software engineers can foster creativity, reduce technical debt, and improve the overall quality and reliability of their systems, ultimately delivering value to users with precision and accuracy.\",\"enriched_summary\":\"This passage weaves together concepts from both the analytical detachment of analysts and the detached decision-making of software engineers, illustrating the importance of objectivity in these fields. The author's notion that a software engineer must separate personal biases from code is a powerful metaphor for the challenges of objective problem-solving.\"}",
      "result_module_name": "__main__",
      "result_class_name": "ResultSummaryEnriched",
      "template_name": "llama3_enrich",
      "template": {
        "name": "llama3_enrich",
        "description": "Call Llama3.2 for enrichment",
        "user_prompt_default": "Enrich the summary. Original text: {text}",
        "system_prompt_default": "You are an assistant that expands ideas.",
        "strategy": "ollama",
        "model": "llama3.2"
      }
    }
  ],
  "status": "completed"
}
License
Copyright (C) 2025 Paul Eger
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.
Download files
File details
Details for the file llm_unified_orchestrator-1.0.2.tar.gz.
File metadata
- Download URL: llm_unified_orchestrator-1.0.2.tar.gz
- Upload date:
- Size: 22.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e2955e531be5993e9ca21080fca044e30bbb7e0ec2a5a2ab1d83db3e2f2829ab |
| MD5 | 6362e04b27b44e77790d550fe55652be |
| BLAKE2b-256 | 956e01355dba970fa40d0db1e6582467fa492791dca729ba0b775cb09a3918c0 |
File details
Details for the file llm_unified_orchestrator-1.0.2-py3-none-any.whl.
File metadata
- Download URL: llm_unified_orchestrator-1.0.2-py3-none-any.whl
- Upload date:
- Size: 26.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 92504d0063f7d024565057216112a6ff511923c29ad519b61dc764c06b32a810 |
| MD5 | 3e6104f175efbdc4096d9ce7e363836c |
| BLAKE2b-256 | 42b6db87d86042fd49a72c3d9b7ebbde767c1b5f531564112502836b564a55db |