Orchestrate LLM tasks. Define workflows, run and track progress.

LLM Workflow Orchestrator

This library provides a framework for creating and executing workflows based on LLM (Large Language Model) prompting templates.

Workflows can consist of chained tasks, each defined by a task template that contains user and system prompts.

These workflows are executed using various LLM strategies, and the results are chained as context to the next task.

Results can be stored, updated, and finalized using a repository for traceability.

Features (alpha)

  • Template Creation: Define templates with system and user prompts, including an inference strategy.

  • Workflow Creation: Define workflows by chaining tasks based on predefined templates.

  • Task Execution: Execute workflows using a flexible executor, leveraging LLM-based prompts.

  • Result Handling: Automatically finalize results using a customizable task finalizer.

  • Storage Support: Store and manage workflows in a MongoDB repository.

  • Modular and Extensible: Easily extend with custom prompt providers, task finalizers, custom executors (context management), and alternate storage repositories.

Key Components

1. WorkflowTaskTemplate

Defines a template for a task, including user and system prompts, strategy (e.g., LLM model), and model configuration.

Attributes:

  • description: A short description of the task.

  • name: The name of the task.

  • priority: The priority of the task (for order of execution).

  • user_prompt_default: The default prompt to provide to the user.

  • system_prompt_default: The default prompt for the system.

  • strategy: The model strategy (e.g., ollama).

  • model: The specific model to use (e.g., llama3.2).
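
The `{text}` placeholder in a template's default user prompt is filled at execution time from the keyword arguments passed to the executor. A minimal sketch of that substitution, assuming plain `str.format`-style placeholders (as the examples in this README suggest; the library's actual mechanism may differ):

```python
# Sketch: filling a template's default user prompt with runtime kwargs.
# Assumes str.format-style placeholders; an illustration, not the library's API.
user_prompt_default = "Summarize the following: '{text}'"

kwargs = {"text": "The essence of software engineering ..."}
prompt = user_prompt_default.format(**kwargs)
print(prompt)  # → Summarize the following: 'The essence of software engineering ...'
```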

2. WorkflowTask

Represents an instance of a task in a workflow. It ties a WorkflowTaskTemplate with a specific result and status.

Attributes:

  • result_module_name: The name of the module where the result is stored.

  • result_class_name: The name of the result class.

  • template: The associated WorkflowTaskTemplate.

  • status: The current status of the task (e.g., PENDING, COMPLETED).

  • result: The actual result produced by the task after execution.
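
Storing `result_module_name` and `result_class_name` lets the result type be re-imported when a workflow is loaded back from the repository. A hedged sketch of that lookup using `importlib` (the helper name is hypothetical, not part of the library):

```python
import importlib

def resolve_result_class(module_name: str, class_name: str) -> type:
    """Hypothetical helper: turn a stored (module, class) pair back into a
    type, as a repository could do when deserializing a task result."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Works for any importable class, e.g. a stdlib one:
cls = resolve_result_class("decimal", "Decimal")
print(cls("1.5"))  # → 1.5
```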

3. Workflow

A collection of WorkflowTask instances that are executed in sequence.

Attributes:

  • name: The name of the workflow.

  • description: A brief description of the workflow.

  • tasks: A list of tasks that make up the workflow.

  • status: The current status of the workflow (e.g., PENDING, COMPLETED).

4. WorkflowExecutor

Orchestrates the execution of tasks within a workflow, handling task dependencies, execution order, and result collection.

Attributes:

  • llm_config: The configuration for the LLM model.

  • prompt_provider: The provider responsible for fetching the prompt templates.

  • task_finalizer: The finalizer that handles the results after the workflow is completed.

5. MongoWorkflowRepository

Provides persistence for workflows and task results using MongoDB.

Methods:

  • update_workflow(): Saves the workflow in the repository.

  • close(): Closes the repository connection.

Examples

Install the library

pip install llm-unified-orchestrator

Example - Create templates

  • Template to summarize text
  • Template to enrich the text based on a summary

Add the templates to a repository (MongoDB):

from pydantic import BaseModel, Field
from llm_unified_orchestrator.data_store.repository import MongoWorkflowRepository
from llm_unified_orchestrator.core.task import WorkflowTask, WorkflowTaskTemplate, TaskStatus
from llm_unified_orchestrator.core.workflow import Workflow, WorkflowStatus

## Summarize text
template_summarize = WorkflowTaskTemplate(
    description="Call Ollama for initial summarization",
    name="llama3_summary",
    user_prompt_id=None,
    system_prompt_id=None,
    user_prompt_default="Summarize the following: '{text}'",
    system_prompt_default="You are a concise summarizer.",
    strategy="ollama",
    model="llama3.2",
    mcp_tools=None,
)

## Enrich the summarized text
template_enrich = WorkflowTaskTemplate(
    description="Call Llama3.2 for enrichment",
    name="llama3_enrich",
    user_prompt_id=None,
    system_prompt_id=None,
    user_prompt_default="Enrich the summary. Original text: {text}",
    system_prompt_default="You are an assistant that expands ideas.",
    strategy="ollama",
    model="llama3.2",
    mcp_tools=None,
)

# Add the templates to the repository
repo = MongoWorkflowRepository()

repo.update_template(template_summarize)
repo.update_template(template_enrich)

Example - Create a Workflow

Create a workflow using the templates.

  • Create tasks (each attached to a template)
  • Define the expected result type for each task
  • Assign a priority to each task

A scheduler executes pending workflows based on the assigned priority and strategy.
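
In the example workflow below, the summarize task has priority 1 and runs before the enrich task (priority 0), suggesting that higher priority values run first. A sketch of such ordering (an assumption inferred from the example, not confirmed by the library):

```python
# Sketch: ordering tasks for execution by priority.
# Assumes higher priority values run first, matching the example workflow
# in this README (summarize: priority 1, enrich: priority 0).
tasks = [
    {"name": "llama3_enrich", "priority": 0},
    {"name": "llama3_summary", "priority": 1},
]
ordered = sorted(tasks, key=lambda t: t["priority"], reverse=True)
print([t["name"] for t in ordered])  # → ['llama3_summary', 'llama3_enrich']
```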

def build_example_workflow(template_summarize: WorkflowTaskTemplate, template_enrich: WorkflowTaskTemplate) -> Workflow:

    # Expected summary result
    class ResultSummary(BaseModel):
        summary: str = Field(
            description=(
                "The summary of the text"
            )
        )

    # Expected enrichment result
    class ResultSummaryEnriched(BaseModel):
        summary: str = Field(
            description=(
                "The summary of the text"
            )
        )
        enriched_summary: str = Field(
            description=(
                "The enriched summary"
            )
        )

    # Create a summarize task (based on the template)
    task_summarize = WorkflowTask(
        priority=1,
        result_module_name="__main__",
        result_class_name=ResultSummary.__name__,
        template_name=template_summarize.name,
        template=template_summarize,
        status=TaskStatus.PENDING,
        result=None,
    )
    
    # Create an enrichment task (based on the template)
    task_enrich = WorkflowTask(
        priority=0,
        result_module_name="__main__",
        result_class_name=ResultSummaryEnriched.__name__,
        template=template_enrich,
        template_name=template_enrich.name,
        status=TaskStatus.PENDING,
        result=None,
    )

    workflow = Workflow(
        name="example_ollama_llama3_workflow",
        description="Example workflow demonstrating Ollama and Llama3.2 tasks",
        tasks=[task_summarize, task_enrich],
        status=WorkflowStatus.PENDING
    )

    return workflow

Example - Start a workflow

from llm_unified_orchestrator.factories.prompt_provider_factory import PromptProviderFactory
from llm_unified_orchestrator.factories.task_finalizer_factory import TaskFinalizerFactory
from llm_unified_orchestrator.data_store.repository import MongoWorkflowRepository
from llm_unified_orchestrator.executors.generic_executor import WorkflowExecutor
from llm_unified_orchestrator.inference_api.llm_config import LlmConfig
from llm_unified_orchestrator.core.task import WorkflowTask, WorkflowTaskTemplate, TaskStatus
from llm_unified_orchestrator.core.workflow import Workflow, WorkflowStatus

# Create or Get the workflow from the database
repo = MongoWorkflowRepository()
workflow = repo.get_workflow("example_ollama_llama3_workflow")

# Dependencies
prompt_factory = PromptProviderFactory(mlflow_uri="http://localhost:5000")
prompt_provider = prompt_factory.create_mlflow_prompt_provider()
finalizer_factory = TaskFinalizerFactory(workflow_repository=repo)
finalizer = finalizer_factory.create()
llm_config = LlmConfig()

# Generic Workflow Executor
executor = WorkflowExecutor(llm_config=llm_config, prompt_provider=prompt_provider, task_finalizer=finalizer)

# The text to summarize and enrich
kwargs = {'text': 'The essence of software engineering is similar to the detachment of an analyst'}

executor.execute_workflow(workflow=workflow, **kwargs)

Result

The result contains:

  • a snapshot of each task's template
  • the result of each task
  • the context assigned to each task
{
  "name": "example_ollama_llama3_workflow",
  "description": "Example workflow demonstrating Ollama and Llama3.2 tasks",
  "priority": 1,
  "tasks": [
    {
      "status": "completed",
      "context": {},
      "result": "{\"summary\":\"This quote highlights the parallel between software engineering and analytical detachment, suggesting that both require objective decision-making.\"}",
      "result_module_name": "__main__",
      "result_class_name": "ResultSummary",
      "template_name": "llama3_summary",
      "template": {
        "name": "llama3_summary",
        "description": "Call Ollama for initial summarization",
        "user_prompt_default": "Summarize the following: '{text}'",
        "system_prompt_default": "You are a concise summarizer.",
        "strategy": "ollama",
        "model": "llama3.2"
      }
    },
    {
      "status": "completed",
      "context": {
        "PreviousTask": "llama3_summary_Call Ollama for initial summarization",
        "PreviousTask_Result": "{\"summary\":\"This quote highlights the parallel between software engineering and analytical detachment, suggesting that both require objective decision-making.\"}"
      },
      "result": "{\"summary\":\"The essence of software engineering is reminiscent of the analytical detachment characteristic of analysts, where both professions require a blend of objective reasoning and detached decision-making. This parallel suggests that, just as an analyst must separate personal biases from data-driven insights to provide unbiased recommendations, a software engineer must similarly detach themselves from emotional attachment to code, focusing on objective problem-solving and evidence-based design principles. By adopting this mindset, software engineers can foster creativity, reduce technical debt, and improve the overall quality and reliability of their systems, ultimately delivering value to users with precision and accuracy.\",\"enriched_summary\":\"This passage weaves together concepts from both the analytical detachment of analysts and the detached decision-making of software engineers, illustrating the importance of objectivity in these fields. The author's notion that a software engineer must separate personal biases from code is a powerful metaphor for the challenges of objective problem-solving.\"}",
      "result_module_name": "__main__",
      "result_class_name": "ResultSummaryEnriched",
      "template_name": "llama3_enrich",
      "template": {
        "name": "llama3_enrich",
        "description": "Call Llama3.2 for enrichment",
        "user_prompt_default": "Enrich the summary. Original text: {text}",
        "system_prompt_default": "You are an assistant that expands ideas.",
        "strategy": "ollama",
        "model": "llama3.2"
      }
    }
  ],
  "status": "completed"
}
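
Note that each task's `result` field is stored as a JSON string. It can be decoded back into a dict with the stdlib `json` module, and from there validated against the pydantic result model defined when the workflow was built:

```python
import json

# The "result" field of a completed task, as stored in the workflow document.
result_raw = '{"summary": "This quote highlights the parallel between software engineering and analytical detachment."}'

# Decode the string back into a dict; from here it could be validated
# against the ResultSummary model from the workflow-building example above.
result = json.loads(result_raw)
print(result["summary"])
```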

License

Copyright (C) 2025 Paul Eger

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.
