
A wrapper around ComfyUI to allow use by AI Horde.





Note: This project was formerly known as hordelib. The project namespace will be changed in the near future to reflect this change.

horde-engine is a wrapper around ComfyUI primarily to enable the AI Horde to run inference pipelines designed visually in the ComfyUI GUI.

The developers of horde-engine can be found in the AI Horde Discord server.

Note that horde-engine (previously known as hordelib) has been the default inference backend library of the AI Horde since hordelib v1.0.0.


The goal here is to be able to design inference pipelines in the excellent ComfyUI, and then call those inference pipelines programmatically, whilst providing features that maintain compatibility with the existing AI Horde implementation.


If installing from PyPI, use a requirements file of the form:


...your other dependencies...
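A minimal sketch of such a file, assuming you want the horde_engine package from PyPI (the version pin shown is illustrative):

```
horde_engine~=2.11
# ...your other dependencies...
```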

Linux Installation

On Linux you will need to install the Nvidia CUDA Toolkit; Linux installers are provided by Nvidia.

Note: if you only have 16GB of RAM and a default /tmp on tmpfs, you will likely need to increase the size of your temporary space to install the CUDA Toolkit, or it may fail to extract the archive. One way to do that, just before installing the CUDA Toolkit:

sudo mount -o remount,size=16G /tmp

If you only have 16GB of RAM you will also need swap space. So if you typically run without swap, add some. You won't be able to run this library without it.
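A quick preflight sketch for checking whether you are likely to hit these limits (Linux-only assumptions: POSIX `os.sysconf` keys and a `/tmp` mount):

```python
import os
import shutil

# Free space on /tmp (the CUDA Toolkit installer extracts its archive here).
tmp_free_gb = shutil.disk_usage("/tmp").free / 1024**3

# Total physical RAM via POSIX sysconf (Linux).
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

print(f"/tmp free: {tmp_free_gb:.1f} GiB, RAM: {ram_gb:.1f} GiB")
if ram_gb <= 16:
    print("Consider enlarging /tmp and adding swap before installing.")
```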


Horde payloads can be processed simply with (for example):

# import os
# Wherever your models are
# os.environ["AIWORKER_CACHE_HOME"] = "f:/ai/models" # Defaults to `models/` in the current working directory

import hordelib

hordelib.initialise()  # This must be called before any other hordelib functions

from hordelib.horde import HordeLib
from hordelib.shared_model_manager import SharedModelManager

generate = HordeLib()

if SharedModelManager.manager.compvis is None:
    raise Exception("Failed to load compvis model manager")


data = {
    "sampler_name": "k_dpmpp_2m",
    "cfg_scale": 7.5,
    "denoising_strength": 1.0,
    "seed": 123456789,
    "height": 512,
    "width": 512,
    "karras": False,
    "tiling": False,
    "hires_fix": False,
    "clip_skip": 1,
    "control_type": None,
    "image_is_control": False,
    "return_control_map": False,
    "prompt": "an ancient llamia monster",
    "ddim_steps": 25,
    "n_iter": 1,
    "model": "Deliberate",
}

pil_image = generate.basic_inference_single_image(data).image

if pil_image is None:
    raise Exception("Failed to generate image")

pil_image.save("test.png")

Note that hordelib.initialise() will erase all command line arguments from sys.argv, so make sure you parse them before you call it.
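One pattern is to parse (or snapshot) the arguments before calling initialise(). A minimal sketch, where the --models-dir flag is a hypothetical example and the argument list is passed explicitly for clarity:

```python
import argparse
import sys

# Snapshot sys.argv before hordelib.initialise() erases it.
saved_argv = list(sys.argv)

# Parse your own flags first; --models-dir is a hypothetical example flag.
parser = argparse.ArgumentParser()
parser.add_argument("--models-dir", default="models/")
args, _unknown = parser.parse_known_args(["--models-dir", "f:/ai/models"])

# Only now initialise hordelib:
# import hordelib
# hordelib.initialise()

print(args.models_dir)
```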

See tests/run_*.py for more standalone examples.


If you don't want hordelib to set up and control the logging configuration (we use loguru), initialise with:

import hordelib

hordelib.initialise(setup_logging=False)


hordelib depends on a large number of open source projects, and most of these dependencies are automatically downloaded and installed when you install hordelib. Due to the nature and purpose of hordelib some dependencies are bundled directly inside hordelib itself.


A powerful and modular stable diffusion GUI with a graph/nodes interface. Licensed under the terms of the GNU General Public License v3.0.

The entire purpose of hordelib is to access the power of ComfyUI.

Controlnet Preprocessors for ComfyUI

Custom nodes for ComfyUI providing Controlnet preprocessing capability. Licensed under the terms of the Apache License 2.0.

ComfyUI Face Restore Node

Custom nodes for ComfyUI providing face restoration.


Nodes for generating QR codes



  • git (install git)
  • tox (pip install tox)
  • Set the environmental variable AIWORKER_CACHE_HOME to point to a model directory.
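A hypothetical sketch of the AIWORKER_CACHE_HOME fallback behaviour (mirroring the comment in the usage example above, which says it defaults to `models/` in the current working directory):

```python
import os
from pathlib import Path

# Hypothetical sketch: resolve the model cache root from AIWORKER_CACHE_HOME,
# falling back to ./models in the current working directory when unset.
def model_cache_home() -> Path:
    return Path(os.environ.get("AIWORKER_CACHE_HOME", "models"))

os.environ["AIWORKER_CACHE_HOME"] = "/srv/ai/models"  # example path
print(model_cache_home())
```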

Note the model directory must currently be in the original AI Horde directory structure:


Running the Tests

Simply execute: tox (or tox -q for less noisy output)

This will take a while the first time as it installs all the dependencies.

If the tests run successfully images will be produced in the images/ folder.

Running a specific test file

tox -- -k <filename> for example tox -- -k test_initialisation

Running a specific predefined test suite

tox list

This will list all groups of tests involved in the development, build, or CI process. Tests with the word 'fix' in their name automatically apply changes when run, such as linting or formatting fixes. Run a specific suite with:

tox -e [test_suite_name_here]

Directory Structure

hordelib/pipeline_designs/ Contains ComfyUI pipelines in a format that can be opened by the ComfyUI web app. These are saved directly from the web app.

hordelib/pipelines/ Contains the above pipeline JSON files converted to the format required by the backend pipeline processor. These are converted from the web app, see Converting ComfyUI pipelines below.

hordelib/nodes/ These are the custom ComfyUI nodes we use for hordelib specific processing.

Running ComfyUI Web Application

In this example we install the dependencies in the OS default environment. When using the git version of hordelib, from the project root:

pip install -r requirements.txt --extra-index-url --upgrade

Ensure ComfyUI is installed, one way is running the tests:

tox -- -k test_comfy_install

From then on to run ComfyUI:

cd ComfyUI
python main.py

Then open a browser at:

Designing ComfyUI Pipelines

Use the standard ComfyUI web app. Use the "title" attribute to name the nodes; these names become parameter names in hordelib. For example, a KSampler with the "title" of "sampler2" would become the parameters sampler2.seed, sampler2.cfg, etc. Load the pipeline hordelib/pipeline_designs/pipeline_stable_diffusion.json in the ComfyUI web app for an example.
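To make the naming concrete, here is a hypothetical sketch of how dotted parameter names derived from node titles can be grouped per node (the payload keys are illustrative, not the library's actual API):

```python
# Hypothetical payload using dotted names derived from node titles,
# e.g. a KSampler titled "sampler2" exposes sampler2.seed, sampler2.cfg, ...
params = {
    "sampler2.seed": 123456789,
    "sampler2.cfg": 7.5,
    "sampler2.steps": 25,
}

# Group the flat dotted keys back into per-node dicts.
by_node: dict[str, dict[str, object]] = {}
for key, value in params.items():
    node, attr = key.split(".", 1)
    by_node.setdefault(node, {})[attr] = value

print(by_node["sampler2"])
```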

Save any new pipeline in hordelib/pipeline_designs using the naming convention "pipeline_<name>.json".

Convert the JSON for the model (see Converting ComfyUI pipelines below) and save the resulting JSON in hordelib/pipelines using the same filename as the previous JSON file.

That is all. This can then be called from hordelib using the run_image_pipeline() method in hordelib.comfy.Comfy().

Converting ComfyUI pipelines

In addition to the design file saved from the UI, we need to save the pipeline file in the backend format. This file is created in the hordelib project root named comfy-prompt.json automatically if you run a pipeline through the ComfyUI version embedded in hordelib. Running ComfyUI with tox -e comfyui automatically patches ComfyUI so this JSON file is saved.

Build Configuration

The main config files for the project are: pyproject.toml, tox.ini and requirements.txt

PyPi Publishing

PyPI publishing is automated entirely from the GitHub website:

  1. Create a PR from main to releases
  2. Label the PR with "release:patch" (0.0.1) or "release:minor" (0.1.0)
  3. Merge the PR with a standard merge commit (not squash)

Standalone "clean" environment test from PyPI

Here's an example:

Start in a new empty directory. Create requirements.txt:


Create the directory images/ and copy test_db0.jpg into it from the hordelib/tests/ directory.

Build a venv:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Run the test we copied:


The `images/` directory should have our test images.

Updating the embedded version of ComfyUI

  • Change the pinned version value to the desired ComfyUI version.
  • Run the test suite via tox
