
A package to use the ComfyAPI

Project description

TechTrash ComfyAPI v2

Python library for orchestrating several ComfyUI instances in parallel across GPUs.

Installation

Prerequisites: Python 3.11+, pynvml, and ComfyUI installed (one copy per GPU).

pip install .

Quickstart

import asyncio
from comfyapi import ComfyAPI, ComfyUIInstance, ModelRef, LoRARef

instances = [
    ComfyUIInstance(path="/app/comfyui/0", port=3050),
    ComfyUIInstance(path="/app/comfyui/1", port=3051),
    ComfyUIInstance(path="/app/comfyui/2", port=3052),
]

async def main():
    async with ComfyAPI(instances, models_path="/app/models") as api:
        images = await api.execute_workflow(
            workflow={...},  # ComfyUI workflow exported as JSON (API format)
            models=[
                ModelRef(type="checkpoints", name="juggernaut_xl.safetensors"),
            ],
            loras=[
                LoRARef(
                    name="detail_enhancer.safetensors",
                    url="https://example.com/detail_enhancer.safetensors",
                    strength=0.8,
                ),
            ],
            params={
                "steps": {
                    "type": "integer",
                    "value": 20,
                    "node_mappings": "API - STEP",
                },
                "prompt": {
                    "type": "string",
                    "value": "A beautiful landscape",
                    "node_mappings": "API - PROMPT",
                },
                "batch_size": {
                    "type": "integer",
                    "value": 4,
                    "gpu_scale": True,
                    "node_mappings": "API - BATCH SIZE",
                },
                "image_input": {
                    "type": "image",
                    "value": "https://example.com/photo.jpg",
                    "node_mappings": "API - PICTURE",
                },
            },
        )
        print(images)
        # [
        #     "/app/comfyui/0/output/ComfyUI_00001_.png",
        #     "/app/comfyui/0/output/ComfyUI_00002_.png",
        #     "/app/comfyui/1/output/ComfyUI_00001_.png",
        #     "/app/comfyui/2/output/ComfyUI_00001_.png",
        # ]

asyncio.run(main())

API

ComfyAPI(instances, models_path, *, gpu=None, auto_start=True)

Main orchestrator. Use it as an async context manager (async with).

instances (list[ComfyUIInstance]): ComfyUI instances (path + port). Provide at least as many instances as there are GPUs.
models_path (str | Path): Root folder for the models (checkpoints/, loras/, etc.).
gpu (GPUInfo | None): Manual injection for tests. Default: auto-detection via pynvml.
auto_start (bool): Start the ComfyUI subprocesses automatically (True by default).
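For illustration, a minimal sketch of pointing the orchestrator at ComfyUI processes that are already running, so the automatic subprocess launch is skipped (using auto_start=False this way is an assumption based on the parameter description, not a documented recipe):

import asyncio
from comfyapi import ComfyAPI, ComfyUIInstance

# Assumed scenario: the two ComfyUI copies are already started by an external
# supervisor, so the orchestrator only needs to reach them over HTTP.
instances = [
    ComfyUIInstance(path="/app/comfyui/0", port=3050),
    ComfyUIInstance(path="/app/comfyui/1", port=3051),
]

async def main():
    async with ComfyAPI(instances, models_path="/app/models", auto_start=False) as api:
        ...  # call api.execute_workflow(...) as in the Quickstart

asyncio.run(main())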

await api.execute_workflow(workflow, *, models, loras, params)

Executes the workflow on all GPUs and returns the paths of the generated images.

workflow (dict): ComfyUI workflow (API JSON format).
models (list[ModelRef]): Required models. Their presence in models_path is verified.
loras (list[LoRARef]): LoRAs. Downloaded automatically if missing.
params (dict[str, dict]): Workflow parameters (see the section below).

Returns: list[str], the absolute paths of the generated images on disk.

Params

Each params entry has the following structure:

{
    "type": "integer" | "string" | "image",
    "value": ...,
    "node_mappings": "API - NOM DU NODE",
    "gpu_scale": True  # optionnel
}
type: The parameter type.
value: The value to inject into the workflow.
node_mappings: Title of the target ComfyUI node (its _meta.title field).
gpu_scale: If True, the value is split evenly across the GPUs.

Behavior by type

"integer": Injects value into the node's inputs.value.
"string": Injects value into the node's inputs.value.
"image": Downloads the URL, copies the image into the input/ folder of each ComfyUI instance, and injects the file name into inputs.image.

gpu_scale: even distribution

When gpu_scale: True, the value is split across the GPUs with distribute():

batch_size=4, 3 GPUs → [2, 1, 1]   (total = 4)
batch_size=6, 3 GPUs → [2, 2, 2]   (total = 6)
batch_size=1, 3 GPUs → [1, 0, 0]   (total = 1)

Each GPU receives its own workflow with its exact share. No overproduction.
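For reference, a minimal sketch that reproduces the splits above (the real distribute() lives inside the library and its exact signature may differ):

def distribute(total: int, n_gpus: int) -> list[int]:
    # Split `total` into n_gpus integer shares that sum exactly to `total`,
    # giving the first GPUs one extra unit when the division is not even.
    base, remainder = divmod(total, n_gpus)
    return [base + (1 if i < remainder else 0) for i in range(n_gpus)]

assert distribute(4, 3) == [2, 1, 1]
assert distribute(6, 3) == [2, 2, 2]
assert distribute(1, 3) == [1, 0, 0]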

Models and LoRAs

models=[
    ModelRef(type="checkpoints", name="model.safetensors"),
    ModelRef(type="vae", name="vae.safetensors"),
]

Models must be pre-installed. If a model is missing, ModelNotFoundError is raised.

loras=[
    LoRARef(name="style.safetensors", url="https://...", strength=0.7),
]

LoRAs are downloaded automatically into models_path/loras/ if they are not already present.

Error handling

All exceptions inherit from ComfyAPIError:

ModelNotFoundError: A required model does not exist in models_path.
DownloadError: Downloading a LoRA or an image failed.
ComfyUIStartupError: The ComfyUI subprocess could not start.
ComfyUITimeoutError: ComfyUI is not ready after 120 s, or polling exceeds 600 s.
WorkflowError: ComfyUI rejected the workflow (HTTP status != 200).

Partial failure: if some GPUs fail but others succeed, the errors are logged and the partial results are returned. If all GPUs fail, the first exception is re-raised.
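A sketch of handling these around execute_workflow (it assumes the exceptions are re-exported from the top-level comfyapi package, which the "public exports" note in the Architecture section suggests):

import logging
from comfyapi import ComfyAPIError, ModelNotFoundError, DownloadError

logger = logging.getLogger(__name__)

async def run_safely(api, workflow, models, loras, params):
    try:
        return await api.execute_workflow(
            workflow=workflow, models=models, loras=loras, params=params
        )
    except ModelNotFoundError:
        logger.error("A required model is missing from models_path")
        raise
    except DownloadError:
        logger.error("A LoRA or input image could not be downloaded")
        raise
    except ComfyAPIError:
        # Catch-all for every other library error (startup, timeout, workflow).
        logger.exception("Workflow execution failed")
        raise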

Logging

The library uses logging (no print). It does not configure any handler; that is left to the caller:

import logging
logging.basicConfig(level=logging.INFO)

Levels used: INFO for milestones, DEBUG for details, WARNING for transient errors.
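To raise verbosity for this library alone, a sketch assuming its loggers live under the package name "comfyapi" (suggested by the NullHandler note in the Architecture section, but not stated explicitly):

import logging

logging.basicConfig(level=logging.WARNING)              # keep the app quiet
logging.getLogger("comfyapi").setLevel(logging.DEBUG)   # verbose library logs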

Architecture

comfyapi/
    __init__.py      # Public exports + NullHandler
    comfyapi.py      # Orchestrator (the ComfyAPI class)
    _types.py        # Dataclasses + exceptions
    _gpu.py          # GPU detection (pynvml)
    _client.py       # Async HTTP client (httpx) for a single ComfyUI instance
    _process.py      # ComfyUI subprocess launching
    _download.py     # Async downloads (LoRAs, images)
    _workflow.py     # Workflow JSON preparation (pure functions)

