
A package to use the ComfyAPI


TechTrash ComfyAPI v2

Python library for orchestrating multiple ComfyUI instances in parallel across GPUs.

Installation

Prerequisites: Python 3.11+, pynvml, and a ComfyUI install (one copy per GPU).

pip install .

Quickstart

import asyncio
from comfyapi import ComfyAPI, ComfyUIInstance, ModelRef, LoRARef

instances = [
    ComfyUIInstance(path="/app/comfyui/0", port=3050),
    ComfyUIInstance(path="/app/comfyui/1", port=3051),
    ComfyUIInstance(path="/app/comfyui/2", port=3052)
]

async def main():
    async with ComfyAPI(instances, models_path="/app/models") as api:
        images = await api.execute_workflow(
            workflow={...},  # ComfyUI workflow exported as JSON (API format)
            models=[
                ModelRef(type="checkpoints", name="juggernaut_xl.safetensors"),
            ],
            loras=[
                LoRARef(
                    name="detail_enhancer.safetensors",
                    url="https://example.com/detail_enhancer.safetensors",
                    strength=0.8,
                ),
            ],
            params={
                "steps": {
                    "type": "integer",
                    "value": 20,
                    "node_mappings": "API - STEP",
                },
                "prompt": {
                    "type": "string",
                    "value": "A beautiful landscape",
                    "node_mappings": "API - PROMPT",
                },
                "batch_size": {
                    "type": "integer",
                    "value": 4,
                    "gpu_scale": True,
                    "node_mappings": "API - BATCH SIZE",
                },
                "image_input": {
                    "type": "image",
                    "value": "https://example.com/photo.jpg",
                    "node_mappings": "API - PICTURE",
                },
            },
        )
        print(images)
        # [
        #     "/app/comfyui/0/output/ComfyUI_00001_.png",
        #     "/app/comfyui/0/output/ComfyUI_00002_.png",
        #     "/app/comfyui/1/output/ComfyUI_00001_.png",
        #     "/app/comfyui/2/output/ComfyUI_00001_.png",
        # ]

asyncio.run(main())

API

ComfyAPI(instances, models_path, *, gpu=None, auto_start=True)

Main orchestrator. Use it as an async context manager (async with).

Parameter Type Description
instances list[ComfyUIInstance] ComfyUI instances (path + port). Provide at least as many instances as there are GPUs.
models_path str | Path Root directory for models (checkpoints/, loras/, etc.)
gpu GPUInfo | None Manual injection for tests. Default: auto-detected via pynvml.
auto_start bool Start the ComfyUI subprocesses automatically (True by default).

await api.execute_workflow(workflow, *, models, loras, params)

Runs the workflow on every GPU and returns the paths of the generated images.

Parameter Type Description
workflow dict ComfyUI workflow (API JSON format)
models list[ModelRef] Required models. Verified to exist in models_path.
loras list[LoRARef] LoRAs. Downloaded automatically if missing.
params dict[str, dict] Workflow parameters (see the section below).

Returns: list[str], the absolute paths of the generated images on disk.

Params

Each entry in params has this structure:

{
    "type": "integer" | "string" | "image",
    "value": ...,
    "node_mappings": "API - NOM DU NODE",
    "gpu_scale": True  # optional
}
Field Description
type Parameter type.
value Value to inject into the workflow.
node_mappings Title of the target ComfyUI node (its _meta.title field).
gpu_scale If True, the value is split evenly across GPUs.

Per-type behavior

type Behavior
"integer" Injects value into the node's inputs.value.
"string" Injects value into the node's inputs.value.
"image" Downloads the URL, copies the image into the input/ directory of each ComfyUI instance, and injects the filename into inputs.image.
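The injection step can be sketched as a pure function that matches nodes by their _meta.title and writes the value in place. The function name and the minimal workflow below are illustrative, not the library's actual internals:

```python
import copy

def inject_param(workflow: dict, title: str, value, field: str = "value") -> dict:
    """Return a copy of the workflow with `value` written into the inputs
    of every node whose _meta.title matches `title` (illustrative sketch)."""
    wf = copy.deepcopy(workflow)
    for node in wf.values():
        if node.get("_meta", {}).get("title") == title:
            node["inputs"][field] = value
    return wf

# A minimal API-format workflow with one titled node.
workflow = {
    "3": {
        "class_type": "PrimitiveInt",
        "_meta": {"title": "API - STEP"},
        "inputs": {"value": 0},
    },
}
print(inject_param(workflow, "API - STEP", 20)["3"]["inputs"]["value"])  # 20
```

For "image" parameters the same lookup applies, but the injected value is the downloaded filename and the target field is inputs.image.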

gpu_scale: even distribution

When gpu_scale: True, the value is split across GPUs with distribute():

batch_size=4, 3 GPUs → [2, 1, 1]   (total = 4)
batch_size=6, 3 GPUs → [2, 2, 2]   (total = 6)
batch_size=1, 3 GPUs → [1, 0, 0]   (total = 1)

Each GPU receives its own workflow with its exact share. No overproduction.
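The examples above follow a standard even-split pattern, which can be sketched like this (a plausible reconstruction of distribute(), not necessarily the library's exact code):

```python
def distribute(total: int, n_gpus: int) -> list[int]:
    """Split `total` across `n_gpus`: every GPU gets total // n_gpus,
    and the first (total % n_gpus) GPUs each get one extra unit,
    so the parts always sum back to `total`."""
    base, rem = divmod(total, n_gpus)
    return [base + (1 if i < rem else 0) for i in range(n_gpus)]

print(distribute(4, 3))  # [2, 1, 1]
print(distribute(6, 3))  # [2, 2, 2]
print(distribute(1, 3))  # [1, 0, 0]
```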

Models and LoRAs

models=[
    ModelRef(type="checkpoints", name="model.safetensors"),
    ModelRef(type="vae", name="vae.safetensors"),
]

Models must be pre-installed. If a model is missing, ModelNotFoundError is raised.

loras=[
    LoRARef(name="style.safetensors", url="https://...", strength=0.7),
]

LoRAs are downloaded automatically into models_path/loras/ if they are not already present.
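The download-if-absent behavior amounts to something like the sketch below. The ensure_lora helper and the injected download callable are hypothetical stand-ins for the library's internal async downloader:

```python
from pathlib import Path

async def ensure_lora(models_path: Path, name: str, url: str, download) -> Path:
    """Download a LoRA into models_path/loras/ only if it is not already
    on disk. `download(url, dest)` is a stand-in for the real async
    downloader, which raises DownloadError on failure."""
    dest = models_path / "loras" / name
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        await download(url, dest)  # skipped entirely on a cache hit
    return dest
```

The second call with the same name is a no-op, so repeated execute_workflow calls do not re-download LoRAs.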

Error handling

All exceptions inherit from ComfyAPIError:

Exception When
ModelNotFoundError A required model does not exist in models_path.
DownloadError Downloading a LoRA or an image failed.
ComfyUIStartupError The ComfyUI subprocess failed to start.
ComfyUITimeoutError ComfyUI is not ready after 120 s, or polling exceeds 600 s.
WorkflowError ComfyUI rejected the workflow (HTTP != 200).

Partial failure: if some GPUs fail while others succeed, the errors are logged and the partial results are returned. If every GPU fails, the first exception is re-raised.
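That partial-failure policy can be sketched with asyncio.gather; run_all is a hypothetical helper illustrating the behavior, not part of the public API:

```python
import asyncio
import logging

logger = logging.getLogger("comfyapi")

async def run_all(tasks):
    """Run one coroutine per GPU: log each failure, return only the
    successful results, and re-raise the first exception if nothing
    succeeded (partial-failure policy sketch)."""
    results = await asyncio.gather(*tasks, return_exceptions=True)
    successes = [r for r in results if not isinstance(r, BaseException)]
    errors = [r for r in results if isinstance(r, BaseException)]
    for err in errors:
        logger.warning("GPU task failed: %r", err)
    if errors and not successes:
        raise errors[0]
    return successes
```

With return_exceptions=True, gather waits for every GPU instead of cancelling the rest on the first error, which is what makes partial results possible.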

Logging

The library uses logging (no print calls). It installs no handlers; configuring them is up to the caller:

import logging
logging.basicConfig(level=logging.INFO)

Levels used: INFO for milestones, DEBUG for details, WARNING for transient errors.

Architecture

comfyapi/
    __init__.py      # Public exports + NullHandler
    comfyapi.py      # Orchestrator (ComfyAPI class)
    _types.py        # Dataclasses + exceptions
    _gpu.py          # GPU detection (pynvml)
    _client.py       # Async HTTP client (httpx) for a single ComfyUI instance
    _process.py      # ComfyUI subprocess launching
    _download.py     # Async downloads (LoRAs, images)
    _workflow.py     # Workflow JSON preparation (pure functions)
