A package to use the ComfyAPI
TechTrash ComfyAPI v2

Python library for orchestrating multiple ComfyUI instances in parallel across GPUs.

Installation

Prerequisites: Python 3.11+, pynvml, and a ComfyUI install (one copy per GPU).

pip install .

Quickstart

import asyncio
from comfyapi import ComfyAPI, ComfyUIInstance, ModelRef, LoRARef

instances = [
    ComfyUIInstance(path="/app/comfyui/0", port=3050),
    ComfyUIInstance(path="/app/comfyui/1", port=3051),
    ComfyUIInstance(path="/app/comfyui/2", port=3052)
]

async def main():
    async with ComfyAPI(instances, models_path="/app/models") as api:
        images = await api.execute_workflow(
            workflow={...},  # ComfyUI workflow exported as JSON (API format)
            models=[
                ModelRef(type="checkpoints", name="juggernaut_xl.safetensors"),
            ],
            loras=[
                LoRARef(
                    name="detail_enhancer.safetensors",
                    url="https://example.com/detail_enhancer.safetensors",
                    strength=0.8,
                ),
            ],
            params={
                "steps": {
                    "type": "integer",
                    "value": 20,
                    "node_mappings": "API - STEP",
                },
                "prompt": {
                    "type": "string",
                    "value": "A beautiful landscape",
                    "node_mappings": "API - PROMPT",
                },
                "batch_size": {
                    "type": "integer",
                    "value": 4,
                    "gpu_scale": True,
                    "node_mappings": "API - BATCH SIZE",
                },
                "image_input": {
                    "type": "image",
                    "value": "https://example.com/photo.jpg",
                    "node_mappings": "API - PICTURE",
                },
            },
        )
        print(images)
        # [
        #     "/app/comfyui/0/output/ComfyUI_00001_.png",
        #     "/app/comfyui/0/output/ComfyUI_00002_.png",
        #     "/app/comfyui/1/output/ComfyUI_00001_.png",
        #     "/app/comfyui/2/output/ComfyUI_00001_.png",
        # ]

asyncio.run(main())

API

ComfyAPI(instances, models_path, *, gpu=None, auto_start=True)

Main orchestrator. Use it as an async context manager (async with).

Parameter Type Description
instances list[ComfyUIInstance] ComfyUI instances (path + port). Provide at least as many instances as GPUs.
models_path str | Path Root directory for models (checkpoints/, loras/, etc.)
gpu GPUInfo | None Manual injection for tests. Default: auto-detection via pynvml.
auto_start bool Start the ComfyUI subprocesses automatically (True by default).

await api.execute_workflow(workflow, *, models, loras, params)

Runs the workflow on all GPUs and returns the paths of the generated images.

Parameter Type Description
workflow dict ComfyUI workflow (API JSON format)
models list[ModelRef] Required models. Verified to exist in models_path.
loras list[LoRARef] LoRAs. Downloaded automatically if absent.
params dict[str, dict] Workflow parameters (see the section below).

Returns: list[str], the absolute paths of the generated images on disk.

Params

Each entry in params has this structure:

{
    "type": "integer" | "string" | "image",
    "value": ...,
    "node_mappings": "API - NODE NAME",
    "gpu_scale": True  # optional
}
Field Description
type Parameter type.
value Value to inject into the workflow.
node_mappings Title of the target ComfyUI node (its _meta.title field).
gpu_scale If True, the value is distributed evenly across the GPUs.

Behavior by type

type Behavior
"integer" Injects value into the node's inputs.value.
"string" Injects value into the node's inputs.value.
"image" Downloads the URL, copies the image into the input/ directory of each ComfyUI instance, and injects the file name into inputs.image.

gpu_scale: even distribution

When gpu_scale: True, the value is split across the GPUs by distribute():

batch_size=4, 3 GPUs → [2, 1, 1]   (total = 4)
batch_size=6, 3 GPUs → [2, 2, 2]   (total = 6)
batch_size=1, 3 GPUs → [1, 0, 0]   (total = 1)

Each GPU receives its own workflow with its exact share. No overproduction.
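A minimal implementation consistent with the three examples above (the library's actual distribute() may differ in details):

```python
def distribute(total: int, n_gpus: int) -> list[int]:
    # Split `total` as evenly as possible; any remainder goes
    # to the first GPUs, one extra unit each.
    base, rem = divmod(total, n_gpus)
    return [base + (1 if i < rem else 0) for i in range(n_gpus)]

distribute(4, 3)  # [2, 1, 1]
distribute(6, 3)  # [2, 2, 2]
distribute(1, 3)  # [1, 0, 0]
```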

Models and LoRAs

models=[
    ModelRef(type="checkpoints", name="model.safetensors"),
    ModelRef(type="vae", name="vae.safetensors"),
]

Models must be pre-installed. If a model is missing, ModelNotFoundError is raised.

loras=[
    LoRARef(name="style.safetensors", url="https://...", strength=0.7),
]

LoRAs are downloaded automatically into models_path/loras/ if they are not already present.
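The download-if-absent check amounts to a path lookup; the helper names below are hypothetical, assuming only the models_path/loras/ layout stated above:

```python
from pathlib import Path

def lora_target(models_path: str, name: str) -> Path:
    # LoRAs live under models_path/loras/<name>.
    return Path(models_path) / "loras" / name

def needs_download(models_path: str, name: str) -> bool:
    # Only fetch the LoRA when the file is not already on disk.
    return not lora_target(models_path, name).exists()
```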

Error handling

All exceptions inherit from ComfyAPIError:

Exception When
ModelNotFoundError A required model does not exist in models_path.
DownloadError Downloading a LoRA or an image failed.
ComfyUIStartupError The ComfyUI subprocess could not start.
ComfyUITimeoutError ComfyUI is not ready after 120s, or polling exceeds 600s.
WorkflowError ComfyUI rejected the workflow (HTTP != 200).

Partial failure: if some GPUs fail while others succeed, the errors are logged and the partial results are returned. If all GPUs fail, the first exception is re-raised.
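The partial-failure policy can be sketched with asyncio.gather(return_exceptions=True). gather_partial is a hypothetical helper, not the library's internal name:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def gather_partial(coros):
    # Run all per-GPU executions; keep successes, log failures.
    results = await asyncio.gather(*coros, return_exceptions=True)
    ok = [r for r in results if not isinstance(r, BaseException)]
    errors = [r for r in results if isinstance(r, BaseException)]
    for e in errors:
        logger.warning("GPU task failed: %r", e)
    if errors and not ok:
        # All GPUs failed: re-raise the first exception.
        raise errors[0]
    return ok

async def demo():
    async def good():
        return ["img.png"]
    async def bad():
        raise RuntimeError("boom")
    return await gather_partial([good(), bad()])

results = asyncio.run(demo())  # [["img.png"]]: the failure is logged, not raised
```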

Logging

The library uses logging (no print). It configures no handlers; that is left to the caller:

import logging
logging.basicConfig(level=logging.INFO)

Levels used: INFO for milestones, DEBUG for details, WARNING for transient errors.
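Since the package installs a NullHandler (see the architecture section), nothing is emitted until the caller configures logging. Assuming the library's loggers live under a "comfyapi" namespace (an assumption; the logger name is not documented here), its verbosity can be raised without flooding the rest of the application:

```python
import logging

# Global INFO level for the whole application.
logging.basicConfig(level=logging.INFO)

# Hypothetical: enable DEBUG for the library's namespace alone.
logging.getLogger("comfyapi").setLevel(logging.DEBUG)
```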

Architecture

comfyapi/
    __init__.py      # Public exports + NullHandler
    comfyapi.py      # Orchestrator (ComfyAPI class)
    _types.py        # Dataclasses + exceptions
    _gpu.py          # GPU detection (pynvml)
    _client.py       # Async HTTP client (httpx) for a single ComfyUI instance
    _process.py      # ComfyUI subprocess launching
    _download.py     # Async downloads (LoRAs, images)
    _workflow.py     # Workflow JSON preparation (pure functions)
