
comfy-diffusion

PyPI version Python 3.12+ CI License: GPL-3.0

comfy-diffusion is a standalone Python package that exposes ComfyUI's inference engine as importable modules. It is not a server, node graph runner, web UI, MCP server, daemon, or binary app.

The package vendors ComfyUI at vendor/ComfyUI and makes its internal comfy.* modules available when runtime APIs need them. Application authors can install this package, import comfy_diffusion, and compose inference flows directly in Python.

Install

Use uv for development and dependency resolution:

uv sync --extra cpu --extra comfyui

For CUDA environments:

uv sync --extra cuda --extra comfyui

Useful extras:

| Extra | Includes | Use |
| --- | --- | --- |
| `cpu` | torch, torchvision | CPU-only development and CI |
| `cuda` | torch, torchvision via the configured PyTorch CUDA index | NVIDIA GPU inference |
| `comfyui` | ComfyUI runtime dependencies | Importing and running ComfyUI internals |
| `audio` | torchaudio | Audio helpers and pipelines |
| `video` | av, imageio, opencv-python | Video I/O helpers |
| `all` | CUDA, audio, video, and ComfyUI runtime dependencies | Full local runtime |

Python API

The public package root intentionally stays small:

from comfy_diffusion import check_runtime, vae_decode, vae_encode, apply_lora

Most APIs are imported from explicit submodules:

from comfy_diffusion.models import ModelManager
from comfy_diffusion.conditioning import encode_prompt
from comfy_diffusion.sampling import sample

Quick Start

Call check_runtime() before loading models or sampling. On first runtime use, comfy-diffusion can automatically download the pinned ComfyUI release if the vendored runtime is missing. Expected failures are returned as error dicts rather than raised, so inspect the result of check_runtime() for runtime bootstrap problems.
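Because expected failures come back as dicts rather than exceptions, application code often wraps results in a small unwrap helper. The helper below is a sketch of that convention, not part of the package's API:

```python
def unwrap(result):
    """Raise if a comfy-diffusion-style result is an error dict, else pass it through."""
    if isinstance(result, dict) and "error" in result:
        raise RuntimeError(result["error"])
    return result
```

Calling `unwrap(check_runtime())` then turns bootstrap problems into ordinary exceptions at the top of a script.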

Example:

import torch

from comfy_diffusion import apply_lora, check_runtime, vae_decode
from comfy_diffusion.conditioning import encode_prompt
from comfy_diffusion.models import ModelManager
from comfy_diffusion.sampling import sample

runtime = check_runtime()
if "error" in runtime:
    raise RuntimeError(runtime["error"])

manager = ModelManager(models_dir="/path/to/models")
checkpoint = manager.load_checkpoint("model.safetensors")

model, clip = apply_lora(
    checkpoint.model,
    checkpoint.clip,
    "style.safetensors",
    0.8,
    0.8,
)

positive = encode_prompt(clip, "a portrait, studio lighting")
negative = encode_prompt(clip, "blurry, low quality")

# Start from an empty latent: 1 image, 4 channels, 64x64 in latent space.
latent = {"samples": torch.zeros(1, 4, 64, 64)}
denoised = sample(
    model,
    positive,
    negative,
    latent,
    steps=20,
    cfg=7.0,
    sampler_name="euler",
    scheduler="normal",
    seed=42,
)
image = vae_decode(checkpoint.vae, denoised)
image.save("output.png")
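The zero latent above lives in the VAE's latent space, not pixel space. For Stable Diffusion-family VAEs the spatial downsample factor is 8, so a 1×4×64×64 latent decodes to a 512×512 image. The helper below is an illustrative sketch of that arithmetic (the channel count and factor depend on the model family):

```python
def latent_shape(width: int, height: int, batch: int = 1,
                 channels: int = 4, downsample: int = 8) -> tuple:
    """Latent tensor shape for a target pixel size (SD1.x-style VAE assumed)."""
    if width % downsample or height % downsample:
        raise ValueError("width and height must be multiples of the downsample factor")
    return (batch, channels, height // downsample, width // downsample)

# A 512x512 image corresponds to the 1x4x64x64 latent used above.
print(latent_shape(512, 512))  # (1, 4, 64, 64)
```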

comfy_diffusion.pipelines remains available as an optional helper namespace for explicit ready-made flows, but the main interface is the modular Python API above.

Experimental Raw Node Access

Advanced implementers can inspect and execute raw ComfyUI nodes that are not covered by the curated wrapper modules:

from comfy_diffusion.nodes import get_node_info, list_nodes, run_node

nodes = list_nodes()
print(nodes["VAEDecode"])

info = get_node_info("KSampler")
result = run_node("SomeUtilityNode", value=123)

This is an experimental escape hatch. Prefer the explicit models, conditioning, sampling, vae, image, mask, audio, video, and pipelines modules for stable application code.

By default, raw node discovery loads ComfyUI core nodes and built-in extra nodes only. API nodes are opt-in because they may require provider credentials, network access, and additional ComfyUI API-node runtime configuration:

api_nodes = list_nodes(include_api=True)

ComfyUI API nodes use the Comfy.org proxy (https://api.comfy.org) and Comfy.org credentials. They do not use direct provider keys such as OPENAI_API_KEY, KLING_API_KEY, or LUMA_API_KEY.

from comfy_diffusion.nodes import ApiNodeAuth, run_node

result = run_node(
    "OpenAIChatNode",
    include_api=True,
    api_auth=ApiNodeAuth(api_key="your-comfy-org-api-key"),
    prompt="Describe this package in one sentence.",
    model="gpt-4.1",
)

For headless apps, environment variables are also supported:

export COMFY_ORG_API_KEY=your-comfy-org-api-key
export COMFY_API_BASE=https://api.comfy.org
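A headless app can resolve these settings with plain os.environ lookups. The snippet below is a sketch of that pattern; the variable names come from the docs above, but the defaulting logic is illustrative rather than the package's own:

```python
import os


def resolve_api_config() -> dict:
    """Read Comfy.org API settings from the environment, defaulting the base URL."""
    api_key = os.environ.get("COMFY_ORG_API_KEY")
    if not api_key:
        raise RuntimeError("COMFY_ORG_API_KEY is not set")
    return {
        "api_key": api_key,
        "base_url": os.environ.get("COMFY_API_BASE", "https://api.comfy.org"),
    }
```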

Browser login, OAuth token refresh, and ComfyUI Cloud session management are intentionally out of scope for this package. Use a Comfy.org API key for Python execution. The CLI can discover API nodes with nodes list --include-api, but API node execution is Python-only.

External custom nodes are trusted Python code and are not sandboxed. They are never loaded by scanning ComfyUI's default custom_nodes folder; pass explicit paths instead:

nodes = list_nodes(custom_node_paths=["~/.cache/comfy-diffusion/custom_nodes/example-node"])
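Since the paths are passed explicitly, shell-style ~ may need expanding before they reach Python APIs. A defensive sketch (this helper is illustrative, not part of the package) that also filters out directories that do not exist:

```python
from pathlib import Path


def expand_node_paths(paths):
    """Expand ~ in each path and keep only directories that actually exist."""
    expanded = [Path(p).expanduser() for p in paths]
    return [p for p in expanded if p.is_dir()]
```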

The CLI can install trusted custom node repositories into the comfy-diffusion cache:

uv run comfy-diffusion nodes install https://github.com/example/example-node.git
uv run comfy-diffusion nodes install https://github.com/example/example-node.git --ref v1.2.3
uv run comfy-diffusion nodes install https://github.com/example/example-node.git --install-deps
uv run comfy-diffusion nodes installed
uv run comfy-diffusion nodes list --custom-node ~/.cache/comfy-diffusion/custom_nodes/example-node

If a custom node repository ships a requirements.txt, its dependencies are installed only when --install-deps is passed; otherwise the CLI prints the install command to run manually.

CLI

The first-party CLI is named comfy-diffusion and provides operational package tools only.

uv run comfy-diffusion runtime check --json
uv run comfy-diffusion runtime paths
uv run comfy-diffusion models list --models-dir /path/to/models
uv run comfy-diffusion models download --manifest models.json --models-dir /path/to/models
uv run comfy-diffusion nodes list --json
uv run comfy-diffusion nodes show VAEDecode --json
uv run comfy-diffusion nodes list --include-api
uv run comfy-diffusion nodes install https://github.com/example/example-node.git
uv run comfy-diffusion nodes installed --json

Model manifest shape:

{
  "models": [
    {
      "type": "hf",
      "repo_id": "org/model",
      "filename": "model.safetensors",
      "dest": "checkpoints",
      "sha256": null
    },
    {
      "type": "url",
      "url": "https://example.com/model.safetensors",
      "dest": "unet/model.safetensors"
    },
    {
      "type": "civitai",
      "model_id": 12345,
      "version_id": 67890,
      "dest": "loras"
    }
  ]
}
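A manifest with this shape can be sanity-checked before handing it to models download. The validator below is a sketch based only on the fields shown above; the required-field sets are inferred from the examples (e.g. version_id on civitai entries is treated as optional), not taken from the package's own schema:

```python
# Required fields per entry type, inferred from the manifest examples above.
REQUIRED = {
    "hf": {"repo_id", "filename", "dest"},
    "url": {"url", "dest"},
    "civitai": {"model_id", "dest"},
}


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest looks well-formed."""
    problems = []
    for i, entry in enumerate(manifest.get("models", [])):
        kind = entry.get("type")
        if kind not in REQUIRED:
            problems.append(f"models[{i}]: unknown type {kind!r}")
            continue
        missing = REQUIRED[kind] - entry.keys()
        if missing:
            problems.append(f"models[{i}]: missing {sorted(missing)}")
    return problems
```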

The CLI does not start servers, manage services, expose MCP tools, run a web UI, queue background jobs, or provide Parallax commands.

Development

uv sync --extra cpu --extra comfyui
uv run pytest
uv run ruff check .

ComfyUI is pinned as a git submodule at vendor/ComfyUI. Do not edit vendored ComfyUI code directly.
