
AI Model Dynamic Offloader for ComfyUI

Project description

AI Model Dynamic Offloader

This project is a pytorch VRAM allocator that implements on-demand offloading of model weights when the primary pytorch VRAM allocator comes under pressure.

Support:

  • Nvidia GPUs only
  • PyTorch 2.6+
  • CUDA 12.8+
  • Windows 11+ / Linux (as per Python manylinux support)

How it works:

  • The pytorch application creates a Virtual Base Address Register (VBAR) for a model. Creating a VBAR doesn't cost any VRAM, only GPU virtual address space (which is pretty much free).
  • The pytorch application allocates tensors for model weights within the VBAR. These tensors are initially unallocated and will segfault if touched.
  • The pytorch application faults in the tensors using the fault() API at the time the tensor is needed. This is where VRAM actually gets allocated.
If the fault() is successful (sufficient VRAM for this tensor):
  1. If the signature returned by fault() has changed or is unknown:
    • The application uses tensor::_copy() to populate the weight data on the GPU.
    • The application saves the returned signature against this weight for future comparison.
  2. The layer uses the weight tensor.
  3. The application calls unpin() on the tensor to allow it to be freed under pressure later if needed.
If the fault() is unsuccessful (offloaded weight):
  1. The application allocates a temporary regular GPU tensor.
  2. Uses _copy() to populate the weight data on the GPU.
  3. The layer uses the temporary as the weight.
  4. Pytorch garbage collects the temp when the layer is finished.

See examples/example.py, and the sketch below.
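
A minimal sketch of this per-layer flow, assuming a `comfy_aimdo` Python module exposing fault() and unpin() on VBAR-backed tensors, and assuming fault() returns a signature on success and None when the weight is offloaded. None of these names or signatures are confirmed by this description; examples/example.py shows the real API.

```python
import torch
import comfy_aimdo as aimdo  # assumed import name

signatures = {}  # last fault() signature seen for each weight

def load_weight(name, vbar_tensor, cpu_weight):
    """Return a GPU tensor holding this weight, faulting it in or falling back."""
    sig = aimdo.fault(vbar_tensor)            # assumed: signature on success, None if offloaded
    if sig is not None:                       # success: VRAM now backs this VBAR range
        if signatures.get(name) != sig:       # signature changed or unknown -> repopulate
            vbar_tensor.copy_(cpu_weight)     # assuming "_copy" is an in-place copy
            signatures[name] = sig
        return vbar_tensor
    # fault failed (weight is offloaded): use a throwaway regular GPU tensor instead
    tmp = torch.empty_like(cpu_weight, device="cuda")
    tmp.copy_(cpu_weight)
    return tmp                                # pytorch garbage-collects this after the layer

def run_layer(layer_fn, name, vbar_tensor, cpu_weight, x):
    w = load_weight(name, vbar_tensor, cpu_weight)
    y = layer_fn(x, w)
    if w is vbar_tensor:
        aimdo.unpin(vbar_tensor)              # allow eviction under pressure later
    return y
```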


Priorities:

  • The most recently created VBARs have the highest priority, and lower addresses within a VBAR take priority over higher addresses.
  • Applications should order their tensor allocations in the VBAR in load-priority order, with the lowest addresses for the highest-priority weights.
  • Calling fault() on a weight that is higher priority than other weights will cause those lower-priority weights to be freed to make space.
  • Having a weight evicted sets that VBAR's watermark to that weight's level. Any weights in the same VBAR above the watermark automatically fail the fault() API. This avoids constantly faulting in all weights each model iteration while allowing the application to just blindly call fault() every layer and check the results. There is no need for the application to manage any VRAM quotas or watermarks.
  • Existing VBARs can be pushed to top priority with the prioritize() API. This allows reuse of an already loaded or partially loaded model (e.g. using the same model twice in a complex workflow). Calling prioritize() resets that model's offload watermark to no offloading, giving its weights priority over any other currently loaded models (see the sketch below).
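
A hedged sketch of these priority rules. Only prioritize() is named above; create_vbar() and alloc() are illustrative placeholders for however the real API reserves a VBAR and places tensors within it.

```python
import comfy_aimdo as aimdo  # assumed import name

def load_model(weights, total_bytes):
    """weights: list of (name, cpu_tensor) ordered highest priority first."""
    vbar = aimdo.create_vbar(total_bytes)     # hypothetical: reserves address space, no VRAM yet
    gpu_weights = {}
    # Allocate in load-priority order so the highest-priority weights land at the
    # lowest VBAR addresses and are therefore the last to be evicted.
    for name, cpu_w in weights:
        gpu_weights[name] = vbar.alloc(cpu_w.shape, cpu_w.dtype)  # hypothetical allocator call
    return vbar, gpu_weights

def reuse_model(vbar):
    # Re-running an already (or partially) loaded model later in the workflow:
    # push its VBAR back to top priority and reset its offload watermark so
    # fault() can succeed again for weights that previously auto-failed.
    aimdo.prioritize(vbar)
```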

Backend:

  • VBAR allocation is done with cuMemAddressReserve(), faulting with cuMemCreate() and cuMemMap(), and all frees are done with the appropriate converse APIs (a rough driver-API sketch follows this list).
  • For consistency with VBAR memory management, the main pytorch allocator plugin is also implemented with the cuMemAddressReserve -> cuMemCreate -> cuMemMap sequence. This also behaves much better on Windows systems with System Memory Fallback.
  • This allocator is incompatible with the pytorch cudaMallocAsync and expandable-segments backends (the plugin interface does not exist for these backends as of this writing).
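
The reserve -> create -> map sequence itself is standard CUDA virtual memory management. A rough, standalone illustration using the cuda-python driver bindings (a sketch of the driver calls in general, not this project's internal code):

```python
from cuda import cuda  # pip install cuda-python

def check(result):
    err, *rest = result
    assert err == cuda.CUresult.CUDA_SUCCESS, err
    return rest[0] if len(rest) == 1 else rest

check(cuda.cuInit(0))
dev = check(cuda.cuDeviceGet(0))
check(cuda.cuCtxSetCurrent(check(cuda.cuDevicePrimaryCtxRetain(dev))))

prop = cuda.CUmemAllocationProp()
prop.type = cuda.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
prop.location.type = cuda.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
prop.location.id = 0
gran = check(cuda.cuMemGetAllocationGranularity(
    prop, cuda.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM))
size = ((1 << 20) + gran - 1) // gran * gran          # round request up to granularity

# 1. Reserve virtual address space (the "VBAR"): costs no VRAM.
dptr = check(cuda.cuMemAddressReserve(size, 0, 0, 0))
# 2. Create a physical allocation and map it into the reservation (the "fault").
handle = check(cuda.cuMemCreate(size, prop, 0))
check(cuda.cuMemMap(dptr, size, 0, handle, 0))
desc = cuda.CUmemAccessDesc()
desc.location.type = cuda.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
desc.location.id = 0
desc.flags = cuda.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE
check(cuda.cuMemSetAccess(dptr, size, [desc], 1))
# 3. The converse APIs free everything again (the eviction path).
check(cuda.cuMemUnmap(dptr, size))
check(cuda.cuMemRelease(handle))
check(cuda.cuMemAddressFree(dptr, size))
```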

Caveats:

  • There is no reliable way for this allocator to distinguish high usage from bad fragmentation in the pytorch caching allocator. Because we always return success to the caching allocator, it experiences no pressure while weights are being offloaded, which means it can run in an extremely fragmented state. The assumption is that model weight access patterns are reasonably regular across blocks or iterations, so the caching allocator settles on a good set of sizes to cache. In general, though, you should completely flush the pytorch caching allocator before each new model run, which prevents completely unused reservations from taking priority over the next model's weights (see the snippet below).
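
For instance, with stock pytorch the flush between model runs amounts to the following (standard pytorch calls, nothing specific to this project):

```python
import gc
import torch

# Drop Python references to the previous model's intermediate tensors first,
# then return all cached-but-unused blocks so the next model's weights win.
gc.collect()
torch.cuda.empty_cache()
```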



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


comfy_aimdo-0.1.3-py3-none-any.whl (18.4 kB)

Uploaded: Python 3

comfy_aimdo-0.1.3-cp39-abi3-win_amd64.whl (106.8 kB)

Uploaded: CPython 3.9+, Windows x86-64

comfy_aimdo-0.1.3-cp39-abi3-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl (39.1 kB)

Uploaded: CPython 3.9+, manylinux: glibc 2.17+ x86-64, manylinux: glibc 2.5+ x86-64

File details

Details for the file comfy_aimdo-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: comfy_aimdo-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 18.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for comfy_aimdo-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 0fd789dba8f2cbf1ccda070ee7d007f16db6470872d26369e763a76ae92df2df
MD5 881c16020fa4e1cd9071ba42ac83bf6d
BLAKE2b-256 67bafc6b46b11f713ba82a0abd5e371497463d435d85737e6638380feba736cd


Provenance

The following attestation bundles were made for comfy_aimdo-0.1.3-py3-none-any.whl:

Publisher: build-wheels.yml on Comfy-Org/comfy-aimdo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file comfy_aimdo-0.1.3-cp39-abi3-win_amd64.whl.

File metadata

  • Download URL: comfy_aimdo-0.1.3-cp39-abi3-win_amd64.whl
  • Upload date:
  • Size: 106.8 kB
  • Tags: CPython 3.9+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for comfy_aimdo-0.1.3-cp39-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 d6c2f05aa37e3155702b9b6a5fc73473d366b1ebdf41595c0327447d0e833431
MD5 c794c820b7448660769ea07bc0f5e73b
BLAKE2b-256 ef9ea094e59160caea86609ee8b347e19b84728c8e208739f8d8085e220a718b


Provenance

The following attestation bundles were made for comfy_aimdo-0.1.3-cp39-abi3-win_amd64.whl:

Publisher: build-wheels.yml on Comfy-Org/comfy-aimdo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file comfy_aimdo-0.1.3-cp39-abi3-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl.

File hashes

Hashes for comfy_aimdo-0.1.3-cp39-abi3-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl
Algorithm Hash digest
SHA256 f75453cd628632c25be9c5f13cc4462bfb743a201d3fef7f5f85d099b017909f
MD5 ab784165856b4814fe42384df304f63c
BLAKE2b-256 a1dea284e1191fac6f4c9e317da700d3e5d31de8011896c74f30fc30a3e6f948


Provenance

The following attestation bundles were made for comfy_aimdo-0.1.3-cp39-abi3-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl:

Publisher: build-wheels.yml on Comfy-Org/comfy-aimdo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
