AI Model Dynamic Offloader for ComfyUI

This project is a PyTorch VRAM allocator that implements on-demand offloading of model weights when the primary PyTorch VRAM allocator comes under pressure.

Support:

  • NVIDIA GPUs only
  • PyTorch 2.8+
  • CUDA 12.8+
  • Windows 11+ / Linux as per Python manylinux support

How it works:

  • The PyTorch application creates a Virtual Base Address Register (VBAR) for a model. Creating a VBAR costs no VRAM, only GPU virtual address space (which is practically free).
  • The application allocates tensors for model weights within the VBAR. These tensors are initially unallocated and will segfault if touched.
  • The application faults in a tensor with the fault() API at the moment the tensor is needed. This is where VRAM actually gets allocated.
If the fault() is successful (sufficient VRAM for this tensor):
  1. If the signature returned by fault() is changed or unknown:
    • The application uses tensor::_copy() to populate the weight data on the GPU.
    • The application saves the returned signature against this weight for future comparison.
  2. The layer uses the weight tensor.
  3. The application calls unpin() on the tensor so it can be freed later under pressure if needed.
If the fault() is unsuccessful (offloaded weight):
  1. The application allocates a temporary regular GPU tensor.
  2. It uses _copy to populate the weight data on the GPU.
  3. The layer uses the temporary tensor as the weight.
  4. PyTorch garbage-collects the temporary when the layer is finished.

See examples/example.py.
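The per-layer flow above can be sketched as a small simulation. Note that every name here (MockVbar, run_layer, the signature objects) is a hypothetical stand-in for the real comfy_aimdo API, not its actual interface; it only models the resident/offloaded decision logic:

```python
# Illustrative simulation of the per-layer flow above. All names are
# hypothetical stand-ins, not the real comfy_aimdo API.

class MockVbar:
    """Models a VBAR whose fault() succeeds while simulated VRAM remains."""

    def __init__(self, vram_budget):
        self.vram_free = vram_budget
        self.signatures = {}  # weight name -> last signature seen by the app

    def fault(self, name, nbytes):
        """Try to back a weight's address range with real VRAM.

        Returns (success, signature). A changed or unknown signature means
        the backing memory is fresh and the weight data must be re-copied.
        """
        if nbytes > self.vram_free:
            return False, None  # offloaded: caller falls back to a temp tensor
        self.vram_free -= nbytes
        return True, object()   # fresh signature for newly backed memory


def run_layer(vbar, name, nbytes):
    """One layer's weight handling, following the two paths described above."""
    ok, signature = vbar.fault(name, nbytes)
    if ok:
        if vbar.signatures.get(name) is not signature:
            # Backing memory is new: populate it (tensor::_copy() in the
            # real flow) and save the signature for future comparison.
            vbar.signatures[name] = signature
        # ... the layer computes with the VBAR-resident weight ...
        # then unpin() lets it be freed under pressure later.
        return "resident"
    # Offloaded path: allocate a temporary regular GPU tensor, _copy the
    # weight data in, run the layer, and let garbage collection reclaim it.
    return "offloaded"


vbar = MockVbar(vram_budget=100)
print(run_layer(vbar, "w0", 60))  # fits: "resident"
print(run_layer(vbar, "w1", 60))  # only 40 units left: "offloaded"
```

The point of the structure is that the application never branches on VRAM state itself: it calls fault() unconditionally and lets the result pick the path.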


Priorities:

  • The most recent VBARs have the highest priority, and lower addresses within a VBAR take priority over higher addresses.
  • Applications should order their tensor allocations in the VBAR in load-priority order, with the lowest addresses holding the highest-priority weights.
  • Calling fault() on a weight that is higher priority than currently resident weights causes those lower-priority weights to be freed to make space.
  • Having a weight evicted sets that VBAR's watermark to that weight's level. Any weights in the same VBAR above the watermark automatically fail the fault() API. This avoids constantly faulting in all weights on each model iteration, while letting the application blindly call fault() for every layer and check the result. The application never needs to manage VRAM quotas or watermarks itself.
  • Existing VBARs can be pushed to top priority with the prioritize() API. This allows reuse of an already loaded or partially loaded model (e.g. using the same model twice in a complex workflow). Calling prioritize() resets that model's offload watermark to no offloading, giving its weights priority over any other currently loaded models.
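The watermark policy above can be modelled in a few lines. The class and field names here are invented for illustration; this is a toy model of the policy described in the bullets, not the real allocator:

```python
# Toy model of the per-VBAR offload watermark described above.
# Names are illustrative; this models the policy, not the real allocator.

class WatermarkVbar:
    def __init__(self):
        self.watermark = float("inf")  # no offloading initially

    def evict(self, addr):
        """Evicting a weight lowers the watermark to that weight's address."""
        self.watermark = min(self.watermark, addr)

    def fault(self, addr):
        """Weights at or above the watermark fail fault() automatically, so
        the app can blindly call fault() each layer without re-faulting the
        whole offloaded tail every iteration."""
        return addr < self.watermark

    def prioritize(self):
        """Pushing the VBAR back to top priority resets the watermark."""
        self.watermark = float("inf")


v = WatermarkVbar()
assert v.fault(0x3000)      # nothing evicted yet: everything faults in
v.evict(0x2000)             # pressure evicts the weight at address 0x2000
assert v.fault(0x1000)      # lower address = higher priority: still resident
assert not v.fault(0x3000)  # above the watermark: fails without trying
v.prioritize()
assert v.fault(0x3000)      # watermark reset: full residency again
```

This is why load-priority ordering by address matters: one eviction cleanly partitions the VBAR into a resident low-address region and an offloaded high-address tail.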

Backend:

  • VBAR allocation is done with cuMemAddressReserve(), faulting with cuMemCreate() and cuMemMap(), and all frees with the corresponding converse APIs.
  • For consistency with VBAR memory management, the main PyTorch allocator plugin is also implemented with cuMemAddressReserve -> cuMemCreate -> cuMemMap. This also behaves much better on Windows systems with system-memory fallback.

Caveats:

  • This allocator has no reliable way to distinguish high usage from bad fragmentation in the PyTorch caching allocator. Because we always return success to the caching allocator, it experiences no pressure while weights are being offloaded, which means it can run in an extremely fragmented state. The assumption is that model weight access patterns are reasonably regular across blocks or iterations, so the caching allocator settles on a good set of cached sizes. In general, though, you should completely flush the PyTorch caching allocator before each new model run; this prevents entirely unused reservations from taking priority over the next model's weights.
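The flush-before-each-model recommendation can be illustrated with a toy cache. The class, numbers, and method names here are invented for illustration (in a real PyTorch application, torch.cuda.empty_cache() plays the role of flush()):

```python
# Toy illustration of the caveat above: a caching allocator that retains
# freed blocks can starve the next model's weights unless it is flushed.
# All names and sizes are invented for illustration.

class ToyCachingAllocator:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0
        self.cached = 0   # freed blocks retained for later reuse

    def alloc(self, n):
        if self.in_use + self.cached + n > self.capacity:
            return False  # in the real system this pressure offloads weights
        self.in_use += n
        return True

    def free(self, n):
        # Freed blocks go back to the cache, not to the device.
        self.in_use -= n
        self.cached += n

    def flush(self):
        # torch.cuda.empty_cache() plays this role in a real PyTorch app.
        self.cached = 0


a = ToyCachingAllocator(capacity=100)
a.alloc(80)         # first model's run fills most of the device
a.free(80)          # model finishes, but 80 units stay cached
print(a.alloc(50))  # next model: unused reservations block it -> False
a.flush()           # flush before the new model run
print(a.alloc(50))  # -> True
```

Without the flush, the cached-but-unused 80 units would force the second model's weights to offload even though the device is effectively empty.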


Download files


Source Distributions

No source distribution files are available for this release.

Built Distributions


  • comfy_aimdo-0.4.1-py3-none-any.whl (21.8 kB): Python 3
  • comfy_aimdo-0.4.1-cp39-abi3-win_arm64.whl (220.1 kB): CPython 3.9+, Windows ARM64
  • comfy_aimdo-0.4.1-cp39-abi3-win_amd64.whl (249.2 kB): CPython 3.9+, Windows x86-64
  • comfy_aimdo-0.4.1-cp39-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (534.8 kB): CPython 3.9+, manylinux glibc 2.17+ ARM64
  • comfy_aimdo-0.4.1-cp39-abi3-manylinux2010_x86_64.manylinux2014_x86_64.manylinux_2_12_x86_64.manylinux_2_17_x86_64.whl (309.9 kB): CPython 3.9+, manylinux glibc 2.12+ and 2.17+ x86-64

File details

All files in this release were uploaded using Trusted Publishing (twine/6.1.0, CPython/3.13.12). Attestation bundles were published for each file by build-wheels.yml on Comfy-Org/comfy-aimdo; attestation values reflect the state when the release was signed and may no longer be current.

File hashes

comfy_aimdo-0.4.1-py3-none-any.whl
  SHA256: c0add884f0918048ab57b258aee0306d34e6f6926d91bfdc28a8763d9194e12d
  MD5: faacef07f10ab6883d6d90555fa4774a
  BLAKE2b-256: 862733a158a1e129209fc41f82b8528acbef62b6e798b0305ee672ef9876c221

comfy_aimdo-0.4.1-cp39-abi3-win_arm64.whl
  SHA256: 5365cd4b9b1f95d6c5a4f8ec43b07f2d0f3408f5efec0975f93a266b8be10781
  MD5: 9acddce8aec0d22196ce7598e914952d
  BLAKE2b-256: a231d7d9570dd5609794f578be56931bd7c93749db056cec53a825e5570a0bda

comfy_aimdo-0.4.1-cp39-abi3-win_amd64.whl
  SHA256: b17f281df8fb7a263b7a15fbb380b152518a4c5c34caaeb28d83b4d7c45aa4de
  MD5: a5aabbd6c8d05fe75f08b3af86474d43
  BLAKE2b-256: 3c4e012e2faaf2426fb082e23f54c206bfc5e46d5528fb98b70e4039e6318903

comfy_aimdo-0.4.1-cp39-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
  SHA256: 5216a162e7215b009ff283191b50a1262122241a6451279e0e092c581bbf2399
  MD5: a51a4d9973107d9b847243af35971e55
  BLAKE2b-256: 539d04a83f4911366fd2b0f453816e054355e6ec2dbcd3b3f9449a1cc75ec090

comfy_aimdo-0.4.1-cp39-abi3-manylinux2010_x86_64.manylinux2014_x86_64.manylinux_2_12_x86_64.manylinux_2_17_x86_64.whl
  SHA256: a521b8d54b005c9f0861c9dfd71f7a1ad669bb47f38ed8f22219b8f0c4b12c9e
  MD5: e07686bcd1eece0df2d64ce81af23b8d
  BLAKE2b-256: 902f34b6313c4bbf228b5c1e49bfda877f8a136f6881ec7dd269e37d08919a2a
