
convert_to_quant

Convert safetensors weights to quantized formats (FP8, INT8, NVFP4, MXFP8) with learned rounding optimization for ComfyUI inference.



Installation

```bash
pip install convert_to_quant
```

Or install from source:

```bash
git clone https://github.com/silveroxides/convert_to_quant.git
cd convert_to_quant
pip install -e .
```

Requirements Summary

| Feature | Requirement |
| --- | --- |
| Minimum (FP8/INT8) | Python 3.10+, PyTorch 2.8+, CUDA 12.8+ |
| Full (NVFP4/MXFP8) | Python 3.12+, PyTorch 2.10+, CUDA 13.0+, comfy-kitchen |
| INT8 Kernels | Triton (Linux native, Windows via triton-windows) |

> [!IMPORTANT]
> PyTorch must be installed manually with the correct CUDA version for your GPU. This package does not install PyTorch automatically, to prevent environment conflicts.
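The tier logic in the table above can be expressed as a small helper for sanity-checking an environment (an illustrative sketch only; the package does not ship this function, and the full tier additionally needs comfy-kitchen installed):

```python
def _ver(s):
    """Parse a 'major.minor' version string into a comparable tuple."""
    return tuple(int(p) for p in s.split(".")[:2])

def feature_tier(python, torch, cuda):
    """Return the highest feature tier the given versions allow, per the
    requirements table: 'full' (NVFP4/MXFP8), 'minimum' (FP8/INT8), or None."""
    if _ver(python) >= (3, 12) and _ver(torch) >= (2, 10) and _ver(cuda) >= (13, 0):
        return "full"
    if _ver(python) >= (3, 10) and _ver(torch) >= (2, 8) and _ver(cuda) >= (12, 8):
        return "minimum"
    return None
```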


Detailed Installation (GPU-Specific)

1. Install PyTorch

Visit pytorch.org to get the correct install command.

Examples:

```bash
# CUDA 13.0 (Required for Blackwell NVFP4/MXFP8)
pip install torch --index-url https://download.pytorch.org/whl/cu130

# CUDA 12.8 (Stable)
pip install torch --index-url https://download.pytorch.org/whl/cu128

# CPU only
pip install torch --index-url https://download.pytorch.org/whl/cpu
```

2. Optional: Triton (needed for blockwise INT8)

```bash
# Linux
pip install -U triton

# Windows (Example for torch>=2.9)
pip install -U "triton-windows<3.6"
```

Quick Start

```bash
# Basic FP8 quantization with ComfyUI metadata (recommended)
convert_to_quant -i model.safetensors --comfy_quant

# INT8 Block-wise with SVD optimization
convert_to_quant -i model.safetensors --int8 --block_size 128 --comfy_quant

# Blackwell NVFP4 (4-bit)
convert_to_quant -i model.safetensors --nvfp4 --comfy_quant
```

Load the output .safetensors file in ComfyUI like any other model.
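To check what was written before loading it in ComfyUI, the output file's header can be inspected directly: a safetensors file begins with an 8-byte little-endian length followed by a JSON header, and string metadata (such as that written with --comfy_quant) lives under its `__metadata__` key. A minimal reader sketch, not part of this package's API:

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file: an 8-byte little-endian
    unsigned length prefix, followed by that many bytes of JSON."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Usage: list tensor dtypes and any converter metadata.
# header = read_safetensors_header("model_quant.safetensors")
# print(header.get("__metadata__", {}))
```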


Supported Quantization Formats

| Format | CLI Flag | Hardware | Optimization |
| --- | --- | --- | --- |
| FP8 (E4M3) | (default) | Ada/Hopper+ | Learned Rounding (SVD) |
| INT8 Block-wise | --int8 | Any GPU | Learned Rounding (SVD) |
| INT8 Tensor-wise | --int8 --scaling_mode tensor | Any GPU | High-perf _scaled_mm |
| NVFP4 (4-bit) | --nvfp4 | Blackwell | Dual-scale optimization |
| MXFP8 | --mxfp8 | Blackwell | Microscaling (E8M0) |

For a deep dive into how these formats work, see FORMATS.md.
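As a rough illustration of the simplest scheme above, tensor-wise INT8 keeps a single scale per tensor: each weight is divided by the scale and rounded to the int8 range. A toy sketch in plain Python (the package's actual kernels operate on torch tensors, with learned rounding layered on top):

```python
def quantize_int8_tensorwise(weights):
    """One scale per tensor: scale = amax / 127, q = round(w / scale),
    clamped to the int8 range [-128, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]
```

Block-wise variants apply the same idea per block of values rather than per tensor, trading a little extra scale storage for lower local error.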


Model-Specific Presets

| Model | Flag | Notes |
| --- | --- | --- |
| Flux.2 | --flux2 | Keeps modulation/guidance/time/final layers in high precision |
| T5-XXL | --t5xxl | Decoder removed |
| Hunyuan Video | --hunyuan | Attention norms excluded |
| WAN Video | --wan | Time embeddings excluded |

(See --help-filters for a full list of presets)
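Conceptually, these presets are exclusion lists: layer names matching certain patterns are kept in high precision while everything else is quantized. A sketch of that partitioning with made-up patterns (the actual per-preset lists are shown by --help-filters):

```python
import re

def split_layers(names, exclude_patterns):
    """Partition layer names: those matching any exclusion pattern stay
    high-precision; the rest are candidates for quantization."""
    keep_hp, quantize = [], []
    for name in names:
        if any(re.search(p, name) for p in exclude_patterns):
            keep_hp.append(name)
        else:
            quantize.append(name)
    return keep_hp, quantize
```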


Documentation

  • 📖 MANUAL.md - Complete usage guide with examples and troubleshooting
  • 📚 FORMATS.md - Technical reference for quantization formats
  • 🧪 DEVELOPMENT.md - Changelog and research notes
  • 📋 AGENTS.md - Developer guide & registry architecture

Key Features

  • Learned Rounding: SVD-based optimization minimizes quantization error.
  • Bias Correction: Automatic bias adjustment using synthetic calibration data.
  • Model-Specific Support: Exclusion lists for sensitive layers (norms, embeddings).
  • Three-Tier Quantization: Mix different formats per layer using --custom-layers.

Advanced Usage

Layer Config JSON

Define per-layer settings with regex patterns:

```bash
convert_to_quant -i model.safetensors --layer-config layers.json --comfy_quant
```
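To show the regex-pattern idea only, here is a hypothetical config built and serialized in Python. All keys and values below are invented for illustration; consult MANUAL.md for the schema --layer-config actually expects:

```python
import json

# Hypothetical layer-config sketch: keys are regex patterns matched against
# layer names, values are per-layer settings. The field names here are NOT
# the package's real schema -- see MANUAL.md for that.
layer_config = {
    r"attn\..*": {"format": "int8", "block_size": 128},
    r"final_layer\..*": {"format": "none"},  # leave in high precision
}
print(json.dumps(layer_config, indent=2))
```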

Scaling Modes

```bash
# Block-wise scaling for better accuracy
convert_to_quant -i model.safetensors --scaling-mode block --block_size 64 --comfy_quant
```
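Block-wise scaling assigns one scale per contiguous block of values instead of one per tensor, so an outlier only inflates the scale of its own block. A toy plain-Python sketch of the idea (not the package's Triton kernel):

```python
def quantize_int8_blockwise(weights, block_size=64):
    """One int8 scale per block of `block_size` consecutive values."""
    blocks, scales = [], []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # `or 1.0` guards the all-zero block, where any scale works.
        scale = max(abs(w) for w in block) / 127.0 or 1.0
        blocks.append([max(-128, min(127, round(w / scale))) for w in block])
        scales.append(scale)
    return blocks, scales
```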

Acknowledgements

Special thanks to:


License

MIT License
