
convert_to_quant

Convert safetensors weights to quantized formats (FP8, INT8, NVFP4, MXFP8) with learned rounding optimization for ComfyUI inference.

Python 3.10+ | License: MIT


Installation

pip install convert_to_quant

Or install from source:

git clone https://github.com/silveroxides/convert_to_quant.git
cd convert_to_quant
pip install -e .

Requirements Summary

Feature            | Requirement
Minimum (FP8/INT8) | Python 3.10+, PyTorch 2.8+, CUDA 12.8+
Full (NVFP4/MXFP8) | Python 3.12+, PyTorch 2.10+, CUDA 13.0+, comfy-kitchen
INT8 Kernels       | Triton (Linux native, Windows via triton-windows)

[!IMPORTANT] PyTorch must be installed manually with the correct CUDA version for your GPU. This package does not install PyTorch automatically to prevent environment conflicts.


Detailed Installation (GPU-Specific)

1. Install PyTorch

Visit pytorch.org to get the correct install command.

Examples:

# CUDA 13.0 (Required for Blackwell NVFP4/MXFP8)
pip install torch --index-url https://download.pytorch.org/whl/cu130

# CUDA 12.8 (Stable)
pip install torch --index-url https://download.pytorch.org/whl/cu128

# CPU only
pip install torch --index-url https://download.pytorch.org/whl/cpu
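
Before converting anything, a quick sanity check with ordinary PyTorch calls (not part of this package) can confirm that the installed build sees your GPU and exposes the FP8 dtype:

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)             # e.g. "12.8" or "13.0"; None on CPU-only builds
print("FP8 dtype present:", hasattr(torch, "float8_e4m3fn"))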

2. Optional: Triton (needed for blockwise INT8)

# Linux
pip install -U triton

# Windows (Example for torch>=2.9)
pip install -U "triton-windows<3.6"

Quick Start

# Basic FP8 quantization with ComfyUI metadata (recommended)
convert_to_quant -i model.safetensors --comfy_quant

# INT8 Block-wise with SVD optimization
convert_to_quant -i model.safetensors --int8 --block_size 128 --comfy_quant

# Blackwell NVFP4 (4-bit)
convert_to_quant -i model.safetensors --nvfp4 --comfy_quant

Load the output .safetensors file in ComfyUI like any other model.


Supported Quantization Formats

Format           | CLI Flag                     | Hardware    | Optimization
FP8 (E4M3)       | (default)                    | Ada/Hopper+ | Learned Rounding (SVD)
INT8 Block-wise  | --int8                       | Any GPU     | Learned Rounding (SVD)
INT8 Tensor-wise | --int8 --scaling_mode tensor | Any GPU     | High-perf _scaled_mm
NVFP4 (4-bit)    | --nvfp4                      | Blackwell   | Dual-scale optimization
MXFP8            | --mxfp8                      | Blackwell   | Microscaling (E8M0)

For a deep dive into how these formats work, see FORMATS.md.
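
As a rough illustration of what FP8 (E4M3) quantization stores, the sketch below does plain per-tensor scaling in PyTorch: pick a scale so the largest weight maps near the E4M3 maximum, cast, and keep the scale alongside the weights. This is only the baseline idea; the learned rounding, block-wise scales, and bias correction used by this tool are described in FORMATS.md.

import torch

w = torch.randn(4096, 4096)                        # an FP32 weight matrix
f8_max = torch.finfo(torch.float8_e4m3fn).max      # 448.0 for E4M3

scale = f8_max / w.abs().max()                     # per-tensor scale factor
w_fp8 = (w * scale).clamp(-f8_max, f8_max).to(torch.float8_e4m3fn)

# At load time the weight is dequantized (or fed to a scaled matmul) using 1/scale
w_restored = w_fp8.to(torch.float32) / scale
print("max abs error:", (w - w_restored).abs().max().item())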


Model-Specific Presets

Model         | Flag      | Notes
Flux.2        | --flux2   | Keeps modulation/guidance/time/final layers in high precision
T5-XXL        | --t5xxl   | Decoder removed
Hunyuan Video | --hunyuan | Attention norms excluded
WAN Video     | --wan     | Time embeddings excluded

(See --help-filters for a full list of presets)


Documentation

  • 📖 MANUAL.md - Complete usage guide with examples and troubleshooting
  • 📚 FORMATS.md - Technical reference for quantization formats
  • 🧪 DEVELOPMENT.md - Changelog and research notes
  • 📋 AGENTS.md - Developer guide & registry architecture

Key Features

  • Learned Rounding: SVD-based optimization minimizes quantization error (see the sketch after this list).
  • Bias Correction: Automatic bias adjustment using synthetic calibration data.
  • Model-Specific Support: Exclusion lists for sensitive layers (norms, embeddings).
  • Three-Tier Quantization: Mix different formats per layer using --custom-layers.
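
The sketch below conveys the general idea behind learned rounding in a heavily simplified, AdaRound-style form: instead of rounding every weight to the nearest grid point, a per-weight rounding offset is optimized against the layer's output error on calibration data. The package's actual SVD-based optimizer differs in its details; this only shows why learning the rounding direction can beat plain round-to-nearest.

import torch

torch.manual_seed(0)
w = torch.randn(256, 256)                          # layer weight
x = torch.randn(64, 256)                           # synthetic calibration input
scale = 127.0 / w.abs().max()                      # simple per-tensor INT8 scale

w_floor = (w * scale).floor()                      # fixed lower grid point
h = torch.zeros_like(w, requires_grad=True)        # learnable rounding offset per weight
opt = torch.optim.Adam([h], lr=1e-2)

for _ in range(200):
    w_q = (w_floor + torch.sigmoid(h)) / scale     # soft-rounded, dequantized weight
    loss = ((x @ w.T) - (x @ w_q.T)).pow(2).mean() # match the layer output, not the raw weights
    opt.zero_grad()
    loss.backward()
    opt.step()

# Harden the offsets to 0/1 to get the final integer weights
w_int8 = (w_floor + (torch.sigmoid(h) > 0.5).float()).clamp(-128, 127)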

Advanced Usage

Layer Config JSON

Define per-layer settings with regex patterns:

convert_to_quant -i model.safetensors --layer-config layers.json --comfy_quant
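
The exact JSON schema is documented in MANUAL.md; the snippet below only illustrates the general shape such a file might take (regex pattern mapped to per-layer settings). The key names and fields here are hypothetical, not the package's actual schema.

import json

# Hypothetical structure for illustration only -- consult MANUAL.md for the real schema.
layer_config = {
    "double_blocks\\..*\\.attn\\..*": {"format": "fp8"},                      # attention projections in FP8
    ".*\\.mlp\\..*": {"format": "int8", "block_size": 128},                   # MLP weights in block-wise INT8
    "final_layer\\..*": {"skip": True},                                       # leave final layer at original precision
}

with open("layers.json", "w") as f:
    json.dump(layer_config, f, indent=2)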

Scaling Modes

# Block-wise scaling for better accuracy
convert_to_quant -i model.safetensors --scaling-mode block --block_size 64 --comfy_quant
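
Conceptually, block-wise scaling gives every block of (for example) 64 consecutive weights its own scale, so a single outlier only degrades its block rather than the whole tensor. A minimal PyTorch illustration of that idea (not the package's Triton kernels or learned rounding) follows:

import torch

w = torch.randn(1024, 1024)
block = 64

w_blocks = w.reshape(w.shape[0], -1, block)                           # (rows, n_blocks, block)
scales = 127.0 / w_blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8)
w_int8 = (w_blocks * scales).round().clamp(-128, 127).to(torch.int8)

# Dequantize for comparison: quantization error stays local to each block
w_restored = (w_int8.float() / scales).reshape(w.shape)
print("mean abs error:", (w - w_restored).abs().mean().item())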

Acknowledgements

Special thanks to:


License

MIT License
