
A unified interface for memory-efficient, per-tensor loading of safetensors files as raw bytes from their offsets, handling CPU/GPU pinned transfers, and converting between tensors and dicts.

Project description

unifiedefficientloader

A unified interface for loading safetensors, handling CPU/GPU pinned transfers, and converting between tensors and dicts.

Installation

You can install this package via pip. It relies heavily on torch and safetensors, but they are not declared as hard dependencies for building or installing the package, so make sure they (and tqdm) are installed in your environment:

pip install unifiedefficientloader
pip install torch safetensors tqdm
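
After installing, a quick import check confirms the optional runtime dependencies are available (a minimal sketch; the printed versions will vary):

import torch
import safetensors
import tqdm

# If any of these imports fail, install the missing package as shown above.
print("torch", torch.__version__)
print("safetensors", safetensors.__version__)
print("tqdm", tqdm.__version__)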

Usage

Unified Safetensors Loader

from unifiedefficientloader import UnifiedSafetensorsLoader

# Standard mode (preload all)
with UnifiedSafetensorsLoader("model.safetensors", low_memory=False) as loader:
    tensor = loader.get_tensor("weight_name")

# Low memory mode (streaming)
with UnifiedSafetensorsLoader("model.safetensors", low_memory=True) as loader:
    for key in loader.keys():
        tensor = loader.get_tensor(key)
        # Process tensor...
        loader.mark_processed(key) # Frees memory
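
As a sketch of how this fits into an ordinary checkpoint-loading flow, the snippet below gathers every tensor into a plain state dict and hands it to a model. MyModel is a hypothetical placeholder for your own torch.nn.Module, and the file path is assumed:

import torch
from unifiedefficientloader import UnifiedSafetensorsLoader

class MyModel(torch.nn.Module):  # hypothetical stand-in for your architecture
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

model = MyModel()

# Collect all tensors into a regular dict, then load it as a state dict.
with UnifiedSafetensorsLoader("model.safetensors", low_memory=False) as loader:
    state_dict = {key: loader.get_tensor(key) for key in loader.keys()}

# strict=False because the file's keys may not match this toy model exactly.
model.load_state_dict(state_dict, strict=False)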

Loading Specific Tensors Dynamically (Header Analysis)

You can analyze a file's header without loading the entire, potentially multi-gigabyte safetensors file into memory. This lets you locate specific data (such as embedded JSON dictionaries stored as uint8 tensors) and load only those tensors directly from their file offsets.

from unifiedefficientloader import UnifiedSafetensorsLoader, tensor_to_dict

with UnifiedSafetensorsLoader("model.safetensors", low_memory=True) as loader:
    # 1. Analyze the header metadata without loading any tensors
    # loader._header contains the full safetensors header dictionary
    uint8_tensor_keys = [
        key for key, info in loader._header.items()
        if isinstance(info, dict) and info.get("dtype") == "U8"
    ]
    
    # 2. Load ONLY those specific tensors using their keys
    for key in uint8_tensor_keys:
        # get_tensor dynamically reads only the bytes for this tensor 
        # based on the offsets found in the header
        loaded_tensor = loader.get_tensor(key)
        
        # 3. Decode the uint8 tensor back into a Python dictionary
        extracted_dict = tensor_to_dict(loaded_tensor)
        print(f"Decoded {key}:", extracted_dict)

Optimized Asynchronous Streaming via ThreadPoolExecutor

For maximum I/O throughput with strict memory backpressure, use async_stream. It uses a ThreadPoolExecutor for background disk reads and a bounded queue to prevent memory exhaustion. With pin_memory=True, memory pinning is performed sequentially in the main thread to avoid OS-level lock contention and preserve high DMA transfer speeds.

from unifiedefficientloader import UnifiedSafetensorsLoader, transfer_to_gpu_pinned

with UnifiedSafetensorsLoader("model.safetensors", low_memory=True) as loader:
    keys_to_load = loader.keys()
    
    # Create the continuous streaming generator
    # prefetch_batches controls how many batches to buffer in memory
    stream = loader.async_stream(
        keys_to_load, 
        batch_size=8, 
        prefetch_batches=2, 
        pin_memory=True
    )
    
    # Iterate directly over the generator
    for batch in stream:
        for key, pinned_tensor in batch:
            # Transfer directly to GPU via DMA (pinning is already done)
            gpu_tensor = transfer_to_gpu_pinned(pinned_tensor, device="cuda")
            
            # ... process gpu_tensor ...
            loader.mark_processed(key)
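
To see what the prefetching buys on your hardware, a rough timing harness like the one below can compare a plain sequential loop against async_stream. It assumes model.safetensors exists and a CUDA device is available; it is not part of the package:

import time
import torch
from unifiedefficientloader import UnifiedSafetensorsLoader, transfer_to_gpu_pinned

def time_sequential(path):
    # Baseline: read, pin, and transfer one tensor at a time.
    start = time.perf_counter()
    with UnifiedSafetensorsLoader(path, low_memory=True) as loader:
        for key in loader.keys():
            gpu_tensor = transfer_to_gpu_pinned(loader.get_tensor(key), device="cuda")
            loader.mark_processed(key)
    return time.perf_counter() - start

def time_streamed(path):
    # Streaming: background reads feeding a bounded prefetch buffer.
    start = time.perf_counter()
    with UnifiedSafetensorsLoader(path, low_memory=True) as loader:
        stream = loader.async_stream(loader.keys(), batch_size=8,
                                     prefetch_batches=2, pin_memory=True)
        for batch in stream:
            for key, pinned_tensor in batch:
                gpu_tensor = transfer_to_gpu_pinned(pinned_tensor, device="cuda")
                loader.mark_processed(key)
    return time.perf_counter() - start

if torch.cuda.is_available():
    print("sequential:", time_sequential("model.safetensors"))
    print("streamed:  ", time_streamed("model.safetensors"))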

Tensor/Dict Conversion

from unifiedefficientloader import dict_to_tensor, tensor_to_dict

my_dict = {"param": 1.0, "name": "test"}
tensor = dict_to_tensor(my_dict)
recovered_dict = tensor_to_dict(tensor)
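
A quick round-trip check; that nested, JSON-serializable values survive the conversion is an assumption here, not something documented above:

from unifiedefficientloader import dict_to_tensor, tensor_to_dict

# Assumed: the encoding is JSON-like, so nested dicts and lists round-trip.
config = {
    "optimizer": {"name": "adamw", "lr": 3e-4},
    "layers": [64, 128, 256],
}
recovered = tensor_to_dict(dict_to_tensor(config))
assert recovered == config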

Pinned Memory Transfers

import torch
from unifiedefficientloader import transfer_to_gpu_pinned

tensor = torch.randn(100, 100)
# Transfers using pinned memory if CUDA is available, otherwise falls back gracefully
gpu_tensor = transfer_to_gpu_pinned(tensor, device="cuda:0")
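
For reference, the pattern behind a pinned transfer in plain PyTorch looks roughly like this. It is a sketch of the general technique, not the package's internal implementation:

import torch

def manual_pinned_transfer(tensor: torch.Tensor, device: str = "cuda:0") -> torch.Tensor:
    # Graceful fallback: without a CUDA device, keep the tensor on the CPU.
    if not torch.cuda.is_available():
        return tensor
    pinned = tensor.pin_memory()                  # stage in page-locked host memory
    return pinned.to(device, non_blocking=True)   # asynchronous DMA copy to the GPU

gpu_tensor = manual_pinned_transfer(torch.randn(100, 100))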

Download files

Download the file for your platform.

Source Distribution

unifiedefficientloader-0.2.2.tar.gz (13.9 kB)

Uploaded Source

Built Distribution


unifiedefficientloader-0.2.2-py3-none-any.whl (12.4 kB)

Uploaded Python 3

File details

Details for the file unifiedefficientloader-0.2.2.tar.gz.

File metadata

  • Download URL: unifiedefficientloader-0.2.2.tar.gz
  • Upload date:
  • Size: 13.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for unifiedefficientloader-0.2.2.tar.gz

  • SHA256: ec4020f44980d0a11b57b41202f0fddccbf9d8c480ba88a397d12b7c34024be0
  • MD5: 76341ece6e8721562aaf09099af3aa4d
  • BLAKE2b-256: 271938d4fdaec8a6eef0bad90d7165b417fbaceb92b1e5f4301436ded2492c0c


File details

Details for the file unifiedefficientloader-0.2.2-py3-none-any.whl.

File metadata

File hashes

Hashes for unifiedefficientloader-0.2.2-py3-none-any.whl

  • SHA256: c5e65c860d9889ae116d316daa62d96786728a66ddef1665656de213489d94d6
  • MD5: fa88d8d83242d16f2f0917be185df5f1
  • BLAKE2b-256: 8dc7c047a13144e33e43f574c6c49cd20ece3a3e0bab5642bdeba646277d0dba

