
pytorch-image-translation-models


A PyTorch library for multi-modal image translation with diffusion bridges, GANs, and transformer backbones.

Installation

Install from PyPI

pip install pytorch-image-translation-models

Install from source

pip install -e .

With optional dependencies:

# With training extras (accelerate, peft, datasets, tensorboard)
pip install -e ".[training]"

# With metrics extras (torchmetrics, lpips, torch-fidelity, scipy)
pip install -e ".[metrics]"

# Everything
pip install -e ".[all]"

Note: PyTorch is listed as a dependency, but you may want to install a specific CUDA build first. See PyTorch — Get Started for details.

Features

Models

  • GAN generators — UNetGenerator (encoder-decoder with skip connections), ResNetGenerator (residual blocks)
  • GAN discriminators — PatchGANDiscriminator (Markovian patch-level classifier)
  • StegoGAN — ResnetMaskV1Generator, ResnetMaskV3Generator, NetMatchability (steganographic masking for non-bijective translation, CVPR 2024)
  • Diffusion bridge — I2SBUNet (ADM-style U-Net for Image-to-Image Schrödinger Bridge)
  • DiT backbone — SiTBackbone (Scalable Interpolant Transformer for diffusion bridges)
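A "Markovian patch-level classifier" means the PatchGAN discriminator scores overlapping patches rather than the whole image, and the effective patch size is the network's receptive field. As a sanity check, the receptive field can be computed by walking the conv layers back to front. The layer layout below (4×4 kernels with strides 2, 2, 2, 1, 1) is the classic Pix2Pix configuration, assumed here for illustration; this library's defaults may differ:

```python
def receptive_field(layers):
    """Walk conv layers back-to-front: rf = (rf - 1) * stride + kernel."""
    rf = 1
    for kernel, stride in reversed(layers):
        rf = (rf - 1) * stride + kernel
    return rf

# 4x4 convs with strides 2, 2, 2, 1, 1: the classic "70x70 PatchGAN" layout
layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(layers))  # 70
```

Each output score therefore judges one 70×70 patch, which is why PatchGAN outputs a score map instead of a single real/fake logit.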

Schedulers

  • I2SBScheduler — Symmetric beta schedule with forward/reverse bridge kernels for I2SB
  • DDBMScheduler — Karras sigma schedule with Heun/Euler sampling for DDBM (VP/VE modes)
  • BiBBDMScheduler — Brownian Bridge noise schedule with bidirectional sampling for BiBBDM
  • DDIBScheduler — Gaussian diffusion with DDIM forward/reverse steps for DDIB
  • BDBMScheduler — Bidirectional Brownian Bridge schedule for BDBM
  • DBIMScheduler — Faster bridge sampler with eta-controlled stochasticity for DBIM
  • CDTSDEScheduler — Dynamic domain-shift eta schedule for CDTSDE
  • LBMScheduler — Flow-matching bridge for single/few-step LBM translation

Pipelines

  • I2SBPipeline — End-to-end inference for I2SB models
  • DDBMPipeline — DDBM bridge diffusion with Heun's method
  • BiBBDMPipeline — Bidirectional Brownian Bridge translation (b2a / a2b)
  • DDIBPipeline — Dual-model DDIM encode/decode translation
  • BDBMPipeline — Bidirectional diffusion bridge with context conditioning
  • DBIMPipeline — Fast DBIM bridge sampling with bridge preconditioning
  • CDTSDEPipeline — CDTSDE with dynamic domain-shift scheduling
  • LBMPipeline — LBM flow-matching for single/few-step image translation

All pipelines support "pt", "pil", and "np" output types.

Data

  • PairedImageDataset / UnpairedImageDataset with configurable transform pipelines

Losses

  • GANLoss (vanilla / LSGAN / hinge), VGG-based PerceptualLoss
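For reference, the hinge variant of an adversarial loss penalizes the discriminator only when real scores fall below +1 or fake scores rise above -1, while the generator simply pushes fake scores up. A plain-Python sketch of those formulas (the library's GANLoss operates on tensors, so this is illustrative only):

```python
def hinge_d_loss(real_scores, fake_scores):
    """Discriminator hinge loss: mean(relu(1 - D(real))) + mean(relu(1 + D(fake)))."""
    relu = lambda x: max(0.0, x)
    loss_real = sum(relu(1.0 - s) for s in real_scores) / len(real_scores)
    loss_fake = sum(relu(1.0 + s) for s in fake_scores) / len(fake_scores)
    return loss_real + loss_fake

def hinge_g_loss(fake_scores):
    """Generator hinge loss: -mean(D(fake))."""
    return -sum(fake_scores) / len(fake_scores)

# A confident discriminator (real scores > 1, fake scores < -1) incurs zero loss.
print(hinge_d_loss([2.0, 3.0], [-2.0, -1.5]))  # 0.0
```

The margin makes hinge training more stable than the vanilla sigmoid loss: an already-correct discriminator receives no gradient.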

Training

  • Pix2PixTrainer — Paired GAN training with checkpoint save/load
  • StegoGANTrainer — StegoGAN unpaired training with steganographic masking and consistency losses
  • I2SBTrainer — I2SB bridge model training (in examples/i2sb/)

Metrics

  • compute_psnr, compute_ssim, compute_lpips, compute_fid
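PSNR follows the standard definition 10 · log10(MAX² / MSE). A stdlib-only sketch of that formula on flat pixel sequences (compute_psnr itself works on image tensors and handles batching):

```python
import math

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by 16 -> MSE = 256 -> about 24.05 dB for 8-bit images
print(round(psnr([0, 16, 32], [16, 32, 48]), 2))  # 24.05
```

Higher is better; identical inputs give infinite PSNR, which is why metrics suites usually report it alongside perceptual measures such as LPIPS.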

Community Pipelines

Self-contained, single-file modules contributed by the community (inspired by diffusers community pipelines):

  • parallel_gan.py — SAR-to-Optical translation with hierarchical latent features (Wang et al., TGRS 2022)

Quick Start

GAN-based translation (Pix2Pix)

import src

gen = src.UNetGenerator(in_channels=3, out_channels=3)
disc = src.PatchGANDiscriminator(in_channels=6)

from src.training import Pix2PixTrainer, TrainingConfig
config = TrainingConfig(epochs=100, device="cuda")
trainer = Pix2PixTrainer(gen, disc, config)
trainer.fit(dataloader)  # expects {"source": tensor, "target": tensor}

translator = src.ImageTranslator(gen, device="cuda")
result = translator.predict(pil_image)

Diffusion bridge translation (I2SB)

from src.models.unet import I2SBUNet, create_model
from src.schedulers import I2SBScheduler
from src.pipelines.i2sb import I2SBPipeline

# Create model and scheduler
model = create_model(
    image_size=256, in_channels=3, num_channels=128,
    num_res_blocks=2, attention_resolutions="32,16,8",
    condition_mode="concat",
)
scheduler = I2SBScheduler(interval=1000, beta_max=0.3)

# Inference pipeline
pipeline = I2SBPipeline(unet=model, scheduler=scheduler)
result = pipeline(source_tensor, nfe=20, output_type="pt")
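The "symmetric" in I2SBScheduler's beta schedule means noise injection is mirrored around the bridge midpoint, so neither endpoint distribution is privileged. A stdlib sketch of one such schedule; the linear ramp and beta_min value are illustrative choices, not necessarily the library's exact formula:

```python
def symmetric_betas(interval=1000, beta_max=0.3, beta_min=1e-4):
    """Betas rise linearly to the midpoint, then mirror back down,
    so the bridge treats its two endpoints symmetrically."""
    half = interval // 2
    rising = [beta_min + (beta_max - beta_min) * i / (half - 1) for i in range(half)]
    return rising + rising[::-1]

betas = symmetric_betas(interval=1000, beta_max=0.3)
print(len(betas), betas[0] == betas[-1])  # 1000 True
```

The peak value beta_max controls how much noise the bridge injects at its midpoint, where the sample is furthest from both endpoints.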

DDBM bridge diffusion

from src.schedulers import DDBMScheduler
from src.pipelines import DDBMPipeline

scheduler = DDBMScheduler(pred_mode="vp", num_train_timesteps=40)
pipeline = DDBMPipeline(unet=my_unet, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=40, output_type="pil")
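The "Karras sigma schedule" refers to the noise-level spacing from Karras et al.'s EDM paper: sigmas are spaced linearly in sigma^(1/rho) between sigma_max and sigma_min, which concentrates steps at low noise. A stdlib sketch, with the sigma_min/sigma_max/rho defaults chosen for illustration rather than taken from DDBMScheduler:

```python
def karras_sigmas(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """EDM-style sigma spacing: linear in sigma**(1/rho), from sigma_max down to sigma_min."""
    hi, lo = sigma_max ** (1 / rho), sigma_min ** (1 / rho)
    return [(hi + i / (n - 1) * (lo - hi)) ** rho for i in range(n)]

sigmas = karras_sigmas(40)
print(round(sigmas[0], 3), round(sigmas[-1], 3))  # 80.0 0.002
```

Larger rho packs more of the 40 steps near sigma_min, where fine detail is resolved; rho = 7 is the value recommended in the EDM paper.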

BiBBDM bidirectional translation

from src.schedulers import BiBBDMScheduler
from src.pipelines import BiBBDMPipeline

scheduler = BiBBDMScheduler(num_timesteps=1000, sample_step=100)
pipeline = BiBBDMPipeline(unet=my_unet, scheduler=scheduler)
# Source → Target
result = pipeline(source_tensor, direction="b2a", output_type="pt")
# Target → Source
result = pipeline(target_tensor, direction="a2b", output_type="pt")
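The Brownian Bridge construction behind BiBBDM pins the process at both endpoints: at time t the marginal mean interpolates linearly between x_0 and x_1, and the noise variance is proportional to t(1 - t), vanishing at t = 0 and t = 1. A stdlib sketch of sampling that marginal for a single scalar pixel (the scheduler's exact variance scaling s is a detail of the method):

```python
import math
import random

def brownian_bridge_sample(x0, x1, t, s=1.0):
    """Sample x_t from a Brownian bridge between x0 (at t=0) and x1 (at t=1).
    Mean interpolates linearly; std is sqrt(s * t * (1 - t)), zero at both ends."""
    mean = (1 - t) * x0 + t * x1
    std = math.sqrt(s * t * (1 - t))
    return mean + std * random.gauss(0, 1)

print(brownian_bridge_sample(0.0, 10.0, 0.0))  # 0.0 (pinned to x0)
print(brownian_bridge_sample(0.0, 10.0, 1.0))  # 10.0 (pinned to x1)
```

Because both ends are pinned, the same learned bridge can be sampled in either direction, which is what the b2a / a2b switch above exposes.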

DDIB dual-model translation

from src.schedulers import DDIBScheduler
from src.pipelines import DDIBPipeline

scheduler = DDIBScheduler(num_train_timesteps=1000)
pipeline = DDIBPipeline(source_unet=src_model, target_unet=tgt_model, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=250, output_type="pil")
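DDIB's dual-model design runs the deterministic DDIM ODE forwards with the source-domain model (image to latent) and backwards with the target-domain model (latent to image); determinism is what makes the encode step invertible. A toy sketch of that invertibility on a scalar, using one dummy "model" that always predicts zero noise so the round trip can be checked exactly (the real pipeline uses two trained domain models):

```python
import math

def ddim_step(x, abar_from, abar_to, eps):
    """Deterministic DDIM update between alpha-bar levels abar_from -> abar_to."""
    x0_pred = (x - math.sqrt(1 - abar_from) * eps) / math.sqrt(abar_from)
    return math.sqrt(abar_to) * x0_pred + math.sqrt(1 - abar_to) * eps

abars = [1.0, 0.8, 0.5, 0.2]     # toy alpha-bar schedule
model = lambda x, t: 0.0          # dummy noise predictor

x = 3.0                                    # "image" in the source domain
for t in range(len(abars) - 1):            # encode: image -> shared latent
    x = ddim_step(x, abars[t], abars[t + 1], model(x, t))
for t in reversed(range(len(abars) - 1)):  # decode: latent -> image
    x = ddim_step(x, abars[t + 1], abars[t], model(x, t))
print(round(x, 6))  # 3.0 (the deterministic ODE round-trips exactly)
```

Swapping the decode model for a different domain's model turns this exact inversion into cross-domain translation through the shared latent.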

LBM flow-matching translation

from src.schedulers import LBMScheduler
from src.pipelines import LBMPipeline

scheduler = LBMScheduler(num_train_timesteps=1000)
pipeline = LBMPipeline(unet=my_unet, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=1, output_type="pil")
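Flow-matching bridges define a straight-line path x_t = (1 - t) · x_source + t · x_target with constant velocity v = x_target - x_source; once a network predicts v accurately, a single Euler step from the source lands on the target, which is what enables num_inference_steps=1. A stdlib sketch with the true velocity standing in for the network:

```python
def euler_translate(x_src, velocity, num_steps=1):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with Euler steps."""
    x, dt = x_src, 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity(x, t)
    return x

x_src, x_tgt = 2.0, 7.0
v = lambda x, t: x_tgt - x_src   # oracle constant velocity of the linear path
print(euler_translate(x_src, v, num_steps=1))  # 7.0: one step suffices
```

With an imperfect learned velocity the path bends slightly, so a few steps rather than one can trade speed for fidelity.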

DiT backbone (SiT) for diffusion bridges

from src.models.dit import SiTBackbone, SIT_CONFIGS

# Create a SiT-S/2 backbone (small, patch size 2)
depth, hidden_size, num_heads = SIT_CONFIGS["S"]
model = SiTBackbone(
    image_size=256, patch_size=2, in_channels=3,
    hidden_size=hidden_size, depth=depth, num_heads=num_heads,
    condition_mode="concat",
)
# Use as drop-in replacement for UNet in any bridge pipeline
output = model(noisy_sample, timestep, xT=source_image)
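A DiT/SiT backbone first patchifies the image: an H×W input with patch size p becomes (H/p) · (W/p) tokens, each flattening p · p · C values before linear embedding. A quick sketch of that bookkeeping for the 256-pixel, patch-size-2 setup above (the helper name is illustrative, not part of the library):

```python
def patch_tokens(image_size, patch_size, in_channels):
    """Token count and per-token input dimension after patchifying a square image."""
    assert image_size % patch_size == 0, "image size must be divisible by patch size"
    num_tokens = (image_size // patch_size) ** 2
    return num_tokens, patch_size * patch_size * in_channels

tokens, patch_dim = patch_tokens(image_size=256, patch_size=2, in_channels=3)
print(tokens, patch_dim)  # 16384 12
```

Self-attention cost grows quadratically in the token count, so halving the patch size quadruples the tokens and roughly sixteen-folds the attention cost; that is the trade-off behind the "/2" in SiT-S/2.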

I2SB training with task configs

from examples.i2sb.config import sar2eo_config
from examples.i2sb.trainer import I2SBTrainer

cfg = sar2eo_config(resolution=256, train_batch_size=8)
trainer = I2SBTrainer(cfg)
model = trainer.build_model()
scheduler = trainer.build_scheduler()

# Single-step loss computation
loss = I2SBTrainer.compute_training_loss(model, scheduler, source_batch, target_batch)
loss.backward()

StegoGAN non-bijective translation

from src.training import StegoGANTrainer, StegoGANConfig

cfg = StegoGANConfig(
    input_nc=3, output_nc=3, ngf=64,
    lambda_reg=0.3, lambda_consistency=1.0,
    resnet_layer=8, fusionblock=True,
    device="cuda",
)
trainer = StegoGANTrainer(cfg)
# Run a single training step with unpaired data
losses = trainer.train_step(real_A_batch, real_B_batch)
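StegoGAN's premise is that in non-bijective pairs some target-domain content has no source counterpart; a predicted matchability mask blends translated content with a fallback so unmatchable regions are not hallucinated. A per-pixel sketch of that convex blend (the real networks produce the mask and both images; all names here are illustrative):

```python
def masked_blend(translated, fallback, mask):
    """Per-pixel convex blend: mask=1 keeps the translation, mask=0 the fallback."""
    return [m * t + (1 - m) * f for t, f, m in zip(translated, fallback, mask)]

translated = [0.9, 0.8, 0.7]
fallback   = [0.1, 0.1, 0.1]
mask       = [1.0, 0.5, 0.0]   # matchable, uncertain, unmatchable pixels
print([round(v, 2) for v in masked_blend(translated, fallback, mask)])  # [0.9, 0.45, 0.1]
```

The lambda_reg term in the config above penalizes the mask so the model cannot trivially mark everything unmatchable, and lambda_consistency keeps the blended output coherent across directions.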

Package Structure

src/                                 # ← Core library (single source of truth)
├── __init__.py                      # Public API
├── models/
│   ├── generators.py                # UNetGenerator, ResNetGenerator
│   ├── discriminators.py            # PatchGANDiscriminator
│   ├── unet/
│   │   ├── i2sb_unet.py            # I2SBUNet (native ADM-style backbone)
│   │   ├── unet_2d.py              # create_model factory
│   │   └── diffusers_wrappers.py   # DDBMUNet, DDIBUNet, … (diffusers UNet2DModel wrappers)
│   ├── dit/
│   │   └── sit.py                  # SiTBackbone (Diffusion Transformer)
│   └── stegogan/
│       ├── generators.py           # ResnetMaskV1Generator, ResnetMaskV3Generator
│       └── networks.py             # NetMatchability, mask_generate, ResnetBlock
├── schedulers/                      # One scheduler per method
│   ├── i2sb.py                     # I2SBScheduler
│   ├── ddbm.py                     # DDBMScheduler
│   ├── bibbdm.py                   # BiBBDMScheduler
│   ├── ddib.py                     # DDIBScheduler
│   ├── bdbm.py                     # BDBMScheduler
│   ├── dbim.py                     # DBIMScheduler
│   ├── cdtsde.py                   # CDTSDEScheduler
│   └── lbm.py                      # LBMScheduler
├── pipelines/                       # One pipeline per method
│   ├── i2sb.py                     # I2SBPipeline
│   ├── ddbm.py                     # DDBMPipeline
│   ├── bibbdm.py                   # BiBBDMPipeline
│   ├── ddib.py                     # DDIBPipeline
│   ├── bdbm.py                     # BDBMPipeline
│   ├── dbim.py                     # DBIMPipeline
│   ├── cdtsde.py                   # CDTSDEPipeline
│   └── lbm.py                      # LBMPipeline
├── data/
│   ├── datasets.py                 # PairedImageDataset, UnpairedImageDataset
│   └── transforms.py               # get_transforms, default_transforms
├── losses/
│   ├── adversarial.py              # GANLoss
│   └── perceptual.py               # PerceptualLoss
├── training/
│   ├── trainer.py                  # Pix2PixTrainer, TrainingConfig
│   └── stegogan_trainer.py         # StegoGANTrainer, StegoGANConfig
├── inference/
│   └── predictor.py                # ImageTranslator
└── metrics/
    └── image_quality.py            # PSNR, SSIM, LPIPS, FID
examples/                            # ← Training/inference scripts (import from src/)
├── community/                       # Community-contributed pipelines (single-file)
│   └── parallel_gan.py             # Parallel-GAN (Wang et al., TGRS 2022)
├── i2sb/                            # I2SB paper-oriented training code
│   ├── config.py                   # TaskConfig, sar2eo_config, etc.
│   └── trainer.py                  # I2SBTrainer
└── inference/
    └── run_inference.py            # Unified inference script for all methods

Credits

Reference papers

License

MIT
