
pytorch-image-translation-models


A PyTorch library for multi-modal image translation with diffusion bridges, GANs, and transformer backbones.

Installation

Install from PyPI

pip install pytorch-image-translation-models

Install from source

pip install -e .

With optional dependencies:

# With training extras (accelerate, peft, datasets, tensorboard)
pip install -e ".[training]"

# With metrics extras (torchmetrics, lpips, torch-fidelity, scipy)
pip install -e ".[metrics]"

# Everything
pip install -e ".[all]"

Note: PyTorch is listed as a dependency, but you may want to install a specific CUDA build first. See PyTorch — Get Started for details.

Features

Models

  • GAN generators: UNetGenerator (encoder-decoder with skip connections), ResNetGenerator (residual blocks)
  • GAN discriminators: PatchGANDiscriminator (Markovian patch-level classifier)
  • StegoGAN: ResnetMaskV1Generator, ResnetMaskV3Generator, NetMatchability (steganographic masking for non-bijective translation, CVPR 2024)
  • Diffusion bridge: I2SBUNet (ADM-style U-Net for Image-to-Image Schrödinger Bridge)
  • UNSB: UNSBGenerator, UNSBDiscriminator, UNSBEnergyNet (time-conditional networks for Unpaired Neural Schrödinger Bridge, ICLR 2024)
  • Local Diffusion: LocalDiffusionUNet, ConditionEncoder (conditional denoising U-Net with branch-and-fuse for hallucination suppression, ECCV 2024 Oral)
  • DiT backbone: SiTBackbone (Scalable Interpolant Transformer for diffusion bridges)

Schedulers

| Scheduler | Description |
| --- | --- |
| I2SBScheduler | Symmetric beta schedule with forward/reverse bridge kernels for I2SB |
| DDBMScheduler | Karras sigma schedule with Heun/Euler sampling for DDBM (VP/VE modes) |
| BiBBDMScheduler | Brownian Bridge noise schedule with bidirectional sampling for BiBBDM |
| DDIBScheduler | Gaussian diffusion with DDIM forward/reverse steps for DDIB |
| BDBMScheduler | Bidirectional Brownian Bridge schedule for BDBM |
| DBIMScheduler | Faster bridge sampler with eta-controlled stochasticity for DBIM |
| CDTSDEScheduler | Dynamic domain-shift eta schedule for CDTSDE |
| LBMScheduler | Flow-matching bridge for single/few-step LBM translation |
| UNSBScheduler | Non-uniform harmonic time schedule with stochastic bridge dynamics for UNSB |
| LocalDiffusionScheduler | Gaussian diffusion (DDPM/DDIM) with sigmoid/cosine/linear beta schedules for Local Diffusion |

Pipelines

| Pipeline | Description |
| --- | --- |
| I2SBPipeline | End-to-end inference for I2SB models |
| DDBMPipeline | DDBM bridge diffusion with Heun's method |
| BiBBDMPipeline | Bidirectional Brownian Bridge translation (b2a / a2b) |
| DDIBPipeline | Dual-model DDIM encode/decode translation |
| BDBMPipeline | Bidirectional diffusion bridge with context conditioning |
| DBIMPipeline | Fast DBIM bridge sampling with bridge preconditioning |
| CDTSDEPipeline | CDTSDE with dynamic domain-shift scheduling |
| LBMPipeline | LBM flow-matching for single/few-step image translation |
| UNSBPipeline | Multi-step Schrödinger Bridge with adversarial + contrastive losses |
| LocalDiffusionPipeline | Branch-and-fuse diffusion for hallucination-aware image translation |

All pipelines support "pt", "pil", and "np" output types.
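The conversion behind these output types can be sketched as follows. This is a minimal stand-in with assumed semantics (the `to_output` name is hypothetical, and the library's actual helper may differ in clamping, scaling, or channel order):

```python
import numpy as np
import torch

def to_output(img, output_type="pt"):
    """Hypothetical sketch of the 'pt' / 'np' / 'pil' output conversion."""
    if output_type == "pt":
        return img                                    # torch.Tensor, NCHW, floats
    # uint8 NHWC array for "np"; assumes float input in [0, 1]
    arr = img.clamp(0, 1).mul(255).byte().permute(0, 2, 3, 1).cpu().numpy()
    if output_type == "np":
        return arr
    from PIL import Image
    return [Image.fromarray(a) for a in arr]          # list of PIL images

x = torch.rand(2, 3, 16, 16)
print(to_output(x, "np").shape)  # (2, 16, 16, 3)
```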

Data

  • PairedImageDataset / UnpairedImageDataset with configurable transform pipelines
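Since the dataset constructors are not shown here, the sketch below uses a toy stand-in written in plain PyTorch (`ToyPairedDataset` is hypothetical) to illustrate the batch format the trainers expect, i.e. dicts with "source" and "target" tensors:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyPairedDataset(Dataset):
    """Toy stand-in for PairedImageDataset: aligned source/target tensors."""
    def __init__(self, n=8, size=64):
        self.src = torch.rand(n, 3, size, size)
        self.tgt = torch.rand(n, 3, size, size)

    def __len__(self):
        return len(self.src)

    def __getitem__(self, i):
        # Trainers in this library consume {"source": ..., "target": ...} batches
        return {"source": self.src[i], "target": self.tgt[i]}

loader = DataLoader(ToyPairedDataset(), batch_size=4, shuffle=True)
batch = next(iter(loader))
print(batch["source"].shape)  # torch.Size([4, 3, 64, 64])
```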

Losses

  • GANLoss (vanilla / LSGAN / hinge), VGG-based PerceptualLoss
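The three adversarial objectives can be illustrated with a standalone function in plain PyTorch. `gan_loss` below is a hypothetical stand-in showing the usual semantics of the vanilla / LSGAN / hinge modes, not the library's GANLoss class:

```python
import torch
import torch.nn.functional as F

def gan_loss(pred, target_is_real, mode="vanilla"):
    """Illustrative GAN objectives over discriminator logits `pred`."""
    if mode == "vanilla":  # BCE-with-logits against real (1) / fake (0) labels
        target = torch.full_like(pred, float(target_is_real))
        return F.binary_cross_entropy_with_logits(pred, target)
    if mode == "lsgan":    # least squares: regress logits to 1 (real) or 0 (fake)
        target = torch.full_like(pred, float(target_is_real))
        return F.mse_loss(pred, target)
    if mode == "hinge":    # hinge loss, discriminator side
        return F.relu(1 - pred).mean() if target_is_real else F.relu(1 + pred).mean()
    raise ValueError(f"unknown mode: {mode}")

pred_real = torch.ones(2, 1, 30, 30)  # confident "real" logits on a 30x30 patch grid
print(gan_loss(pred_real, True, "hinge").item())  # 0.0
```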

Training

  • Pix2PixTrainer — Paired GAN training with checkpoint save/load
  • StegoGANTrainer — StegoGAN unpaired training with steganographic masking and consistency losses
  • I2SBTrainer — I2SB bridge model training (in examples/i2sb/)

Metrics

  • compute_psnr, compute_ssim, compute_lpips, compute_fid
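As a sanity check on what a PSNR metric reports, the quantity can be computed directly. The `psnr` function below is an illustrative re-implementation, not the library's compute_psnr (whose exact signature is not shown here):

```python
import torch

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(data_range ** 2 / mse)

a = torch.zeros(1, 3, 8, 8)
b = torch.full_like(a, 0.1)  # constant error of 0.1 -> MSE = 0.01
print(psnr(a, b).item())     # approximately 20.0 dB
```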

Community Pipelines

Self-contained, single-file modules contributed by the community (inspired by diffusers community pipelines):

| Pipeline | Paper | Description |
| --- | --- | --- |
| parallel_gan.py | Wang et al., TGRS 2022 | SAR-to-Optical with hierarchical latent features |

Quick Start

GAN-based translation (Pix2Pix)

import src

gen = src.UNetGenerator(in_channels=3, out_channels=3)
disc = src.PatchGANDiscriminator(in_channels=6)

from src.training import Pix2PixTrainer, TrainingConfig
config = TrainingConfig(epochs=100, device="cuda")
trainer = Pix2PixTrainer(gen, disc, config)
trainer.fit(dataloader)  # expects {"source": tensor, "target": tensor}

translator = src.ImageTranslator(gen, device="cuda")
result = translator.predict(pil_image)

Diffusion bridge translation (I2SB)

from src.models.unet import I2SBUNet, create_model
from src.schedulers import I2SBScheduler
from src.pipelines.i2sb import I2SBPipeline

# Create model and scheduler
model = create_model(
    image_size=256, in_channels=3, num_channels=128,
    num_res_blocks=2, attention_resolutions="32,16,8",
    condition_mode="concat",
)
scheduler = I2SBScheduler(interval=1000, beta_max=0.3)

# Inference pipeline
pipeline = I2SBPipeline(unet=model, scheduler=scheduler)
result = pipeline(source_tensor, nfe=20, output_type="pt")

DDBM bridge diffusion

from src.schedulers import DDBMScheduler
from src.pipelines import DDBMPipeline

scheduler = DDBMScheduler(pred_mode="vp", num_train_timesteps=40)
pipeline = DDBMPipeline(unet=my_unet, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=40, output_type="pil")

BiBBDM bidirectional translation

from src.schedulers import BiBBDMScheduler
from src.pipelines import BiBBDMPipeline

scheduler = BiBBDMScheduler(num_timesteps=1000, sample_step=100)
pipeline = BiBBDMPipeline(unet=my_unet, scheduler=scheduler)
# Source → Target
result = pipeline(source_tensor, direction="b2a", output_type="pt")
# Target → Source
result = pipeline(target_tensor, direction="a2b", output_type="pt")

DDIB dual-model translation

from src.schedulers import DDIBScheduler
from src.pipelines import DDIBPipeline

scheduler = DDIBScheduler(num_train_timesteps=1000)
pipeline = DDIBPipeline(source_unet=src_model, target_unet=tgt_model, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=250, output_type="pil")

LBM flow-matching translation

from src.schedulers import LBMScheduler
from src.pipelines import LBMPipeline

scheduler = LBMScheduler(num_train_timesteps=1000)
pipeline = LBMPipeline(unet=my_unet, scheduler=scheduler)
result = pipeline(source_image, num_inference_steps=1, output_type="pil")

DiT backbone (SiT) for diffusion bridges

from src.models.dit import SiTBackbone, SIT_CONFIGS

# Create a SiT-S/2 backbone (small, patch size 2)
depth, hidden_size, num_heads = SIT_CONFIGS["S"]
model = SiTBackbone(
    image_size=256, patch_size=2, in_channels=3,
    hidden_size=hidden_size, depth=depth, num_heads=num_heads,
    condition_mode="concat",
)
# Use as drop-in replacement for UNet in any bridge pipeline
output = model(noisy_sample, timestep, xT=source_image)

UNSB unpaired translation (multi-step Schrödinger Bridge)

from src.models.unsb import create_generator
from src.schedulers.unsb import UNSBScheduler
from src.pipelines.unsb import UNSBPipeline

# Create time-conditional generator and scheduler
generator = create_generator(input_nc=3, output_nc=3, ngf=64, n_blocks=9)
scheduler = UNSBScheduler(num_timesteps=5, tau=0.01)

# Inference pipeline (multi-step stochastic refinement)
pipeline = UNSBPipeline(generator=generator, scheduler=scheduler)
result = pipeline(source_image, output_type="pt")
print(result.nfe)  # 5 function evaluations

UNSB training

from examples.unsb.config import UNSBConfig
from examples.unsb.train_unsb import UNSBTrainer

cfg = UNSBConfig(
    input_nc=3, output_nc=3, ngf=64,
    num_timesteps=5, tau=0.01,
    lambda_GAN=1.0, lambda_SB=1.0, lambda_NCE=1.0,
    device="cuda",
)
trainer = UNSBTrainer(cfg)
# Single training step with unpaired data
losses = trainer.train_step(real_A_batch, real_B_batch)

Local Diffusion hallucination-aware translation

from src.models.local_diffusion import create_unet
from src.schedulers.local_diffusion import LocalDiffusionScheduler
from src.pipelines.local_diffusion import LocalDiffusionPipeline

# Create conditional U-Net and Gaussian diffusion scheduler
unet = create_unet(dim=32, channels=1, dim_mults=(1, 2, 4, 8))
scheduler = LocalDiffusionScheduler(num_train_timesteps=250, beta_schedule="sigmoid")

# Standard inference
pipeline = LocalDiffusionPipeline(unet=unet, scheduler=scheduler)
result = pipeline(cond_image, output_type="pt")

# Branch-and-fuse inference (hallucination suppression)
result = pipeline(
    cond_image, anomaly_mask=mask,
    branch_out=True, fusion_timestep=2, output_type="pt",
)

Local Diffusion training

from examples.local_diffusion.config import LocalDiffusionConfig
from examples.local_diffusion.train_local_diffusion import LocalDiffusionTrainer

cfg = LocalDiffusionConfig(
    dim=32, channels=1,
    num_train_timesteps=250, beta_schedule="sigmoid",
    objective="pred_x0", device="cuda",
)
trainer = LocalDiffusionTrainer(cfg)
losses = trainer.train_step(source_batch, target_batch)

I2SB training with task configs

from examples.i2sb.config import sar2eo_config
from examples.i2sb.trainer import I2SBTrainer

cfg = sar2eo_config(resolution=256, train_batch_size=8)
trainer = I2SBTrainer(cfg)
model = trainer.build_model()
scheduler = trainer.build_scheduler()

# Single-step loss computation
loss = I2SBTrainer.compute_training_loss(model, scheduler, source_batch, target_batch)
loss.backward()

StegoGAN non-bijective translation

from src.training import StegoGANTrainer, StegoGANConfig

cfg = StegoGANConfig(
    input_nc=3, output_nc=3, ngf=64,
    lambda_reg=0.3, lambda_consistency=1.0,
    resnet_layer=8, fusionblock=True,
    device="cuda",
)
trainer = StegoGANTrainer(cfg)
# Run a single training step with unpaired data
losses = trainer.train_step(real_A_batch, real_B_batch)

Package Structure

src/                                 # ← Core library (single source of truth)
├── __init__.py                      # Public API
├── models/
│   ├── generators.py                # UNetGenerator, ResNetGenerator
│   ├── discriminators.py            # PatchGANDiscriminator
│   ├── unet/
│   │   ├── i2sb_unet.py            # I2SBUNet (native ADM-style backbone)
│   │   ├── unet_2d.py              # create_model factory
│   │   └── diffusers_wrappers.py   # DDBMUNet, DDIBUNet, … (diffusers UNet2DModel wrappers)
│   ├── dit/
│   │   └── sit.py                  # SiTBackbone (Diffusion Transformer)
│   ├── stegogan/
│   │   ├── generators.py           # ResnetMaskV1Generator, ResnetMaskV3Generator
│   │   └── networks.py             # NetMatchability, mask_generate, ResnetBlock
│   ├── unsb/
│   │   └── unsb_model.py           # UNSBGenerator, UNSBDiscriminator, UNSBEnergyNet
│   └── local_diffusion/
│       └── local_diffusion_model.py # LocalDiffusionUNet, ConditionEncoder
├── schedulers/                      # One scheduler per method
│   ├── i2sb.py                     # I2SBScheduler
│   ├── ddbm.py                     # DDBMScheduler
│   ├── bibbdm.py                   # BiBBDMScheduler
│   ├── ddib.py                     # DDIBScheduler
│   ├── bdbm.py                     # BDBMScheduler
│   ├── dbim.py                     # DBIMScheduler
│   ├── cdtsde.py                   # CDTSDEScheduler
│   ├── lbm.py                      # LBMScheduler
│   ├── unsb.py                     # UNSBScheduler
│   └── local_diffusion.py          # LocalDiffusionScheduler (DDPM/DDIM)
├── pipelines/                       # One pipeline per method
│   ├── i2sb.py                     # I2SBPipeline
│   ├── ddbm.py                     # DDBMPipeline
│   ├── bibbdm.py                   # BiBBDMPipeline
│   ├── ddib.py                     # DDIBPipeline
│   ├── bdbm.py                     # BDBMPipeline
│   ├── dbim.py                     # DBIMPipeline
│   ├── cdtsde.py                   # CDTSDEPipeline
│   ├── lbm.py                      # LBMPipeline
│   ├── unsb.py                     # UNSBPipeline
│   └── local_diffusion.py          # LocalDiffusionPipeline
├── data/
│   ├── datasets.py                 # PairedImageDataset, UnpairedImageDataset
│   └── transforms.py               # get_transforms, default_transforms
├── losses/
│   ├── adversarial.py              # GANLoss
│   └── perceptual.py               # PerceptualLoss
├── training/
│   ├── trainer.py                  # Pix2PixTrainer, TrainingConfig
│   └── stegogan_trainer.py         # StegoGANTrainer, StegoGANConfig
├── inference/
│   └── predictor.py                # ImageTranslator
└── metrics/
    └── image_quality.py            # PSNR, SSIM, LPIPS, FID
examples/                            # ← Training/inference scripts (import from src/)
├── community/                       # Community-contributed pipelines (single-file)
│   └── parallel_gan.py             # Parallel-GAN (Wang et al., TGRS 2022)
├── i2sb/                            # I2SB paper-oriented training code
│   ├── config.py                   # TaskConfig, sar2eo_config, etc.
│   └── trainer.py                  # I2SBTrainer
└── inference/
    └── run_inference.py            # Unified inference script for all methods

Credits

Reference papers

License

MIT
