Project description
pytorch-image-translation-models
A PyTorch library for multi-modal image translation with diffusion bridges, GANs, and transformer backbones.
Installation
Install from PyPI
pip install pytorch-image-translation-models
Install from source
pip install -e .
With optional dependencies:
# With training extras (accelerate, peft, datasets, tensorboard)
pip install -e ".[training]"
# With metrics extras (torchmetrics, lpips, torch-fidelity, scipy)
pip install -e ".[metrics]"
# Everything
pip install -e ".[all]"
Note: PyTorch is listed as a dependency, but you may want to install a specific CUDA build first. See PyTorch — Get Started for details.
Features
Models
- GAN generators — UNetGenerator (encoder-decoder with skip connections), ResNetGenerator (residual blocks)
- GAN discriminators — PatchGANDiscriminator (Markovian patch-level classifier)
- Diffusion bridge — I2SBUNet (ADM-style U-Net for Image-to-Image Schrödinger Bridge)
Schedulers
- I2SBScheduler — Symmetric beta schedule with forward/reverse bridge kernels for I2SB
Pipelines
- I2SBPipeline — End-to-end inference for I2SB models (supports "pt", "pil", "np" output)
Data
PairedImageDataset / UnpairedImageDataset with configurable transform pipelines
Losses
GANLoss (vanilla / LSGAN / hinge), VGG-based PerceptualLoss
Training
- Pix2PixTrainer — paired GAN training with checkpoint save/load
- I2SBTrainer — I2SB bridge model training (in examples/i2sb/)
Metrics
compute_psnr, compute_ssim, compute_lpips, compute_fid
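These helpers mirror the standard metric definitions. As a point of reference, here is what PSNR computes — a standalone numpy sketch of the textbook formula, not the library's own compute_psnr:

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)

clean = np.full((1, 3, 8, 8), 0.5)
noisy = clean + 0.1            # constant error of 0.1 -> MSE = 0.01
print(round(psnr(noisy, clean), 1))  # -> 20.0
```

A uniform error of 0.1 on a [0, 1] image gives 10 · log10(1 / 0.01) = 20 dB, which is a handy sanity check for any PSNR implementation.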
Quick Start
GAN-based translation (Pix2Pix)
import src
from src.training import Pix2PixTrainer, TrainingConfig

# Generator maps a 3-channel source to a 3-channel target; the PatchGAN
# discriminator sees source and target concatenated (3 + 3 = 6 channels).
gen = src.UNetGenerator(in_channels=3, out_channels=3)
disc = src.PatchGANDiscriminator(in_channels=6)

config = TrainingConfig(epochs=100, device="cuda")
trainer = Pix2PixTrainer(gen, disc, config)
trainer.fit(dataloader)  # expects batches of {"source": tensor, "target": tensor}

# Inference on a PIL image with the trained generator
translator = src.ImageTranslator(gen, device="cuda")
result = translator.predict(pil_image)
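The trainer consumes batches shaped like {"source": ..., "target": ...}. A minimal stand-in dataset that produces this format (a toy sketch in numpy — the library's PairedImageDataset loads real image pairs and applies transforms, but yields the same dict structure):

```python
import numpy as np

class ToyPairedDataset:
    """Yields {"source", "target"} pairs in CHW layout, matching the batch
    format that Pix2PixTrainer.fit expects (per the README comment)."""

    def __init__(self, n_pairs, size=16):
        rng = np.random.default_rng(0)
        self.sources = rng.random((n_pairs, 3, size, size), dtype=np.float32)
        self.targets = rng.random((n_pairs, 3, size, size), dtype=np.float32)

    def __len__(self):
        return len(self.sources)

    def __getitem__(self, i):
        return {"source": self.sources[i], "target": self.targets[i]}

ds = ToyPairedDataset(4)
sample = ds[0]
print(sorted(sample))           # -> ['source', 'target']
print(sample["source"].shape)   # -> (3, 16, 16)
```

Wrapping such a dataset in a torch DataLoader collates each key into a batched tensor, which is the dict the trainer unpacks per step.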
Diffusion bridge translation (I2SB)
from src.models.unet import I2SBUNet, create_model
from src.schedulers import I2SBScheduler
from src.pipelines.i2sb import I2SBPipeline
# Create model and scheduler
model = create_model(
image_size=256, in_channels=3, num_channels=128,
num_res_blocks=2, attention_resolutions="32,16,8",
condition_mode="concat",
)
scheduler = I2SBScheduler(interval=1000, beta_max=0.3)
# Inference pipeline
pipeline = I2SBPipeline(unet=model, scheduler=scheduler)
result = pipeline(source_tensor, nfe=20, output_type="pt")
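Passing nfe=20 runs only 20 network evaluations against the scheduler's 1000-step interval. A common way samplers pick which timesteps to visit is even subsampling — an illustrative sketch, not necessarily I2SBPipeline's exact rule:

```python
import numpy as np

def subsample_timesteps(interval=1000, nfe=20):
    """Pick `nfe` evenly spaced timesteps out of `interval`, ordered from
    the noisiest step (interval - 1) down to 0 for reverse-time sampling."""
    steps = np.linspace(0, interval - 1, nfe).round().astype(int)
    return steps[::-1]

ts = subsample_timesteps(interval=1000, nfe=20)
print(len(ts), ts[0], ts[-1])  # -> 20 999 0
```

Lower nfe trades fidelity for speed linearly in network calls; one appeal of bridge models like I2SB is that they tend to degrade gracefully at small nfe.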
I2SB training with task configs
from examples.i2sb.config import sar2eo_config
from examples.i2sb.trainer import I2SBTrainer
cfg = sar2eo_config(resolution=256, train_batch_size=8)
trainer = I2SBTrainer(cfg)
model = trainer.build_model()
scheduler = trainer.build_scheduler()
# Single-step loss computation
loss = I2SBTrainer.compute_training_loss(model, scheduler, source_batch, target_batch)
loss.backward()
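compute_training_loss follows the I2SB recipe: sample an intermediate state on the bridge between source and target, then regress the network's prediction at that state. The bridge sampling step can be illustrated with a simplified Brownian-bridge kernel — a sketch only, not I2SBScheduler's exact forward kernel (which uses the symmetric beta schedule):

```python
import numpy as np

def bridge_sample(x0, x1, t, rng):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and x1 (t=1).
    Mean interpolates linearly; variance t*(1-t) vanishes at both endpoints."""
    mean = (1.0 - t) * x0 + t * x1
    std = np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = np.zeros((2, 3, 8, 8))  # e.g. source images
x1 = np.ones((2, 3, 8, 8))   # e.g. target images

assert np.allclose(bridge_sample(x0, x1, 0.0, rng), x0)  # pinned at source
assert np.allclose(bridge_sample(x0, x1, 1.0, rng), x1)  # pinned at target
xt = bridge_sample(x0, x1, 0.5, rng)  # noisy intermediate state
print(xt.shape)  # -> (2, 3, 8, 8)
```

The pinning at both endpoints is what distinguishes a bridge from an ordinary diffusion: the process is conditioned to start at the source and end at the target, so no unconditional prior is needed.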
Package Structure
src/
├── __init__.py # Public API
├── models/
│ ├── generators.py # UNetGenerator, ResNetGenerator
│ ├── discriminators.py # PatchGANDiscriminator
│ └── unet/ # ADM-style U-Net for I2SB
│ ├── i2sb_unet.py # I2SBUNet
│ └── unet_2d.py # create_model factory
├── schedulers/
│ └── i2sb.py # I2SBScheduler
├── pipelines/
│ └── i2sb.py # I2SBPipeline
├── data/
│ ├── datasets.py # PairedImageDataset, UnpairedImageDataset
│ └── transforms.py # get_transforms, default_transforms
├── losses/
│ ├── adversarial.py # GANLoss
│ └── perceptual.py # PerceptualLoss
├── training/
│ └── trainer.py # Pix2PixTrainer, TrainingConfig
├── inference/
│ └── predictor.py # ImageTranslator
└── metrics/
└── image_quality.py # PSNR, SSIM, LPIPS, FID
examples/
└── i2sb/
├── config.py # TaskConfig, sar2eo_config, etc.
└── trainer.py # I2SBTrainer
Credits
Reference papers
- I2SB: Image-to-Image Schrödinger Bridge (ICML 2023)
- DDBM: Denoising Diffusion Bridge Models (ICLR 2024)
- DDIB: Dual Diffusion Implicit Bridges (ICLR 2023)
- BBDM: Image-to-Image Translation with Brownian Bridge Diffusion Models (CVPR 2023)
- CUT: Contrastive Unpaired Translation (ECCV 2020)
- CycleGAN (ICCV 2017)
- img2img-turbo (2024)
License
MIT
File details
Details for the file pytorch_image_translation_models-0.1.1.tar.gz.
File metadata
- Download URL: pytorch_image_translation_models-0.1.1.tar.gz
- Size: 49.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 51bfe40380b1bbf0bd290d3fd17d2ebc57f3861505501c6d097be8148aad6807 |
| MD5 | 70ee55aac5f7d50acd4086fcb8fc654b |
| BLAKE2b-256 | d402a2a7c24d7d531df6b959b09144f3af6d3e76ece01157a6494655a1eae1ff |
Provenance
The following attestation bundles were made for pytorch_image_translation_models-0.1.1.tar.gz:
- Publisher: publish.yml on Bili-Sakura/pytorch-image-translation-models
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pytorch_image_translation_models-0.1.1.tar.gz
- Subject digest: 51bfe40380b1bbf0bd290d3fd17d2ebc57f3861505501c6d097be8148aad6807
- Sigstore transparency entry: 1045925224
- Permalink: Bili-Sakura/pytorch-image-translation-models@43e657a07344509778d153f539e9bd963d178f0a
- Branch / Tag: refs/tags/0.1.1
- Owner: https://github.com/Bili-Sakura
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@43e657a07344509778d153f539e9bd963d178f0a
- Trigger Event: release
File details
Details for the file pytorch_image_translation_models-0.1.1-py3-none-any.whl.
File metadata
- Download URL: pytorch_image_translation_models-0.1.1-py3-none-any.whl
- Size: 53.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 890b6de3d308c40d2ae6f0539b4f60afd107532ba38a3ce467d6b6cc83ff582b |
| MD5 | 4185a96af1013c449a95156f13e2c368 |
| BLAKE2b-256 | 515fc25e2e498fe2345861c477bfee4996dea84662f4c4d0cfd4982049f7efbe |
Provenance
The following attestation bundles were made for pytorch_image_translation_models-0.1.1-py3-none-any.whl:
- Publisher: publish.yml on Bili-Sakura/pytorch-image-translation-models
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pytorch_image_translation_models-0.1.1-py3-none-any.whl
- Subject digest: 890b6de3d308c40d2ae6f0539b4f60afd107532ba38a3ce467d6b6cc83ff582b
- Sigstore transparency entry: 1045925305
- Permalink: Bili-Sakura/pytorch-image-translation-models@43e657a07344509778d153f539e9bd963d178f0a
- Branch / Tag: refs/tags/0.1.1
- Owner: https://github.com/Bili-Sakura
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@43e657a07344509778d153f539e9bd963d178f0a
- Trigger Event: release