
Project description

LVSM - Pytorch (wip)

Implementation of LVSM, a SOTA Large View Synthesis Model with Minimal 3D Inductive Bias, from Adobe Research

This repository will focus only on the decoder-only architecture.

This paper lines up with another from ICLR 2025

Install

$ pip install lvsm-pytorch

Usage

import torch
from lvsm_pytorch import LVSM

rays = torch.randn(2, 4, 6, 256, 256)       # (batch, input views, 6 Plücker ray coords, height, width)
images = torch.randn(2, 4, 3, 256, 256)     # (batch, input views, rgb, height, width)

target_rays = torch.randn(2, 6, 256, 256)   # rays for the novel view to be synthesized
target_images = torch.randn(2, 3, 256, 256) # ground truth image for that view

model = LVSM(
    dim = 512,              # transformer dimension
    max_image_size = 256,
    patch_size = 32,
    depth = 2,              # number of transformer layers
)

loss = model(
    input_images = images,
    input_rays = rays,
    target_rays = target_rays,
    target_images = target_images
)

loss.backward()

# after much training

pred_images = model(
    input_images = images,
    input_rays = rays,
    target_rays = target_rays,
) # (2, 3, 256, 256)

assert pred_images.shape == target_images.shape
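
For completeness, a minimal training-loop sketch. The dataloader here is hypothetical — any iterable yielding batches shaped like the tensors above would do.

from torch.optim import Adam

optimizer = Adam(model.parameters(), lr = 1e-4)

for images, rays, target_rays, target_images in dataloader:  # hypothetical dataloader
    loss = model(
        input_images = images,
        input_rays = rays,
        target_rays = target_rays,
        target_images = target_images
    )

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()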

Or from the raw camera intrinsics / extrinsics (please submit an issue or pull request if you spot an error; I'm new to view synthesis and out of my depth here)

import torch
from lvsm_pytorch import LVSM, CameraWrapper

input_intrinsic_rotation = torch.randn(2, 4, 3, 3)    # (batch, input views, 3, 3)
input_extrinsic_rotation = torch.randn(2, 4, 3, 3)    # (batch, input views, 3, 3)
input_translation = torch.randn(2, 4, 3)              # (batch, input views, 3)
input_uniform_points = torch.randn(2, 4, 3, 256, 256) # (batch, input views, 3, height, width)

target_intrinsic_rotation = torch.randn(2, 3, 3)      # as above, without the view dimension
target_extrinsic_rotation = torch.randn(2, 3, 3)
target_translation = torch.randn(2, 3)
target_uniform_points = torch.randn(2, 3, 256, 256)

images = torch.randn(2, 4, 4, 256, 256)               # (batch, input views, 4 channels, height, width)
target_images = torch.randn(2, 4, 256, 256)           # (batch, 4 channels, height, width)

lvsm = LVSM(
    dim = 512,
    max_image_size = 256,
    patch_size = 32,
    channels = 4,           # matches the 4-channel images above
    depth = 2,
)

model = CameraWrapper(lvsm)

loss = model(
    input_intrinsic_rotation = input_intrinsic_rotation,
    input_extrinsic_rotation = input_extrinsic_rotation,
    input_translation = input_translation,
    input_uniform_points = input_uniform_points,
    target_intrinsic_rotation = target_intrinsic_rotation,
    target_extrinsic_rotation = target_extrinsic_rotation,
    target_translation = target_translation,
    target_uniform_points = target_uniform_points,
    input_images = images,
    target_images = target_images,
)

loss.backward()

# after much training

pred_target_images = model(
    input_intrinsic_rotation = input_intrinsic_rotation,
    input_extrinsic_rotation = input_extrinsic_rotation,
    input_translation = input_translation,
    input_uniform_points = input_uniform_points,
    target_intrinsic_rotation = target_intrinsic_rotation,
    target_extrinsic_rotation = target_extrinsic_rotation,
    target_translation = target_translation,
    target_uniform_points = target_uniform_points,
    input_images = images,
)
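
For intuition, below is a sketch of the standard Plücker ray parameterization (as in the Cameras as Rays paper cited below), which is one way a wrapper like this can turn intrinsics / extrinsics into the 6-channel ray maps used in the first example. This function is illustrative only and not necessarily the exact math inside CameraWrapper.

import torch
import torch.nn.functional as F

def plucker_ray_map(intrinsics, rotation, translation, height, width):
    # intrinsics:  (3, 3) camera matrix K
    # rotation:    (3, 3) world-to-camera rotation R
    # translation: (3,)   world-to-camera translation t
    # returns:     (6, height, width) ray map of (direction, moment)

    # homogeneous pixel coordinates, shape (3, height * width)
    v, u = torch.meshgrid(
        torch.arange(height, dtype = torch.float32),
        torch.arange(width, dtype = torch.float32),
        indexing = 'ij'
    )
    pixels = torch.stack((u, v, torch.ones_like(u))).reshape(3, -1)

    # unproject pixels to world-space ray directions: d = Rᵀ K⁻¹ p
    directions = rotation.T @ torch.linalg.inv(intrinsics) @ pixels
    directions = F.normalize(directions, dim = 0)

    # camera center in world coordinates: o = -Rᵀ t
    origin = -rotation.T @ translation

    # Plücker moment: m = o × d
    moments = torch.cross(origin[:, None].expand_as(directions), directions, dim = 0)

    return torch.cat((directions, moments)).reshape(6, height, width)

Stacking one such map per input view would produce the (batch, views, 6, height, width) rays tensor expected by LVSM.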

Citations

@inproceedings{Jin2024LVSMAL,
    title   = {LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias},
    author  = {Haian Jin and Hanwen Jiang and Hao Tan and Kai Zhang and Sai Bi and Tianyuan Zhang and Fujun Luan and Noah Snavely and Zexiang Xu},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:273507016}
}
@article{Zhang2024CamerasAR,
    title     = {Cameras as Rays: Pose Estimation via Ray Diffusion},
    author    = {Jason Y. Zhang and Amy Lin and Moneish Kumar and Tzu-Hsuan Yang and Deva Ramanan and Shubham Tulsiani},
    journal   = {ArXiv},
    year      = {2024},
    volume    = {abs/2402.14817},
    url       = {https://api.semanticscholar.org/CorpusID:267782978}
}
@misc{he2021masked,
    title   = {Masked Autoencoders Are Scalable Vision Learners}, 
    author  = {Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Dollár and Ross Girshick},
    year    = {2021},
    eprint  = {2111.06377},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lvsm_pytorch-0.0.19.tar.gz (1.5 MB)

Uploaded Source

Built Distribution

lvsm_pytorch-0.0.19-py3-none-any.whl (8.4 kB)

Uploaded Python 3

File details

Details for the file lvsm_pytorch-0.0.19.tar.gz.

File metadata

  • Download URL: lvsm_pytorch-0.0.19.tar.gz
  • Upload date:
  • Size: 1.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for lvsm_pytorch-0.0.19.tar.gz:

  • SHA256: fe11e112a56f45b628d5f1c837afedf8aef3078dceb7e6ecba929eb8c4d288e7
  • MD5: 8602d2ca1f5e550ae60af89dc56ac7fe
  • BLAKE2b-256: 4099d6dcf3659b7a8836ff614c4bcf29fa23feea61a878cfe9744f636fd86532
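
To check a downloaded file against the SHA256 digest above, one option is Python's standard hashlib module:

import hashlib

# read the downloaded archive and compare its digest to the published one
with open('lvsm_pytorch-0.0.19.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == 'fe11e112a56f45b628d5f1c837afedf8aef3078dceb7e6ecba929eb8c4d288e7'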


File details

Details for the file lvsm_pytorch-0.0.19-py3-none-any.whl.

File hashes

Hashes for lvsm_pytorch-0.0.19-py3-none-any.whl:

  • SHA256: 18b41c73bdcb4b37773ca6a013a83cd3d6f733753285f63c902112e3b9b23464
  • MD5: ffe3dd4db04f083b1d48f20f26cdc7a6
  • BLAKE2b-256: 2ebbdc5c27b3f5fcfb1e804bc82a2f5b009aabf8b8f39646157174101ef1bf0c

