LVSM - Pytorch (wip)
Implementation of LVSM, a Large View Synthesis Model with Minimal 3D Inductive Bias, from Adobe Research
We will focus only on the Decoder-only architecture in this repository.
This paper lines up with another from ICLR 2025
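To illustrate the decoder-only idea in code: image-plus-ray patch tokens and target-ray patch tokens are all fed through a single plain transformer, and the output tokens at the target positions are decoded back to pixels. The sketch below is a toy rendition of that scheme, not the repository's actual implementation; the class name, layer choices, and dimensions are all illustrative.

```python
import torch
from torch import nn

class DecoderOnlySketch(nn.Module):
    """Toy decoder-only sketch: (image + ray) patches and target-ray
    patches become tokens for one transformer; target tokens are
    decoded to RGB patches."""

    def __init__(self, dim = 256, patch_size = 32, depth = 2):
        super().__init__()
        self.p = patch_size
        # input patches carry rgb (3) + ray embedding (6) channels
        self.to_input_tokens = nn.Linear((3 + 6) * patch_size ** 2, dim)
        # target patches carry only the 6 ray channels
        self.to_target_tokens = nn.Linear(6 * patch_size ** 2, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead = 8, batch_first = True)
        self.transformer = nn.TransformerEncoder(layer, depth)
        self.to_pixels = nn.Linear(dim, 3 * patch_size ** 2)

    def patchify(self, x):
        b, c, h, w = x.shape
        p = self.p
        x = x.reshape(b, c, h // p, p, w // p, p)
        return x.permute(0, 2, 4, 1, 3, 5).reshape(b, (h // p) * (w // p), -1)

    def forward(self, images, rays, target_rays):
        inp = self.patchify(torch.cat((images, rays), dim = 1))
        tgt = self.patchify(target_rays)
        tokens = torch.cat((self.to_input_tokens(inp), self.to_target_tokens(tgt)), dim = 1)
        out = self.transformer(tokens)[:, -tgt.shape[1]:]  # keep target positions
        patches = self.to_pixels(out)
        b, n, _ = patches.shape
        h = w = int(n ** 0.5)
        p = self.p
        pixels = patches.reshape(b, h, w, 3, p, p).permute(0, 3, 1, 4, 2, 5)
        return pixels.reshape(b, 3, h * p, w * p)

model = DecoderOnlySketch()
pred = model(
    torch.randn(2, 3, 256, 256),  # input images
    torch.randn(2, 6, 256, 256),  # input rays
    torch.randn(2, 6, 256, 256),  # target rays
)
assert pred.shape == (2, 3, 256, 256)
```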
Install
```bash
$ pip install lvsm-pytorch
```
Usage
```python
import torch
from lvsm_pytorch.lvsm import LVSM

rays = torch.randn(2, 6, 256, 256)
images = torch.randn(2, 3, 256, 256)

target_rays = torch.randn(2, 6, 256, 256)
target_images = torch.randn(2, 3, 256, 256)

model = LVSM(
    dim = 512,
    patch_size = 32,
    depth = 2,
)

loss = model(
    input_images = images,
    input_rays = rays,
    target_rays = target_rays,
    target_images = target_images
)

loss.backward()

# after much training

pred_images = model(
    input_images = images,
    input_rays = rays,
    target_rays = target_rays,
)  # (2, 3, 256, 256)

assert pred_images.shape == target_images.shape
```
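The 6-channel `rays` tensors above likely hold Plücker ray coordinates, as used in the paper; this is an assumption, so check the repository for its exact convention. A minimal sketch of building such an embedding from per-pixel ray origins and directions (the function name is hypothetical):

```python
import torch
import torch.nn.functional as F

def plucker_embedding(origins, directions):
    """Plücker ray embedding: concatenate the normalized ray direction d
    with the moment vector m = o x d, giving 6 channels per pixel.
    origins, directions: (batch, 3, height, width)"""
    d = F.normalize(directions, dim = 1)
    m = torch.cross(origins, d, dim = 1)  # moment vector o x d
    return torch.cat((d, m), dim = 1)     # (batch, 6, height, width)

rays = plucker_embedding(
    torch.randn(2, 3, 256, 256),  # per-pixel ray origins
    torch.randn(2, 3, 256, 256),  # per-pixel ray directions
)
assert rays.shape == (2, 6, 256, 256)
```

A convenient property of this representation is that it depends only on the ray itself, not on where along the ray the origin is placed.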
Citations
```bibtex
@inproceedings{Jin2024LVSMAL,
    title   = {LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias},
    author  = {Haian Jin and Hanwen Jiang and Hao Tan and Kai Zhang and Sai Bi and Tianyuan Zhang and Fujun Luan and Noah Snavely and Zexiang Xu},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:273507016}
}
```