
Refactored Python training and inference code for 3D Gaussian Splatting


LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming (Packaged Python Version)

This repository contains the refactored Python code for LapisGS. It is forked from commit 12dcda37ed43838d7407b28675bc26b7364ae431. The original code has been refactored to follow the standard Python package structure, while maintaining the same algorithms as the original version.

Features

  • Code organized as a standard Python package
  • Layered progressive 3D Gaussian Splatting
  • Multi-resolution training pipeline

Prerequisites

  • PyTorch (v2.4 or higher recommended)
  • CUDA Toolkit (12.4 recommended; should match your PyTorch build)

Install

PyPI Install

pip install --upgrade lapisgs

Install (Development)

Install gaussian-splatting. You can download the wheel from PyPI:

pip install --upgrade gaussian-splatting

Alternatively, install the latest version from the source:

pip install --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master

Install reduced-3dgs. You can download the wheel from PyPI:

pip install --upgrade reduced-3dgs

Alternatively, install the latest version from the source:

pip install --upgrade git+https://github.com/yindaheng98/reduced-3dgs.git@main

Then clone this repository and install lapis-gs locally:

git clone --recursive https://github.com/yindaheng98/lapis-gs
cd lapis-gs
pip install tqdm plyfile tifffile
pip install --target . --upgrade --no-deps .

(Optional) If you prefer not to install gaussian-splatting and reduced-3dgs in your environment, you can install them in your lapis-gs directory:

pip install --target . --no-deps --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master
pip install --target . --no-deps --upgrade git+https://github.com/yindaheng98/reduced-3dgs.git@main

Quick Start

  1. Download the dataset (T&T+DB COLMAP dataset, 650MB):
wget https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip -P ./data
unzip data/tandt_db.zip -d data/
  2. Train LapisGS with the full pipeline (8x → 4x → 2x → 1x); this way, each layer shares the same training parameters except rescale_factor:
python -m lapisgs.train_full_pipeline_reduced -s data/truck -d output/truck -i 30000 --mode base -olambda_dssim=0.8
  3. (Optional) Train the progressive layers individually (8x → 4x → 2x → 1x); this way, you can modify the training parameters for each layer:
# Train 8x (lowest resolution)
python -m lapisgs.train_reduced -s data/truck -d output/truck/8x --rescale_factor 0.125 -i 10000 --mode shculling -olambda_dssim=0.8

# Train 4x (load from 8x)
python -m lapisgs.train_reduced -s data/truck -d output/truck/4x --rescale_factor 0.25 -l output/truck/8x/point_cloud/iteration_10000/point_cloud.ply --load_camera output/truck/8x/cameras.json -i 10000 --mode camera-shculling -olambda_dssim=0.8

# Train 2x (load from 4x)
python -m lapisgs.train_reduced -s data/truck -d output/truck/2x --rescale_factor 0.5 -l output/truck/4x/point_cloud/iteration_10000/point_cloud.ply --load_camera output/truck/4x/cameras.json -i 10000 --mode camera-shculling -olambda_dssim=0.8

# Train 1x (full resolution, load from 2x)
python -m lapisgs.train_reduced -s data/truck -d output/truck/1x --rescale_factor 1.0 -l output/truck/2x/point_cloud/iteration_10000/point_cloud.ply --load_camera output/truck/2x/cameras.json -i 10000 --mode camera-shculling -olambda_dssim=0.8
  4. Render LapisGS at different resolutions:
# Render 8x
python -m lapisgs.render -s data/truck -d output/truck/8x -i 10000 --mode base --load_camera output/truck/8x/cameras.json --rescale_factor 0.125

# Render 4x
python -m lapisgs.render -s data/truck -d output/truck/4x -i 10000 --mode camera --load_camera output/truck/4x/cameras.json --rescale_factor 0.25

# Render 2x
python -m lapisgs.render -s data/truck -d output/truck/2x -i 10000 --mode camera --load_camera output/truck/2x/cameras.json --rescale_factor 0.5

# Render 1x (full resolution)
python -m lapisgs.render -s data/truck -d output/truck/1x -i 10000 --mode camera --load_camera output/truck/1x/cameras.json --rescale_factor 1.0
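The per-layer commands above all follow one pattern: only rescale_factor, the output directory, and the checkpoint loaded from the previous layer change between invocations. As a sketch (the paths and flags mirror the commands above; this script only builds the command strings and is not part of the lapisgs API), the four progressive training runs can be generated programmatically:

```python
# Sketch: generate the four progressive training commands shown above.
# The flag names mirror the CLI examples; nothing is executed here.
LAYERS = [("8x", 0.125), ("4x", 0.25), ("2x", 0.5), ("1x", 1.0)]
ITERS = 10000

def build_commands(source="data/truck", out_root="output/truck"):
    commands, prev = [], None
    for name, factor in LAYERS:
        cmd = [
            "python", "-m", "lapisgs.train_reduced",
            "-s", source, "-d", f"{out_root}/{name}",
            "--rescale_factor", str(factor),
            "-i", str(ITERS),
            "-olambda_dssim=0.8",
        ]
        if prev is None:
            cmd += ["--mode", "shculling"]  # lowest layer trains from scratch
        else:
            cmd += [  # higher layers load the previous layer as foundation
                "--mode", "camera-shculling",
                "-l", f"{out_root}/{prev}/point_cloud/iteration_{ITERS}/point_cloud.ply",
                "--load_camera", f"{out_root}/{prev}/cameras.json",
            ]
        commands.append(cmd)
        prev = name
    return commands

for cmd in build_commands():
    print(" ".join(cmd))
```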

💡 This repo does not contain code for creating datasets. If you want to create your own dataset, please refer to InstantSplat or use convert.py.

💡 See .vscode/launch.json for advanced examples. See lapisgs.train_full_pipeline_reduced and lapisgs.train_reduced for full options.

API Usage

This project is built on top of gaussian-splatting and reduced-3dgs. Please refer to their documentation for basic usage of Gaussian models, datasets, and trainers.

Gaussian Models

LapisGS uses the standard Gaussian models from gaussian-splatting:

from gaussian_splatting import GaussianModel, CameraTrainableGaussianModel

# Standard Gaussian model
gaussians = GaussianModel(sh_degree).to(device)

# For camera-trainable scenarios
gaussians = CameraTrainableGaussianModel(sh_degree).to(device)

Multi-Resolution Datasets

LapisGS provides rescale-aware dataset classes for multi-resolution training:

from lapisgs.dataset import RescaleColmapCameraDataset, RescaleTrainableCameraDataset

# For standard training
dataset = RescaleColmapCameraDataset(source_path, rescale_factor=0.125, load_depth=True) # 8x
dataset = RescaleColmapCameraDataset(source_path, rescale_factor=0.25, load_depth=True) # 4x
dataset = RescaleColmapCameraDataset(source_path, rescale_factor=0.5, load_depth=True) # 2x
dataset = RescaleColmapCameraDataset(source_path, rescale_factor=1.0, load_depth=True) # 1x
# ... you can use any rescale_factor as you want

# For camera-trainable scenarios
dataset = RescaleTrainableCameraDataset.from_colmap(source_path, rescale_factor=0.125, load_depth=True) # 8x
dataset = RescaleTrainableCameraDataset.from_colmap(source_path, rescale_factor=0.25, load_depth=True) # 4x
dataset = RescaleTrainableCameraDataset.from_colmap(source_path, rescale_factor=0.5, load_depth=True) # 2x
dataset = RescaleTrainableCameraDataset.from_colmap(source_path, rescale_factor=1.0, load_depth=True) # 1x
# ... you can use any rescale_factor as you want

# Load from saved JSON
dataset = RescaleTrainableCameraDataset.from_json(camera_json_path, rescale_factor=0.125, load_depth=True) # 8x
dataset = RescaleTrainableCameraDataset.from_json(camera_json_path, rescale_factor=0.25, load_depth=True) # 4x
dataset = RescaleTrainableCameraDataset.from_json(camera_json_path, rescale_factor=0.5, load_depth=True) # 2x
dataset = RescaleTrainableCameraDataset.from_json(camera_json_path, rescale_factor=1.0, load_depth=True) # 1x
# ... you can use any rescale_factor as you want
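The rescale_factor is a multiplier on the source image resolution (0.125 corresponds to the 8x-downscaled layer). A small, hypothetical helper (not part of the lapisgs API; the library's exact rounding may differ) that maps factors to pixel sizes:

```python
# Hypothetical helper (not part of the lapisgs API): map a rescale_factor
# to the approximate pixel resolution a layer is trained at.
def rescaled_size(width, height, rescale_factor):
    """Return the (width, height) for a layer at the given factor."""
    return round(width * rescale_factor), round(height * rescale_factor)

# The four factors used in the Quick Start, applied to a 1920x1080 source:
for factor in (0.125, 0.25, 0.5, 1.0):
    print(factor, rescaled_size(1920, 1080, factor))
```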

LapisGS Trainers

LapisGS provides specialized trainers with partial densification and opacity reset:

from lapisgs.trainer import LapisTrainer, DepthLapisTrainer, LapisCameraTrainer, DepthLapisCameraTrainer

# Basic LapisGS trainer
trainer = LapisTrainer(
    gaussians,
    scene_extent=dataset.scene_extent(),
    # ... other parameters
)

# LapisGS trainer with depth regularization
trainer = DepthLapisTrainer(
    gaussians,
    scene_extent=dataset.scene_extent(),
    # ... other parameters
)

# LapisGS trainer with camera optimization
trainer = LapisCameraTrainer(
    gaussians,
    scene_extent=dataset.scene_extent(),
    dataset=dataset,
    # ... other parameters
)

# LapisGS trainer with both depth and camera optimization
trainer = DepthLapisCameraTrainer(
    gaussians,
    scene_extent=dataset.scene_extent(),
    dataset=dataset,
    # ... other parameters
)

Training Pipeline

from lapisgs.prepare import prepare_dataset, prepare_trainer
from reduced_3dgs.prepare import prepare_gaussians

# Prepare components for training
dataset = prepare_dataset(
    source=source_path,
    device=device,
    trainable_camera=True,
    load_camera=camera_json_path,
    rescale_factor=0.5
)

gaussians = prepare_gaussians(
    sh_degree=3,
    source=source_path,
    device=device,
    trainable_camera=True,
    load_ply=foundation_ply_path
)

trainer = prepare_trainer(
    gaussians=gaussians,
    dataset=dataset,
    mode="camera",  # "base", "camera", "nodepth-base", "nodepth-camera"
    trainable_camera=True,
    load_ply=foundation_ply_path
)

# Training loop
for camera in dataset:
    loss, out = trainer.step(camera)

How to extract the enhanced layer

Note that <scene>_res1 is the highest resolution, and <scene>_res8 is the lowest resolution. The model is trained from the lowest resolution to the highest resolution. The model stored in the higher resolution folder contains not only the higher layer but also the lower layer(s).

We construct the merged GS in a specially designed order: the lower layers come first as the foundation base, and the enhanced layer is stitched behind it, as shown in the figure below. Because the foundation base is frozen during optimization and adaptive density control, the enhanced layer can be extracted simply by slicing, e.g. GS[size_of_foundation_layers:].

(Figure: model structure)
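Since the merged model is a plain concatenation (foundation layers first, enhanced layer behind), extraction reduces to array slicing. A minimal sketch with Python lists standing in for the per-Gaussian attribute tensors (the real model stores tensors for positions, SH features, opacities, etc.; the names here are illustrative, not the library's):

```python
# Minimal sketch of enhanced-layer extraction by slicing.
# Plain lists stand in for per-Gaussian attribute tensors; the merged
# model concatenates them in the same order: foundation first.
foundation = [{"xyz": (0, 0, i)} for i in range(5)]  # lower layers (frozen)
enhanced = [{"xyz": (1, 1, i)} for i in range(3)]    # newly densified splats

merged = foundation + enhanced
size_of_foundation_layers = len(foundation)

# Extract the enhanced layer exactly as GS[size_of_foundation_layers:]:
extracted = merged[size_of_foundation_layers:]
print(len(merged), len(extracted))
```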

CUDA out-of-memory error

Through experiments, we found that the default loss function is not sensitive to low-resolution images, causing optimization and densification to fail. In the default loss function, the L1 term carries most of the weight (0.8), but L1 loss is not sensitive to finer details, blurriness, or low-resolution artifacts. The loss computed with the default weighting is therefore small at the low layers, which disables parameter updates and adaptive density control for the low-layer Gaussian splats. For this reason, we set lambda_dssim to 0.8 to emphasize the structural similarity term, which is more sensitive to low-resolution artifacts. This, however, triggers much heavier densification and ultimately produces a larger 3DGS model, which can in turn cause CUDA out-of-memory errors during training.
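In the standard 3DGS objective the two terms are blended as loss = (1 - lambda_dssim) * L1 + lambda_dssim * D-SSIM. A toy calculation (the L1 and D-SSIM values below are made up purely for illustration) shows why a low-resolution layer yields almost no gradient signal under the default lambda_dssim = 0.2 but a much larger loss at 0.8:

```python
def blended_loss(l1, dssim, lambda_dssim):
    """Standard 3DGS loss: (1 - lambda) * L1 + lambda * D-SSIM."""
    return (1.0 - lambda_dssim) * l1 + lambda_dssim * dssim

# Made-up values for a low-resolution layer: L1 is already tiny, while
# the structural (D-SSIM) term still sees blur and missing detail.
l1, dssim = 0.01, 0.20

default = blended_loss(l1, dssim, lambda_dssim=0.2)  # L1-dominated
lapis = blended_loss(l1, dssim, lambda_dssim=0.8)    # SSIM-dominated
print(default, lapis)  # the SSIM-weighted loss is several times larger
```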

To reduce the model size, you may try to 1) lower lambda_dssim, or 2) increase the densification threshold. Generally speaking, complex scenes do not need to be SSIM-sensitive: for example, training LapisGS on the complex scene playroom with the default lambda_dssim of 0.2 still produces a reasonable layered structure, while the same setting fails on the simple object lego.

LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming

Yuang Shi1, Simone Gasparini2, Géraldine Morin2, Wei Tsang Ooi1

1National University of Singapore, 2IRIT - Université de Toulouse

International Conference on 3D Vision (3DV), 2025



(Figure: teaser)

We introduce LapisGS*, a layered progressive 3DGS, for adaptive streaming and view-adaptive rendering.

*Lapis means "layer" in Malay, the national language of Singapore --- the host of 3DV'25. The logo in the title depicts kuih lapis, or "layered cake", a local delight in Singapore and neighboring countries. The authors are glad to serve kuih lapis to our friends at the conference to share the joy of the layered approach 🥳.


If you find our code or paper useful, please cite

@inproceedings{shi2024lapisgs,
  author    = {Shi, Yuang and Gasparini, Simone and Morin, Géraldine and Ooi, Wei Tsang},
  title     = {{LapisGS}: Layered Progressive {3D Gaussian} Splatting for Adaptive Streaming},
  publisher = {{IEEE}},
  booktitle = {International Conference on 3D Vision, 3DV 2025, Singapore, March 25-28, 2025},
  year      = {2025},
  }

Based on LapisGS, we built the first dynamic 3DGS streaming system, which achieves superior performance in both live streaming and on-demand streaming. The work will appear at MMSys'25 in March 2025; a preprint paper is available.

@inproceedings{sun2025lts,
  author    = {Sun, Yuan-Chun and Shi, Yuang and Lee, Cheng-Tse and Zhu, Mufeng and Ooi, Wei Tsang and Liu, Yao and Huang, Chun-Ying and Hsu, Cheng-Hsin},
  title     = {{LTS}: A {DASH} Streaming System for Dynamic Multi-Layer {3D Gaussian} Splatting Scenes},
  publisher = {{ACM}},
  booktitle = {The 16th ACM Multimedia Systems Conference, MMSys 2025, 2025},
  year      = {2025},
  }
