A geometry-shader-based, globally CUDA-sorted, high-performance 3D Gaussian Splatting rasterizer. It can achieve a 5-10x rendering speedup compared to the vanilla diff-gaussian-rasterization.
Project description
Fast Gaussian Rasterization
- Can be 5-10x faster than the original software CUDA rasterizer (diff-gaussian-rasterization).
- Can be 2-3x faster when using offline rendering. (The bottleneck is copying rendered images around; improvements are being considered.)
- Speedup most visible with high pixel-to-point ratio (large Gaussians, small point count, high-res rendering).
No backward pass is supported yet; ways to add one are being explored. Depth peeling (as in 4K4D) is too slow. Discussion is welcome.
Installation
Install the latest release from PyPI:
pip install fast_gauss
Or the latest commit from GitHub:
pip install git+https://github.com/dendenxu/fast-gaussian-rasterization
No CUDA compilation is required to build fast_gauss, since it is purely shader-based for now.
Usage
Replace the original import of diff_gaussian_rasterization with fast_gauss.
For example, replace this:
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
with this:
from fast_gauss import GaussianRasterizationSettings, GaussianRasterizer
And you're good to go.
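For reference, here is a minimal end-to-end sketch. It assumes fast_gauss accepts the same GaussianRasterizationSettings fields and rasterizer keyword arguments as diff-gaussian-rasterization (which it aims to be a drop-in replacement for); the random Gaussians and identity camera matrices are placeholders for your own scene and camera.

```python
import math
import torch
from fast_gauss import GaussianRasterizationSettings, GaussianRasterizer

# Placeholder scene: random Gaussians standing in for a trained model.
N = 100_000
xyz = torch.rand(N, 3, device='cuda') * 2 - 1
shs = torch.rand(N, 16, 3, device='cuda') * 0.1       # SH coefficients up to degree 3
opacity = torch.rand(N, 1, device='cuda')
scales = torch.full((N, 3), 0.01, device='cuda')
rotations = torch.zeros(N, 4, device='cuda'); rotations[:, 0] = 1.0  # identity quaternions

# Placeholder camera: identity view and projection matrices stand in for your own.
H, W = 1080, 1920
tanfovy = math.tan(math.radians(60) / 2)
tanfovx = tanfovy * W / H
world_view = torch.eye(4)   # world-to-view transform (kept on the CPU, see the tip below)
full_proj = torch.eye(4)    # world-to-clip transform (view composed with projection)

raster_settings = GaussianRasterizationSettings(
    image_height=H, image_width=W,
    tanfovx=tanfovx, tanfovy=tanfovy,
    bg=torch.zeros(3),
    scale_modifier=1.0,
    viewmatrix=world_view, projmatrix=full_proj,
    sh_degree=3, campos=torch.zeros(3),
    prefiltered=False, debug=False,
)
rasterizer = GaussianRasterizer(raster_settings=raster_settings)

# Same call signature as the original rasterizer; note that the second return
# value is the alpha channel here, not the per-Gaussian radii (see the Tips below).
image, alpha = rasterizer(
    means3D=xyz,
    means2D=torch.zeros_like(xyz),  # kept for API compatibility
    shs=shs,
    colors_precomp=None,
    opacities=opacity,
    scales=scales,
    rotations=rotations,
    cov3D_precomp=None,
)
```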
Tips
Note: for the ultimate 5-10x performance increase, you'll need to let fast_gauss's shaders write directly to your desired framebuffer.
Currently, we try to automatically detect whether you're managing your own OpenGL context (i.e. opening up a GUI) by checking for the OpenGL module during the import of fast_gauss.
If detected, all rendering commands will return None and we will write directly to the framebuffer that is bound at the time of the draw call.
Thus, if you're running in a GUI (OpenGL-based) environment, the outputs of our rasterizer will be None and require no further processing (see the sketch after the TODO items below).
- TODO: Improve offline rendering performance.
- TODO: Add a warning to the user if they're performing further processing on the returned values.
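For illustration, this is how a caller might branch on the two paths; rasterizer and its inputs are the ones from the usage sketch above, and save_image is torchvision's utility.

```python
from torchvision.utils import save_image  # only needed for the offline branch

image, alpha = rasterizer(
    means3D=xyz, means2D=torch.zeros_like(xyz), shs=shs, colors_precomp=None,
    opacities=opacity, scales=scales, rotations=rotations, cov3D_precomp=None)

if image is None:
    # GUI path: an OpenGL context was detected at import time, so the pixels were
    # written straight into the bound framebuffer; there is nothing to post-process.
    pass
else:
    # Offline (EGL) path: `image` is an ordinary tensor that can be saved or reused.
    save_image(image.clamp(0, 1), 'render.png')
```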
Note: the speedup is most visible when the pixel-to-point ratio is high, that is, with large Gaussians and very high-resolution rendering.
The CUDA-based software implementation is more sensitive to resolution, and for some extremely dense point clouds (> 1 million points) it might actually be faster, because the typical rasterization pipeline on modern graphics hardware is not well optimized for small triangles.
Note: for best performance, cache persistent results (for example, the 6 elements of the 3D covariance matrix).
This is more of a general tip and not directly related to fast_gauss.
However, the impact is more observable here since we haven't implemented a fast 3D covariance computation (from scales and rotations) in the shader yet; only a PyTorch implementation is available for now.
As the point count increases, even the smallest precomputation can help.
An example is the concatenation of the base 0-degree SH parameters and the rest: that small maneuver might cost 10 ms on a 3060 with 5 million points.
Thus, store the concatenated tensor instead of concatenating every frame (see the sketch after the TODO items below).
- TODO: Implement SH eval in the vertex shader.
- TODO: Warn users if they're not properly precomputing the covariance matrix.
- TODO: Implement a more optimized OptimizedGaussians for precomputing things and applying a cache, similar to that of the vertex shader (see Invocation frequency).
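Below is a sketch of what such caching could look like. CachedGaussians and build_covariance_from_scaling_rotation are hypothetical helpers written for illustration, following the standard 3DGS covariance construction Σ = R S Sᵀ Rᵀ; they are not part of fast_gauss.

```python
import torch

def build_covariance_from_scaling_rotation(scales: torch.Tensor, quats: torch.Tensor) -> torch.Tensor:
    """(N, 3) scales + (N, 4) quaternions (w, x, y, z) -> (N, 6) upper-triangular covariance."""
    q = torch.nn.functional.normalize(quats, dim=-1)
    w, x, y, z = q.unbind(-1)
    R = torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)
    M = R * scales[:, None, :]         # R @ diag(s)
    Sigma = M @ M.transpose(1, 2)      # R S S^T R^T
    i, j = torch.triu_indices(3, 3)
    return Sigma[:, i, j]              # xx, xy, xz, yy, yz, zz

class CachedGaussians:
    """Precompute per-point quantities once instead of every frame."""
    def __init__(self, xyz, features_dc, features_rest, opacity, scales, rotations):
        self.xyz = xyz
        self.opacity = opacity
        # Concatenate the 0-degree SH band with the rest once, not per frame.
        self.shs = torch.cat([features_dc, features_rest], dim=1).contiguous()
        # Precompute the 6 covariance elements once; pass them as cov3D_precomp.
        self.cov3D = build_covariance_from_scaling_rotation(scales, rotations)
```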
Note: it's recommended to pass CPU tensors in the GaussianRasterizationSettings to avoid explicit synchronizations, for even better performance (a small sketch follows the TODO item below).
- TODO: Add a warning to the user if GPU tensors are detected.
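A minimal sketch of this tip, reusing the placeholder camera variables from the usage example above and assuming the same settings field layout:

```python
# The settings tensors are tiny; keeping them on the CPU avoids the implicit
# GPU -> CPU synchronization that reading them on the host side would trigger.
raster_settings = GaussianRasterizationSettings(
    image_height=H, image_width=W,
    tanfovx=tanfovx, tanfovy=tanfovy,
    bg=torch.zeros(3),               # CPU tensor
    scale_modifier=1.0,
    viewmatrix=world_view.cpu(),     # CPU tensor
    projmatrix=full_proj.cpu(),      # CPU tensor
    sh_degree=3,
    campos=torch.zeros(3),           # CPU tensor
    prefiltered=False, debug=False,
)
```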
Note: the second output of the GaussianRasterizer is no longer the radii (they are only needed for the backward pass, which we don't support), but the alpha values of the rendered image instead.
The alpha channel content currently seems to be bugged; it will be debugged.
- TODO: Debug alpha channel values
TODOs
- TODO: Apply more of the optimization techniques used by similar shaders, including packing the data into a texture and bit reduction during computation.
- TODO: Think of ways to implement a backward pass. Discussion is welcome!
- TODO: Compute covariance from scaling and rotation in the shader, currently it's on the CUDA (PyTorch) side.
- TODO: Compute SH in the shader, currently it's on the CUDA (PyTorch) side.
- TODO: Try to align the rendering results at the pixel level, small deviation exists currently.
- TODO: Use indexed draw calls to minimize data passing and shuffling.
- TODO: Do incremental sorting based on viewport changes; currently it's a full re-sort with CUDA (PyTorch).
Implementation
Guidelines
- Let the professionals do the work.
- Let GPU do the large-scale sorting.
- Let the graphics pipeline do the rasterization for us, not the other way around.
- Let OpenGL directly write to your framebuffer.
- Minimize repeated work.
- Compute the 3D to 2D covariance projection only once for each Gaussian, instead of 4 times for the quad, enabled by the geometry shader.
- Minimize stalls (minimize explicit synchronizations between GPU and CPU).
- Enabled by using non_blocking=True data passing and moving sync points as early as possible (a small sketch follows this list).
- Boosted by the fact that we're sorting on the GPU, so there is no need to perform synchronized host-to-device copies.
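A small illustration of the non_blocking pattern (generic PyTorch, not fast_gauss internals): staging data in pinned host memory and copying it asynchronously lets the upload overlap with other GPU work instead of stalling the host.

```python
import torch

attrs = torch.randn(1_000_000, 8)                  # e.g. packed per-Gaussian attributes
pinned = attrs.pin_memory()                        # page-locked host staging buffer
gpu_attrs = pinned.to('cuda', non_blocking=True)   # asynchronous host-to-device copy
# ... queue further GPU work here; the copy overlaps with it ...
torch.cuda.synchronize()                           # explicit sync point, placed where it stalls the least
```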
Why does a global sort work?
The OpenGL specification is somewhat vague on ordering guarantees, but there is this reference (the 4th paragraph of Section 2.1, Chapter 2 of the specification: https://registry.khronos.org/OpenGL/specs/gl/glspec44.core.pdf):
Commands are always processed in the order in which they are received, although there may be an indeterminate delay before the effects of a command are realized. This means, for example, that one primitive must be drawn completely before any subsequent one can affect the framebuffer.
Thus, if the data in the vertex buffer (or the order specified by an index buffer) is sorted back-to-front and alpha blending is enabled, you can count on OpenGL to update the framebuffer in the correct back-to-front order.
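For intuition, here is a sketch of that back-to-front ordering in PyTorch. It assumes a row-major world-to-view matrix and a convention where view-space depth grows with distance from the camera; it mirrors the idea, not the exact fast_gauss implementation.

```python
import torch

def back_to_front_order(xyz: torch.Tensor, world_to_view: torch.Tensor) -> torch.Tensor:
    """Indices ordering Gaussian centers far-to-near for the current view.

    xyz: (N, 3) centers on the GPU.
    world_to_view: (4, 4) matrix mapping homogeneous world points to view space as W @ p.
    """
    xyz_h = torch.cat([xyz, torch.ones_like(xyz[:, :1])], dim=-1)  # (N, 4) homogeneous
    depth = xyz_h @ world_to_view.to(xyz)[2]                       # view-space z per center
    # Assuming view-space z grows with distance (flip the sort for the opposite convention):
    return torch.argsort(depth, descending=True)                   # farthest first

# order = back_to_front_order(xyz, world_view.cuda())
# Upload the vertex attributes (or an index buffer) in this order before the draw call,
# so alpha blending composites strictly back-to-front.
```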
- TODO: Expand implementation details.
Environment
This project requires an NVIDIA GPU capable of CUDA-OpenGL interop. Thus, neither WSL nor macOS is supported. Tested on Linux and Windows.
For offline rendering (the drop-in replacement of the original CUDA rasterizer), we also need a valid EGL environment. It can sometimes be hard to set up for virtualized machines. Potential fix.
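As a hedged hint for headless setups: PyOpenGL can be pointed at EGL via the PYOPENGL_PLATFORM environment variable before any OpenGL import. This is a general PyOpenGL mechanism and not necessarily how fast_gauss creates its context.

```python
import os

# Request a headless EGL context from PyOpenGL instead of GLX/X11.
# Must be set before anything imports the OpenGL module.
os.environ.setdefault('PYOPENGL_PLATFORM', 'egl')
```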
Credits
Inspired by those insanely fast WebGL-based 3DGS viewers:
- GaussianSplats3D for inspiring our vertex-geometry-fragment shader pipeline.
- gsplat.tech.
- splat.
Using the algorithm and improvements from:
- diff-gaussian-rasterization for the main Gaussian Splatting algorithm.
- diff_gauss for the fixed culling.
CUDA-GL interop & EGL environment inspired by:
- 4K4D, where they (I) used the interop for depth peeling.
- EasyVolcap for the collection of utilities, including EGL setup.
- nvdiffrast for their EGL context setup and CUDA-GL interop setup.
Citation
@misc{fast_gauss,
title = {Fast Gaussian Rasterization},
howpublished = {GitHub},
year = {2024},
url = {https://github.com/dendenxu/fast-gaussian-rasterization}
}
File details
Details for the file fast_gauss-0.0.9.tar.gz.
File metadata
- Download URL: fast_gauss-0.0.9.tar.gz
- Upload date:
- Size: 43.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.0 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | bf95cf2fbcd43d44429465fa1adc3f11e1eb413afd29ef86733360ab951f0fd0
MD5 | 7eaf5adef79da6eec39b10aa2ca5307b
BLAKE2b-256 | f915d365e91c659f041f081031ecc9858b8c28024799cf64e5b894140e4261f7
File details
Details for the file fast_gauss-0.0.9-py3-none-any.whl.
File metadata
- Download URL: fast_gauss-0.0.9-py3-none-any.whl
- Upload date:
- Size: 47.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.0 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6588bc449b595b38879683bb1259b30163ae01761cd14a748944c608670d731c
MD5 | 6fbc67c0254d5b91ae64be0c26818317
BLAKE2b-256 | 0e9e9c949adb8d343cf8313772128b59a990cee587c9d7ba8b06efab406e3264