SpConv: Spatially Sparse Convolution Library
spconv is a project that provides a heavily-optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.
Spconv 1.x code is deprecated and we won't provide any support for it; use spconv 2.x if possible.
Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x.
WARNING
Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.
Updating spconv: you MUST first UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages. Use `pip list | grep spconv` and `pip list | grep cumm` to check all installed packages, then use pip to install the new spconv.
NEWS
- spconv 2.3: int8 quantization support. See docs and examples for more details.
- spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, drop Python 3.6.
Spconv 2.2 vs Spconv 2.1
- faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on RTX 3090)
- much faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on RTX 3090)
- drop python 3.6 support
- NVRTC support: kernels for older GPUs are compiled at runtime.
- libspconv: pure C++ build of all spconv ops. See the example.
- tf32 kernels for faster fp32 training, disabled by default. Set `import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True` to enable them (see the sketch after this list).
- all weights use KRSC layout; some old models can't be loaded anymore.
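Since the TF32 switch is easy to miss, here is a minimal sketch of setting the flag quoted above. The only API used is the `SPCONV_ALLOW_TF32` constant from this list; the exact point at which the flag is read isn't documented here, so setting it before building the network is the safe choice.

```python
# Minimal sketch: enable TF32 kernels for faster fp32 training (disabled by default).
# Set the flag before constructing or running any spconv layers, to be safe.
import spconv as spconv_core

spconv_core.constants.SPCONV_ALLOW_TF32 = True

import spconv.pytorch as spconv  # build your network with spconv.pytorch afterwards
```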
Spconv 2.1 vs Spconv 1.x
- spconv can now be installed by pip; see the install section below for more details. Users don't need to build manually anymore!
- Microsoft Windows support (only Windows 10 has been tested).
- fp32 (not tf32) training/inference speed is increased (+50~80%).
- fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8).
- int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
- spconv 2.x doesn't depend on the pytorch binary, but you need at least pytorch >= 1.5.0 to run it.
- since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.
Usage
In spconv 2.x, you first need to use `import spconv.pytorch as spconv`.
Then see this.
Don't forget to check the performance guide.
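The linked pages are authoritative; as a quick orientation, here is a minimal sketch of a forward pass. It assumes the usual spconv 2.x classes (`SparseConvTensor`, `SubMConv3d`, `SparseConv3d`, `SparseSequential`) and a CUDA-capable GPU; shapes and channel sizes are illustrative only.

```python
import torch
import spconv.pytorch as spconv

# Toy input: unique voxel coordinates in a 40^3 grid, batch size 1.
mask = torch.rand(40, 40, 40, device="cuda") < 0.01
coords = torch.nonzero(mask).int()                        # (N, 3) as (z, y, x)
batch_idx = torch.zeros((coords.shape[0], 1), dtype=torch.int32, device="cuda")
indices = torch.cat([batch_idx, coords], dim=1)           # (N, 4) as (batch, z, y, x)
features = torch.randn(coords.shape[0], 32, device="cuda")

x = spconv.SparseConvTensor(features, indices, spatial_shape=[40, 40, 40], batch_size=1)

net = spconv.SparseSequential(
    spconv.SubMConv3d(32, 64, 3, indice_key="subm0"),     # submanifold conv: keeps the sparsity pattern
    torch.nn.BatchNorm1d(64),                             # dense modules are applied to .features
    torch.nn.ReLU(),
    spconv.SparseConv3d(64, 64, 3, stride=2),             # regular sparse conv: downsamples
).cuda()

out = net(x)           # out is again a SparseConvTensor
dense = out.dense()    # convert to a dense NCDHW tensor if needed
```

Channel sizes that are multiples of 8 (as above) let fp16 layers use tensor cores, per the notes earlier on this page.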
Common Solutions for Some Bugs
See common problems.
Install
You need to install python >= 3.7 first to use spconv 2.x.
You need to install the CUDA toolkit before using prebuilt binaries or building from source.
You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.
Prebuilt
We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).
We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.
For Linux users, you need pip >= 20.3 to install the prebuilt binaries.
WARNING: spconv-cu117 may require CUDA Driver >= 515.
- `pip install spconv` for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).
- `pip install spconv-cu102` for CUDA 10.2
- `pip install spconv-cu113` for CUDA 11.3 (Linux only)
- `pip install spconv-cu114` for CUDA 11.4
- `pip install spconv-cu117` for CUDA 11.7
- `pip install spconv-cu120` for CUDA 12.0
NOTE: With CUDA >= 11.0, it's safe to have different minor CUDA versions between the system and conda (pytorch) because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.
NOTE: On Linux, you can install spconv-cuxxx without installing CUDA on the system; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82. You may need a newer driver for newer CUDA versions; for CUDA 11.8, you need driver >= 520.
Prebuilt GPU Support Matrix
See this page to check supported GPU names by arch.
If you use a GPU architecture that isn't included in the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime (see the check sketched after the table).
| CUDA version | GPU Arch List |
|---|---|
| 11.1~11.7 | 52,60,61,70,75,80,86 |
| 11.8+ | 60,70,75,80,86,89,90 |
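A quick way to check which case applies to your GPU is to query its compute capability through PyTorch (a sketch; the arch set below just mirrors the CUDA 11.8+ row of the table, so adjust it to match the wheel you installed):

```python
# Check whether this GPU's compute capability is covered by the prebuilt kernels;
# otherwise spconv falls back to NVRTC-compiled, slightly slower kernels.
import torch

major, minor = torch.cuda.get_device_capability(0)
arch = major * 10 + minor
prebuilt = {60, 70, 75, 80, 86, 89, 90}  # CUDA 11.8+ wheels (see table above)
print(f"sm_{arch}:", "prebuilt kernels" if arch in prebuilt else "NVRTC fallback")
```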
Build from source for development (JIT, recommended)
The C++ code will be rebuilt automatically when you change C++ code in the project.
For NVIDIA embedded platforms, you need to specify the CUDA arch before building:
- `export CUMM_CUDA_ARCH_LIST="7.2"` for Xavier
- `export CUMM_CUDA_ARCH_LIST="6.2"` for TX2
- `export CUMM_CUDA_ARCH_LIST="8.7"` for Orin
You need to remove `cumm` from the `requires` section of pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).
You need to ensure that `pip list | grep spconv` and `pip list | grep cumm` show nothing before installing editable spconv/cumm.
Linux
- uninstall spconv and cumm installed by pip
- install build-essential, install CUDA
- `git clone https://github.com/FindDefinition/cumm`, `cd ./cumm`, `pip install -e .`
- `git clone https://github.com/traveller59/spconv`, `cd ./spconv`, `pip install -e .`
- in Python, run `import spconv` and wait for the build to finish (a quick sanity check is sketched below).
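A quick sanity check after the editable install (a sketch; `importlib.metadata` needs Python >= 3.8):

```python
# Confirm which spconv/cumm installs are visible, then trigger the JIT build.
import importlib.metadata as md

for pkg in ("spconv", "cumm"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not found")

import spconv            # the first import triggers the C++/CUDA build; this can take a while
import spconv.pytorch    # the PyTorch frontend used by spconv 2.x
```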
Windows
- uninstall spconv and cumm installed by pip
- install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
- set the PowerShell script execution policy
- start a new PowerShell and run `tools/msvc_setup.ps1`
- `git clone https://github.com/FindDefinition/cumm`, `cd ./cumm`, `pip install -e .`
- `git clone https://github.com/traveller59/spconv`, `cd ./spconv`, `pip install -e .`
- in Python, run `import spconv` and wait for the build to finish (the same sanity check as for Linux applies).
Build wheel from source (not recommended; this is done in CI)
You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilt binaries.
Linux
- install build-essential, install CUDA
- run `export SPCONV_DISABLE_JIT="1"`
- run `pip install pccm cumm wheel`
- run `python setup.py bdist_wheel`, then `pip install dists/xxx.whl`
Windows
- install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
- set the PowerShell script execution policy
- start a new PowerShell and run `tools/msvc_setup.ps1`
- run `$Env:SPCONV_DISABLE_JIT = "1"`
- run `pip install pccm cumm wheel`
- run `python setup.py bdist_wheel`, then `pip install dists/xxx.whl`
Citation
If you find this project useful in your research, please consider citing:
@misc{spconv2022,
title={Spconv: Spatially Sparse Convolution Library},
author={Spconv Contributors},
howpublished = {\url{https://github.com/traveller59/spconv}},
year={2022}
}
Contributors
- EvernightAurora: added the Ampere feature support.
Note
The work was done while the author was an employee at Tusimple.
LICENSE
Apache 2.0
Download files
File details
Details for the file spconv-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: spconv-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.5 MB
- Tags: CPython 3.11, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0e32978e64507dc6a4b565e7172d469c7fb3a21209f9889f3cc51cea11f91ef8 |
| MD5 | afcbdae945c84abe6066f95f588681da |
| BLAKE2b-256 | 2d6096d9a27957ca95a1342d9fed805eb17658053c0592e803f21955c5f6e3ca |
File details
Details for the file spconv-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: spconv-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.5 MB
- Tags: CPython 3.10, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2a75ffa0870a64fbbb4f3d1ab1651fe57141e47790e55dcc405d737e2b00b1b3 |
| MD5 | 85c49e5e6e40e4182e41234bf737ef8b |
| BLAKE2b-256 | e4697b998497fd6802e55fa79e8b5032c2e3fddf71351615a1d02d43f60309aa |
File details
Details for the file spconv-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: spconv-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.5 MB
- Tags: CPython 3.9, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6cd787590a7a817d81da6c1fa395e297f7cbc0f183c2d258466fae4bde41841e |
| MD5 | f49cd2d1bb05ef62f493b5d449679094 |
| BLAKE2b-256 | a79eb9877702e986f11a6d65dc021d18775dbf64ed44a7457c316631665460a9 |
File details
Details for the file spconv-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: spconv-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.5 MB
- Tags: CPython 3.8, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3a9597cabe66bb5cd3c3626fded908ffebfd8a90b4c6056f6a82562b911b3033 |
| MD5 | 221c7698783fa83541b99da4a13372e5 |
| BLAKE2b-256 | aa72a7a8420aeb6a90d26babb6ab1d048436279aa3af8e31be7bdb14f5472beb |
File details
Details for the file spconv-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: spconv-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.5 MB
- Tags: CPython 3.7m, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9d7b45693141475b91db52d6820e83c058e7caba7431306136368bd43cbc7879 |
| MD5 | 743b08ca21d6cf335b653f947bf18eac |
| BLAKE2b-256 | 6f4ec7736f3ab6b151b5e4dd7e2d2779f6231914e2a9d92ed07377c954e3e0ba |