
spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library


PyPI install commands:

CPU (Linux only)   pip install spconv
CUDA 10.2          pip install spconv-cu102
CUDA 11.3          pip install spconv-cu113
CUDA 11.4          pip install spconv-cu114
CUDA 11.6          pip install spconv-cu116
CUDA 11.7          pip install spconv-cu117
CUDA 11.8          pip install spconv-cu118
CUDA 12.0          pip install spconv-cu120

spconv is a project that provides a heavily optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it's deprecated; use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm in spconv 2.x!

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Update Spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.
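
If you prefer to check this from Python (3.8+), here is a small hedged helper (not part of spconv) that lists leftover spconv/cumm distributions, equivalent to the pip list | grep checks above:

    # Hypothetical helper: list installed spconv/cumm distributions before upgrading.
    # Equivalent to `pip list | grep spconv` / `pip list | grep cumm`.
    from importlib import metadata

    leftovers = sorted(
        dist.metadata["Name"]
        for dist in metadata.distributions()
        if (dist.metadata["Name"] or "").lower().startswith(("spconv", "cumm"))
    )
    print(leftovers)  # should be empty before installing the new spconv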

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped python 3.6 support

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • much faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped python 3.6 support
  • NVRTC support: kernels for old GPUs are compiled at runtime
  • libspconv: pure C++ build of all spconv ops; see the example
  • tf32 kernels for faster fp32 training, disabled by default. Set spconv_core.constants.SPCONV_ALLOW_TF32 = True (after import spconv as spconv_core) to enable them; see the snippet after this list.
  • all weights now use the KRSC layout, so some old models can't be loaded anymore
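
As a concrete form of the TF32 bullet above, this is the full snippet (taken directly from that bullet; no other configuration is implied):

    # Enable TF32 kernels for faster fp32 training/inference (disabled by default).
    import spconv as spconv_core

    spconv_core.constants.SPCONV_ALLOW_TF32 = True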

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section in the readme for details. Users don't need to build it manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%).
  • fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • spconv 2.x doesn't depend on the pytorch binary, but you need at least pytorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.
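
If you just want a feel for the API before reading the docs, here is a minimal, hedged sketch of spconv 2.x pytorch usage; the channel sizes, spatial shape, and random input below are illustrative only, and a CUDA-capable GPU is assumed:

    # Illustrative sketch of the spconv 2.x pytorch API; shapes and channels are arbitrary.
    import torch
    import torch.nn as nn
    import spconv.pytorch as spconv

    # Build a sparse input: unique voxel coordinates with [batch_idx, z, y, x] layout.
    coords = torch.unique(torch.randint(0, 40, (1000, 3), dtype=torch.int32), dim=0)
    indices = torch.cat([torch.zeros((coords.shape[0], 1), dtype=torch.int32), coords], dim=1).cuda()
    features = torch.randn(coords.shape[0], 16).cuda()  # [N, C] per-voxel features
    x = spconv.SparseConvTensor(features, indices, spatial_shape=[41, 41, 41], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(16, 32, 3, indice_key="subm1"),  # submanifold convolution
        nn.BatchNorm1d(32),
        nn.ReLU(),
        spconv.SparseConv3d(32, 64, 3, stride=2),          # regular sparse convolution
    ).cuda()

    out = net(x)           # output is still a SparseConvTensor
    dense = out.dense()    # convert to a dense NCDHW tensor if needed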

Don't forget to check the performance guide.

Common Solutions for Some Bugs

See common problems.

Install

You need python >= 3.7 to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

For Linux users, you need pip >= 20.3 to install the prebuilt wheels.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE It's safe to have different minor CUDA versions between the system and conda (pytorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with an anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE On Linux, you can install spconv-cuxxx without installing CUDA system-wide; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82. You may need a newer driver for newer CUDA versions; for CUDA 11.8, you need driver >= 520.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't included in the prebuilt kernels, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version GPU Arch List
11.1~11.7 52,60,61,70,75,80,86
11.8+ 60,70,75,80,86,89,90
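
If you're unsure which arch your GPU is, here is a small hedged check (assumes PyTorch with a working CUDA device; the set below mirrors the CUDA 11.8+ row of the table above):

    # Check whether the local GPU's compute capability is in the prebuilt arch
    # list; if not, spconv compiles a slightly slower kernel via NVRTC at runtime.
    import torch

    major, minor = torch.cuda.get_device_capability(0)
    arch = major * 10 + minor
    prebuilt_archs = {60, 70, 75, 80, 86, 89, 90}  # CUDA 11.8+ row above
    print(arch, "prebuilt" if arch in prebuilt_archs else "NVRTC fallback at first use")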

Build from source for development (JIT, recommended)

The C++ code is rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall any spconv and cumm packages installed by pip
  2. install build-essential and CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in python, import spconv and wait for the build to finish (see the sketch after this list).
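
A minimal, hedged way to do step 5 and confirm the build succeeded (the layer sizes and indice_key are arbitrary):

    # The first import triggers the JIT build; this can take several minutes.
    import spconv.pytorch as spconv

    layer = spconv.SubMConv3d(16, 32, 3, indice_key="check")
    print(layer)  # if this prints, the extension was built and loaded successfully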

Windows

  1. uninstall any spconv and cumm packages installed by pip
  2. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  3. set the powershell script execution policy
  4. start a new powershell and run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilt wheels.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

spconv_cu113-2.3.6-cp311-cp311-win_amd64.whl (69.2 MB)
Uploaded: CPython 3.11, Windows x86-64

spconv_cu113-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.5 MB)
Uploaded: CPython 3.11, manylinux: glibc 2.17+ x86-64

spconv_cu113-2.3.6-cp310-cp310-win_amd64.whl (69.2 MB)
Uploaded: CPython 3.10, Windows x86-64

spconv_cu113-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.5 MB)
Uploaded: CPython 3.10, manylinux: glibc 2.17+ x86-64

spconv_cu113-2.3.6-cp39-cp39-win_amd64.whl (69.2 MB)
Uploaded: CPython 3.9, Windows x86-64

spconv_cu113-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.5 MB)
Uploaded: CPython 3.9, manylinux: glibc 2.17+ x86-64

spconv_cu113-2.3.6-cp38-cp38-win_amd64.whl (69.2 MB)
Uploaded: CPython 3.8, Windows x86-64

spconv_cu113-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.5 MB)
Uploaded: CPython 3.8, manylinux: glibc 2.17+ x86-64

spconv_cu113-2.3.6-cp37-cp37m-win_amd64.whl (69.2 MB)
Uploaded: CPython 3.7m, Windows x86-64

spconv_cu113-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.5 MB)
Uploaded: CPython 3.7m, manylinux: glibc 2.17+ x86-64

File hashes

spconv_cu113-2.3.6-cp311-cp311-win_amd64.whl
  SHA256: d37069f141b22777ce7de0aa9e57693b5c58bbd91c2dad3f4cac3b8ec631dcab
  MD5: 465ea735e2c84b22ac66e5d176cd2306
  BLAKE2b-256: b718aa8f2564ee646a33a690a80bfc61b606ab093fdfe5477f1cdf6016cc280d

spconv_cu113-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 59b4834e98b42f6d7ef2933531f1de2dab57a22b4d06ed1cba72955e5103ad8c
  MD5: 92e01593ada1119365995864e38b8fe5
  BLAKE2b-256: 714fe9939f084cd57d19638aacff02a93679b2c0a1e971fa2b9e63424e6f3838

spconv_cu113-2.3.6-cp310-cp310-win_amd64.whl
  SHA256: 83acdb55756c7f57fa1f4a4301f79396913d716f13e97ed45e4fcf2a633498d6
  MD5: 26cf2b1d1b1b04ddead4a91b41caacfe
  BLAKE2b-256: 636a04accf740c07101885e167e8d807d715d33254746f72cc1d39269d62c52b

spconv_cu113-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 14b2440dacff52855faad6ce60339a84615bb0700c6db931cafe94023a43da02
  MD5: e3691ed2c5e3fe12e20dd21f67f7ee34
  BLAKE2b-256: cc9c5ebf87bdb7d03acdd04dfbef39992d6dd2b4272bef0f4b25e05af74df2e7

spconv_cu113-2.3.6-cp39-cp39-win_amd64.whl
  SHA256: 231630d02218482d164ef52c0f4bc5641fb9832bd2bc9c52147df7128d4546ae
  MD5: a403322ddd558db048aaaa9ac9c6c6a8
  BLAKE2b-256: 38ab402c6c86776f44d92f1b87fa48ebf60074c1940f2097c84d9475cf8e0450

spconv_cu113-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 24a136cff557c92eda5cebbba103d02586c10cddb8bac638e52a2d05cef8ac27
  MD5: 570e25d21e32385225b78840141eaf02
  BLAKE2b-256: 6ed72e44c3d272dd7a8e73b37f86e89d4727e57130ccc3a67b0a640bd3b46656

spconv_cu113-2.3.6-cp38-cp38-win_amd64.whl
  SHA256: c728962a677db899a7f2c1c11f95914757f0773a294df3c83ff8bf0914170bf0
  MD5: 38a19f7551d4e9ae67dbcfbe050870fa
  BLAKE2b-256: 5863bd85f6995e906295b1392d64481eb08c9d286b997bbb816d3972e3fefd65

spconv_cu113-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 86f9a1aba5f8f4e2d605d65e60b6490eb319f1462acc7942627379d1c83f7fc6
  MD5: 7fe658244bcc591432125f9eb53e3e86
  BLAKE2b-256: 8f1fcc77f7bd6ab1f3bb15d83e9fe8e5e1b62ed103a3b75cccd49179981fd578

spconv_cu113-2.3.6-cp37-cp37m-win_amd64.whl
  SHA256: 4af57cc6250216d026dd2eef2232c19fca8f2b136e8d4b5e8268cbd39f2dceb2
  MD5: d3aad69df97e21e752b2902ddd9de9bb
  BLAKE2b-256: e24099f48a6e31fbf4558638eaa0c1a506e11244083ac02c4dd62704728beafa

spconv_cu113-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 3c3e97f89094555f4b4411cf1f3377c792c944aafbcff81f946b57935a3ece0b
  MD5: 0148918986192f1f0c22d03410063858
  BLAKE2b-256: 68ee748325d176134a3d6522f7218e529b0efa7cfb85309f0671126fd83132d2

See more details on using hashes here.
