spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library


PyPI Install

  • CPU (Linux only): pip install spconv
  • CUDA 10.2: pip install spconv-cu102
  • CUDA 11.3: pip install spconv-cu113
  • CUDA 11.4: pip install spconv-cu114
  • CUDA 11.6: pip install spconv-cu116
  • CUDA 11.7: pip install spconv-cu117
  • CUDA 11.8: pip install spconv-cu118
  • CUDA 12.0: pip install spconv-cu120

spconv provides a heavily optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code is still available, but spconv 1.x is deprecated and we won't provide any support for it. Use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!

WARNING

Use spconv-cu114 or newer if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Update Spconv: you MUST first UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC support, Python 3.6 dropped.

Spconv 2.2 vs Spconv 2.1

  • Faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090).
  • Significantly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090).
  • Python 3.6 support dropped.
  • NVRTC support: kernels for older GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops. See the example.
  • TF32 kernels for faster fp32 training, disabled by default. Enable them with import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True.
  • All weights now use the KRSC layout, so some old models can no longer be loaded.
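The KRSC layout stores a convolution weight as (out_channels K, kernel spatial dims R/S, in_channels C), with input channels last. As a hedged sketch (not spconv's own converter, and the shapes here are purely illustrative), a weight held in PyTorch's usual KCRS order could be permuted to KRSC with a plain axis transpose:

```python
import numpy as np

# Hypothetical conv2d weight in KCRS order (out_channels K, in_channels C,
# kernel height R, kernel width S), as PyTorch stores it.
k, c, r, s = 64, 32, 3, 3
w_kcrs = np.random.randn(k, c, r, s).astype(np.float32)

# KRSC: kernel spatial dims in the middle, input channels last.
# A pure axis permutation converts between the two layouts.
w_krsc = np.transpose(w_kcrs, (0, 2, 3, 1))

print(w_krsc.shape)  # (64, 3, 3, 32)
```

Old checkpoints saved with a different layout would need an analogous permutation before loading, which is why some old models can't be loaded directly.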

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section of the README for details. Users no longer need to build it manually!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50-80%).
  • fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • spconv 2.x doesn't depend on the PyTorch binary, but you need at least PyTorch >= 1.5.0 to run it.
  • Since spconv 2.x doesn't depend on the PyTorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check performance guide.
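spconv's sparse layers consume a features array of shape (N, num_channels) paired with an integer indices array of shape (N, 1 + ndim) whose first column is the batch index. As a hedged sketch (pure numpy, no GPU required; the actual SparseConvTensor construction is shown only as a comment, since it needs spconv and CUDA installed), here is how a dense voxel grid can be flattened into that format:

```python
import numpy as np

# Toy dense voxel grid: batch of 1, spatial shape (4, 4, 4), 3 channels.
dense = np.zeros((1, 4, 4, 4, 3), dtype=np.float32)
dense[0, 1, 2, 3] = [1.0, 2.0, 3.0]
dense[0, 0, 0, 0] = [4.0, 5.0, 6.0]

# Gather coordinates of non-empty voxels (any channel non-zero).
batch, z, y, x = np.nonzero(np.abs(dense).sum(axis=-1))
indices = np.stack([batch, z, y, x], axis=1).astype(np.int32)  # (N, 1 + ndim)
features = dense[batch, z, y, x]                               # (N, num_channels)

print(indices.shape, features.shape)  # (2, 4) (2, 3)

# With spconv and PyTorch installed, this pair becomes a sparse tensor:
# import torch
# import spconv.pytorch as spconv
# st = spconv.SparseConvTensor(torch.from_numpy(features).cuda(),
#                              torch.from_numpy(indices).cuda(),
#                              spatial_shape=[4, 4, 4], batch_size=1)
```

Only the occupied voxels are stored, which is the whole point of sparse convolution: compute scales with N, not with the dense spatial volume.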

Common Solution for Some Bugs

See common problems.

Install

You need Python >= 3.7 to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

For Linux users, you need pip >= 20.3 to install the prebuilt binaries.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). Use this only for debugging; performance isn't optimized due to manylinux limitations (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: It's safe to have different minor CUDA versions between the system and conda (PyTorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the Anaconda build of PyTorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA system-wide; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82; newer CUDA versions may need a newer driver. For CUDA 11.8, you need driver >= 520.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't covered by the prebuilt kernels, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version    GPU arch list
11.1~11.7       52, 60, 61, 70, 75, 80, 86
11.8+           60, 70, 75, 80, 86, 89, 90
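The table above can be turned into a quick self-check. A hedged sketch (the arch sets are copied from the table; has_prebuilt_kernels is a hypothetical helper for illustration, not a spconv API) that tells you whether a GPU's compute capability would fall back to NVRTC:

```python
# Arch lists copied from the prebuilt GPU support table above.
PREBUILT_ARCHS = {
    "11.1~11.7": {52, 60, 61, 70, 75, 80, 86},
    "11.8+": {60, 70, 75, 80, 86, 89, 90},
}

def has_prebuilt_kernels(cuda_row: str, major: int, minor: int) -> bool:
    """Hypothetical helper: True if compute capability major.minor is covered
    by the prebuilt kernels; otherwise spconv falls back to NVRTC at runtime."""
    return major * 10 + minor in PREBUILT_ARCHS[cuda_row]

print(has_prebuilt_kernels("11.8+", 8, 9))  # True  (e.g. RTX 4090, sm_89)
print(has_prebuilt_kernels("11.8+", 5, 2))  # False (sm_52 dropped in 11.8+)
```

With PyTorch installed, the (major, minor) pair for the current device is available via torch.cuda.get_device_capability().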

Build from source for development (JIT, recommend)

The C++ code is rebuilt automatically whenever you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, or export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

Due to a pyproject limitation (editable installs of cumm can't be found), you need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv.

Ensure that pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. Uninstall any spconv and cumm installed by pip.
  2. Install build-essential and CUDA.
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. In Python, import spconv and wait for the build to finish.

Windows

  1. Uninstall any spconv and cumm installed by pip.
  2. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. Set the PowerShell script execution policy.
  4. Start a new PowerShell and run tools/msvc_setup.ps1.
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. In Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. Install build-essential and CUDA.
  2. Run export SPCONV_DISABLE_JIT="1".
  3. Run pip install pccm cumm wheel.
  4. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Windows

  1. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. Set the PowerShell script execution policy.
  3. Start a new PowerShell and run tools/msvc_setup.ps1.
  4. Run $Env:SPCONV_DISABLE_JIT = "1".
  5. Run pip install pccm cumm wheel.
  6. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Citation

If you find this project useful in your research, please consider citing it:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

  • spconv_cu114-2.3.8-cp313-cp313-win_amd64.whl (68.9 MB): CPython 3.13, Windows x86-64
  • spconv_cu114-2.3.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.3 MB): CPython 3.13, manylinux: glibc 2.17+ x86-64
  • spconv_cu114-2.3.8-cp312-cp312-win_amd64.whl (68.9 MB): CPython 3.12, Windows x86-64
  • spconv_cu114-2.3.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.3 MB): CPython 3.12, manylinux: glibc 2.17+ x86-64
  • spconv_cu114-2.3.8-cp311-cp311-win_amd64.whl (68.9 MB): CPython 3.11, Windows x86-64
  • spconv_cu114-2.3.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.3 MB): CPython 3.11, manylinux: glibc 2.17+ x86-64
  • spconv_cu114-2.3.8-cp310-cp310-win_amd64.whl (68.9 MB): CPython 3.10, Windows x86-64
  • spconv_cu114-2.3.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.3 MB): CPython 3.10, manylinux: glibc 2.17+ x86-64
  • spconv_cu114-2.3.8-cp39-cp39-win_amd64.whl (68.9 MB): CPython 3.9, Windows x86-64
  • spconv_cu114-2.3.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (70.3 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64

File details

Details for the file spconv_cu114-2.3.8-cp313-cp313-win_amd64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 b49a6674368f47a75aa8ba247baa48886d440ebbf5abd4cf8c25016161fdd4a1
MD5 463976d60435d9771ddedad80e5777c0
BLAKE2b-256 427396de8d623bd7777ade9fe671b5545f42e1b02930a84b47891865d3740d06

See more details on using hashes here.

File details

Details for the file spconv_cu114-2.3.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 421f67ca23d36f3d46b0dbbc309d1c9d330a70e6943ad894c8dc7833feaa32b0
MD5 908586366d62541e4dd3ff8aadce0994
BLAKE2b-256 4de929e70b05752c11fa8cb1cb3191960002ce42df9108dbfedd4e5661e6580e


File details

Details for the file spconv_cu114-2.3.8-cp312-cp312-win_amd64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 829badf8de8383cb32a83ce180e2c8a03b750877bfe8c651043f0c80dcbdfe0d
MD5 4be8076e2f371248d4c2b9871b57c530
BLAKE2b-256 decf35c3571845c876874882b7228cc9aaf7ad88a70e34fba2158327acf13737


File details

Details for the file spconv_cu114-2.3.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 137d96fe575835237ab777d06154a108cbb790ebe0b5c19afb4dbfabd17e9dce
MD5 5e25b53e3272674cf9aadc1f024bbe02
BLAKE2b-256 22d3e8b418f42e81e13d128e43bcb4b5402fe187beaa96b2ad538aa778c1df1a


File details

Details for the file spconv_cu114-2.3.8-cp311-cp311-win_amd64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 80ee21decdf77d100c7ac892197c323eaab5365f4751a17a8ef68ae3023fd146
MD5 2277e33527b302cf03d48f2551b30dcc
BLAKE2b-256 2947fdf62c1d0d31ef5d54c1e4202a1f3ee3efb523171b9e1e92e14c634dc0f7


File details

Details for the file spconv_cu114-2.3.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 58c913c4569028b743f640de6597b52c4ccd9528ee071e93e1d5e26135663ef9
MD5 09af61c67cbf0fec4db02366dc5a95cd
BLAKE2b-256 a049c4766fba83edd6ff8dcdfd235e37c2f5308c9db52548e2b63e0abfbeb71b


File details

Details for the file spconv_cu114-2.3.8-cp310-cp310-win_amd64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 b12dfaca6bbebc806ef6a409e46f54e311c2bf50cfcad9782135864fd18f70f5
MD5 ce3ce9cb43418288fb139b63f4a9e08f
BLAKE2b-256 a67de5ca4c16dda2f429abab4317593308d5eee71fc0d0307c748c8d4703624d


File details

Details for the file spconv_cu114-2.3.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 f7b3a82e35c882fd3a470d93a37681bf762d072ee748fd28ffabdad85e1b80a9
MD5 ef1b0fa2885a0654f740392a4f0ee0e9
BLAKE2b-256 bf36e33c5913228f5499daa47611c816e891f90db8cab741fe42f2917300b7ab


File details

Details for the file spconv_cu114-2.3.8-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: spconv_cu114-2.3.8-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 68.9 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.9.13

File hashes

Hashes for spconv_cu114-2.3.8-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 0cd18f9e4011e1ea2f2a262fca9c7cf164e0b125ca175f028236945546233d49
MD5 76f843210599c29f2967c793269692a4
BLAKE2b-256 b43010c4c753f3201a2f745b41f3bb745a5240c86c3492566c5224e6b2039168


File details

Details for the file spconv_cu114-2.3.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for spconv_cu114-2.3.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 7d7d8537a5a92d3ee22f3f3db7dacf95cec2aff00c393c8547696a605bb74f38
MD5 1127d18945fb717f0ca1a430112fd0b9
BLAKE2b-256 c683fd225de2fcbeecf6cc3fdf08ef892c1380dae9845e75dd0db2ba7e5962e6

