
SpConv: Spatially Sparse Convolution Library


PyPI Install

CPU (Linux only)  pip install spconv
CUDA 10.2         pip install spconv-cu102
CUDA 11.3         pip install spconv-cu113
CUDA 11.4         pip install spconv-cu114
CUDA 11.6         pip install spconv-cu116
CUDA 11.7         pip install spconv-cu117
CUDA 11.8         pip install spconv-cu118
CUDA 12.0         pip install spconv-cu120

spconv is a project that provides a heavily optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it's deprecated; use spconv 2.x if possible.

Check spconv 2.x algorithm introduction to understand sparse convolution algorithm in spconv 2.x!
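For intuition, here is a pure-Python sketch of the core idea behind (submanifold-style) sparse convolution: features exist only at active coordinates, and each output site accumulates contributions from its active neighbors through per-offset weights. This is illustrative only; spconv's real implementation builds index pairs and runs heavily optimized gather-GEMM-scatter CUDA kernels.

```python
def submanifold_conv2d(features, weights):
    """Sketch of submanifold sparse convolution on a 2D grid.

    features: {(y, x): [c_in floats]} - active sites only.
    weights:  {(dy, dx): c_out x c_in matrix} - one matrix per kernel offset.
    Outputs are produced only at active input sites (submanifold rule).
    """
    out = {}
    for site in features:                       # each active output site
        acc = None
        for (dy, dx), w in weights.items():     # iterate kernel offsets
            nb = (site[0] + dy, site[1] + dx)
            f = features.get(nb)
            if f is None:
                continue                        # inactive neighbor: no work
            # tiny "GEMM": multiply neighbor features by this offset's weights
            contrib = [sum(w[o][i] * f[i] for i in range(len(f)))
                       for o in range(len(w))]
            acc = contrib if acc is None else [a + c for a, c in zip(acc, contrib)]
        if acc is not None:
            out[site] = acc                     # "scatter" back to the output
    return out
```

Note the key property that makes sparse convolution fast: inactive neighbors contribute no computation at all, instead of multiplying by zeros as a dense convolution would.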

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first (use pip list | grep spconv and pip list | grep cumm to check all installed packages), then use pip to install the new spconv.

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped Python 3.6

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for older GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops. See the example.
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them.
  • all weights use KRSC layout; some old models can't be loaded anymore.
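For context on the tf32 item above: TF32 keeps fp32's 8-bit exponent (so the same numeric range) but only a 10-bit mantissa, which is why it is faster but slightly less precise. A pure-Python sketch of that precision loss (using simple truncation of the low 13 mantissa bits; real tensor core hardware rounds to nearest):

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate TF32 quantization of a value.

    Reinterpret x as float32 bits, then zero the low 13 of the 23 mantissa
    bits, leaving sign + 8-bit exponent + the 10-bit mantissa TF32 keeps.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & ~0x1FFF))[0]
```

Values exactly representable in 10 mantissa bits (e.g. 1.0 or 2.5) pass through unchanged; others pick up a relative error of up to about 2^-10, which is usually acceptable for training but is why spconv leaves TF32 disabled by default.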

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip. See the install section in the readme for more details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%)
  • fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • doesn't depend on the PyTorch binary, but you may need at least PyTorch >= 1.5.0 to run spconv 2.x.
  • since spconv 2.x doesn't depend on the PyTorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check performance guide.
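As a minimal illustration of the input format: spconv.pytorch.SparseConvTensor takes per-voxel features of shape [N, C] plus integer indices of shape [N, 4] in (batch, z, y, x) order. Below is a pure-Python sketch (a hypothetical helper, not part of spconv; no GPU or torch required) of flattening a dense voxel grid into that pair:

```python
def dense_to_coo(grid):
    """Flatten a dense voxel grid into (features, indices) COO lists.

    grid: nested lists indexed as [batch][z][y][x], where each cell is a
    feature list (active voxel) or None (empty voxel).
    Returns features [N][C] and indices [N][4] as (batch, z, y, x).
    """
    features, indices = [], []
    for b, vol in enumerate(grid):
        for z, plane in enumerate(vol):
            for y, row in enumerate(plane):
                for x, feat in enumerate(row):
                    if feat is not None:        # keep active voxels only
                        features.append(feat)
                        indices.append([b, z, y, x])
    return features, indices
```

With torch available, the result could then be wrapped roughly as spconv.pytorch.SparseConvTensor(torch.tensor(features), torch.tensor(indices, dtype=torch.int32), spatial_shape, batch_size); check the spconv docs for the exact signature and dtype requirements.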

Common Solution for Some Bugs

see common problems.

Install

You need to install Python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and cuda 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for linux (manylinux).

We offer python 3.7-3.11 and cuda 10.2/11.4/11.7/12.0 prebuilt binaries for windows 10/11.

For Linux users, you need pip >= 20.3 to install the prebuilt binaries.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). Use this only for debugging; its performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: it's safe to have different minor CUDA versions between the system and conda (PyTorch) for CUDA >= 11.0, thanks to CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with an Anaconda build of PyTorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: on Linux, you can install spconv-cuxxx without installing CUDA system-wide! Only a suitable NVIDIA driver is required: for CUDA 11, driver >= 450.82; for CUDA 11.8, driver >= 520. You may need a newer driver if you use a newer CUDA.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled in prebuilt, spconv will use NVRTC to compile a slightly slower kernel.

CUDA version GPU Arch List
11.1~11.7 52,60,61,70,75,80,86
11.8+ 60,70,75,80,86,89,90
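The table above can be read as a simple lookup. The helper below (hypothetical, for illustration only; assumes CUDA >= 11.1) returns True when the prebuilt wheel ships a precompiled kernel for a given SM arch, and False when spconv would fall back to NVRTC runtime compilation:

```python
# Prebuilt arch lists from the table above, keyed by CUDA version range.
PREBUILT_ARCHS = {
    "11.1-11.7": {52, 60, 61, 70, 75, 80, 86},
    "11.8+": {60, 70, 75, 80, 86, 89, 90},
}

def has_prebuilt_kernel(cuda: tuple, sm: int) -> bool:
    """True if the prebuilt wheel for this CUDA version covers SM arch `sm`.

    cuda: (major, minor) tuple, e.g. (11, 7); sm: arch number, e.g. 86.
    """
    key = "11.8+" if cuda >= (11, 8) else "11.1-11.7"
    return sm in PREBUILT_ARCHS[key]
```

For example, an RTX 40-series GPU (SM 89) is only precompiled in the CUDA 11.8+ wheels; on spconv-cu117 it would be handled by the slightly slower NVRTC path.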

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section in pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential and CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in Python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. set the PowerShell script execution policy
  4. start a new PowerShell session and run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential and CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dist/xxx.whl

Windows

  1. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. set the PowerShell script execution policy
  3. start a new PowerShell session and run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dist/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0
