
SpConv: Spatially Sparse Convolution Library

PyPI Install

CPU (Linux only)   pip install spconv
CUDA 10.2          pip install spconv-cu102
CUDA 11.3          pip install spconv-cu113
CUDA 11.4          pip install spconv-cu114
CUDA 11.6          pip install spconv-cu116
CUDA 11.7          pip install spconv-cu117
CUDA 11.8          pip install spconv-cu118
CUDA 12.0          pip install spconv-cu120

spconv is a project that provides a heavily-optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it is deprecated; use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm in spconv 2.x.

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile much faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.
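
If grep isn't available (for example on Windows), the same check can be done from Python with importlib.metadata; this is just a convenience sketch, not part of spconv itself:

    from importlib.metadata import distributions

    # List every installed distribution whose name starts with spconv or cumm.
    # This should print an empty list before you install the new spconv wheel.
    leftovers = sorted({d.metadata["Name"] for d in distributions()
                        if (d.metadata["Name"] or "").lower().startswith(("spconv", "cumm"))})
    print(leftovers)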

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, and dropped Python 3.6 support.

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped python 3.6 support
  • NVRTC support: kernels for old GPUs will be compiled at runtime.
  • libspconv: pure C++ build of all spconv ops. see example
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them (see the snippet after this list).
  • all weights use KRSC layout, so some old models can't be loaded anymore.
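
A minimal sketch of enabling the tf32 kernels mentioned above; the flag is exactly the one named in the list, everything else is ordinary Python:

    import spconv as spconv_core

    # tf32 sparse conv kernels are disabled by default; opting in speeds up
    # fp32 training on Ampere and newer GPUs at slightly reduced precision.
    spconv_core.constants.SPCONV_ALLOW_TF32 = True

    import spconv.pytorch as spconv  # build your network as usual after setting the flag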

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip. See the install section in the readme for more details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%).
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • doesn't depend on the pytorch binary, but you need at least pytorch >= 1.5.0 to run spconv 2.x.
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.
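
As a minimal sketch of the spconv 2.x pytorch API (the random voxel data below is made up purely for illustration):

    import torch
    import spconv.pytorch as spconv

    # Fake voxel data: indices are [batch_idx, z, y, x] in int32, features are [N, C].
    indices = torch.randint(0, 40, (1000, 4), dtype=torch.int32)
    indices[:, 0] = 0                            # single batch
    indices = torch.unique(indices, dim=0)       # voxel coordinates must be unique
    features = torch.randn(indices.shape[0], 32)

    x = spconv.SparseConvTensor(features.cuda(), indices.cuda(),
                                spatial_shape=[40, 40, 40], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(32, 64, 3, indice_key="subm0"),  # submanifold conv keeps the sparsity pattern
        spconv.SparseConv3d(64, 64, 3, stride=2),          # regular sparse conv downsamples
    ).cuda()

    out = net(x)          # out is also a SparseConvTensor
    dense = out.dense()   # convert to a dense NCDHW tensor if you need one

The channel sizes here (32, 64) are multiples of 8, which is what the fp16 Tensor Core path mentioned above requires.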

Don't forget to check the performance guide.

Common Solution for Some Bugs

See common problems.

Install

You need to install python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and cuda 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for linux (manylinux).

We offer python 3.7-3.11 and cuda 10.2/11.4/11.7/12.0 prebuilt binaries for windows 10/11.

For Linux users, you need to install pip >= 20.3 first to install prebuilt.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; the performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE It's safe to have different minor CUDA versions between the system and conda (pytorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the anaconda version of pytorch built for cuda 11.1 on an OS with CUDA 11.2 installed.

NOTE On Linux, you can install spconv-cuxxx without installing CUDA on the system! Only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82. You may need a newer driver if you use a newer CUDA; for CUDA 11.8, you need driver >= 520 installed.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled into the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version   GPU Arch List
11.1~11.7      52,60,61,70,75,80,86
11.8+          60,70,75,80,86,89,90
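
If you're not sure which arch your GPU is, a quick check with PyTorch (assuming torch with CUDA support is already installed) looks like this:

    import torch

    # compute capability as (major, minor), e.g. (8, 6) on an RTX 3090 -> arch 86
    major, minor = torch.cuda.get_device_capability(0)
    print(f"sm_{major}{minor}")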

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the cuda arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, and export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section in pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. set powershell script execution policy
  4. start a new powershell, run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that is not provided in the prebuilts.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

The work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0
