
SpConv: Spatially Sparse Convolution Library


PyPI Install
CPU (Linux only): pip install spconv
CUDA 10.2:        pip install spconv-cu102
CUDA 11.3:        pip install spconv-cu113
CUDA 11.4:        pip install spconv-cu114
CUDA 11.6:        pip install spconv-cu116
CUDA 11.7:        pip install spconv-cu117
CUDA 11.8:        pip install spconv-cu118
CUDA 12.0:        pip install spconv-cu120

spconv is a heavily optimized sparse convolution library with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code is still available, but it is deprecated and we won't provide any support for it. Use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!
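As a rough intuition for what that algorithm does, here is a toy sketch (not the spconv API) of submanifold sparse convolution: only active sites are stored, and the convolution is computed by gathering active neighbors via precomputed index pairs, multiplying by the kernel slice, and scattering into the output. All names and shapes below are illustrative assumptions.

```python
import numpy as np

coords = np.array([[1, 1], [1, 2], [2, 2]])   # (N, 2) active voxel coordinates
feats = np.array([[1.0], [2.0], [3.0]])       # (N, C_in) features, C_in = 1
weight = np.ones((3, 3, 1, 1))                # (kh, kw, C_in, C_out) 3x3 kernel

coord_to_idx = {tuple(c): i for i, c in enumerate(coords)}
out = np.zeros((len(coords), 1))              # submanifold: outputs live at the same sites

for ky, dy in enumerate((-1, 0, 1)):          # enumerate kernel offsets
    for kx, dx in enumerate((-1, 0, 1)):
        for out_i, (y, x) in enumerate(coords):
            in_i = coord_to_idx.get((y + dy, x + dx))
            if in_i is not None:              # gather an active neighbor...
                out[out_i] += feats[in_i] @ weight[ky, kx]  # ...multiply, scatter

print(out.ravel())  # each site sums its active 3x3 neighborhood: [6. 6. 6.]
```

spconv 2.x turns the inner loops into batched gather-GEMM-scatter on the GPU; this sketch only shows the index-pair idea.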

WARNING

Use spconv >= cu114 if possible: CUDA 11.4 can compile significantly faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check every installed package, then use pip to install the new spconv.
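The check above can be sketched as a small shell snippet; the package names in the comments (spconv-cu117, cumm-cu117, spconv-cu120) are just examples of the spconv-cuxxx/cumm-cuxxx pattern.

```shell
# List every installed spconv/cumm package, including spconv-cuxxx/cumm-cuxxx.
leftover="$(pip list 2>/dev/null | grep -i -E 'spconv|cumm' || true)"
echo "installed spconv/cumm packages: ${leftover:-none}"
# Uninstall each match before installing the new spconv, e.g.:
#   pip uninstall -y spconv-cu117 cumm-cu117
#   pip install spconv-cu120
```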

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped Python 3.6 support.

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for old GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops. See the example.
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them.
  • all weights use KRSC layout, so some old models can't be loaded anymore.
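For the KRSC point, a weight saved in the older channels-first KCRS layout (K = out channels, C = in channels, R/S = spatial kernel dims) can be permuted to KRSC with a plain transpose. This is a hedged sketch with made-up dimension sizes, shown for a 2D kernel; 3D weights have one more spatial axis but follow the same pattern.

```python
import numpy as np

# Hypothetical old-format weight: (K, C, R, S) = (2, 4, 3, 3).
kcrs = np.arange(2 * 4 * 3 * 3, dtype=np.float32).reshape(2, 4, 3, 3)

# Move the input-channel axis to the end: (K, R, S, C).
krsc = np.ascontiguousarray(kcrs.transpose(0, 2, 3, 1))

print(krsc.shape)  # (2, 3, 3, 4)
```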

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section in the readme for details. Users no longer need to build it manually!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%).
  • fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • spconv 2.x doesn't depend on the PyTorch binary, but you need at least PyTorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the PyTorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check the performance guide.

Common Solution for Some Bugs

See common problems.

Install

You need Python >= 3.7 to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

Linux users need pip >= 20.3 to install the prebuilt binaries.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). Use this only for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: With CUDA >= 11.0 it's safe to have different minor CUDA versions between the system and conda (PyTorch), thanks to CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with an Anaconda build of PyTorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA system-wide! Only a suitable NVIDIA driver is required: for CUDA 11 you need driver >= 450.82, and for CUDA 11.8 you need driver >= 520. Newer CUDA versions may require newer drivers.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled into the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version   GPU arch list
11.1~11.7      52, 60, 61, 70, 75, 80, 86
11.8+          60, 70, 75, 80, 86, 89, 90

Build from source for development (JIT, recommended)

The C++ code is rebuilt automatically whenever you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.
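The arch values above can be set like this before the editable install; pick the line matching your board (the values are the ones given in the text, not an exhaustive list).

```shell
export CUMM_CUDA_ARCH_LIST="7.2"    # Xavier
# export CUMM_CUDA_ARCH_LIST="6.2"  # TX2
# export CUMM_CUDA_ARCH_LIST="8.7"  # Orin
echo "building for arch: $CUMM_CUDA_ARCH_LIST"
```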

Due to a pyproject limitation (pip can't find an editable-installed cumm), you need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv.

Ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in Python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. set powershell script execution policy
  4. start a new powershell, run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building for a CUDA version not provided in the prebuilt binaries.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl
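The Linux steps above can be consolidated into one script; the heavy build commands are left commented here so the sketch only sets up the environment (the dists/ path follows the text).

```shell
export SPCONV_DISABLE_JIT="1"   # build all kernels ahead of time instead of JIT
# pip install pccm cumm wheel
# python setup.py bdist_wheel
# pip install dists/*.whl
echo "SPCONV_DISABLE_JIT=$SPCONV_DISABLE_JIT"
```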

Windows

  1. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0



Download files


Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

File                                                                            Size     Uploaded
spconv_cu120-2.3.6-cp311-cp311-win_amd64.whl                                    74.6 MB  CPython 3.11, Windows x86-64
spconv_cu120-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl   76.3 MB  CPython 3.11, manylinux (glibc 2.17+) x86-64
spconv_cu120-2.3.6-cp310-cp310-win_amd64.whl                                    74.6 MB  CPython 3.10, Windows x86-64
spconv_cu120-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl   76.3 MB  CPython 3.10, manylinux (glibc 2.17+) x86-64
spconv_cu120-2.3.6-cp39-cp39-win_amd64.whl                                      74.6 MB  CPython 3.9, Windows x86-64
spconv_cu120-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     76.3 MB  CPython 3.9, manylinux (glibc 2.17+) x86-64
spconv_cu120-2.3.6-cp38-cp38-win_amd64.whl                                      74.6 MB  CPython 3.8, Windows x86-64
spconv_cu120-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     76.3 MB  CPython 3.8, manylinux (glibc 2.17+) x86-64
spconv_cu120-2.3.6-cp37-cp37m-win_amd64.whl                                     74.6 MB  CPython 3.7m, Windows x86-64
spconv_cu120-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    76.3 MB  CPython 3.7m, manylinux (glibc 2.17+) x86-64

