
spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library


PyPI install commands:

CPU (Linux only)  pip install spconv
CUDA 10.2         pip install spconv-cu102
CUDA 11.3         pip install spconv-cu113
CUDA 11.4         pip install spconv-cu114
CUDA 11.6         pip install spconv-cu116
CUDA 11.7         pip install spconv-cu117
CUDA 11.8         pip install spconv-cu118

spconv is a project that provides a heavily-optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code is still available, but we won't provide any support for spconv 1.x since it is deprecated. Use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile greatly faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.

NEWS

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC support, Python 3.6 dropped

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for older GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops; see the example.
  • TF32 kernels for faster fp32 training, disabled by default. Set spconv_core.constants.SPCONV_ALLOW_TF32 = True (after import spconv as spconv_core) to enable them; see the snippet after this list.
  • all weights now use KRSC layout, so some old models can't be loaded anymore.
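A minimal sketch of flipping the TF32 switch mentioned above (the constant is the one documented in that bullet; whether it must be set before layer construction is an assumption here):

    import spconv as spconv_core
    spconv_core.constants.SPCONV_ALLOW_TF32 = True   # TF32 conv kernels are disabled by default
    import spconv.pytorch as spconv                    # then build/run your sparse layers as usual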

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip; see the install section below for details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%)
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • spconv 2.x doesn't depend on the PyTorch binary, but you need at least PyTorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the PyTorch binary (and never will), torch.jit/libtorch inference cannot be supported.

Usage

First, use import spconv.pytorch as spconv in spconv 2.x.

Then see the usage documentation.
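For reference, a minimal sketch of the spconv 2.x PyTorch API (the spatial shape, channel sizes, and layer stack below are illustrative, not from this page; a CUDA device is assumed):

    import torch
    import spconv.pytorch as spconv

    # toy voxel input: indices are [batch_idx, z, y, x] int32, features are [N, C]
    indices = torch.randint(0, 40, (1000, 4), dtype=torch.int32)
    indices[:, 0] = 0                                   # single sample in the batch
    indices = torch.unique(indices, dim=0)              # voxel coordinates should be unique
    features = torch.randn(indices.shape[0], 32)

    x = spconv.SparseConvTensor(features.cuda(), indices.cuda(),
                                spatial_shape=[40, 40, 40], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(32, 64, 3, indice_key="subm0"),  # submanifold conv: keeps the sparsity pattern
        spconv.SparseConv3d(64, 64, 3, stride=2),           # regular sparse conv: downsamples
    ).cuda()

    out = net(x)          # out is another SparseConvTensor
    dense = out.dense()   # convert to a dense NCDHW tensor if needed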

Don't forget to check the performance guide.

Common Solution for Some Bugs

see common problems.

Install

You need Python >= 3.7 to use spconv 2.x.

You need to install the CUDA toolkit first before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

Linux users need pip >= 20.3 to install the prebuilt binaries.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). Use this only for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: It's safe to have different minor CUDA versions between the system and conda (PyTorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the Anaconda build of PyTorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA system-wide; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82; newer CUDA versions may need a newer driver. For CUDA 11.8, you need driver >= 520 installed.
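After installing a prebuilt wheel, one quick way to confirm which variant and version landed in your environment (standard-library importlib.metadata; the distribution name below assumes you installed the cu117 variant):

    from importlib.metadata import version
    print(version("spconv-cu117"))   # e.g. 2.2.4; use the name of the variant you actually installed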

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled into the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version GPU Arch List
11.1~11.7 52,60,61,70,75,80,86
11.8+ 60,70,75,80,86,89,90
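To see which arch your GPU reports, and hence whether it is covered by the prebuilt arch list above or will fall back to NVRTC, a small PyTorch check (torch.cuda.get_device_capability is plain PyTorch, not a spconv API):

    import torch
    major, minor = torch.cuda.get_device_capability(0)
    print(f"compute capability {major}.{minor} -> arch {major * 10 + minor}")  # e.g. 86 for RTX 3090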

Build from source for development (JIT, recommended)

The C++ code is rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (the build can't find an editable-installed cumm).

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall any spconv and cumm installed by pip
  2. install build-essential and CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in Python, import spconv and wait for the build to finish (a quick smoke test follows below).
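A quick smoke test after the editable install (illustrative only; the first import triggers the JIT build, so it can take several minutes):

    import spconv.pytorch as spconv        # first import kicks off the JIT build
    print(spconv.SubMConv3d(16, 32, 3))    # constructing a layer confirms the extension loaded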

Windows

  1. uninstall any spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer, make sure the C++ development component is installed, and install CUDA
  3. set the PowerShell script execution policy
  4. start a new PowerShell and run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential and CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer, make sure the C++ development component is installed, and install CUDA
  2. set the PowerShell script execution policy
  3. start a new PowerShell and run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0

Download files


Source Distributions

No source distribution files are available for this release.

Built Distributions

spconv_cu117-2.2.4-cp311-cp311-win_amd64.whl (66.2 MB): CPython 3.11, Windows x86-64
spconv_cu117-2.2.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.11, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.4-cp310-cp310-win_amd64.whl (66.2 MB): CPython 3.10, Windows x86-64
spconv_cu117-2.2.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.10, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.4-cp39-cp39-win_amd64.whl (66.2 MB): CPython 3.9, Windows x86-64
spconv_cu117-2.2.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.4-cp38-cp38-win_amd64.whl (66.2 MB): CPython 3.8, Windows x86-64
spconv_cu117-2.2.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.8, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.4-cp37-cp37m-win_amd64.whl (66.2 MB): CPython 3.7m, Windows x86-64
spconv_cu117-2.2.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.7m, manylinux: glibc 2.17+ x86-64

File hashes

spconv_cu117-2.2.4-cp311-cp311-win_amd64.whl
  SHA256       fc02a5cc614212b9ef2b47204d2d032ff08ba2c83e33e85ceba56dee5ec7c3bd
  MD5          d9269eced81561e2a52d377cf2a46380
  BLAKE2b-256  2fa704e261b850edc451a802a2fc21cde51084a6c79c7f2304a52c4f3883e5dd

spconv_cu117-2.2.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       837b5f2b35b0313f1907d3fa6e8b8623e7480cbfc31757cb6ffc9e50caa64913
  MD5          a05b99a58838e0b864c440ae89ee47d5
  BLAKE2b-256  e83f63e27e381da45b1fc7604bdaa3417aef28ea1fb2fd172b91fcca77d13f87

spconv_cu117-2.2.4-cp310-cp310-win_amd64.whl
  SHA256       08f6bfc3af383ef64e76393910fb2ffa94d829c2b8063ad84140a7371e56c8d2
  MD5          76313bb39f1666bcc8696110a4682d5c
  BLAKE2b-256  7f6f71758c5d1b9c00541fb8070df3ef2c9b7d25d1ecef9dcfa3da944844ddb4

spconv_cu117-2.2.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       276fdf17e313e5ebe933fd26711fc8e1457ab4095e0b6048ed138bbc47cb6b95
  MD5          b0762f670a1db48825553f0bc65fedbf
  BLAKE2b-256  48fd462ed7c1b240900bc36f557cee97c3c5b771a4bf2fc67493cfe980907688

spconv_cu117-2.2.4-cp39-cp39-win_amd64.whl
  SHA256       630dd1130ed018ba1419c6c7269f5488ecaa06293c2c9e5b9abfbd2e81011913
  MD5          4e763f1c75ad20e77b13353bff576c06
  BLAKE2b-256  978f436ba06b94da30947b0e0be4616e3b2d134b6330377acad1c900260a6ac4

spconv_cu117-2.2.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       e0d8e337761d698925e8cc31ac991d9dcb6ad49e10f0c13430b1749e0c1b11d1
  MD5          779be426cb246994667383a4c16b8d59
  BLAKE2b-256  9810ae9775c36407fdc999ac61afb8be502d3da38ff464bc3137086a61fcdaa1

spconv_cu117-2.2.4-cp38-cp38-win_amd64.whl
  SHA256       d6a37c6a5c0d6ce354e080d90f28e58073b44572c5b0f38addba4c57c83fec57
  MD5          224fb3c15fdc1ec45bcc274399da442a
  BLAKE2b-256  f4d71b53cc4bca260703802863f50eebc6369e85ad776afe574407479ecb1a7b

spconv_cu117-2.2.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       4b930628aaab0a62c7c1fa1fca9e2b6064c2d3d2619d5cb787a370ba24a9513c
  MD5          115857cc36c94e6fca6489d256492a95
  BLAKE2b-256  ca77f2d85c3096cb3e75836264c280dae89d5e6437ecc2845ef2da0c762813d6

spconv_cu117-2.2.4-cp37-cp37m-win_amd64.whl
  SHA256       7f4b119bb1dde2aeaec0aa0c76b6b9b1ad9ca349ae150f3822079028dd3311db
  MD5          315ff36ba890e0d5748c90ce7a8f9601
  BLAKE2b-256  6721d5cbb985ba47e26dfd3d840849238caa524dc15d26ff5fd0e2a09f00ba35

spconv_cu117-2.2.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       cb4bfe6bdb5041e0afdfdf121bb2408d66f17531555dea6e454204a516381c0e
  MD5          4b0aa5737827a7fd00aa22d1aa9be203
  BLAKE2b-256  de61d286122a1aff639828e65e170099523cfdc9a818e68aec271f0693c6d213
