
Spatially sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library


Environment        Install command
CPU (Linux only)   pip install spconv
CUDA 10.2          pip install spconv-cu102
CUDA 11.3          pip install spconv-cu113
CUDA 11.4          pip install spconv-cu114
CUDA 11.6          pip install spconv-cu116
CUDA 11.7          pip install spconv-cu117
CUDA 11.8*         pip install spconv-cu118

*: sm_89 and sm_90 are added in CUDA 11.8. If you use an RTX 4090 or H100, you should use this version.

spconv is a project that provides a heavily-optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it's deprecated; use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.

NEWS

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, Python 3.6 support dropped

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • much faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for older GPUs are compiled at runtime.
  • libspconv: a pure C++ build of all spconv ops. see example
  • TF32 kernels for faster fp32 training, disabled by default. Set spconv_core.constants.SPCONV_ALLOW_TF32 = True (after import spconv as spconv_core) to enable them; see the sketch after this list.
  • all weights now use KRSC layout, so some old models can't be loaded anymore.
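
A minimal sketch of enabling the TF32 flag mentioned above; setting it once at startup, before any spconv layers run, is an assumption rather than something stated in this README:

    # Enable TF32 kernels (disabled by default) for faster fp32 training on
    # Ampere or newer GPUs, at slightly reduced numerical precision.
    import spconv as spconv_core
    spconv_core.constants.SPCONV_ALLOW_TF32 = True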

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip. See the install section in the readme for more details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%)
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • spconv 2.x doesn't depend on the pytorch binary, but you need at least pytorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.
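
A minimal sketch of the 2.x API, assuming a CUDA-capable GPU; the layer choices, channel sizes, and random voxel data below are illustrative, not taken from the official example:

    import torch
    import torch.nn as nn
    import spconv.pytorch as spconv

    batch_size = 1
    spatial_shape = [41, 1600, 1408]   # [z, y, x] voxel grid size (illustrative)
    num_voxels = 5000

    # features: [N, C] float tensor; indices: [N, 4] int32 tensor of
    # (batch_idx, z, y, x) coordinates. Real pipelines get these from a
    # voxelizer and the coordinates are unique; random ones are used here
    # only to keep the sketch self-contained.
    features = torch.randn(num_voxels, 4).cuda()
    indices = torch.stack([
        torch.zeros(num_voxels, dtype=torch.int32),
        torch.randint(0, spatial_shape[0], (num_voxels,), dtype=torch.int32),
        torch.randint(0, spatial_shape[1], (num_voxels,), dtype=torch.int32),
        torch.randint(0, spatial_shape[2], (num_voxels,), dtype=torch.int32),
    ], dim=1).cuda()

    x = spconv.SparseConvTensor(features, indices, spatial_shape, batch_size)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(4, 16, 3, indice_key="subm0"),  # submanifold conv keeps the sparsity pattern
        nn.BatchNorm1d(16),
        nn.ReLU(),
        spconv.SparseConv3d(16, 32, 3, stride=2),         # regular sparse conv downsamples
    ).cuda()

    out = net(x)         # out is also a SparseConvTensor
    dense = out.dense()  # convert to a dense NCDHW tensor if needed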

Don't forget to check the performance guide.

Common Solution for Some Bugs

See common problems.

Install

You need to install Python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

For Linux users, you need to install pip >= 20.3 first to install the prebuilt binaries.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; the performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE It's safe to have different minor CUDA versions between the system and conda (pytorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with an anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE On Linux, you can install spconv-cuxxx without installing CUDA on the system; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82. You may need a newer driver if you use a newer CUDA; for CUDA 11.8, you need to have driver >= 520 installed.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled into the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime. You can check your GPU's arch with the snippet after the table below.

CUDA version   GPU arch list
11.1~11.7      52, 60, 61, 70, 75, 80, 86
11.8+          60, 70, 75, 80, 86, 89, 90
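
To see which arch your GPU reports, here is a small sketch (assuming pytorch with CUDA support is installed); the arithmetic just joins the compute-capability digits for comparison against the table above:

    import torch

    # Compute capability as (major, minor), e.g. (8, 6) for an RTX 3090.
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU arch: {major * 10 + minor}")  # compare against the arch list above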

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential and CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in Python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  3. set the powershell script execution policy
  4. start a new powershell and run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential and CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  2. set the powershell script execution policy
  3. start a new powershell and run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

The work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0



Download files


Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

spconv_cu117-2.2.6-cp311-cp311-win_amd64.whl (68.0 MB): CPython 3.11, Windows x86-64
spconv_cu117-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.5 MB): CPython 3.11, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.6-cp310-cp310-win_amd64.whl (68.0 MB): CPython 3.10, Windows x86-64
spconv_cu117-2.2.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.5 MB): CPython 3.10, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.6-cp39-cp39-win_amd64.whl (68.0 MB): CPython 3.9, Windows x86-64
spconv_cu117-2.2.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.5 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.6-cp38-cp38-win_amd64.whl (68.0 MB): CPython 3.8, Windows x86-64
spconv_cu117-2.2.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.5 MB): CPython 3.8, manylinux: glibc 2.17+ x86-64
spconv_cu117-2.2.6-cp37-cp37m-win_amd64.whl (68.0 MB): CPython 3.7m, Windows x86-64
spconv_cu117-2.2.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.4 MB): CPython 3.7m, manylinux: glibc 2.17+ x86-64

File details

Hashes for each file:

spconv_cu117-2.2.6-cp311-cp311-win_amd64.whl
SHA256: 93ea4da94f5cf6b4268ea470bc823e0eca3ca330eb04ca5fc235681de60d54ba
MD5: eb317ffe29238d0c77abdfa5389ca14d
BLAKE2b-256: effdab038e23b96aeae5b3c0c836a15f117bd9e094073df6fe5d1bd941122820

spconv_cu117-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
SHA256: cbdf6a81bb3e39b2f9b7030e9ec04e188de6d366b85da2464e4e12ff0f704625
MD5: cf44f6c1729bee7e8179f9ec51bf38ee
BLAKE2b-256: a3a1106e10ea96ca9bd69f093c81f17c0375aadc0d00a2e6e4db359b2599d472

spconv_cu117-2.2.6-cp310-cp310-win_amd64.whl
SHA256: 0ee685ccd16d9313e3b17987e50cec3a29eae61e0b7c5a145535487178285368
MD5: 7d51bed5c658bbdb3ff946a791aea90f
BLAKE2b-256: a78aa20a1f5cad644ef6e876521c62b88fd53677379a8a017edbf94a19f58645

spconv_cu117-2.2.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
SHA256: e2d83be7114a23b7e6392388570a1c035c1e751505bf4f4e11f12121ff890873
MD5: 45c61a6c0410075d2c44d1188589e6f0
BLAKE2b-256: 48817e1fa95242916fbdf197749aeddf14631497270792770d0f974b97cf3dfd

spconv_cu117-2.2.6-cp39-cp39-win_amd64.whl
SHA256: 34267148ff9582b01dfa53c89e328285c5351ad083eae9ca7ba63e9d70beca79
MD5: c203fb1cded7ce350dddfe989f32a8d9
BLAKE2b-256: 0adb082ec4c116187297adf2334df9f29250e4906906b95eaae869662132cd24

spconv_cu117-2.2.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
SHA256: d359f4cc8acdaf01ad939fbe5c8bae6b1deeee74cf58d727f6370255121fdafb
MD5: 8c732caee710efab32043a31b271f1e8
BLAKE2b-256: b0e14dbf9f2b597cf76c0958446a459e8ce031f2ddc8500dac4d156f62fe7255

spconv_cu117-2.2.6-cp38-cp38-win_amd64.whl
SHA256: b712a3011cd1448c15ff1a733c321c9bb1366fd9b88cc99a1b4184c51b05ff37
MD5: 12918636225a73b40eac0c997a18b41e
BLAKE2b-256: 7edc9d34c0c37a5baffc0489c7abe83715afd9022fe9675027b28f326e25f1de

spconv_cu117-2.2.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
SHA256: 2d9ef90369133131a3a95fc7495335df9421e4beb0a3a6d4d08c405b634bba69
MD5: 14ab0a61392574010e3dcaf51f043426
BLAKE2b-256: 4e5c2ed1f647be71c41f60f4adabc03d8545ad775dc68c235ace3143eb6bfb12

spconv_cu117-2.2.6-cp37-cp37m-win_amd64.whl
SHA256: febda4d09a12e68bfced6aff0377a0632a15e865657a1933808e99d1f1ac4670
MD5: ae96ba107ccbed6454aaa23ec9b4b045
BLAKE2b-256: aa70ac0f96b648830728c2b84ef5abdc8f93fbf5901f6f218fe7849916a4cfa2

spconv_cu117-2.2.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
SHA256: d1915a08e918cffe89a57393ea10228473884dfcc2ce15c83b4a4f31707e0b0a
MD5: 8cad0caaf2652a3dff523dca47a32187
BLAKE2b-256: 0d6340a7bd0a030f03a91bf27cd7bf6a9b3e985fc3f13d09d50e229b2e84398c
