
spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library

PyPI Install

CPU (Linux only): pip install spconv
CUDA 10.2: pip install spconv-cu102
CUDA 11.3: pip install spconv-cu113
CUDA 11.4: pip install spconv-cu114
CUDA 11.6: pip install spconv-cu116
CUDA 11.7: pip install spconv-cu117
CUDA 11.8: pip install spconv-cu118
CUDA 12.0: pip install spconv-cu120

spconv is a project that provides a heavily optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x, since it is deprecated. Use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!
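The core idea is easy to sketch in plain Python: store only the active sites in a hash map, and for each output site look up which (input site, kernel offset) pairs contribute. The toy 2D submanifold convolution below is illustrative only — spconv builds these "indice pairs" on the GPU and replaces the inner loops with gather, GEMM, and scatter over batched multi-channel 3D data:

```python
def subm_conv2d_sketch(active, weights, ksize=3):
    """Toy single-channel 2D submanifold sparse convolution.

    active:  dict {(y, x): feature} holding only the active sites.
    weights: dict {(dy, dx): w} for every kernel offset in a ksize x ksize window.

    Submanifold rule: outputs exist only at already-active sites, so the
    active-site pattern never dilates as layers stack.
    """
    r = ksize // 2
    out = {}
    for (y, x) in active:                  # outputs only at active input sites
        acc = 0.0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                nb = (y + dy, x + dx)
                if nb in active:           # hash-map hit = one "indice pair"
                    acc += active[nb] * weights[(dy, dx)]
        out[(y, x)] = acc
    return out
```

With an identity kernel (weight 1 at offset (0, 0), 0 elsewhere) the output equals the input, and with any kernel the set of active sites never grows; that non-dilation property is what distinguishes submanifold convolution from a regular sparse convolution, which produces outputs at every site the kernel can reach.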

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Updating spconv: you MUST first UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.
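A clean upgrade might look like the following (the spconv-cu118/cumm-cu118 names are examples drawn from the install table above; uninstall whichever variants the grep commands actually report, and install the wheel matching your CUDA version):

```shell
# 1. Check what's currently installed.
pip list | grep spconv
pip list | grep cumm

# 2. Uninstall every spconv/cumm variant found above (adjust names as needed).
pip uninstall -y spconv cumm spconv-cu118 cumm-cu118

# 3. Install the new release.
pip install spconv-cu118
```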

NEWS

  • spconv 2.3: int8 quantization support. See the docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped Python 3.6 support.

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for old GPUs are compiled at runtime.
  • libspconv: a pure C++ build of all spconv ops. See the example.
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them.
  • all weights now use the KRSC layout, so some old models can no longer be loaded.
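To illustrate the last point, here is a hypothetical index-remapping helper (not part of spconv) showing how a flat offset in PyTorch's usual KCRS weight layout (out_ch, in_ch, kernel_h, kernel_w) maps into KRSC (out_ch, kernel_h, kernel_w, in_ch); a real model converter would instead call something like weight.permute(0, 2, 3, 1) on the whole tensor (with the extra spatial dim for 3D kernels):

```python
def kcrs_to_krsc(flat, K, C, R, S):
    """Map a flat element offset from KCRS layout to the same element's
    offset in KRSC layout. K=out channels, C=in channels, R/S=kernel size."""
    k, rem = divmod(flat, C * R * S)   # recover (k, c, r, s) from KCRS order
    c, rem = divmod(rem, R * S)
    r, s = divmod(rem, S)
    return ((k * R + r) * S + s) * C + c  # re-linearize in KRSC order
```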

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section in the README for details. Users no longer need to build it manually!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50-80%).
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • spconv 2.x doesn't depend on the PyTorch binary, but you need at least PyTorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the PyTorch binary (and never will), it's impossible to support torch.jit/libtorch inference.

Usage

First, use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check the performance guide.
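Putting the pieces together, a minimal sketch of spconv 2.x usage (shapes, layer sizes, and the random coordinates are illustrative only; it assumes PyTorch and a CUDA build of spconv are installed, and a real pipeline would feed deduplicated voxel coordinates from its voxelizer):

```python
import torch
import spconv.pytorch as spconv  # the required spconv 2.x import path

# [N, C] per-voxel features and [N, 4] int32 coordinates (batch_idx, z, y, x).
features = torch.randn(1000, 16).cuda()
indices = torch.randint(0, 40, (1000, 4), dtype=torch.int32).cuda()
indices[:, 0] = 0  # single batch

x = spconv.SparseConvTensor(features, indices,
                            spatial_shape=[40, 40, 40], batch_size=1)

net = spconv.SparseSequential(
    spconv.SubMConv3d(16, 32, 3, indice_key="subm0"),  # submanifold: output sites == input sites
    torch.nn.ReLU(),
    spconv.SparseConv3d(32, 64, 3, stride=2),          # regular sparse conv, downsamples
).cuda()

out = net(x)         # still a SparseConvTensor
dense = out.dense()  # NCDHW dense tensor when a dense head needs one
```

Channel sizes that are multiples of 8 (as here) let fp16 layers use Tensor Cores, per the note above.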

Common Solution for Some Bugs

See common problems.

Install

You need Python >= 3.7 to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer Python 3.7-3.11 and CUDA 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for Linux (manylinux).

We offer Python 3.7-3.11 and CUDA 10.2/11.4/11.7/12.0 prebuilt binaries for Windows 10/11.

For Linux users, you need pip >= 20.3 to install the prebuilt wheels.

WARNING: spconv-cu117 may require CUDA driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: It's safe to have different minor CUDA versions between the system and conda (PyTorch) for CUDA >= 11.0, thanks to CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the anaconda build of PyTorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA system-wide! Only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82; newer CUDA versions may need a newer driver. For CUDA 11.8, you need driver >= 520.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled into the prebuilt wheels, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version  GPU arch list
11.1~11.7     52, 60, 61, 70, 75, 80, 86
11.8+         60, 70, 75, 80, 86, 89, 90
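As a reading aid for the table (this helper is hypothetical, not a spconv API): a GPU whose compute capability appears in the wheel's arch list gets prebuilt native kernels; otherwise NVRTC JIT compilation kicks in, as described above.

```python
# Arch lists copied from the table above; keys name the wheel's CUDA line.
PREBUILT_ARCHS = {
    "11.1~11.7": {52, 60, 61, 70, 75, 80, 86},
    "11.8+": {60, 70, 75, 80, 86, 89, 90},
}

def has_prebuilt_kernel(compute_cap: str, cuda_line: str) -> bool:
    """True if a prebuilt kernel exists for this compute capability,
    e.g. "8.6" (RTX 3090) -> arch 86. False means NVRTC fallback."""
    major, minor = compute_cap.split(".")
    return int(major) * 10 + int(minor) in PREBUILT_ARCHS[cuda_line]
```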

Build from source for development (JIT, recommended)

The C++ code is rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

Ensure that pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. Uninstall any spconv and cumm installed by pip.
  2. Install build-essential and CUDA.
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. In Python, import spconv and wait for the build to finish.

Windows

  1. Uninstall any spconv and cumm installed by pip.
  2. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. Set the PowerShell script execution policy.
  4. Start a new PowerShell session and run tools/msvc_setup.ps1.
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. In Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. Install build-essential and CUDA.
  2. Run export SPCONV_DISABLE_JIT="1".
  3. Run pip install pccm cumm wheel.
  4. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Windows

  1. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. Set the PowerShell script execution policy.
  3. Start a new PowerShell session and run tools/msvc_setup.ps1.
  4. Run $Env:SPCONV_DISABLE_JIT = "1".
  5. Run pip install pccm cumm wheel.
  6. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0

Download files

Source Distributions

No source distribution files are available for this release.

Built Distributions

spconv_cu124-2.3.8-cp313-cp313-win_amd64.whl (68.4 MB): CPython 3.13, Windows x86-64
spconv_cu124-2.3.8-cp313-cp313-manylinux_2_28_x86_64.whl (70.0 MB): CPython 3.13, manylinux glibc 2.28+ x86-64
spconv_cu124-2.3.8-cp312-cp312-win_amd64.whl (68.4 MB): CPython 3.12, Windows x86-64
spconv_cu124-2.3.8-cp312-cp312-manylinux_2_28_x86_64.whl (70.0 MB): CPython 3.12, manylinux glibc 2.28+ x86-64
spconv_cu124-2.3.8-cp311-cp311-win_amd64.whl (68.4 MB): CPython 3.11, Windows x86-64
spconv_cu124-2.3.8-cp311-cp311-manylinux_2_28_x86_64.whl (70.0 MB): CPython 3.11, manylinux glibc 2.28+ x86-64
spconv_cu124-2.3.8-cp310-cp310-win_amd64.whl (68.4 MB): CPython 3.10, Windows x86-64
spconv_cu124-2.3.8-cp310-cp310-manylinux_2_28_x86_64.whl (70.0 MB): CPython 3.10, manylinux glibc 2.28+ x86-64
spconv_cu124-2.3.8-cp39-cp39-win_amd64.whl (68.4 MB): CPython 3.9, Windows x86-64
spconv_cu124-2.3.8-cp39-cp39-manylinux_2_28_x86_64.whl (70.0 MB): CPython 3.9, manylinux glibc 2.28+ x86-64

File hashes

spconv_cu124-2.3.8-cp313-cp313-win_amd64.whl
  SHA256: 90205429aab61ceb9ff5a65616a363fa361daa8b6c5cae0cc7d03fa7ceb872a9
  MD5: 0a204b3a03545ca5140a13bec3a6a915
  BLAKE2b-256: 0e237f5f88a9e501d9e708744c21de8c88cc81268ba3101c9fc0520c540a8873

spconv_cu124-2.3.8-cp313-cp313-manylinux_2_28_x86_64.whl
  SHA256: d5cec997fcfde4f39ef09e7028906027bbc7be53f84e3271fbca16d5c580c6ff
  MD5: 3f63cbb7550e4caa3c5272ad649db177
  BLAKE2b-256: d48ebcab746fb1c1c022d8ef0fbc713072562ae51c077b0e610e72a8cc4e450c

spconv_cu124-2.3.8-cp312-cp312-win_amd64.whl
  SHA256: 410542579147f5c6a26f53642ca088074dd23b2d03981a644ca14175cea2dde3
  MD5: fbf629be3910b53a79e2b0d436766073
  BLAKE2b-256: a8ebb67b72c6c7601c4f4d79da2d069242b69ac138c3c8619cf185b5b2dcbb54

spconv_cu124-2.3.8-cp312-cp312-manylinux_2_28_x86_64.whl
  SHA256: 57b4afa27464d7c85e3d2543c802c609bdba2b6a393272e740b564bba870fedc
  MD5: 6595fc8a8d9001600085622e39caaa1e
  BLAKE2b-256: 9e39f5a714e437bd5b3fc77ad56e7a1ad91e6a1fbffd187d65df46295f407cc6

spconv_cu124-2.3.8-cp311-cp311-win_amd64.whl
  SHA256: 8386e6d247920bcb0d5561242c61d5d0e2518c62cb4d88a0513554173593bd17
  MD5: 257ec5a9ffaca11f79bb2f9c5fa9faa3
  BLAKE2b-256: 6129109dc82505e5440f647038961e3fa5f5377a7ba8e6c72b41d990beede8b6

spconv_cu124-2.3.8-cp311-cp311-manylinux_2_28_x86_64.whl
  SHA256: ccaf461f563dd46da3c84dcff479fa44430a27cc33a933e2bcc6ffa3924917c6
  MD5: 038ed24e0b49f136c9493a533c1fa0b9
  BLAKE2b-256: 0f7a9f49019d7c891011aeaaec924fe7decd6aea10f9c9d4f69641bd27fe892a

spconv_cu124-2.3.8-cp310-cp310-win_amd64.whl
  SHA256: be057a033336df2fb6f3d213189b48bb7504048b1624774b9a99ae082670d905
  MD5: 020af706409d66ef02ebd3abc11bf473
  BLAKE2b-256: 45175c05d0d36f461f333c0ef9e66ac8cf0cc63d8cfe34232413ca8cb7870e82

spconv_cu124-2.3.8-cp310-cp310-manylinux_2_28_x86_64.whl
  SHA256: b6800e3ccdf71fc9015973e8d82cad337b2665af7f8f702aeb991fb8f13e5897
  MD5: 58e5229366f7bd7fd1acaff68dfaf526
  BLAKE2b-256: aacb16158e21094b43eee792dd5bd72adf9a939c8b802177ed4b876533ed206f

spconv_cu124-2.3.8-cp39-cp39-win_amd64.whl
  SHA256: 605f6d30959e72fec8d2de98ad58d31b1c884abb51ca250feec9b2518454f8fb
  MD5: 2d19f7050bf28836b574c819396c3a79
  BLAKE2b-256: e77aecc05e608e879d3f0acc91e76e22a553230bdd053bb5595cd69ad38eab00

spconv_cu124-2.3.8-cp39-cp39-manylinux_2_28_x86_64.whl
  SHA256: e9ce7914ad24a4f8d328e7732f88c57723ea19c627c41dc24e5d7bc3ef0843c3
  MD5: aac779d475f5c5f43c3caaabcb659e63
  BLAKE2b-256: 1a7d281ac9066da92842680a4a8f59b2684fe77c4d5e91be3c373c09fde3ce3a
