spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library

Version           Install command
CPU (Linux Only)  pip install spconv
CUDA 10.2         pip install spconv-cu102
CUDA 11.3         pip install spconv-cu113
CUDA 11.4         pip install spconv-cu114
CUDA 11.6         pip install spconv-cu116
CUDA 11.7         pip install spconv-cu117
CUDA 11.8         pip install spconv-cu118
CUDA 12.0         pip install spconv-cu120

spconv is a project that provides a heavily-optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code is deprecated and we won't provide any support for it; use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile significantly faster kernels in some situations.

Updating spconv: you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.
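
If you prefer to check from Python instead of the grep commands above (for example on Windows), here is a small sketch using only the standard library:

    import importlib.metadata as md

    # Any installed spconv/cumm variant (spconv, spconv-cuxxx, cumm, cumm-cuxxx)
    # must be uninstalled before installing the new spconv.
    leftovers = sorted(
        dist.metadata["Name"]
        for dist in md.distributions()
        if (dist.metadata["Name"] or "").lower().startswith(("spconv", "cumm"))
    )
    print(leftovers or "no spconv/cumm packages installed")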

NEWS

  • spconv 2.3: int8 quantization support. See the docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped Python 3.6 support.

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • much faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for old GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops. See example.
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them (see the snippet after this list).
  • all weights now use KRSC layout; some old models can't be loaded anymore.
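
To enable the tf32 kernels mentioned above, a minimal sketch using the flag named in the list:

    import spconv as spconv_core
    import spconv.pytorch as spconv

    # tf32 kernels are disabled by default; opt in for faster fp32 training
    # on GPUs that support tf32 (Ampere or newer).
    spconv_core.constants.SPCONV_ALLOW_TF32 = True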

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section in the readme for more details. Users don't need to build it manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%).
  • fp16 training/inference speed is greatly increased when your layers can use tensor cores (channel size must be a multiple of 8); see the sketch after this list.
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • spconv 2.x doesn't depend on the pytorch binary, but you may need at least pytorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.
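
As a rough, self-contained illustration of the fp16 point above (a sketch, not an official example; the layer, channel counts and random voxels are arbitrary, with channels kept at multiples of 8 so tensor cores can be used):

    import torch
    import spconv.pytorch as spconv

    # A single submanifold conv layer converted to half precision.
    conv = spconv.SubMConv3d(32, 64, 3, padding=1).cuda().half()

    # Fake voxels for illustration; real indices come from voxelization and
    # must be unique. Layout is (batch_idx, z, y, x) with int32 indices.
    coords = torch.unique(torch.randint(0, 32, (500, 3), dtype=torch.int32), dim=0)
    batch = torch.zeros((coords.shape[0], 1), dtype=torch.int32)
    indices = torch.cat([batch, coords], dim=1).cuda()
    features = torch.randn(coords.shape[0], 32, dtype=torch.half, device="cuda")

    x = spconv.SparseConvTensor(features, indices, spatial_shape=[32, 32, 32], batch_size=1)
    y = conv(x)  # runs the fp16 (tensor-core capable) sparse conv kernel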

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.
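
For example, a minimal end-to-end sketch (illustrative only, not an official example; the layer choices, spatial shape and random voxel data are arbitrary):

    import torch
    import spconv.pytorch as spconv

    # A tiny sparse 3D network. Channel sizes are kept at multiples of 8
    # so the fp16 tensor-core kernels can be used if you later switch to half.
    net = spconv.SparseSequential(
        spconv.SubMConv3d(16, 32, 3, padding=1, indice_key="subm1"),
        spconv.SparseConv3d(32, 64, 3, stride=2, padding=1),
    ).cuda()

    # Fake voxel data for illustration; real indices come from a voxelization
    # step and must be unique. Indices are int32 rows of (batch_idx, z, y, x).
    coords = torch.unique(torch.randint(0, 64, (2000, 3), dtype=torch.int32), dim=0)
    batch = torch.zeros((coords.shape[0], 1), dtype=torch.int32)
    indices = torch.cat([batch, coords], dim=1).cuda()
    features = torch.randn(coords.shape[0], 16, device="cuda")

    x = spconv.SparseConvTensor(features, indices, spatial_shape=[64, 64, 64], batch_size=1)
    out = net(x)         # output is still a SparseConvTensor
    dense = out.dense()  # convert to a dense [batch, channels, D, H, W] tensor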

Don't forget to check the performance guide.

Common Solution for Some Bugs

see common problems.

Install

You need to install python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and cuda 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for linux (manylinux).

We offer python 3.7-3.11 and cuda 10.2/11.4/11.7/12.0 prebuilt binaries for windows 10/11.

For Linux users, you need to install pip >= 20.3 first to install the prebuilt wheels.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limitations (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: It's safe to have different minor CUDA versions between the system and conda (pytorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA on the system; only a suitable NVIDIA driver is required. For CUDA 11, the driver must be >= 450.82; newer CUDA versions may need a newer driver. For CUDA 11.8, you need driver >= 520 installed.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't included in the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version   GPU Arch List
11.1~11.7      52,60,61,70,75,80,86
11.8+          60,70,75,80,86,89,90
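
To find the arch number of your GPU (its compute capability without the dot), you can query it through PyTorch, which spconv 2.x users normally have installed; a small sketch:

    import torch

    # (major, minor) compute capability, e.g. (8, 6) for an RTX 3090.
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU arch: {major}{minor}")
    # If this number isn't in the prebuilt arch list above, spconv compiles
    # a slightly slower kernel at runtime via NVRTC.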

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

You need to remove cumm from the requires section in pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

You need to ensure that pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install visual studio 2019 or newer. make sure C++ development component is installed. install CUDA
  3. set powershell script execution policy
  4. start a new powershell, run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dist/xxx.whl

Windows

  1. install visual studio 2019 or newer. make sure C++ development component is installed. install CUDA
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dist/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distributions

File                                                                            Size     Python        Platform
spconv_cu114-2.3.6-cp311-cp311-win_amd64.whl                                    68.9 MB  CPython 3.11  Windows x86-64
spconv_cu114-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl   70.3 MB  CPython 3.11  manylinux: glibc 2.17+ x86-64
spconv_cu114-2.3.6-cp310-cp310-win_amd64.whl                                    68.9 MB  CPython 3.10  Windows x86-64
spconv_cu114-2.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl   70.3 MB  CPython 3.10  manylinux: glibc 2.17+ x86-64
spconv_cu114-2.3.6-cp39-cp39-win_amd64.whl                                      68.9 MB  CPython 3.9   Windows x86-64
spconv_cu114-2.3.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     70.3 MB  CPython 3.9   manylinux: glibc 2.17+ x86-64
spconv_cu114-2.3.6-cp38-cp38-win_amd64.whl                                      68.9 MB  CPython 3.8   Windows x86-64
spconv_cu114-2.3.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     70.3 MB  CPython 3.8   manylinux: glibc 2.17+ x86-64
spconv_cu114-2.3.6-cp37-cp37m-win_amd64.whl                                     68.9 MB  CPython 3.7m  Windows x86-64
spconv_cu114-2.3.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    70.3 MB  CPython 3.7m  manylinux: glibc 2.17+ x86-64
