

SpConv: Spatially Sparse Convolution Library


PyPI Install

  • CPU (Linux only): pip install spconv
  • CUDA 10.2: pip install spconv-cu102
  • CUDA 11.3: pip install spconv-cu113
  • CUDA 11.4: pip install spconv-cu114
  • CUDA 11.6: pip install spconv-cu116
  • CUDA 11.7: pip install spconv-cu117
  • CUDA 11.8: pip install spconv-cu118
  • CUDA 12.0: pip install spconv-cu120

spconv is a project that provides a heavily-optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it's deprecated; use spconv 2.x if possible.

Check spconv 2.x algorithm introduction to understand sparse convolution algorithm in spconv 2.x!

WARNING

Use spconv >= cu114 if possible: CUDA 11.4 can compile significantly faster kernels in some situations.

To update spconv, you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first (use pip list | grep spconv and pip list | grep cumm to check every installed package), then use pip to install the new spconv.

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped Python 3.6

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on an RTX 3090)
  • much faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on an RTX 3090)
  • dropped Python 3.6 support
  • NVRTC support: kernels for older GPUs are compiled at runtime.
  • libspconv: pure C++ build of all spconv ops; see the example
  • tf32 kernels for faster fp32 training, disabled by default. Run import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them.
  • all weights now use the KRSC layout, so some old models can't be loaded anymore.
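Since weights are now KRSC, converting a dense PyTorch-style weight means moving the input-channel axis to the end. A minimal NumPy sketch (the channel counts and kernel size here are illustrative, not taken from spconv):

```python
import numpy as np

# A dense 3D conv weight in PyTorch's native KCRS order:
# (out_channels, in_channels, kD, kH, kW)
kcrs = np.zeros((64, 32, 3, 3, 3), dtype=np.float32)

# Reorder to KRSC: (out_channels, kD, kH, kW, in_channels),
# the layout spconv 2.2+ uses for all conv weights.
krsc = kcrs.transpose(0, 2, 3, 4, 1)

print(krsc.shape)  # (64, 3, 3, 3, 32)
```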

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip; see the install section in the readme for more details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%)
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8).
  • int8 ops are ready, but we still need some time to figure out how to run int8 in PyTorch.
  • spconv 2.x doesn't depend on the PyTorch binary, but you may need at least PyTorch >= 1.5.0 to run it.
  • since spconv 2.x doesn't depend on the PyTorch binary (and never will), it's impossible to support torch.jit/libtorch inference.
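For the fp16 tensor-core point above, eligibility is just a divisibility check on the channel sizes; a tiny illustrative helper (not spconv's actual code):

```python
def can_use_tensor_cores(in_channels: int, out_channels: int) -> bool:
    """fp16 tensor-core conv kernels need channel sizes that are
    multiples of 8; otherwise a slower fallback kernel is used."""
    return in_channels % 8 == 0 and out_channels % 8 == 0

print(can_use_tensor_cores(32, 64))  # True
print(can_use_tensor_cores(30, 64))  # False: 30 is not a multiple of 8
```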

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check performance guide.
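As a quick orientation, here is a minimal network sketch using the spconv 2.x PyTorch API (class names follow the official usage docs; the channel sizes and index layout are illustrative, and actually running it requires a CUDA-enabled spconv install and a GPU):

```python
import torch
import spconv.pytorch as spconv  # spconv 2.x entry point


class ExampleNet(torch.nn.Module):
    def __init__(self, spatial_shape):
        super().__init__()
        self.spatial_shape = spatial_shape
        self.net = spconv.SparseSequential(
            # submanifold conv: keeps the sparsity pattern unchanged
            spconv.SubMConv3d(3, 32, 3, indice_key="subm0"),
            torch.nn.ReLU(),
            # regular sparse conv: downsamples and generates new indices
            spconv.SparseConv3d(32, 64, 2, stride=2),
        )

    def forward(self, features, indices, batch_size):
        # features: [N, 3] float tensor; indices: [N, 4] int32 (batch_idx, z, y, x)
        x = spconv.SparseConvTensor(features, indices, self.spatial_shape, batch_size)
        out = self.net(x)   # still a SparseConvTensor
        return out.dense()  # convert to a dense NCDHW tensor when needed
```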

Common Solution for Some Bugs

see common problems.

Install

You need to install python >= 3.7 first to use spconv 2.x.

You need to install CUDA toolkit first before using prebuilt binaries or build from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and cuda 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for linux (manylinux).

We offer python 3.7-3.11 and cuda 10.2/11.4/11.7/12.0 prebuilt binaries for windows 10/11.

For Linux users, you need to install pip >= 20.3 first to install the prebuilt wheels.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE It's safe to have different minor CUDA versions between the system and conda (PyTorch) for CUDA >= 11.0 because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with the anaconda build of PyTorch CUDA 11.1 in an OS with CUDA 11.2 installed.

NOTE On Linux, you can install spconv-cuxxx without installing CUDA system-wide; only a suitable NVIDIA driver is required. For CUDA 11, you need driver >= 450.82; you may need a newer driver for newer CUDA versions. For CUDA 11.8, you need driver >= 520.

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't compiled in prebuilt, spconv will use NVRTC to compile a slightly slower kernel.

CUDA version   GPU arch list
11.1~11.7      52, 60, 61, 70, 75, 80, 86
11.8+          60, 70, 75, 80, 86, 89, 90
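The fallback decision can be pictured with a small helper; the arch sets mirror the support matrix above, but the function itself is illustrative, not spconv's actual code:

```python
# Compute capabilities compiled into the prebuilt wheels,
# keyed by CUDA version family (mirrors the table above).
PREBUILT_ARCHS = {
    "11.1~11.7": {52, 60, 61, 70, 75, 80, 86},
    "11.8+": {60, 70, 75, 80, 86, 89, 90},
}


def needs_nvrtc(cuda_family: str, gpu_arch: int) -> bool:
    """True when the GPU arch isn't precompiled, in which case spconv
    JIT-compiles a slightly slower kernel via NVRTC at runtime."""
    return gpu_arch not in PREBUILT_ARCHS[cuda_family]


print(needs_nvrtc("11.1~11.7", 89))  # sm_89 (Ada) isn't prebuilt pre-11.8: True
print(needs_nvrtc("11.8+", 89))      # prebuilt in cu118+ wheels: False
```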

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, and export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

Due to a pyproject limitation (pip can't find an editable-installed cumm), you need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv.

You need to ensure pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall any spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in Python, import spconv and wait for the build to finish.

Windows

  1. uninstall any spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. set the PowerShell script execution policy
  4. start a new PowerShell and run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in Python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. set the PowerShell script execution policy
  3. start a new PowerShell and run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

This work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0

Download files


Source Distributions

No source distribution files are available for this release.

Built Distributions

  • spconv_cu117-2.3.3-cp311-cp311-win_amd64.whl (66.3 MB): CPython 3.11, Windows x86-64
  • spconv_cu117-2.3.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.11, manylinux (glibc 2.17+), x86-64
  • spconv_cu117-2.3.3-cp310-cp310-win_amd64.whl (66.3 MB): CPython 3.10, Windows x86-64
  • spconv_cu117-2.3.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.10, manylinux (glibc 2.17+), x86-64
  • spconv_cu117-2.3.3-cp39-cp39-win_amd64.whl (66.3 MB): CPython 3.9, Windows x86-64
  • spconv_cu117-2.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.9, manylinux (glibc 2.17+), x86-64
  • spconv_cu117-2.3.3-cp38-cp38-win_amd64.whl (66.3 MB): CPython 3.8, Windows x86-64
  • spconv_cu117-2.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.8, manylinux (glibc 2.17+), x86-64
  • spconv_cu117-2.3.3-cp37-cp37m-win_amd64.whl (66.3 MB): CPython 3.7m, Windows x86-64
  • spconv_cu117-2.3.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (67.6 MB): CPython 3.7m, manylinux (glibc 2.17+), x86-64

File hashes

spconv_cu117-2.3.3-cp311-cp311-win_amd64.whl
  SHA256: a9977807578fada3cb2b2a7567bc17d7a8b0e9c520ea6e28e29cf1e7a9f821ac
  MD5: 1991b244472e9115127e595a7bff09aa
  BLAKE2b-256: afdba776304b4d7b733298a13e70e573f41b494d1dfba20283e11dae08426e08

spconv_cu117-2.3.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 06c6e2362fdad82832b049779bc95cf417490464e61043f3ce778723ff1a4b6c
  MD5: a00249789cc4c8601ed9530cf67c3a8c
  BLAKE2b-256: 1bd5278b7ff73c2fa861ae11a0bed71a240683337e06e8be10cc814a453e82f2

spconv_cu117-2.3.3-cp310-cp310-win_amd64.whl
  SHA256: b5bb0aaf3aebd7b2c42c74f38f6b817b5a55914b6c3f7cb67d4a07ae321b2aeb
  MD5: 9f267875042b9b1ff78366298fbf9d48
  BLAKE2b-256: 4fd307669257c802469d6bb121f5443bd91deddae720afc3563a68c958651096

spconv_cu117-2.3.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 11c4da0b4f68bb50c803a20e4477c5ce77c0c30f56286385dfde5c017f46f357
  MD5: fdd699548a7e5d981755525a02cea4ae
  BLAKE2b-256: 7088e0a9c2f9d04d4dd93a7c95f499fb2787a539a5653fa8bcc4afdbbea9988b

spconv_cu117-2.3.3-cp39-cp39-win_amd64.whl
  SHA256: 033b02e94a6eb83bef6896667e23f0d10b71a26f3a55a52b7c37df8c8a4f60ec
  MD5: cd934b5e0afe209375f79b889d3ccb95
  BLAKE2b-256: 7710497d4b6467dfda1222a08cd044a025eea2de3491df1f5e14d1a99cebdca6

spconv_cu117-2.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 963077c4cd6eb471470ef567fc615b633f2bd4b7e56175f71caae6a284a3a79f
  MD5: 71defab7a29a57983377a351a34043f5
  BLAKE2b-256: 8bd4c0e5632039053de52f3ed818bcc893cd58e15fb2f00076f78ce9a61324f7

spconv_cu117-2.3.3-cp38-cp38-win_amd64.whl
  SHA256: 57d6e0c245b8dcf9f0f8157a217b1bdee99f41d082132dd6bd396d36a2ff5663
  MD5: 7f50a7658c3747e75220eda5401d7e8b
  BLAKE2b-256: 9393ee31de855d47101bab7570c3ce76fc613287797fcab52ab0b0208d3619a9

spconv_cu117-2.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 88352c2f554afa1edd2874792347c4ee987c919191a7236597bf3c3ba83576f8
  MD5: f982e7aa582023472c6c18aa1f7cb422
  BLAKE2b-256: 65f9631ff2e3afdc533289d8c4793806376047dcf3c6ca9d0dc4957fc822a935

spconv_cu117-2.3.3-cp37-cp37m-win_amd64.whl
  SHA256: 3bef04739f15338710c17dc46539c2d613e5ee5c3cac92aee94f1323db142b4d
  MD5: 2d6e8ee7a83d6307f0907efaed986ec8
  BLAKE2b-256: 554ac80a41f2f0f7f673a3e8f04d7e71cf1c733bf236571345e11f1f1e6bf508

spconv_cu117-2.3.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 5803ffe4f046d4d876e8ecf38232fefddd0a04a3792dad27d6ff2ab69b5d6c04
  MD5: 45b1516ad7576686305b00493a826e2d
  BLAKE2b-256: fd2a4c6b9425301200eb9fd76290abc0562e137ead4cd63284f9a0601b99d809
