
spatial sparse convolution

Project description

SpConv: Spatially Sparse Convolution Library


Prebuilt packages: CPU (Linux only), CUDA 10.2, CUDA 11.1, CUDA 11.3 (Linux only), CUDA 11.4.

spconv is a project that provides a heavily-optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code. We won't provide any support for spconv 1.x since it is deprecated; use spconv 2.x if possible.

WARNING: users of spconv < 2.1.4 need to upgrade to 2.1.4; it fixes a serious bug in SparseInverseConvXd.

Breaking changes in Spconv 2.x

Spconv 1.x users NEED TO READ THIS before using spconv 2.x.

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip. See the install section in the readme for more details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased (+50~80%).
  • fp16 training/inference speed is greatly increased when your layer supports Tensor Cores (channel size must be a multiple of 8); see the sketch after this list.
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • doesn't depend on the pytorch binary.
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.
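
For example, here is a minimal sketch (layer names are from the spconv.pytorch module; the channel sizes and indice_key are made up for illustration) of choosing channel counts that allow the fp16 tensor-core path:

    import spconv.pytorch as spconv

    # in/out channels are multiples of 8, so the fp16 tensor-core kernels can be selected.
    conv = spconv.SubMConv3d(in_channels=16, out_channels=32, kernel_size=3, indice_key="subm0")
    conv = conv.cuda().half()  # fp16 weights; the input SparseConvTensor features must also be fp16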

Spconv 2.1 vs 1.x speed:

                 1080Ti Spconv 1.x F32   1080Ti Spconv 2.0 F32   3080M* Spconv 2.1 F16
27x128x128 Fwd   11ms                    5.4ms                   1.4ms

* 3080M (Laptop) ~= 3070 Desktop

Usage

First, note that in spconv 2.x you need to use import spconv.pytorch as spconv.

Then see the usage docs in the repository.

Don't forget to check the performance guide.
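
A minimal sketch of how the pieces fit together (class names are from spconv.pytorch; the shapes, indices, and channel sizes are made up for illustration and assume a CUDA-capable GPU):

    import torch
    import spconv.pytorch as spconv

    # A few active voxels: one row per active site, (batch_idx, z, y, x), int32.
    indices = torch.tensor([[0, 0, 0, 0],
                            [0, 1, 2, 3],
                            [0, 5, 7, 9]], dtype=torch.int32).cuda()
    features = torch.randn(indices.shape[0], 16).cuda()  # [N, in_channels]

    x = spconv.SparseConvTensor(features, indices, spatial_shape=[8, 16, 16], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(16, 32, 3, indice_key="subm0"),  # submanifold conv keeps the sparsity pattern
        spconv.SparseConv3d(32, 64, 3, stride=2),          # regular sparse conv can create new active sites
    ).cuda()

    out = net(x)         # the result is again a SparseConvTensor
    dense = out.dense()  # optional: convert to a dense NCDHW torch.Tensor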

Install

You need to install python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit first before using prebuilt binaries or building from source.

You need at least CUDA 10.2 to build and run spconv 2.x. We won't offer any support for CUDA < 10.2.

Prebuilt

We offer python 3.7-3.10 and cuda 10.2/11.1/11.3/11.4 prebuilt binaries for linux (manylinux) and windows 10/11.

We will provide prebuilts for the CUDA versions supported by the latest pytorch release. For example, pytorch 1.10 provides CUDA 10.2 and 11.3 prebuilts, so we provide them too.

For Linux users, you need to install pip >= 20.3 first to install the prebuilt wheels.

CUDA 11.1 will be removed in spconv 2.2 because pytorch 1.10 doesn't provide prebuilts for it.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu111 for CUDA 11.1

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

NOTE: it's safe to have different minor CUDA versions between the system and conda (pytorch) on Linux. For example, you can use spconv-cu114 with an anaconda build of pytorch built against CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: on Linux, you can install spconv-cuxxx without installing CUDA system-wide! Only a suitable NVIDIA driver is required; for CUDA 11, you need driver >= 450.82.
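
As a quick sanity check after installing a prebuilt wheel (a sketch; spconv-cu114 is just the example variant, and the only spconv-specific name used is the spconv.pytorch import path):

    import torch
    import spconv.pytorch as spconv  # fails if no compatible wheel is installed

    print(torch.version.cuda)         # CUDA version pytorch was built with (minor mismatches are fine on Linux)
    print(torch.cuda.is_available())  # needs a suitable NVIDIA driver (>= 450.82 for CUDA 11)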

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier.

Due to a pyproject limitation (an editable-installed cumm can't be found), you need to remove cumm from the requires section of pyproject.toml after installing cumm in editable mode and before installing spconv.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  3. set powershell script execution policy
  4. start a new powershell, run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install Visual Studio 2019 or newer (make sure the C++ development component is installed), then install CUDA
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Roadmap for Spconv 2.2-2.3:

  • TensorFloat-32 (TF32) support for faster fp32 training when you use NVIDIA GeForce RTX 30x0/Tesla A100/Quadro RTX Ax000 (2.2)
  • change implicit gemm weight layout from KRSC to RSKC to make sure we can use the native algorithm with implicit gemm weights (2.2)
  • documentation (2.2)
  • Ampere feature support (2.3)
  • pytorch int8 inference, and QAT support (2.3)

TODO in Spconv 2.x

  • Ampere (A100 / RTX 3000 series) feature support (work in progress)
  • torch QAT support (work in progress)
  • TensorRT (torch.fx based)
  • Build C++ only package
  • JIT compilation for CUDA kernels
  • Documentation (low priority)

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

spconv-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (454.9 kB)
  Uploaded: CPython 3.10, manylinux: glibc 2.17+ x86-64

spconv-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (455.1 kB)
  Uploaded: CPython 3.9, manylinux: glibc 2.17+ x86-64

spconv-2.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (453.4 kB)
  Uploaded: CPython 3.8, manylinux: glibc 2.17+ x86-64

spconv-2.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (449.2 kB)
  Uploaded: CPython 3.7m, manylinux: glibc 2.17+ x86-64

spconv-2.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (449.2 kB)
  Uploaded: CPython 3.6m, manylinux: glibc 2.17+ x86-64

File hashes

Hashes for the built distributions:

spconv-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       0c73b3ae37226cbe161ca61508ea6a36ba43f8e73a1dc1cbc2f63bce6f2b8d04
  MD5          4d83bdaa48befc3c46931e6440fbb645
  BLAKE2b-256  425682531359f8a6a4b48d24e782c020bc5d95f8098d30911da1a60fae7d8c56

spconv-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       a369ca33ea7c22f5bd65c052f86d6801043422f682a7ac86da65702dba2b071f
  MD5          c769b4369de7de7d3ddc14d17faa1789
  BLAKE2b-256  48dc2afb4fab1fb5a45b877f5239ec9ade23ffad2ba8f377175a84dee1dcde57

spconv-2.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       c9a062f71235daed10834092e4718ca4a6179b0b016a60fd2a4b9aa73007bf1b
  MD5          01a678bf010cad009638c9a8a575e972
  BLAKE2b-256  821bb976bf062ffd76402bebf2f7716c02a0f74a26efc94d51f5b91b408e9815

spconv-2.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       ef81306c3ccb725fc8b18c2c53e5b26a50bc83ce2c1e103ed8f1ebd9411ac72f
  MD5          871e0e8501bf261cc1e60fc020947950
  BLAKE2b-256  9659fcc840c5d57f02a30d245b1b8d7fc64717b5cb267f9a1fbaa72467961d80

spconv-2.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       02d46ed04b9cc3533f2d2b211a0890c3b5ae4e6d293ccc94bf3d6590f67e71c6
  MD5          78c2fc39b93650a557ae86289cf8bcf9
  BLAKE2b-256  d3b114f16551fffbf1089c685957ad3a08d3a9a09be47f6aabfa8f1b1ff626f5

See more details on using hashes here.
