
spatial sparse convolution


SpConv: Spatially Sparse Convolution Library


spconv is a project that provides a heavily optimized sparse convolution implementation with Tensor Core support. Check the benchmark to see how fast spconv 2.x runs.

Spconv 1.x code is still available, but we won't provide any support for spconv 1.x since it's deprecated. Use spconv 2.x if possible.

WARNING: users of spconv < 2.1.4 need to upgrade to 2.1.4; it fixes a serious bug in SparseInverseConvXd.

Breaking changes in Spconv 2.x

Spconv 1.x users NEED TO READ THIS before using spconv 2.x.

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed by pip; see the install section in the readme for details. Users don't need to build manually anymore!
  • Microsoft Windows support (only Windows 10 has been tested).
  • fp32 (not tf32) training/inference speed is increased by 50-80%.
  • fp16 training/inference speed is greatly increased when your layers support Tensor Cores (channel size must be a multiple of 8); see the sketch after this list.
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch.
  • spconv 2.x doesn't depend on the pytorch binary.
  • Since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference.
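
A minimal fp16 sketch (the shapes, the 16/32 channel sizes, and the random voxel grid are illustrative assumptions, not code from the spconv docs; channel counts are kept at multiples of 8 so the Tensor Core path is eligible):

    import torch
    import spconv.pytorch as spconv

    # Toy sparse input: torch.nonzero yields unique (z, y, x) coordinates,
    # and spconv expects int32 indices in (batch, z, y, x) order.
    coords = torch.nonzero(torch.rand(50, 50, 50) > 0.99).int()
    batch = torch.zeros((coords.shape[0], 1), dtype=torch.int32)
    indices = torch.cat([batch, coords], dim=1).cuda()
    features = torch.rand(coords.shape[0], 16).cuda().half()   # fp16 features

    x = spconv.SparseConvTensor(features, indices, [50, 50, 50], batch_size=1)
    net = spconv.SubMConv3d(16, 32, 3, indice_key="subm0").cuda().half()
    out = net(x)   # fp16 compute; 16 and 32 channels are multiples of 8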

Spconv 2.1 vs 1.x speed:

                1080Ti Spconv 1.x F32  1080Ti Spconv 2.0 F32  3080M* Spconv 2.1 F16
27x128x128 Fwd  11ms                   5.4ms                  1.4ms

* 3080M (Laptop) ~= 3070 Desktop

Usage

First, you need to use import spconv.pytorch as spconv in spconv 2.x.

Then see this.

Don't forget to check the performance guide.
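
As a starting point, here is a minimal end-to-end sketch (the layer choices, shapes, and random voxel grid are illustrative assumptions, not code from the spconv docs):

    import torch
    import torch.nn as nn
    import spconv.pytorch as spconv

    # Toy sparse input: unique (z, y, x) voxel coordinates plus a batch
    # index column; spconv expects int32 indices in (batch, z, y, x) order.
    coords = torch.nonzero(torch.rand(27, 128, 128) > 0.99).int()
    batch = torch.zeros((coords.shape[0], 1), dtype=torch.int32)
    indices = torch.cat([batch, coords], dim=1).cuda()
    features = torch.rand(coords.shape[0], 32).cuda()

    x = spconv.SparseConvTensor(features, indices,
                                spatial_shape=[27, 128, 128], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(32, 64, 3, indice_key="subm0"),     # submanifold conv
        nn.ReLU(),
        spconv.SparseConv3d(64, 64, 3, stride=2, padding=1),  # downsampling conv
    ).cuda()

    out = net(x)          # another SparseConvTensor
    dense = out.dense()   # NCDHW dense tensor, if a dense head is needed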

Install

You need to install Python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 10.2 to build and run spconv 2.x. We won't offer any support for CUDA < 10.2.

Prebuilt

We offer Python 3.7-3.10 and CUDA 10.2/11.1/11.3/11.4 prebuilt binaries for Linux (manylinux) and Windows 10/11.

We provide prebuilts for the CUDA versions supported by the latest pytorch release. For example, pytorch 1.10 provides CUDA 10.2 and 11.3 prebuilts, so we provide them too.

For Linux users, you need to install pip >= 20.3 first to install the prebuilts.

CUDA 11.1 will be removed in spconv 2.2 because pytorch 1.10 doesn't provide prebuilts for it.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu111 for CUDA 11.1

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

NOTE: On Linux, it's safe to have different minor CUDA versions between the system and conda (pytorch). For example, you can use spconv-cu114 with the anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA on the system! Only a suitable NVIDIA driver is required. For CUDA 11, we need driver >= 450.82.
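
If in doubt, a quick sanity check from Python (this only assumes pytorch is installed; the printed values are examples):

    import torch

    print(torch.version.cuda)         # CUDA version pytorch was built with, e.g. "11.1"
    print(torch.cuda.is_available())  # False usually means the driver is too old

    import spconv.pytorch as spconv   # should import cleanly if the wheel matches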

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the CUDA arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier.

You need to remove cumm from the requires section in pyproject.toml after installing editable cumm and before installing spconv, due to a pyproject limitation (it can't find an editable-installed cumm).

Linux

  1. Uninstall the spconv and cumm installed by pip.
  2. Install build-essential and CUDA.
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. In Python, import spconv and wait for the build to finish (see the snippet below).
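
Step 5 just means running an import; the first import after an editable install triggers the JIT build, for example:

    # The first import after `pip install -e .` compiles the CUDA kernels;
    # expect it to take a while.
    import spconv.pytorch as spconv
    print(spconv.SubMConv3d)   # cheap check that the extension loaded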

Windows

  1. Uninstall the spconv and cumm installed by pip.
  2. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  3. Set the PowerShell script execution policy.
  4. Start a new PowerShell and run tools/msvc_setup.ps1.
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. In Python, import spconv and wait for the build to finish (same as the Linux snippet above).

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. Install build-essential and CUDA.
  2. Run export SPCONV_DISABLE_JIT="1".
  3. Run pip install pccm cumm wheel.
  4. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Windows

  1. Install Visual Studio 2019 or newer; make sure the C++ development component is installed. Install CUDA.
  2. Set the PowerShell script execution policy.
  3. Start a new PowerShell and run tools/msvc_setup.ps1.
  4. Run $Env:SPCONV_DISABLE_JIT = "1".
  5. Run pip install pccm cumm wheel.
  6. Run python setup.py bdist_wheel, then pip install dists/xxx.whl.

Roadmap for Spconv 2.2-2.3:

  • TF32 (TensorFloat-32) support for faster fp32 training when you use NVIDIA GeForce RTX 30x0 / Tesla A100 / Quadro RTX Ax000 GPUs (2.2)
  • Change the implicit GEMM weight layout from KRSC to RSKC to make sure we can use the native algorithm with implicit GEMM weights (2.2)
  • Documentation (2.2)
  • Ampere feature support (2.3)
  • pytorch int8 inference and QAT support (2.3)

TODO in Spconv 2.x

  • Ampere (A100 / RTX 3000 series) feature support (work in progress)
  • torch QAT support (work in progress)
  • TensorRT (torch.fx based)
  • Build a C++-only package
  • JIT compilation for CUDA kernels
  • Documentation (low priority)

Note

This work was done while the author was an employee at TuSimple.

LICENSE

Apache 2.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

  • spconv-2.1.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (523.2 kB): CPython 3.10, manylinux (glibc 2.17+), x86-64
  • spconv-2.1.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (523.5 kB): CPython 3.9, manylinux (glibc 2.17+), x86-64
  • spconv-2.1.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (522.0 kB): CPython 3.8, manylinux (glibc 2.17+), x86-64
  • spconv-2.1.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (517.2 kB): CPython 3.7m, manylinux (glibc 2.17+), x86-64
  • spconv-2.1.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (517.1 kB): CPython 3.6m, manylinux (glibc 2.17+), x86-64

File details

Hashes for spconv-2.1.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
  SHA256: 73e59e5862c648736cae510ae9adb26b0477b89b321102f407591bb12f0f8c0c
  MD5: 10de18c52a3c50e4bd0bcfd2ebdf77dd
  BLAKE2b-256: 52fcbb45d5c0c313ceee04ecf8725ca7e7a9a422d3166c48a1ac98a8c2be86c4

Hashes for spconv-2.1.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
  SHA256: 07cb0d4b6d529578df38a035dd070549c4887ba20f7ed4da358026d265f3366c
  MD5: 9a2b368d9dd13554c7f5dfbe2ac9b4ef
  BLAKE2b-256: 80019de28c0e1731897a16f7e2a838a7fc0ce5e44810c29b9717922f65b9201d

Hashes for spconv-2.1.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
  SHA256: 937683f0191e7bd0d77fb54f89be7dfa555d7dc330ca1b1c943d366d188a7d24
  MD5: bed9536f2dc06936eb13e8ead9ab5f7c
  BLAKE2b-256: b722fa066c190da8fece5a11587502ff45cbed513d20b81037b5f3f8bc3f9e95

Hashes for spconv-2.1.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
  SHA256: 7af797ce31e4ddeda535060fb37d30da23646bf2a63036b22075f89c2a629eb0
  MD5: 3c8f9adb4a4c2190a85ca352b95dc535
  BLAKE2b-256: 2c9a45dbc9358f526092ea0e48e0c9b47ddbef67caa1c79edf6d11f4895b6a41

Hashes for spconv-2.1.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
  SHA256: f423feada1f69e420609bcfe9e27d8ff1a80c660441e55dc5030f46e494a64d3
  MD5: 5604e3433023958365a6aa6f6b02a66d
  BLAKE2b-256: 7a64240d19ae3ad44eeaaf612ec6a83e92135419bf08d49426312465c671a9e6
