
spatial sparse convolution


SpConv: Spatially Sparse Convolution Library


PyPI Install
CPU (Linux only): pip install spconv
CUDA 10.2: pip install spconv-cu102
CUDA 11.3: pip install spconv-cu113
CUDA 11.4: pip install spconv-cu114
CUDA 11.6: pip install spconv-cu116
CUDA 11.7: pip install spconv-cu117
CUDA 11.8: pip install spconv-cu118
CUDA 12.0: pip install spconv-cu120

spconv is a project that provides a heavily optimized sparse convolution implementation with tensor core support. Check the benchmark to see how fast spconv 2.x runs.

The spconv 1.x code is still available, but we won't provide any support for spconv 1.x since it's deprecated. Use spconv 2.x if possible.

Check the spconv 2.x algorithm introduction to understand the sparse convolution algorithm used in spconv 2.x!
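The core idea behind sparse convolution, computing only at active sites located via a hash map of coordinates, can be sketched in a few lines of plain Python. This toy illustrates the gather/scatter "index pair" idea for a single-channel 2D submanifold convolution; it is NOT spconv's implementation (spconv builds index pairs on the GPU and dispatches tensor-core GEMMs):

```python
# Toy submanifold sparse convolution: single channel, 2D, plain Python.
# Illustrates the gather -> multiply-accumulate -> scatter pattern only.

def submanifold_conv2d(coords, feats, weight, kernel_size=3):
    """coords: list of (y, x) active sites; feats: one scalar feature per site;
    weight: dict mapping kernel offset (dy, dx) -> scalar weight."""
    site = {c: i for i, c in enumerate(coords)}  # hash map of active sites
    out = [0.0] * len(coords)
    # For each kernel offset, pair every output site with the input site that
    # the offset reaches -- but only if that input site is active.
    for (dy, dx), w in weight.items():
        for o_idx, (y, x) in enumerate(coords):
            i_idx = site.get((y + dy, x + dx))
            if i_idx is not None:                # gather from active inputs only
                out[o_idx] += w * feats[i_idx]   # scatter-accumulate into output
    return out

coords = [(0, 0), (0, 1), (2, 2)]
feats = [1.0, 2.0, 3.0]
# Identity kernel: only the center tap has weight 1, so output == input.
weight = {(dy, dx): (1.0 if (dy, dx) == (0, 0) else 0.0)
          for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
print(submanifold_conv2d(coords, feats, weight))  # [1.0, 2.0, 3.0]
```

Because output sites are restricted to the input's active sites, the sparsity pattern is preserved, which is the "submanifold" property.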

WARNING

Use spconv >= cu114 if possible. CUDA 11.4 can compile greatly faster kernels in some situations.

To update spconv, you MUST UNINSTALL all spconv/cumm/spconv-cuxxx/cumm-cuxxx packages first. Use pip list | grep spconv and pip list | grep cumm to check all installed packages, then use pip to install the new spconv.
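The leftover-package check can also be done from Python itself. This is a hypothetical helper (not part of spconv), equivalent to the pip list | grep commands above:

```python
# Hypothetical helper: list installed spconv/cumm distributions (including
# spconv-cuxxx/cumm-cuxxx variants) that must be uninstalled before updating.
from importlib.metadata import distributions

def find_conflicting(prefixes=("spconv", "cumm")):
    """Return sorted names of installed distributions matching the prefixes."""
    found = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        # Match "spconv", "cumm", and dashed variants like "spconv-cu118".
        if any(name == p or name.startswith(p + "-") for p in prefixes):
            found.append(name)
    return sorted(found)

leftovers = find_conflicting()
if leftovers:
    print("uninstall these first:", ", ".join(leftovers))
```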

NEWS

  • spconv 2.3: int8 quantization support. See docs and examples for more details.

  • spconv 2.2: Ampere feature support (by EvernightAurora), pure C++ code generation, NVRTC, dropped python 3.6

Spconv 2.2 vs Spconv 2.1

  • faster fp16 conv kernels (~5-30%) on Ampere GPUs (tested on RTX 3090)
  • greatly faster int8 conv kernels (~1.2x-2.7x) on Ampere GPUs (tested on RTX 3090)
  • dropped python 3.6 support
  • nvrtc support: kernels for old GPUs are compiled at runtime
  • libspconv: pure C++ build of all spconv ops; see example
  • tf32 kernels for faster fp32 training, disabled by default. Set import spconv as spconv_core; spconv_core.constants.SPCONV_ALLOW_TF32 = True to enable them.
  • all weights now use KRSC layout; some old models can't be loaded anymore

Spconv 2.1 vs Spconv 1.x

  • spconv can now be installed via pip; see the install section in the readme for details. Users no longer need to build manually!
  • Microsoft Windows support (only windows 10 has been tested)
  • fp32 (not tf32) training/inference speed is increased (+50~80%)
  • fp16 training/inference speed is greatly increased when your layers support tensor cores (channel size must be a multiple of 8)
  • int8 ops are ready, but we still need some time to figure out how to run int8 in pytorch
  • spconv 2.x doesn't depend on the pytorch binary, but you need at least pytorch >= 1.5.0 to run it
  • since spconv 2.x doesn't depend on the pytorch binary (and never will), it's impossible to support torch.jit/libtorch inference

Usage

First, note that spconv 2.x is imported as import spconv.pytorch as spconv.

Then see this.

Don't forget to check the performance guide.
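For orientation, spconv.pytorch.SparseConvTensor consumes a coordinate-list (COO) layout: an [N, C] feature array plus an [N, 1+ndim] integer index array whose first column is the batch index. The sketch below builds that layout with plain Python lists so it runs anywhere (the voxels data is made up); the commented lines show, for illustration only, how it would be handed to spconv on a CUDA machine:

```python
# Build the COO-style inputs that spconv's SparseConvTensor expects,
# using plain lists (no torch/spconv needed for this sketch).
batch_size = 1
spatial_shape = [8, 8, 8]  # [Z, Y, X] size of the dense grid

# Made-up example data: two active voxels, each with C=2 features.
voxels = {(1, 2, 3): [0.5, 1.0], (4, 4, 4): [2.0, -1.0]}

features = []  # [N, C] float features, one row per active voxel
indices = []   # [N, 4] int coordinates: (batch_idx, z, y, x)
for (z, y, x), feat in sorted(voxels.items()):
    features.append(feat)
    indices.append([0, z, y, x])  # batch index goes first

print(len(features), len(indices))  # both equal N == 2

# With torch and a CUDA build of spconv installed (illustrative only):
#   import torch, spconv.pytorch as spconv
#   x = spconv.SparseConvTensor(
#       torch.tensor(features).cuda(),
#       torch.tensor(indices, dtype=torch.int32).cuda(),
#       spatial_shape, batch_size)
#   net = spconv.SparseSequential(spconv.SubMConv3d(2, 16, 3, indice_key="subm0"))
#   out = net(x)
```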

Common Solution for Some Bugs

See common problems.

Install

You need to install python >= 3.7 first to use spconv 2.x.

You need to install the CUDA toolkit before using prebuilt binaries or building from source.

You need at least CUDA 11.0 to build and run spconv 2.x. We won't offer any support for CUDA < 11.0.

Prebuilt

We offer python 3.7-3.11 and cuda 10.2/11.3/11.4/11.7/12.0 prebuilt binaries for linux (manylinux).

We offer python 3.7-3.11 and cuda 10.2/11.4/11.7/12.0 prebuilt binaries for windows 10/11.

For Linux users, you need pip >= 20.3 to install the prebuilt wheels.

WARNING: spconv-cu117 may require CUDA Driver >= 515.

pip install spconv for CPU only (Linux only). You should only use this for debugging; performance isn't optimized due to manylinux limits (no OpenMP support).

pip install spconv-cu102 for CUDA 10.2

pip install spconv-cu113 for CUDA 11.3 (Linux Only)

pip install spconv-cu114 for CUDA 11.4

pip install spconv-cu117 for CUDA 11.7

pip install spconv-cu120 for CUDA 12.0

NOTE: It's safe to have different minor CUDA versions between the system and conda (pytorch) when CUDA >= 11.0, because of CUDA Minor Version Compatibility. For example, you can use spconv-cu114 with an anaconda build of pytorch for CUDA 11.1 on an OS with CUDA 11.2 installed.

NOTE: On Linux, you can install spconv-cuxxx without installing CUDA on the system! Only a suitable NVIDIA driver is required. For CUDA 11 you need driver >= 450.82; newer CUDA versions may need a newer driver (for CUDA 11.8, driver >= 520).

Prebuilt GPU Support Matrix

See this page to check supported GPU names by arch.

If you use a GPU architecture that isn't included in the prebuilt binaries, spconv will use NVRTC to compile a slightly slower kernel at runtime.

CUDA version   GPU arch list
11.1~11.7      52, 60, 61, 70, 75, 80, 86
11.8+          60, 70, 75, 80, 86, 89, 90
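Reading the table: if your GPU's compute capability appears in the row for your wheel, you get a precompiled kernel; otherwise spconv falls back to NVRTC. A small hypothetical helper (not part of spconv) that encodes this lookup:

```python
# Hypothetical helper mirroring the prebuilt GPU support table above.
PREBUILT_ARCHS = {
    "11.1~11.7": {52, 60, 61, 70, 75, 80, 86},
    "11.8+": {60, 70, 75, 80, 86, 89, 90},
}

def uses_nvrtc(cuda_row, sm_arch):
    """True if sm_arch (e.g. 89 for an Ada GPU) is absent from the prebuilt
    arch list for that CUDA row, so spconv must compile via NVRTC at runtime."""
    return sm_arch not in PREBUILT_ARCHS[cuda_row]

print(uses_nvrtc("11.1~11.7", 89))  # True: sm_89 is not in cu111-cu117 wheels
print(uses_nvrtc("11.8+", 89))      # False: cu118+ wheels include sm_89
```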

Build from source for development (JIT, recommended)

The C++ code will be rebuilt automatically when you change C++ code in the project.

For NVIDIA embedded platforms, you need to specify the cuda arch before building: export CUMM_CUDA_ARCH_LIST="7.2" for Xavier, export CUMM_CUDA_ARCH_LIST="6.2" for TX2, export CUMM_CUDA_ARCH_LIST="8.7" for Orin.

Due to a pyproject limitation (pip can't find an editable-installed cumm), you need to remove cumm from the requires section of pyproject.toml after installing editable cumm and before installing spconv.

Ensure that pip list | grep spconv and pip list | grep cumm show nothing before installing editable spconv/cumm.

Linux

  1. uninstall spconv and cumm installed by pip
  2. install build-essential, install CUDA
  3. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  4. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  5. in python, import spconv and wait for the build to finish.

Windows

  1. uninstall spconv and cumm installed by pip
  2. install visual studio 2019 or newer. make sure C++ development component is installed. install CUDA
  3. set powershell script execution policy
  4. start a new powershell, run tools/msvc_setup.ps1
  5. git clone https://github.com/FindDefinition/cumm, cd ./cumm, pip install -e .
  6. git clone https://github.com/traveller59/spconv, cd ./spconv, pip install -e .
  7. in python, import spconv and wait for the build to finish.

Build wheel from source (not recommended; this is done in CI)

You need to rebuild cumm first if you are building against a CUDA version that isn't provided in the prebuilts.

Linux

  1. install build-essential, install CUDA
  2. run export SPCONV_DISABLE_JIT="1"
  3. run pip install pccm cumm wheel
  4. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Windows

  1. install visual studio 2019 or newer. make sure C++ development component is installed. install CUDA
  2. set powershell script execution policy
  3. start a new powershell, run tools/msvc_setup.ps1
  4. run $Env:SPCONV_DISABLE_JIT = "1"
  5. run pip install pccm cumm wheel
  6. run python setup.py bdist_wheel, then pip install dists/xxx.whl

Citation

If you find this project useful in your research, please consider citing:

@misc{spconv2022,
    title={Spconv: Spatially Sparse Convolution Library},
    author={Spconv Contributors},
    howpublished = {\url{https://github.com/traveller59/spconv}},
    year={2022}
}

Contributors

Note

The work was done while the author was an employee at Tusimple.

LICENSE

Apache 2.0

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


spconv_cu121-2.3.8-cp313-cp313-win_amd64.whl (74.7 MB), CPython 3.13, Windows x86-64
spconv_cu121-2.3.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (76.3 MB), CPython 3.13, manylinux: glibc 2.17+ x86-64
spconv_cu121-2.3.8-cp312-cp312-win_amd64.whl (74.7 MB), CPython 3.12, Windows x86-64
spconv_cu121-2.3.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (76.3 MB), CPython 3.12, manylinux: glibc 2.17+ x86-64
spconv_cu121-2.3.8-cp311-cp311-win_amd64.whl (74.7 MB), CPython 3.11, Windows x86-64
spconv_cu121-2.3.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (76.3 MB), CPython 3.11, manylinux: glibc 2.17+ x86-64
spconv_cu121-2.3.8-cp310-cp310-win_amd64.whl (74.7 MB), CPython 3.10, Windows x86-64
spconv_cu121-2.3.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (76.3 MB), CPython 3.10, manylinux: glibc 2.17+ x86-64
spconv_cu121-2.3.8-cp39-cp39-win_amd64.whl (74.7 MB), CPython 3.9, Windows x86-64
spconv_cu121-2.3.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (76.3 MB), CPython 3.9, manylinux: glibc 2.17+ x86-64

File details

Hashes for spconv_cu121-2.3.8-cp313-cp313-win_amd64.whl:
SHA256: 85b0e37f342d49cfe0a0025def1243fbd1a1bbba95994c522abadfce62e53ef5
MD5: f8427506475deb1db986cb4fba802516
BLAKE2b-256: b953eb7ed43404d2985237c166fc67d16be418f573cdc2d65ebc0e71e05149da

Hashes for spconv_cu121-2.3.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
SHA256: d5a213bb7eb15a6ab29180d92a88f2bed42a99322548905e89cd465d0e55be3f
MD5: eecf51dd0b318e934c3b72317e487636
BLAKE2b-256: 992dbc6da58f53eb4f067759e38cf18fc00fb58ac0d3a013c721797a1c01e3fa

Hashes for spconv_cu121-2.3.8-cp312-cp312-win_amd64.whl:
SHA256: c5138e7f39556421e069d1293dfbc440f30293b3f066f6b3db5a8d4562cce4b4
MD5: 3a33f4c3e8a0e2d0bf456d0e7cc58d27
BLAKE2b-256: 4492d1ac13ee57fadab286d82a6efdf4d1911e34e84984b864bb08b66f19b255

Hashes for spconv_cu121-2.3.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
SHA256: 78cecac67607feb1f38c1a7ff4f1b41cf4e1257d829e3b05caffed9ecaf7bbe8
MD5: 19c98a8cfb10c644536f3bcfb84caee5
BLAKE2b-256: 7474b56163f72a506495cc12ee857e90e74ca7f41e4dd00b1b0f44530bc6afa4

Hashes for spconv_cu121-2.3.8-cp311-cp311-win_amd64.whl:
SHA256: 817c453e202d61c0bdf77447422bea1066ccb6e5929f200bb5d8aafdc1c74fb3
MD5: eed9a2f799775a6bcb56c1d637c18348
BLAKE2b-256: cc05c07db9bf205e7bcea9dbb81ac76c4ad7181f65a36bef7c4933f609f2cacf

Hashes for spconv_cu121-2.3.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
SHA256: 1740699b947465adfa4c4237c11213c5b18abda7ef1925da6f73fa1ba3c83f8b
MD5: edb3b73c2c56a513c38f68102a204767
BLAKE2b-256: b368737391103484d5abd7fe3685e1826fb08b63d66b21af2de4f53b92d40d2c

Hashes for spconv_cu121-2.3.8-cp310-cp310-win_amd64.whl:
SHA256: 6930ffc398d66dbb23cf9222862375d9d4bfde4fafabe19dad03e88b06e2f5d9
MD5: b20d176b4ced9046926e56992f59f73c
BLAKE2b-256: 2dd4455ad28fee830842c8e5f1043bf00bd8546fd540a819b9f1b87f4a85122c

Hashes for spconv_cu121-2.3.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
SHA256: 924496482abe7e5777f7604aa4de10b8359057af67714652da9bebcf3ae89a99
MD5: 716be0c48c24551c20dc6f588772fcdb
BLAKE2b-256: f88ab6f1153f4150e6ab2dbc5e34689e85f2a78e9bf682b9be2416809559223e

Hashes for spconv_cu121-2.3.8-cp39-cp39-win_amd64.whl (74.7 MB; uploaded via twine/6.0.1 on CPython 3.9.13; Trusted Publishing: no):
SHA256: 0cbf7ddbb002cd3e6f2cc4ba5422c24fe0ef1c21d19420bdb20f123eeb9d03f5
MD5: d5d388cc3754c5730f194609e1633cd0
BLAKE2b-256: 8f2a12cd5c762d737c715b0bd84d75338bf4108b0a9515fab6d66e39b55de197

Hashes for spconv_cu121-2.3.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:
SHA256: f2550b2a29303164254dfbb908437f0732624ac03da06010360fa37eab158cdc
MD5: 308ac40fea0615e99eb62a4f1c76c052
BLAKE2b-256: 6530ee53f2aa9c86f5ff57a8e9d94f68fd9d9152225a0904dfea09958597b07f
