Sparse-dense operators for Paddle
This module implements the coo, csc, and csr matrix formats and their inter-ops with dense matrices.
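As a refresher on what these formats store (plain NumPy arrays here, not this library's classes, whose constructors are not shown in this README):

```python
import numpy as np

# A small dense matrix to encode.
dense = np.array([[1., 0., 2.],
                  [0., 0., 3.],
                  [4., 5., 0.]])

# COO: parallel arrays of row indices, column indices, and values.
rows, cols = np.nonzero(dense)       # row-major order
vals = dense[rows, cols]
# rows = [0 0 1 2 2], cols = [0 2 2 0 1], vals = [1 2 3 4 5]

# CSR: the same column indices and values, sorted by row, plus row
# pointers so that row i occupies the slice indptr[i]:indptr[i+1].
counts = np.bincount(rows, minlength=dense.shape[0])
indptr = np.concatenate([[0], np.cumsum(counts)])
# indptr = [0 2 3 5]

# CSC is the same idea with the roles of rows and columns swapped.
```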
Feel free to open an issue if you find that something is incorrect.
Requirements
The only dependency is paddle. It is tested on paddle >= 2.1.0, <= 2.2.0rc1, but should work with any recent Paddle version.
Usage
Most functions are implemented within classes that encapsulate the sparse formats: COO, CSR, and CSC.
Cross-format operators are implemented in dedicated sub-modules: spgemm and batching.
Supported operations
Conversion
coo -> csc, csr, dense
csc -> coo
csr -> coo
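A hedged sketch of what a coo -> dense conversion does under the hood (plain NumPy; the library's actual method names are not shown in this README):

```python
import numpy as np

def coo_to_dense(rows, cols, vals, shape):
    """Scatter COO triplets into a dense array.

    Duplicate (row, col) entries are summed, matching the usual
    COO convention.
    """
    out = np.zeros(shape, dtype=np.asarray(vals).dtype)
    np.add.at(out, (rows, cols), vals)  # unbuffered scatter-add
    return out

dense = coo_to_dense([0, 1, 1], [2, 0, 0], [1.0, 2.0, 3.0], (2, 3))
# Row 1, col 0 accumulates 2.0 + 3.0 = 5.0
```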
Batch MVP (Matrix-Vector Product) or SpMM (Sparse-Dense Matmul)
Note that in this library, the batch dimensions are appended rather than prepended to the dot dimension (which makes batch MVP essentially a regular matmul). Use utils.swap_axes or paddle.transpose when necessary.
coo, dense -> dense
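The axis convention matters here: with batch dimensions appended after the dot dimension, a batched MVP collapses into an ordinary 2-D matmul. A dense NumPy analogue (shapes are illustrative assumptions):

```python
import numpy as np

m, n, b = 4, 3, 5            # matrix is (m, n); batch of b vectors
A = np.random.rand(m, n)

# Batch axis *appended* after the dot dimension: shape (n, b),
# i.e. one vector per column, unlike the (b, n) layout that
# prepended batch dims would give.
V = np.random.rand(n, b)

Y = A @ V                    # (m, b): batch MVP == regular matmul

# With a prepended batch axis you would need transposes instead:
V_pre = V.T                  # (b, n)
Y_pre = A @ V_pre.T          # transpose back, then matmul: (m, b)
```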
Point-wise
Supports broadcast on the dense side.
coo + coo -> coo
coo * scalar -> coo
coo * dense -> coo (equiv. coo @ diag(vec) if dense is a vector)
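To see why element-wise multiplication by a broadcast dense vector equals right-multiplication by diag(vec), here is the dense form of the identity in NumPy:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 3., 0.]])
vec = np.array([10., 20., 30.])

# Broadcasting scales column j of A by vec[j]...
elementwise = A * vec
# ...which is exactly A @ diag(vec).
matmul_form = A @ np.diag(vec)
```

For a sparse A, the same scaling touches only the stored nonzeros, so the result stays in COO form.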
SpGEMM (Sparse-Sparse Matmul)
coo, csr -> coo (via row-wise mixed product)
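A minimal pure-Python sketch of the row-wise mixed product (a simplified illustration, not the library's implementation): each nonzero A[i, k] of the left COO matrix scales row k of the right CSR matrix and accumulates it into row i of the result.

```python
from collections import defaultdict

def spgemm_coo_csr(a_rows, a_cols, a_vals, b_indptr, b_indices, b_vals):
    """Multiply A (COO triplets) by B (CSR arrays); return COO triplets.

    Row-wise mixed product: each nonzero A[i, k] contributes
    A[i, k] * B[k, :] to row i of the product.
    """
    acc = defaultdict(float)                      # (i, j) -> value
    for i, k, v in zip(a_rows, a_cols, a_vals):
        for p in range(b_indptr[k], b_indptr[k + 1]):
            acc[(i, b_indices[p])] += v * b_vals[p]
    triplets = sorted(acc.items())
    rows = [i for (i, _), _ in triplets]
    cols = [j for (_, j), _ in triplets]
    vals = [v for _, v in triplets]
    return rows, cols, vals

# diag(1, 2) @ diag(3, 4) == diag(3, 8)
rows, cols, vals = spgemm_coo_csr(
    [0, 1], [0, 1], [1.0, 2.0],   # A in COO
    [0, 1, 2], [0, 1], [3.0, 4.0] # B in CSR
)
```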
Batching and unbatching
Many batched operations can be represented efficiently as operations on a block-diagonal sparse matrix. We also provide batching and unbatching operations for homogeneously shaped sparse matrices.
For COO matrices, batching (unbatching) constructs (deconstructs) a block-diagonal COO matrix from (into) several small COO matrices.
If you know the expected shapes of the matrices after unbatching, you may construct the batching info explicitly by calling BatchingInfo(shapes: [n, 2] numpy array of int). Otherwise: 1) most operations preserve shapes, so there is no need to change the BatchingInfo; 2) batch_info_dot is provided to merge the info of two batches of matrices that go through SpGEMM, yielding the info of the final batch of matrices.
batch [coo] -> coo
unbatch coo -> [coo]
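A sketch of batching as block-diagonal stacking (plain Python lists; the BatchingInfo bookkeeping is omitted): each small matrix's indices are shifted by the running row and column offsets.

```python
def batch_coo(matrices):
    """Stack COO matrices, given as (rows, cols, vals, shape) tuples,
    into one block-diagonal COO matrix."""
    rows, cols, vals = [], [], []
    row_off = col_off = 0
    for m_rows, m_cols, m_vals, (n_r, n_c) in matrices:
        rows += [r + row_off for r in m_rows]
        cols += [c + col_off for c in m_cols]
        vals += list(m_vals)
        row_off += n_r                 # next block starts below...
        col_off += n_c                 # ...and to the right
    return rows, cols, vals, (row_off, col_off)

# Two 2x2 blocks -> one 4x4 block-diagonal matrix.
rows, cols, vals, shape = batch_coo([
    ([0], [1], [1.0], (2, 2)),
    ([1], [0], [2.0], (2, 2)),
])
```

Unbatching is the inverse walk: split the triplets at the known block boundaries and subtract the offsets, which is why the expected shapes (the BatchingInfo) must be tracked.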
Installation
pip install paddle-sparse-dense
Caveats
Currently everything is implemented in pure Python; no CUDA code has been written. As a result, the routines generally have good run-time performance but carry a memory overhead on the order of O(nnz/n).