
Towards Efficient Updating of Truncated Singular Value Decomposition


IncSVD: A Python package for dynamic Truncated SVD

Paper: https://arxiv.org/pdf/2401.09703.pdf

IncSVD is a Python package that dynamically maintains the truncated singular value decomposition (truncated SVD) of evolving matrices.

This is the code implementation of the paper "Fast Updating Truncated SVD for Representation Learning with Sparse Matrices" (ICLR'24).

Install and Setup

It is recommended to use Python 3.8 or higher. Install IncSVD with the following command:

pip install IncSVD

User Guide

Initialize IncSVD with an $m \times n$ matrix data_init and the rank (truncated dimension) k at which the SVD is to be maintained. Here is an example that initializes a $1000 \times 1000$ matrix with truncated dimension $k = 64$.

class EvolvingMatrix(data, k, sparse = True, method = "RPI")

  • data: data matrix. Data matrix used to initialize the truncated SVD. It can be a np.ndarray or a scipy.sparse matrix.
  • k: int. Truncated dimension of the maintained SVD. It should be a positive integer no greater than either dimension of the original matrix.
  • sparse: bool, default=True. If True, the update process employs the accelerated methods proposed in [1]; if False, the original methods (ZhaSimon's [2], Vecharynski's [3], Yamazaki's [4]) are used.
  • method: {'ZhaSimon', 'GKL', 'RPI'}, default='RPI'. The method used for the SVD update.
import scipy.sparse
import IncSVD.EvolvingMatrix as EM

m, n, k = 1000, 1000, 64
data_init = scipy.sparse.rand(m, n, density=0.005).tocsr()
M = EM.EvolvingMatrix(data_init, k, sparse=True, method="ZhaSimon")
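For reference, the decomposition that EvolvingMatrix maintains is the same rank-k object that scipy.sparse.linalg.svds computes from scratch; a minimal sketch of that baseline, using only SciPy and independent of IncSVD:

```python
import scipy.sparse
from scipy.sparse.linalg import svds

m, n, k = 1000, 1000, 64
A = scipy.sparse.rand(m, n, density=0.005, random_state=0).tocsr()

# Rank-k truncated SVD computed from scratch: A ~= Uk @ diag(Sk) @ Vkt.
Uk, Sk, Vkt = svds(A, k=k)

print(Uk.shape, Sk.shape, Vkt.shape)  # (1000, 64) (64,) (64, 1000)
```

Recomputing this factorization after every change to the matrix costs a full sparse SVD each time; the update methods below avoid exactly that.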

[!TIP]

We recommend using GKL or RPI when the rank of each update matrix (e.g. adding many rows at once) is large.

Row / Column Update

Expand the original matrix in row or column dimensions and update the corresponding SVD results.


add_row(update_matrix, l=None, t=None)

  • update_matrix: data matrix. The rows to be appended; this matrix should have the same number of columns as the original matrix. If sparse=True, it should be of type scipy.sparse.csr_matrix.
  • l: {int, None}. Approximate dimension for the update matrix. It must be set when using GKL or RPI, the two methods that require an approximation space: the rank-s update matrix is approximated in a smaller rank-l space for further speedup. Leave as None when neither GKL nor RPI is used.
  • t: {int, None}. The number of iterations performed in the random power iteration. It must be set when using RPI; the approximation becomes more accurate as t grows, at a higher computational cost. Leave as None when RPI is not used.

add_column(update_matrix, l=None, t=None)

  • update_matrix: data matrix. The columns to be appended; this matrix should have the same number of rows as the original matrix. If sparse=True, it should be of type scipy.sparse.csc_matrix.
  • l: {int, None}. Approximate dimension for the update matrix. Leave as None when neither GKL nor RPI is used.
  • t: {int, None}. The number of iterations performed in the random power iteration. Leave as None when RPI is not used.
''' Add rows '''
data_append_row = scipy.sparse.rand(10, n, density=0.005).tocsr() * 10
M.add_row(data_append_row)

''' Add columns '''
data_append_col = scipy.sparse.rand(m, 10, density=0.005).tocsc() * 10
M.add_column(data_append_col)

[!NOTE]

If sparse=True:

  • [Add row] The input data_append_row must have the same number of columns as the original matrix and be of type scipy.sparse.csr_matrix.
  • [Add column] The input data_append_col must have the same number of rows as the original matrix and be of type scipy.sparse.csc_matrix.
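Conceptually, a row update maintains the truncated SVD of the row-augmented matrix; a from-scratch sketch of that target using only SciPy (the names A and E here are illustrative, not part of the IncSVD API):

```python
import scipy.sparse
from scipy.sparse.linalg import svds

m, n, k = 200, 100, 16
A = scipy.sparse.rand(m, n, density=0.05, random_state=0).tocsr()
E = scipy.sparse.rand(10, n, density=0.05, random_state=1).tocsr()  # 10 new rows

# The matrix after the row update: the rows of E stacked under A.
A_new = scipy.sparse.vstack([A, E]).tocsr()
print(A_new.shape)  # (210, 100)

# The rank-k SVD of A_new is what add_row approximates without
# recomputing the full decomposition.
Uk, Sk, Vkt = svds(A_new, k=k)
```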

Low-rank Update

Perform low-rank update $\overline{\mathbf{A}} = \mathbf{A} + \mathbf{D}_m \cdot \mathbf{E}_m^{\top}$, where $\mathbf{A}$ is the original matrix, $\mathbf{D}_m \in \mathbb{R}^{m \times s}$ and $\mathbf{E}_m \in \mathbb{R}^{n \times s}$.

update_weight(update_matrix_B, update_matrix_C, l=None, t=None)

  • update_matrix_B: data matrix. The matrix $\mathbf{D}_m$; it should have the same number of rows as the original matrix. If sparse=True, it should be of type scipy.sparse.csc_matrix.
  • update_matrix_C: data matrix. The matrix $\mathbf{E}_m$; it should have as many rows as the original matrix has columns. If sparse=True, it should be of type scipy.sparse.csc_matrix.
  • l: {int, None}. Approximate dimension for the update matrix. Leave as None when neither GKL nor RPI is used.
  • t: {int, None}. The number of iterations performed in the random power iteration. Leave as None when RPI is not used.
''' Low-rank update '''
data_dm = scipy.sparse.rand(m, 10, density=0.005).tocsc() * 10
data_em = scipy.sparse.rand(n, 10, density=0.005).tocsc() * 10
M.update_weight(data_dm, data_em)

[!NOTE]

If sparse=True, the inputs data_dm and data_em are required to be of type scipy.sparse.csc_matrix.
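The target of this operation is the matrix $\overline{\mathbf{A}} = \mathbf{A} + \mathbf{D}_m \mathbf{E}_m^{\top}$ itself; a small dense NumPy sketch of that rank-$s$ update, independent of IncSVD (A, D, E are illustrative stand-ins for the matrices above):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 200, 100, 10
A = rng.random((m, n))
D = rng.random((m, s))  # plays the role of D_m (data_dm above)
E = rng.random((n, s))  # plays the role of E_m (data_em above)

# The low-rank update: A_bar differs from A by a matrix of rank at most s,
# which is what lets the SVD be corrected cheaply instead of recomputed.
A_bar = A + D @ E.T
assert np.linalg.matrix_rank(A_bar - A) <= s
```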

Get the Result

Query the current SVD result.

import numpy as np

''' Get the result. '''
Uk, Sigmak, Vk = M.Uk, M.Sigmak, M.Vk

''' Get a single row of the singular vectors. '''
i = 0
Uk0, Vk0 = M.Uki(i), M.Vki(i)

''' A rank-k approximation of the data matrix can be obtained from the factors. '''
Ak = Uk @ np.diag(Sigmak) @ Vk.T
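One way to sanity-check such factors is the Eckart-Young theorem: the spectral-norm error of the best rank-k approximation equals the (k+1)-th singular value. A self-contained NumPy sketch of that check, using a dense SVD rather than IncSVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((80, 60))
k = 10

# Full SVD, then keep the k leading singular triplets.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
Ak = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]  # best rank-k approximation

# Eckart-Young: the spectral-norm error equals sigma_{k+1} (S[k], 0-indexed).
err = np.linalg.norm(A - Ak, 2)
assert np.isclose(err, S[k])
```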

Contact Us

If you have any questions about this code repository, feel free to raise an issue or email denghaoran@zju.edu.cn.

Citations

If you find our work useful, please consider citing the following paper:

@inproceedings{deng2024incsvd,
title={Fast Updating Truncated {SVD} for Representation Learning with Sparse Matrices},
author={Haoran Deng and Yang Yang and Jiahe Li and Cheng Chen and Weihao Jiang and Shiliang Pu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=CX2RgsS29V}
}
