Python Matrix Factorization Module

MatrixFact

Install

pip install matrix-fact

What is matrix-fact?

matrix-fact provides modules for constrained and unconstrained matrix factorization (and related) methods, for both sparse and dense matrices. The repository can be found at https://github.com/gentaiscool/matrix_fact. The code is based on https://github.com/ChrisSchinnerl/pymf3 and https://github.com/rikkhill/pymf, updated to work with current versions of its dependencies. It requires cvxopt, numpy, scipy, and torch. Support for a PyTorch-based SNMF was recently added.

Packages

The package includes:

  • Non-negative matrix factorization (NMF) [three different optimizations used]
  • Convex non-negative matrix factorization (CNMF)
  • Semi non-negative matrix factorization (SNMF)
  • Archetypal analysis (AA)
  • Simplex volume maximization (SiVM) [and SiVM for CUR, GSAT, ... ]
  • Convex-hull non-negative matrix factorization (CHNMF)
  • Binary matrix factorization (BNMF)
  • Singular value decomposition (SVD)
  • Principal component analysis (PCA)
  • K-means clustering (Kmeans)
  • C-means clustering (Cmeans)
  • CUR decomposition (CUR)
  • Compact matrix decomposition (CMD)
  • PyTorch SNMF

Usage

Given a dataset, most factorization methods try to minimize the Frobenius norm ||data - W*H|| by finding a suitable set of basis vectors W and coefficients H. The syntax for calling the various methods is quite similar: usually one submits the desired number of basis vectors and the maximum number of iterations. For example, applying NMF to a dataset data, aiming at 2 basis vectors within 10 iterations, works as follows:

>>> import matrix_fact
>>> import numpy as np
>>> data = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
>>> nmf_mdl = matrix_fact.NMF(data, num_bases=2, niter=10)
>>> nmf_mdl.initialization()
>>> nmf_mdl.factorize()
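
As a quick check, the quality of the approximation can be measured directly via the Frobenius norm mentioned above (a minimal sketch in plain numpy, not a matrix_fact API):

>>> # Frobenius norm of the residual data - W*H
>>> np.linalg.norm(data - np.dot(nmf_mdl.W, nmf_mdl.H))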

The basis vectors are now stored in nmf_mdl.W, the coefficients in nmf_mdl.H. To compute coefficients for an existing set of basis vectors, simply copy W to nmf_mdl.W and set compW to False:

>>> data = np.array([[1.5], [1.2]])
>>> W = np.array([[1.0, 0.0], [0.0, 1.0]])
>>> nmf_mdl = matrix_fact.NMF(data, num_bases=2, niter=1, compW=False)
>>> nmf_mdl.initialization()
>>> nmf_mdl.W = W
>>> nmf_mdl.factorize()
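
Since data ≈ W*H, the fitted coefficients in nmf_mdl.H can then be used to reconstruct the new sample from the fixed basis (a small sketch reusing the variables above):

>>> # approximate reconstruction of data from the fixed basis W
>>> np.dot(W, nmf_mdl.H)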

By changing matrix_fact.NMF to e.g. matrix_fact.AA or matrix_fact.CNMF, Archetypal Analysis or Convex-NMF can be applied. Some methods accept additional parameters; make sure to have a look at the corresponding documentation, e.g. >>> help(matrix_fact.AA). For example, CUR, CMD, and SVD are handled slightly differently, as they factorize into three submatrices, which requires appropriate arguments for row and column sampling; see the sketch below.
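
For instance, a CUR call might look like the following. The rrank and crank arguments (number of sampled rows and columns) are taken from the pymf code this package is based on and are an assumption here, so verify them via >>> help(matrix_fact.CUR):

>>> # rrank/crank are assumed parameter names; check help(matrix_fact.CUR)
>>> data = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
>>> cur_mdl = matrix_fact.CUR(data, rrank=2, crank=2)
>>> cur_mdl.factorize()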

For PyTorch-SNMF

>>> import torch
>>> import matrix_fact
>>> data = torch.FloatTensor([[1.5], [1.2]])
>>> nmf_mdl = matrix_fact.NMF(data, num_bases=2)
>>> nmf_mdl.factorize(niter=1000)
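
The factors are again exposed as W and H after factorization; assuming they are returned as torch tensors on this backend, the residual can be checked the same way (a minimal sketch):

>>> # assumes W and H are torch tensors here
>>> torch.norm(data - nmf_mdl.W @ nmf_mdl.H)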

Very large datasets

For handling larger datasets, matrix_fact supports hdf5 via h5py. Usage is straightforward, as h5py allows mapping large numpy matrices to disk. Thus, instead of passing data as an np.array, you can simply pass the corresponding hdf5 dataset. The following example shows how to apply matrix_fact to a random matrix that is stored entirely on disk. In this example the data matrix does not have to fit into memory, but the resulting low-rank factors W and H do.

>>> import h5py
>>> import numpy as np
>>> import matrix_fact
>>>
>>> file = h5py.File('myfile.hdf5', 'w')
>>> file['dataset'] = np.random.random((100,1000))
>>> sivm_mdl = matrix_fact.SIVM(file['dataset'], num_bases=10)
>>> sivm_mdl.factorize()
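
The resulting factors are held in memory as ordinary arrays; if you want to keep them, they can be written back into the same hdf5 file (a small sketch, assuming sivm_mdl.W and sivm_mdl.H are numpy arrays at this point; the key names are arbitrary):

>>> # persist the in-memory factors next to the dataset
>>> file['W'] = sivm_mdl.W
>>> file['H'] = sivm_mdl.H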

If the low-rank matrices W and H also do not fit into memory, they can be initialized as h5py datasets themselves.

>>> import h5py
>>> import numpy as np
>>> import matrix_fact
>>>
>>> file = h5py.File('myfile.hdf5', 'w')
>>> file['dataset'] = np.random.random((100,1000))
>>> file['W'] = np.random.random((100,10))
>>> file['H'] = np.random.random((10,1000))
>>> sivm_mdl = matrix_fact.SIVM(file['dataset'], num_bases=10)
>>> sivm_mdl.W = file['W']
>>> sivm_mdl.H = file['H']
>>> sivm_mdl.factorize()
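
Since W and H are now hdf5 datasets themselves, the factorization writes its results to disk; flushing and closing the file ensures everything is persisted (plain h5py calls):

>>> file.flush()
>>> file.close()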

Please note that currently not all methods work well with hdf5. While they all accept hdf5 input matrices, some lead to very high memory consumption during intermediate computation steps. This is difficult to avoid unless we switch to completely disk-based storage.
