
A Python Package for Advanced Tensor Learning Methods

Project description

TensorLearn

TensorLearn is a Python library distributed on PyPI that implements tensor learning methods.

This project is under active development; however, the available methods are stable and functional. The only requirement is NumPy.

Installation

Use the package manager pip to install tensorlearn.

pip install tensorlearn

Methods

Decomposition Methods

Tensor Operations for Tensor-Train

Tensor Operations for CANDECOMP/PARAFAC (CP)

Tensor Operations

Matrix Operations


auto_rank_tt

tensorlearn.auto_rank_tt(tensor, epsilon)

This implementation of tensor-train (TT) decomposition determines the TT ranks automatically from a given error bound, following Oseledets (2011). The user therefore does not need to specify the ranks; instead, they provide an upper error bound (epsilon) that bounds the error of the decomposition. For more details, see the tensor-train decomposition page.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • epsilon < float >: the given upper error bound of the decomposition

Return

  • TT factors < list of arrays >: a list of NumPy arrays containing the TT factors (TT cores). The length of the list equals the order (number of dimensions) of the given tensor.

Example
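
A minimal sketch, assuming a random 4-way tensor and an illustrative error bound of 0.01:

import numpy as np
import tensorlearn

# illustrative random 4-way tensor
tensor = np.random.rand(5, 6, 7, 8)

# decompose with an assumed upper error bound of 0.01
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.01)

# one TT core per tensor dimension
print(len(tt_factors))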


cp_als_rand_init

tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)

This is an implementation of CANDECOMP/PARAFAC (CP) decomposition using the alternating least squares (ALS) algorithm with random initialization of the factor matrices.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • rank < int >: the rank of the CP decomposition (number of rank-one components)

  • iteration < int >: the number of iterations of the ALS algorithm

  • random_seed < int >: the seed of random number generator for random initialization of the factor matrices

Return

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Example
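
A minimal sketch, assuming an illustrative random 3-way tensor, rank 4, and 100 ALS iterations, and that the return order matches the listing above (weights, then factors):

import numpy as np
import tensorlearn

# illustrative random 3-way tensor
tensor = np.random.rand(10, 10, 10)

# rank, iteration count, and seed are illustrative choices
weights, factors = tensorlearn.cp_als_rand_init(tensor, 4, 100, random_seed=42)

print(weights.shape)               # one weight per rank-one component
print([f.shape for f in factors])  # one factor matrix per tensor dimension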


tt_to_tensor

tensorlearn.tt_to_tensor(factors)

Returns the full tensor given the TT factors

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • full tensor < numpy array >

Example
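
A minimal sketch that reconstructs the full tensor from TT factors produced by auto_rank_tt (shape and error bound are illustrative):

import numpy as np
import tensorlearn

tensor = np.random.rand(5, 6, 7)
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.01)

# reconstruct the full tensor from the TT cores
reconstructed = tensorlearn.tt_to_tensor(tt_factors)

# relative reconstruction error as a sanity check
print(np.linalg.norm(reconstructed - tensor) / np.linalg.norm(tensor))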


tt_compression_ratio

tensorlearn.tt_compression_ratio(factors)

Returns the data compression ratio for tensor-train decomposition.

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • Compression ratio < float >

Example
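
A minimal sketch computing the compression ratio of a TT decomposition (tensor shape and error bound are illustrative):

import numpy as np
import tensorlearn

tensor = np.random.rand(8, 8, 8, 8)
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.05)

# data compression ratio of the TT representation
print(tensorlearn.tt_compression_ratio(tt_factors))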


cp_to_tensor

Returns the full tensor given the CP factor matrices and weights

tensorlearn.cp_to_tensor(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • full tensor < array >

Example
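
A minimal sketch that rebuilds the full tensor from CP weights and factors (tensor shape, rank, and iteration count are illustrative):

import numpy as np
import tensorlearn

tensor = np.random.rand(6, 7, 8)
weights, factors = tensorlearn.cp_als_rand_init(tensor, 3, 50)

# rebuild the full tensor from the CP factorization
approx = tensorlearn.cp_to_tensor(weights, factors)

# relative approximation error as a sanity check
print(np.linalg.norm(approx - tensor) / np.linalg.norm(tensor))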


cp_compression_ratio

Returns the data compression ratio for CP decomposition.

tensorlearn.cp_compression_ratio(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • Compression ratio < float >

Example
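
A minimal sketch (tensor shape, rank, and iteration count are illustrative):

import numpy as np
import tensorlearn

tensor = np.random.rand(20, 20, 20)
weights, factors = tensorlearn.cp_als_rand_init(tensor, 5, 100)

# data compression ratio of the CP representation
print(tensorlearn.cp_compression_ratio(weights, factors))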


tensor_resize

tensorlearn.tensor_resize(tensor, new_shape)

This method reshapes the given tensor to a new shape. The total number of elements of the new shape must be greater than or equal to that of the original tensor. If the new shape has more elements, the extra entries are filled with zeros. This works similarly to numpy.ndarray.resize().

Arguments

  • tensor < array >: the given tensor

  • new_shape < tuple >: new shape

Return

  • tensor < array >: tensor with new given shape
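
Example

A minimal sketch, assuming a small tensor resized to a shape with more elements (the extra entries are zero-filled):

import numpy as np
import tensorlearn

tensor = np.arange(6).reshape(2, 3)

# resize to a shape with more elements; the added entries are filled with zeros
resized = tensorlearn.tensor_resize(tensor, (2, 4))
print(resized.shape)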

unfold

tensorlearn.unfold(tensor, n)

Unfolds the given tensor with respect to dimension n (mode-n unfolding).

Arguments

  • tensor < array >: tensor to be unfolded

  • n < int >: dimension based on which the tensor is unfolded

Return

  • matrix < array >: unfolded tensor with respect to dimension n
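
Example

A minimal sketch of unfolding a 3-way tensor along dimension 0 (the shape is illustrative):

import numpy as np
import tensorlearn

tensor = np.random.rand(3, 4, 5)

# mode-0 unfolding of the tensor into a matrix
matrix = tensorlearn.unfold(tensor, 0)
print(matrix.shape)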

tensor_frobenius_norm

tensorlearn.tensor_frobenius_norm(tensor)

Calculates the Frobenius norm of the given tensor.

Arguments

  • tensor < array >: the given tensor

Return

  • frobenius norm < float >

Example
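
A minimal sketch; NumPy's norm of the flattened tensor is used as a reference check:

import numpy as np
import tensorlearn

tensor = np.random.rand(4, 4, 4)

print(tensorlearn.tensor_frobenius_norm(tensor))  # Frobenius norm via tensorlearn
print(np.linalg.norm(tensor.ravel()))             # reference value via NumPy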


error_truncated_svd

tensorlearn.error_truncated_svd(x, error)

This method computes a compact SVD and returns the sigma (error)-truncated SVD of a given matrix. It is implemented using numpy.linalg.svd with full_matrices=False, and it is used in the TT-SVD algorithm inside auto_rank_tt.

Arguments

  • x < 2D array >: the given matrix to be decomposed

  • error < float >: the given error in the range [0,1]

Return

  • r, u, s, vh < int, numpy array, numpy array, numpy array >: the truncation rank r and the truncated SVD factors u, s, and vh
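
Example

A minimal sketch, assuming an illustrative 50 x 30 matrix and an error of 0.1, with the return order following the listing above:

import numpy as np
import tensorlearn

x = np.random.rand(50, 30)

# error-truncated SVD with an illustrative error of 0.1
r, u, s, vh = tensorlearn.error_truncated_svd(x, 0.1)

print(r)                           # truncation rank
print(u.shape, s.shape, vh.shape)  # truncated SVD factors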

column_wise_kronecker

tensorlearn.column_wise_kronecker(a, b)

Returns the column-wise Kronecker product (also known as the Khatri-Rao product) of two given matrices.

Arguments

  • a,b < 2D array >: the given matrices

Return

  • column wise Kronecker product < array >
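
Example

A minimal sketch; the two matrices share the same number of columns (shapes are illustrative):

import numpy as np
import tensorlearn

a = np.random.rand(4, 3)
b = np.random.rand(5, 3)

# column-wise Kronecker (Khatri-Rao) product of a and b
c = tensorlearn.column_wise_kronecker(a, b)
print(c.shape)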



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tensorlearn-1.1.1.tar.gz (9.8 kB)

Uploaded Source

Built Distribution

tensorlearn-1.1.1-py3-none-any.whl (11.6 kB)

Uploaded Python 3

File details

Details for the file tensorlearn-1.1.1.tar.gz.

File metadata

  • Download URL: tensorlearn-1.1.1.tar.gz
  • Upload date:
  • Size: 9.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for tensorlearn-1.1.1.tar.gz
Algorithm Hash digest
SHA256 03e08e1848a074252a157a4be7e30aa56e4270849153ab2940d4d9d3156a9362
MD5 c54738337a364a80d7aac8c03be15977
BLAKE2b-256 9e2b19115a9669947493d8b652e619177282b6282426e60d07627b49c4a5980c


File details

Details for the file tensorlearn-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: tensorlearn-1.1.1-py3-none-any.whl
  • Upload date:
  • Size: 11.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for tensorlearn-1.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 d1e08071fe631e313ae551b534320d7cb7349e1658160163870a05503b76f341
MD5 1a5e12a52d98714307725ded238e6959
BLAKE2b-256 b03546ed6b1e44613bfa5fe31a27f5dbd7bdf573fcdaef84f07109a5b07ce8d5

