
A Python Package for Advanced Tensor Learning Methods

Project description

TensorLearn

TensorLearn is a Python library distributed on PyPI that implements tensor learning methods.

This project is under development; however, the available methods are final and functional. The only requirement is NumPy.

Installation

Use the package manager pip to install tensorlearn in Python.

pip install tensorlearn

Methods

Decomposition Methods

Tensor Operations for Tensor-Train

Tensor Operations for CANDECOMP/PARAFAC (CP)

Tensor Operations

Matrix Operations


auto_rank_tt

tensorlearn.auto_rank_tt(tensor, epsilon)

This implementation of tensor-train decomposition determines the ranks automatically based on a given error bound, following Oseledets (2011). The user therefore does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition. For more information and details, please see the page tensor-train decomposition.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • epsilon < float >: the upper error bound of the decomposition

Return

  • TT factors < list of numpy arrays >: the factors (TT cores) of the decomposition. The length of the list equals the order (number of dimensions) of the given tensor.

Example
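
A minimal sketch, assuming a randomly generated 4-way tensor; the shape and the error bound of 0.01 are illustrative choices, not library defaults.

import numpy as np
import tensorlearn as tl

# a random 4-way tensor (shape chosen for illustration)
tensor = np.random.rand(5, 6, 7, 8)

# TT decomposition with an upper error bound (epsilon) of 0.01
factors = tl.auto_rank_tt(tensor, 0.01)

# one TT core per dimension of the original tensor
print(len(factors))               # 4
print([f.shape for f in factors])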


cp_als_rand_init

tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)

This is an implementation of CANDECOMP/PARAFAC (CP) decomposition using the alternating least squares (ALS) algorithm with random initialization of the factors.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • rank < int >: the rank of the CP decomposition, i.e., the number of rank-one components

  • iteration < int >: the number of iterations of the ALS algorithm

  • random_seed < int >: the seed of the random number generator used for the random initialization of the factor matrices

Return

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Example
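
A minimal sketch, assuming a random 3-way tensor; the rank, iteration count, and seed are illustrative values.

import numpy as np
import tensorlearn as tl

# a random 3-way tensor
tensor = np.random.rand(4, 5, 6)

# CP-ALS with rank 3, 100 iterations, and a fixed seed for reproducibility
weights, factors = tl.cp_als_rand_init(tensor, 3, 100, random_seed=42)

print(weights.shape)               # expected: (3,)
print([f.shape for f in factors])  # expected: [(4, 3), (5, 3), (6, 3)]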


tt_to_tensor

tensorlearn.tt_to_tensor(factors)

Returns the full tensor given the TT factors

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • full tensor < numpy array >

Example
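
A minimal sketch, reusing auto_rank_tt to obtain TT factors and then reconstructing the tensor; the shape and error bound are illustrative.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(5, 6, 7)
factors = tl.auto_rank_tt(tensor, 0.01)

# reconstruct the full tensor from its TT factors
approx = tl.tt_to_tensor(factors)

# relative reconstruction error; it should stay within the epsilon bound
rel_error = np.linalg.norm(tensor - approx) / np.linalg.norm(tensor)
print(approx.shape, rel_error)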


tt_compression_ratio

tensorlearn.tt_compression_ratio(factors)

Returns the data compression ratio of the tensor-train decomposition

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • Compression ratio < float >

Example
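
A minimal sketch; the tensor shape and error bound are illustrative, and the printed value depends on the ranks chosen by auto_rank_tt.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(10, 10, 10, 10)
factors = tl.auto_rank_tt(tensor, 0.05)

# data compression ratio achieved by the TT representation
print(tl.tt_compression_ratio(factors))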


cp_to_tensor

Returns the full tensor given the CP factor matrices and weights

tensorlearn.cp_to_tensor(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • full tensor < array >

Example
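
A minimal sketch, reusing cp_als_rand_init to obtain the weights and factor matrices; the rank and iteration count are illustrative.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(4, 5, 6)
weights, factors = tl.cp_als_rand_init(tensor, 3, 100, random_seed=1)

# rebuild the full tensor from the CP weights and factor matrices
approx = tl.cp_to_tensor(weights, factors)

print(approx.shape)  # expected: (4, 5, 6)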


cp_compression_ratio

Returns the data compression ratio of the CP decomposition

tensorlearn.cp_compression_ratio(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • Compression ratio < float >

Example
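
A minimal sketch; the tensor shape, rank, and iteration count are illustrative.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(20, 20, 20)
weights, factors = tl.cp_als_rand_init(tensor, 5, 200, random_seed=0)

# data compression ratio achieved by the CP representation
print(tl.cp_compression_ratio(weights, factors))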


tensor_resize

tensorlearn.tensor_resize(tensor, new_shape)

This method reshapes the given tensor to a new shape. The new size (total number of elements) must be greater than or equal to the original size. If the new shape results in a larger tensor, the additional elements are filled with zeros. This works similarly to numpy.ndarray.resize().

Arguments

  • tensor < array >: the given tensor

  • new_shape < tuple >: new shape

Return

  • tensor < array >: tensor with new given shape
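
Example

A minimal sketch; the shapes are illustrative, and the extra elements introduced by the larger shape are expected to be zero-filled, as described above.

import numpy as np
import tensorlearn as tl

tensor = np.arange(24).reshape(2, 3, 4)

# resize to a larger shape; the added elements are filled with zeros
resized = tl.tensor_resize(tensor, (2, 3, 5))

print(resized.shape)  # expected: (2, 3, 5)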

unfold

tensorlearn.unfold(tensor, n)

Unfolds the given tensor along dimension n (mode-n unfolding).

Arguments

  • tensor < array >: tensor to be unfolded

  • n < int >: dimension based on which the tensor is unfolded

Return

  • matrix < array >: unfolded tensor with respect to dimension n
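
Example

A minimal sketch; the tensor shape and the unfolding dimension are illustrative, and the exact row/column layout of the result follows the library's unfolding convention.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(3, 4, 5)

# unfold along dimension 1
matrix = tl.unfold(tensor, 1)

print(matrix.shape)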

tensor_frobenius_norm

tensorlearn.tensor_frobenius_norm(tensor)

Calculates the Frobenius norm of the given tensor.

Arguments

  • tensor < array >: the given tensor

Return

  • Frobenius norm < float >

Example
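
A minimal sketch; the result should match the Euclidean norm of the flattened tensor computed with NumPy.

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(3, 4, 5)

norm = tl.tensor_frobenius_norm(tensor)

# compare against NumPy's norm of the flattened tensor
print(norm, np.linalg.norm(tensor))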


error_truncated_svd

tensorlearn.error_truncated_svd(x, error)

This method computes a compact SVD and returns the sigma (error)-truncated SVD of a given matrix. It is implemented using numpy.linalg.svd with full_matrices=False and is used in the TT-SVD algorithm in auto_rank_tt.

Arguments

  • x < 2D array >: the given matrix to be decomposed

  • error < float >: the given error in the range [0,1]

Return

  • r, u, s, vh < int, numpy array, numpy array, numpy array >
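
Example

A minimal sketch; the matrix size and error value are illustrative, and the returned rank r depends on the singular-value spectrum of the input.

import numpy as np
import tensorlearn as tl

x = np.random.rand(50, 30)

# error-truncated compact SVD with an error of 0.01
r, u, s, vh = tl.error_truncated_svd(x, 0.01)

print(r, u.shape, s.shape, vh.shape)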

column_wise_kronecker

tensorlearn.column_wise_kronecker(a, b)

Returns the column-wise Kronecker product (also known as the Khatri-Rao product) of two given matrices.

Arguments

  • a,b < 2D array >: the given matrices

Return

  • column wise Kronecker product < array >
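
Example

A minimal sketch; the matrices are illustrative and must share the same number of columns.

import numpy as np
import tensorlearn as tl

a = np.random.rand(4, 3)
b = np.random.rand(5, 3)

# Khatri-Rao (column-wise Kronecker) product of a and b
c = tl.column_wise_kronecker(a, b)

print(c.shape)  # expected: (20, 3)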

Project details


Download files

Download the file for your platform.

Source Distribution

tensorlearn-1.1.2.tar.gz (9.8 kB)

Uploaded Source

Built Distribution

tensorlearn-1.1.2-py3-none-any.whl (11.6 kB)

Uploaded Python 3

File details

Details for the file tensorlearn-1.1.2.tar.gz.

File metadata

  • Download URL: tensorlearn-1.1.2.tar.gz
  • Upload date:
  • Size: 9.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for tensorlearn-1.1.2.tar.gz
Algorithm Hash digest
SHA256 601780fe90161b0d6150916bd4b1e860501bf969da217c9dc99cd9954164589d
MD5 c9b144b1a30f90a098dbb3ba28c6b186
BLAKE2b-256 8a3f6b52a262836b7612f421021290442ef33f91289f41d42a79ba467f6182c8


File details

Details for the file tensorlearn-1.1.2-py3-none-any.whl.

File metadata

  • Download URL: tensorlearn-1.1.2-py3-none-any.whl
  • Upload date:
  • Size: 11.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.8

File hashes

Hashes for tensorlearn-1.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 64ea5665e4f9f85e084f9c1ecb7f273ef37fbdcbf1aa3e5dea5eeed780c5fcfc
MD5 00fc45ff45cdb7d35c4b4b49a6231c9d
BLAKE2b-256 f4c46bb0418d98eac6464546c3f481cf16ff6683676cf60d86d34baa69e8476a

