
A Python Package for Advanced Tensor Learning Methods

Project description

TensorLearn

TensorLearn is a Python library, distributed on PyPI, that implements tensor learning methods.

This project is under development; however, the methods already available are final and functional. The only requirement is NumPy.

Installation

Use the package manager pip to install tensorlearn:

pip install tensorlearn

Methods

Decomposition Methods

Tensor Operations for Tensor-Train

Tensor Operations for CANDECOMP/PARAFAC (CP)

Tensor Operations

Matrix Operations


auto_rank_tt

tensorlearn.auto_rank_tt(tensor, epsilon)

This implementation of tensor-train decomposition determines the ranks automatically based on a given error bound, following Oseledets (2011). Therefore, the user does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition. For more information and details, please see the tensor-train decomposition page.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • epsilon < float >: the given upper error bound on the decomposition error

Return

  • TT factors < list of arrays >: the list contains the NumPy arrays of the factors (TT cores) of the TT decomposition. The length of the list equals the number of dimensions of the decomposed tensor.

Example
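A minimal usage sketch (the tensor shape and the error bound 0.05 below are illustrative values, not part of the API):

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(4, 5, 6, 7)        # example 4-dimensional tensor
factors = tl.auto_rank_tt(tensor, 0.05)    # TT factors for a 5% error bound
print(len(factors))                        # one TT core per tensor dimension, i.e. 4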


cp_als_rand_init

tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)

This is an implementation of CANDECOMP/PARAFAC (CP) decomposition using the alternating least squares (ALS) algorithm with random initialization of the factors.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • rank < int >: number of ranks

  • iteration < int >: the number of iterations of the ALS algorithm

  • random_seed < int >: the seed of random number generator for random initialization of the factor matrices

Return

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Example
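A minimal usage sketch, assuming the function returns the weights and factors in the order listed above; the tensor shape, rank, and iteration count are illustrative:

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(4, 5, 6)
weights, factors = tl.cp_als_rand_init(tensor, rank=3, iteration=100, random_seed=0)
print(weights.shape)                 # normalization weights, one per rank
print([f.shape for f in factors])    # one factor matrix per tensor dimension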


tt_to_tensor

tensorlearn.tt_to_tensor(factors)

Returns the full tensor given the TT factors

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • full tensor < numpy array >

Example
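A minimal sketch that reconstructs the full tensor from the TT factors produced by auto_rank_tt (the shape and error bound are illustrative):

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(4, 5, 6)
factors = tl.auto_rank_tt(tensor, 0.01)
reconstructed = tl.tt_to_tensor(factors)
print(reconstructed.shape)                    # same shape as the original tensor
print(np.linalg.norm(tensor - reconstructed)) # small reconstruction error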


tt_compression_ratio

tensorlearn.tt_compression_ratio(factors)

Returns the data compression ratio of the tensor-train decomposition.

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • Compression ratio < float >

Example
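A minimal sketch (the tensor shape and error bound are illustrative):

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(10, 10, 10, 10)
factors = tl.auto_rank_tt(tensor, 0.1)
print(tl.tt_compression_ratio(factors))   # compression ratio achieved by the TT factors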


cp_to_tensor

Returns the full tensor given the CP factor matrices and weights

tensorlearn.cp_to_tensor(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • full tensor < array >

Example
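A minimal sketch that rebuilds the full tensor from a CP decomposition (the shape, rank, and iteration count are illustrative):

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(4, 5, 6)
weights, factors = tl.cp_als_rand_init(tensor, rank=3, iteration=100, random_seed=0)
approx = tl.cp_to_tensor(weights, factors)
print(approx.shape)                     # same shape as the original tensor
print(np.linalg.norm(tensor - approx))  # CP approximation error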


cp_compression_ratio

Returns the data compression ratio of the CP decomposition.

tensorlearn.cp_compression_ratio(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • Compression ratio < float >

Example
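A minimal sketch, reusing the weights and factors of a CP decomposition (the shape, rank, and iteration count are illustrative):

import numpy as np
import tensorlearn as tl

tensor = np.random.rand(10, 10, 10)
weights, factors = tl.cp_als_rand_init(tensor, rank=2, iteration=50, random_seed=0)
print(tl.cp_compression_ratio(weights, factors))   # compression ratio achieved by the CP factors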


tensor_resize

tensorlearn.tensor_resize(tensor, new_shape)

This method reshapes the given tensor to a new shape. The new size must be greater than or equal to the original size. If the new shape results in a tensor with more elements, the extra elements are filled with zeros. This works similarly to numpy.ndarray.resize().

Arguments

  • tensor < array >: the given tensor

  • new_shape < tuple >: new shape

Return

  • tensor < array >: tensor with new given shape
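Example

A minimal sketch (the shapes are illustrative); the extra elements are filled with zeros, as described above:

import numpy as np
import tensorlearn as tl

tensor = np.arange(6).reshape(2, 3)
resized = tl.tensor_resize(tensor, (2, 4))   # larger size, so the tensor is zero-padded
print(resized.shape)                         # (2, 4)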

unfold

tensorlearn.unfold(tensor, n)

Unfolds the tensor with respect to dimension n.

Arguments

  • tensor < array >: tensor to be unfolded

  • n < int >: dimension based on which the tensor is unfolded

Return

  • matrix < array >: unfolded tensor with respect to dimension n
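Example

A minimal sketch, assuming the standard mode-n unfolding in which dimension n indexes the rows (the tensor shape is illustrative):

import numpy as np
import tensorlearn as tl

tensor = np.arange(24).reshape(2, 3, 4)
m = tl.unfold(tensor, 0)   # unfold with respect to dimension 0
print(m.shape)             # expected (2, 12) for a mode-0 unfolding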

tensor_frobenius_norm

tensorlearn.tensor_frobenius_norm(tensor)

Calculates the Frobenius norm of the given tensor.

Arguments

  • tensor < array >: the given tensor

Return

  • Frobenius norm < float >

Example
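A minimal sketch (the all-ones tensor is just a convenient test case):

import numpy as np
import tensorlearn as tl

tensor = np.ones((3, 3, 3))
print(tl.tensor_frobenius_norm(tensor))   # sqrt(27) ≈ 5.196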


error_truncated_svd

tensorlearn.error_truncated_svd(x, error)

This method computes a compact SVD and returns the sigma (error)-truncated SVD of a given matrix. It is implemented using numpy.linalg.svd with full_matrices=False, and it is used in the TT-SVD algorithm in auto_rank_tt.

Arguments

  • x < 2D array >: the given matrix to be decomposed

  • error < float >: the given error in the range [0,1]

Return

  • r, u, s, vh < int, numpy array, numpy array, numpy array >: the truncation rank r and the truncated SVD factors u, s, and vh
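
Example

A minimal sketch (the matrix shape and the error value 0.1 are illustrative):

import numpy as np
import tensorlearn as tl

x = np.random.rand(8, 6)
r, u, s, vh = tl.error_truncated_svd(x, 0.1)
print(r)                            # truncation rank
print(u.shape, s.shape, vh.shape)   # truncated SVD factors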

column_wise_kronecker

tensorlearn.column_wise_kronecker(a, b)

Returns the column-wise Kronecker product (sometimes known as the Khatri-Rao product) of two given matrices.

Arguments

  • a, b < 2D array >: the given matrices (they must have the same number of columns)

Return

  • column-wise Kronecker product < array >
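
Example

A minimal sketch; the two matrices share the same number of columns, and the result stacks the column-wise Kronecker products:

import numpy as np
import tensorlearn as tl

a = np.random.rand(4, 3)
b = np.random.rand(5, 3)
c = tl.column_wise_kronecker(a, b)
print(c.shape)   # (20, 3): row counts multiply (4 * 5), column count stays at 3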
