
tensor-ml


A Python library for tensor analysis, multilinear algebra, tensor regression, and multidimensional sparse signal representations.

Implements the T-LARS algorithm from:

Ishan Wickramasingha, Ahmed Elrewainy, Michael Sobhy, Sherif S. Sherif; Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals. Neural Computation 2020; 32 (9): 1697–1732. doi: 10.1162/neco_a_01304

Features

  • Backend-agnostic — unified API for NumPy and PyTorch (auto-detects input type)
  • Tensor products — Kronecker, Khatri-Rao, Hadamard, full multilinear product
  • T-LARS — Tensor Least Angle Regression & Selection for sparse tensor recovery
  • scikit-learn-style interface — fit / predict / score / get_params / set_params
  • Device management — .to('cuda'), .cpu(), .cuda() for the PyTorch backend
  • Validated configuration — Pydantic-based parameter validation with clear error messages
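
The "validated configuration" idea amounts to checking parameters at construction time and failing with a clear message. A minimal stdlib sketch of the pattern (the library itself uses Pydantic; the class and constraint below are illustrative, not the actual TLARSConfig):

```python
from dataclasses import dataclass

@dataclass
class TLARSParams:
    """Sketch of validated T-LARS parameters (hypothetical, stdlib-only)."""
    tolerance: float = 0.01
    l0_mode: bool = True

    def __post_init__(self):
        # Reject obviously invalid values with an explicit message
        if self.tolerance <= 0:
            raise ValueError(f"tolerance must be positive, got {self.tolerance}")

params = TLARSParams(tolerance=0.05)   # OK
# TLARSParams(tolerance=-1.0)          # would raise ValueError
```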

Installation

pip install tensor-ml

With optional PyTorch support:

pip install "tensor-ml[torch]"

Development install from source (using uv):

git clone https://github.com/imaduranga/tensor-ml.git
cd tensor-ml
uv sync --all-extras       # creates .venv and installs all deps
uv run pytest              # run the test suite

Or with pip:

pip install -e ".[torch]"

Quick Start

Tensor Products

import numpy as np
from tensor_ml import TensorProducts

A = np.array([[1, 2], [3, 4]])
B = np.eye(2)

# Kronecker product
K = TensorProducts.kronecker_product([A, B])

# Full multilinear product: X ×₁ A ×₂ B
X = np.random.randn(2, 2)
Y = TensorProducts.full_multilinear_product(X, [A, B])
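
For a matrix, the full multilinear product X ×₁ A ×₂ B is A X Bᵀ, and its row-major vectorization equals (A ⊗ B) vec(X). This identity — the reason Kronecker products and multilinear products appear together in the API — can be checked with plain NumPy (no tensor-ml calls):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
X = rng.standard_normal((2, 3))

# Mode-1 then mode-2 product of a matrix: X ×₁ A ×₂ B == A @ X @ B.T
Y = A @ X @ B.T

# Row-major vectorization identity: vec(A X Bᵀ) = (A ⊗ B) vec(X)
assert np.allclose(Y.ravel(), np.kron(A, B) @ X.ravel())
```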

T-LARS: Sparse Tensor Recovery

import numpy as np
from tensor_ml import TLARS

# Per-mode dictionaries
D1 = np.random.randn(8, 16)
D2 = np.random.randn(8, 16)

# Target tensor
Y = np.random.randn(8, 8)

# Fit
model = TLARS(tolerance=0.01, l0_mode=True)
model.fit(factor_matrices=[D1, D2], Y=Y)

# Predict & score
Y_hat = model.predict([D1, D2])
r2 = model.score([D1, D2], Y)
print(f"R² = {r2:.4f}, iterations = {model.n_iter_}")

PyTorch Backend

import torch
from tensor_ml import TensorProducts

A = torch.randn(3, 3)
B = torch.randn(3, 3)
K = TensorProducts.kronecker_product([A, B])  # auto-uses TorchTensorProducts
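
The auto-detection above typically boils down to dispatching on the input's type. A minimal sketch of the idea (illustrative; not the library's actual backend-inference code):

```python
import numpy as np

def infer_backend(x):
    """Return a backend name from the input's top-level module (sketch)."""
    module = type(x).__module__.split(".")[0]
    if module == "numpy":
        return "numpy"
    if module == "torch":
        return "torch"
    raise TypeError(f"Unsupported array type: {type(x)!r}")

print(infer_backend(np.zeros(3)))  # numpy
```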

Documentation

  • Quickstart Tutorial — Interactive notebook walkthrough
  • T-LARS Image Reconstruction — Visual demo with DCT dictionaries
  • API Reference — Complete class and method reference
  • User Guide — Concepts, architecture, and extension guide

Architecture

tensor_ml/
├── enums.py              # BackendType enum
├── exceptions.py         # Custom exception hierarchy
├── utils.py              # Backend inference
├── tensor_ops/
│   ├── tensor_ops.py     # TensorOps ABC + NumpyOps + TorchOps + Factory
│   ├── tensor_products_base.py   # TensorProductsBase ABC
│   ├── tensor_products_numpy.py  # NumPy backend
│   ├── tensor_products_torch.py  # PyTorch backend
│   └── tensor_products.py        # Static facade + Factory
└── tensor_models/
    ├── base.py            # BaseTensorModel ABC
    └── multilinear/
        ├── multilinear_model.py  # MultilinearModel base
        └── tlars.py              # TLARS algorithm + TLARSConfig

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/my-feature)
  3. Run the test suite (uv run pytest)
  4. Submit a pull request against master

Releases are fully automated — merging a PR that bumps the version in pyproject.toml and src/tensor_ml/__init__.py publishes to PyPI, creates a git tag, and opens a version-bump PR automatically.

Citation

If you use tensor-ml in your research, please cite the underlying T-LARS algorithm:

@article{wickramasingha2020tlars,
  title   = {Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals},
  author  = {Wickramasingha, Ishan and Elrewainy, Ahmed and Sobhy, Michael and Sherif, Sherif S.},
  journal = {Neural Computation},
  year    = {2020},
  volume  = {32},
  number  = {9},
  pages   = {1697--1732},
  doi     = {10.1162/neco_a_01304}
}

You may also cite the software itself:

@software{tensor_ml,
  title   = {tensor-ml: Tensor Machine Learning Library},
  author  = {Wickramasingha, Ishan},
  url     = {https://github.com/imaduranga/tensor-ml},
  license = {MIT}
}

License

MIT — see LICENSE for details.

