TTGlow
A Tensor Train linear algebra package for efficient high-dimensional tensor operations in PyTorch.
Overview
TTGlow provides an efficient implementation of the Tensor Train (TT) decomposition (Oseledets, 2011; see References), a technique for representing and manipulating high-dimensional tensors with a reduced memory footprint and computational cost; the storage sketch after the following list makes the savings concrete. This package is particularly useful for:
- High-dimensional data analysis
- Quantum many-body physics simulations
- Machine learning with tensor methods
- Efficient storage and computation of large-scale tensors
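To quantify the memory claim: a full tensor stores the product of its mode sizes, while a TT representation stores one third-order core per mode. Below is a minimal sketch of the standard TT parameter count (plain arithmetic with illustrative numbers, not TTGlow API; boundary ranks are fixed to 1):
import math

dims = [10, 20, 15]   # mode sizes
ranks = [1, 3, 5, 1]  # TT-ranks, including the implicit boundary ranks of 1

full_entries = math.prod(dims)  # 10 * 20 * 15 = 3000
tt_entries = sum(r_left * n * r_right
                 for n, r_left, r_right in zip(dims, ranks, ranks[1:]))
# core shapes (1,10,3), (3,20,5), (5,15,1): 30 + 300 + 75 = 405 parameters

print(full_entries, tt_entries)  # 3000 vs. 405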
Installation
Using Pixi (Recommended)
# Clone the repository
git clone https://github.com/romanellerbrock/ttglow.git
cd ttglow
# Install dependencies with pixi
pixi install
# Activate the environment
pixi shell
Using pip
pip install -e .  # editable install from the cloned repository root
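Either way, a quick smoke test (using only names from the Quick Start below) confirms the install:
python -c "from ttglow import TensorTrain; print(TensorTrain.random([2, 3], [2]))"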
Quick Start
import torch
from ttglow import TensorTrain, dot, add, scale
# Create a random Tensor Train
dims = [10, 20, 15] # Tensor dimensions
ranks = [3, 5] # Interior TT-ranks (boundary ranks are 1)
tt = TensorTrain.random(dims, ranks)
# Convert to full tensor
full_tensor = tt.to_tensor()
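# Note: to_tensor() materializes every entry of the full tensor
# (here 10 * 20 * 15 = 3000), so only convert when dims are small.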
# Create a constant tensor in TT format (every entry equals 3.14)
tt_const = TensorTrain.ones([10, 20, 15], value=3.14)
# Dot product (inner product)
tt1 = TensorTrain.random([4, 5, 6], [2, 3])
tt2 = TensorTrain.random([4, 5, 6], [2, 3])
matrices = dot(tt1, tt2)
inner_product = matrices[-1].item() # Scalar result
norm_squared = dot(tt1, tt1)[-1].item() # Self inner product
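# dot returns a sequence of contraction matrices; the final entry is a
# 1x1 tensor holding the inner product, hence the .item() calls above.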
# Addition
tt_sum = add(tt1, tt2) # Returns new TT with ranks = r1 + r2
# Scalar multiplication
tt_scaled = scale(tt1, 2.5) # Multiply by scalar
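The operations above can be sanity-checked against dense tensors, since every result converts back via to_tensor(). A small consistency check using only the calls already shown:
# add and scale should agree with their dense counterparts
dense_sum = add(tt1, tt2).to_tensor()
assert torch.allclose(dense_sum, tt1.to_tensor() + tt2.to_tensor())

dense_scaled = scale(tt1, 2.5).to_tensor()
assert torch.allclose(dense_scaled, 2.5 * tt1.to_tensor())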
Features
- Efficient TT representation: Store high-dimensional tensors with minimal memory
- Core operations (the hadamard sketch after this list covers the one not shown in the Quick Start):
  - dot: Inner product of two TT tensors
  - hadamard: Element-wise multiplication
  - add: Summation of TT tensors
  - scale: Scalar multiplication
- Flexible construction: Create TT tensors from scratch or via decomposition
- PyTorch backend: Full GPU support and automatic differentiation
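hadamard is the only core operation not demonstrated in the Quick Start. A minimal sketch, assuming it takes two TT tensors with matching dimensions and returns their element-wise product in TT format (the exact signature may differ):
from ttglow import TensorTrain, hadamard

a = TensorTrain.random([4, 5, 6], [2, 3])
b = TensorTrain.random([4, 5, 6], [2, 3])

tt_prod = hadamard(a, b)  # element-wise product, still in TT format
# The dense result should match a.to_tensor() * b.to_tensor(); note that
# TT-ranks of a Hadamard product generally multiply (here up to [4, 9]).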
Development
Running Tests
pixi run test
Code Formatting
pixi run format
Linting
pixi run lint
Test Coverage
pixi run test-cov
Project Structure
ttglow/
├── src/ttglow/              # Main package code
│   ├── __init__.py
│   └── tensortrain.py       # TT implementation
├── tests/                   # Test suite
│   └── test_tensortrain.py
├── examples/                # Usage examples
├── pixi.toml                # Pixi configuration
└── pyproject.toml           # Python package metadata
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
References
- Oseledets, I. V. (2011). "Tensor-train decomposition". SIAM J. Sci. Comput., 33(5), 2295-2317.
Download files
File details
Details for the file ttglow-0.1.0.tar.gz.
File metadata
- Download URL: ttglow-0.1.0.tar.gz
- Upload date:
- Size: 63.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9181a6bd164d92adee60e14a57bc42d3d173da45d2c086729af3dcb6e08e1138 |
| MD5 | 25b5c5d3c8d813115c86b55163b7bae4 |
| BLAKE2b-256 | 638dacb8a43ed3aeb2aa0c3248f3f3acf3e52762eccf709648beffe9d0514393 |
File details
Details for the file ttglow-0.1.0-py3-none-any.whl.
File metadata
- Download URL: ttglow-0.1.0-py3-none-any.whl
- Upload date:
- Size: 40.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6797c8e3abadcc1f7ab80e2ea861f622a0c8e41d54a31e5c9827bf1f24f5141b |
| MD5 | ec7447303d91454569b30873df82729e |
| BLAKE2b-256 | bc2337eed1d429b3d23c3efe894ac9cb3986cb81b0f2c1e333d38497b5c88fa8 |