FoldedTensor: PyTorch extension for handling deeply nested sequences of variable length

foldedtensor is a PyTorch extension that provides efficient handling of tensors containing deeply nested sequences of variable size. It enables the flattening/unflattening (or unfolding/folding) of data dimensions based on an inner structure of sequence lengths. This library is particularly useful when working with data that can be split in different ways, as it lets you avoid committing to a single fixed representation.

Installation

The library can be installed with pip:

pip install foldedtensor

Features

  • Support for arbitrary numbers of nested dimensions
  • No computational overhead when dealing with already padded tensors
  • Dynamic re-padding (or refolding) of data based on stored inner lengths
  • Automatic mask generation and updating whenever the tensor is refolded
  • C++ optimized code for fast data loading from Python lists and refolding
  • Flexibility in data representation, making it easy to switch between different layouts when needed

Examples

At its simplest, foldedtensor can be used to convert nested Python lists into a PyTorch tensor:

from foldedtensor import as_folded_tensor

ft = as_folded_tensor(
    [
        [0, 1, 2],
        [3],
    ],
)
# FoldedTensor([[0, 1, 2],
#               [3, 0, 0]])

You can also specify names and flattened/unflattened dimensions at the time of creation:

import torch
from foldedtensor import as_folded_tensor

# Creating a folded tensor from a nested list
# There are 2 samples, the first with 5 lines, the second with 1 line.
# Each line contains between 0 and 2 words.
ft = as_folded_tensor(
    [
        [[1], [], [], [], [2, 3]],
        [[4, 3]],
    ],
    data_dims=("samples", "words"),
    full_names=("samples", "lines", "words"),
    dtype=torch.long,
)
print(ft)
# FoldedTensor([[1, 2, 3],
#               [4, 3, 0]])
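
Here, data_dims selects which named levels become actual tensor dimensions; the omitted "lines" level is flattened into "words". As a minimal sketch of the alternative, assuming data_dims also accepts the full ("samples", "lines", "words") hierarchy (an assumption, not shown above), the same data could be padded into a 3D tensor:

# Hypothetical 3D layout: keep every level as a tensor dimension.
# Assumes data_dims may list the full hierarchy rather than a subset.
ft3d = as_folded_tensor(
    [
        [[1], [], [], [], [2, 3]],
        [[4, 3]],
    ],
    data_dims=("samples", "lines", "words"),
    full_names=("samples", "lines", "words"),
    dtype=torch.long,
)
print(ft3d.shape)
# Expected: torch.Size([2, 5, 2]) (2 samples, up to 5 lines, up to 2 words per line)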

Once created, you can change the shape of the tensor by refolding it:

# Refold on the lines and words dims (flatten the samples dim)
print(ft.refold(("lines", "words")))
# FoldedTensor([[1, 0],
#               [0, 0],
#               [0, 0],
#               [0, 0],
#               [2, 3],
#               [4, 3]])

# Refold on the words dim only: flatten everything
print(ft.refold(("words",)))
# FoldedTensor([1, 2, 3, 4, 3])
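
The feature list above also promises automatic mask generation and updating. As a sketch, assuming the padding mask is exposed as a boolean mask attribute that tracks the current fold (the attribute name is an assumption here), it could be inspected like this:

# Hypothetical mask inspection: `mask` is assumed to be a boolean tensor
# marking real (non-padding) positions, recomputed on each refold.
print(ft.refold(("samples", "words")).mask)
# tensor([[ True,  True,  True],
#         [ True,  True, False]])

print(ft.refold(("lines", "words")).mask)
# tensor([[ True, False],
#         [False, False],
#         [False, False],
#         [False, False],
#         [ True,  True],
#         [ True,  True]])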

The tensor can be further used with standard PyTorch operations:

# Working with PyTorch operations
embedder = torch.nn.Embedding(10, 16)
embedding = embedder(ft.refold(("words",)))
print(embedding.shape)
# torch.Size([5, 16]) # 5 words total, 16 dims

refolded_embedding = embedding.refold(("samples", "words"))
print(refolded_embedding.shape)
# torch.Size([2, 3, 16]) # 2 samples, 3 words max, 16 dims
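
A typical next step is to pool each sample's word embeddings. Padded positions hold token id 0, which maps to a real embedding row rather than zeros, so a mask is needed to exclude them. The sketch below reuses the assumed boolean mask attribute and otherwise relies only on standard PyTorch operations:

# Hypothetical masked mean pooling over the words of each sample.
# `mask` is assumed to be a boolean (samples, words) tensor of real positions.
mask = ft.refold(("samples", "words")).mask          # shape (2, 3)
masked = refolded_embedding * mask.unsqueeze(-1)     # zero out padding embeddings
pooled = masked.sum(dim=1) / mask.sum(dim=1, keepdim=True).clamp(min=1)
print(pooled.shape)
# torch.Size([2, 16]) # one pooled vector per sample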

Benchmarks

View the comparisons of foldedtensor against various alternatives here: docs/benchmarks.

Comparison with alternatives

Unlike other ragged or nested tensor implementations, a FoldedTensor does not enforce a specific structure on the nested data and does not require padding every dimension. This gives the user greater flexibility when working with data that can be arranged in multiple ways depending on the downstream transformation. Moreover, the optimized C++ backend ensures high performance, making it well suited to handling deeply nested tensors efficiently.

Here is a comparison with other common implementations for handling nested sequences of variable length:

Feature                    | NestedTensor | MaskedTensor | FoldedTensor
Inner data structure       | Flat         | Padded       | Arbitrary
Max nesting level          | 1            | 1            | Arbitrary
From nested Python lists   | No           | No           | Yes
Layout conversion          | To padded    | No           | Any
Reduction ops w/o padding  | Yes          | No           | No
