TensorDict

TensorDict is a dictionary-like class that inherits properties from tensors, such as indexing, shape operations, casting to device, and more.

The main purpose of TensorDict is to make code-bases more readable and modular by abstracting away tailored operations:

for i, tensordict in enumerate(dataset):
    # the model reads and writes tensordicts
    tensordict = model(tensordict)
    loss = loss_module(tensordict)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

With this level of abstraction, one can recycle a training loop across highly heterogeneous tasks. Each individual step of the training loop (data collection and transform, model prediction, loss computation, etc.) can be tailored to the use case at hand without impacting the others. For instance, the above example can easily be reused across classification and segmentation tasks, among many others.
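
For example, a loss module in this pattern only needs to agree with the model on which keys it reads from the tensordict. The following is a minimal sketch; the module and the "prediction" and "target" keys are hypothetical, not part of the library:

import torch
from torch import nn
from tensordict import TensorDict

class MSELossModule(nn.Module):
    # hypothetical loss module: reads its inputs from the tensordict by key
    def forward(self, tensordict):
        return nn.functional.mse_loss(
            tensordict["prediction"], tensordict["target"]
        )

loss_module = MSELossModule()
tensordict = TensorDict({
    "prediction": torch.randn(8, 10),
    "target": torch.randn(8, 10),
}, batch_size=[8])
loss = loss_module(tensordict)  # any task writing these keys can reuse the same loop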

Installation

To install the latest stable version of tensordict, simply run

pip install tensordict

This will work with Python 3.7 and upwards, as well as PyTorch 1.12 and upwards.

To enjoy the latest features, one can use

pip install tensordict-nightly

Features

General

A tensordict is primarily defined by its batch_size (or shape) and its key-value pairs:

>>> from tensordict import TensorDict
>>> import torch
>>> tensordict = TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": torch.zeros(3, 4, 5, dtype=torch.bool),
... }, batch_size=[3, 4])

The batch_size and the leading dimensions of each of the tensors must match. The tensors can be of any dtype and device. Optionally, one can restrict a tensordict to live on a dedicated device, in which case every tensor written to it will be sent to that device:

>>> tensordict = TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": torch.zeros(3, 4, 5, dtype=torch.bool),
... }, batch_size=[3, 4], device="cuda:0")
>>> tensordict["key 3"] = torch.randn(3, 4, device="cpu")
>>> assert tensordict["key 3"].device == torch.device("cuda:0")

Tensor-like features

TensorDict objects can be indexed exactly like tensors. The result of indexing a TensorDict is another TensorDict containing tensors indexed along the required dimension:

>>> tensordict = TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": torch.zeros(3, 4, 5, dtype=torch.bool),
... }, batch_size=[3, 4])
>>> sub_tensordict = tensordict[..., :2]
>>> assert sub_tensordict.shape == torch.Size([3, 2])
>>> assert sub_tensordict["key 1"].shape == torch.Size([3, 2, 5])

Similarly, one can build tensordicts by stacking or concatenating individual tensordicts:

>>> tensordicts = [TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": torch.zeros(3, 4, 5, dtype=torch.bool),
... }, batch_size=[3, 4]) for _ in range(2)]
>>> stack_tensordict = torch.stack(tensordicts, 1)
>>> assert stack_tensordict.shape == torch.Size([3, 2, 4])
>>> assert stack_tensordict["key 1"].shape == torch.Size([3, 2, 4, 5])
>>> cat_tensordict = torch.cat(tensordicts, 0)
>>> assert cat_tensordict.shape == torch.Size([6, 4])
>>> assert cat_tensordict["key 1"].shape == torch.Size([6, 4, 5])

TensorDict instances can also be reshaped, viewed, squeezed and unsqueezed:

>>> tensordict = TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": torch.zeros(3, 4, 5, dtype=torch.bool),
... }, batch_size=[3, 4])
>>> print(tensordict.view(-1).batch_size)
torch.Size([12])
>>> print(tensordict.reshape(-1).batch_size)
torch.Size([12])
>>> print(tensordict.unsqueeze(-1).batch_size)
torch.Size([3, 4, 1])
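
squeeze() reverses the last operation by removing a singleton batch dimension (continuing the example above):

>>> print(tensordict.unsqueeze(-1).squeeze(-1).batch_size)
torch.Size([3, 4])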

One can also send tensordicts from device to device, place them in shared memory, clone them, update them in-place or not, split them, unbind them, expand them, and so on.
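
For instance, a minimal sketch of a few of these operations on the tensordict above:

>>> td = tensordict.clone()          # deep copy of the structure and content
>>> chunks = td.split(2, dim=0)      # tensordicts with batch sizes [2, 4] and [1, 4]
>>> rows = td.unbind(0)              # tuple of 3 tensordicts with batch size [4]
>>> expanded = td.expand(2, 3, 4)    # prepend a broadcast dimension
>>> assert expanded.batch_size == torch.Size([2, 3, 4])
>>> _ = td.share_memory_()           # place the content in shared memory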

If a functionality is missing, it is easy to call it using apply() or apply_():

tensordict_uniform = tensordict.apply(lambda tensor: tensor.uniform_())
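
apply() returns a new tensordict with the function mapped over every entry, whereas apply_() writes the results back into the original; a minimal sketch:

tensordict_plus_one = tensordict.apply(lambda tensor: tensor + 1)  # new tensordict
tensordict.apply_(lambda tensor: tensor + 1)                       # modified in place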

TensorDict for functional programming using FuncTorch

We also provide an API to use TensorDict in conjunction with FuncTorch. For instance, TensorDict makes it easy to stack model weights for model ensembling:

>>> from torch import nn
>>> from tensordict import TensorDict
>>> from tensordict.nn import make_functional
>>> import torch
>>> from functorch import vmap
>>> layer1 = nn.Linear(3, 4)
>>> layer2 = nn.Linear(4, 4)
>>> model = nn.Sequential(layer1, layer2)
>>> # we represent the weights hierarchically
>>> weights1 = TensorDict(layer1.state_dict(), []).unflatten_keys(".")
>>> weights2 = TensorDict(layer2.state_dict(), []).unflatten_keys(".")
>>> params = make_functional(model)
>>> assert (params == TensorDict({"0": weights1, "1": weights2}, [])).all()
>>> # Let's use our functional module
>>> x = torch.randn(10, 3)
>>> out = model(x, params=params)  # params is the last arg (or kwarg)
>>> # an ensemble of models: we stack params along the first dimension...
>>> params_stack = torch.stack([params, params], 0)
>>> # ... and use it as an input we'd like to pass through the model
>>> y = vmap(model, (None, 0))(x, params_stack)
>>> print(y.shape)
torch.Size([2, 10, 4])
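
Note that stacking two copies of the same params is only for illustration; in practice, each member of the ensemble would typically carry its own weights.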

Lazy preallocation

Pre-allocating tensors can be cumbersome and hard to scale if the list of preallocated items varies according to the script configuration. TensorDict solves this in an elegant way. Assume you are working with a function foo() -> TensorDict, e.g.

def foo():
    tensordict = TensorDict({}, batch_size=[])
    tensordict["a"] = torch.randn(3)
    tensordict["b"] = TensorDict({"c": torch.zeros(2)}, batch_size=[])
    return tensordict

and you would like to call this function repeatedly. You could do this in two ways. The first would simply be to stack the calls to the function:

tensordict = torch.stack([foo() for _ in range(N)])

However, you could also choose to preallocate the tensordict:

tensordict = TensorDict({}, batch_size=[N])
for i in range(N):
    tensordict[i] = foo()

which also results in a tensordict (here with N = 10):

TensorDict(
    fields={
        a: Tensor(torch.Size([10, 3]), dtype=torch.float32),
        b: TensorDict(
            fields={
                c: Tensor(torch.Size([10, 2]), dtype=torch.float32)},
            batch_size=torch.Size([10]),
            device=None,
            is_shared=False)},
    batch_size=torch.Size([10]),
    device=None,
    is_shared=False)

When i == 0, the empty tensordict is automatically populated with empty tensors of batch size N. After that, updates are written in-place. Note that this also works with a shuffled series of indices (pre-allocation does not require you to go through the tensordict in an ordered fashion), as sketched below.
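
A minimal sketch of out-of-order pre-allocation, reusing foo() from above (with N = 10):

import torch

N = 10
tensordict = TensorDict({}, batch_size=[N])
for i in torch.randperm(N).tolist():  # visit the indices in random order
    tensordict[i] = foo()  # the first write pre-allocates, later writes are in-place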

Nesting TensorDicts

It is possible to nest tensordicts. The only requirement is that the sub-tensordict should be indexable under the parent tensordict, i.e. its batch size should match (but could be longer than) the parent's batch size.

We can switch easily between hierarchical and flat representations. For instance, the following code will result in a single-level tensordict with keys "key 1" and "key 2.sub-key":

>>> tensordict = TensorDict({
...     "key 1": torch.ones(3, 4, 5),
...     "key 2": TensorDict({"sub-key": torch.randn(3, 4, 5, 6)}, batch_size=[3, 4, 5])
... }, batch_size=[3, 4])
>>> tensordict_flatten = tensordict.flatten_keys(separator=".")

Accessing nested tensordicts can be achieved with a single index:

>>> sub_value = tensordict["key 2", "sub-key"]
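
Conversely, unflatten_keys() recovers the nested structure from the flat representation:

>>> tensordict_nested = tensordict_flatten.unflatten_keys(separator=".")
>>> assert (tensordict_nested["key 2", "sub-key"] == tensordict["key 2", "sub-key"]).all()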

Disclaimer

TensorDict is at the alpha stage, meaning that there may be backward-compatibility-breaking changes introduced at any moment without prior warning. Hopefully these should not happen too often, as the current roadmap mostly involves adding new features and building compatibility with the broader PyTorch ecosystem.

License

TensorDict is licensed under the MIT License. See LICENSE for details.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distributions

tensordict_nightly-2022.12.22-py310-none-any.whl (90.0 kB, Python 3.10)

tensordict_nightly-2022.12.22-py39-none-any.whl (90.0 kB, Python 3.9)

tensordict_nightly-2022.12.22-py38-none-any.whl (90.0 kB, Python 3.8)

tensordict_nightly-2022.12.22-py37-none-any.whl (90.0 kB, Python 3.7)

File details

Details for the file tensordict_nightly-2022.12.22-py310-none-any.whl.

File metadata

  • Download URL: tensordict_nightly-2022.12.22-py310-none-any.whl
  • Size: 90.0 kB
  • Tags: Python 3.10
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.8.3 requests/2.27.1 setuptools/44.1.1 requests-toolbelt/0.10.1 tqdm/4.64.1 CPython/2.7.17

File hashes

Hashes for tensordict_nightly-2022.12.22-py310-none-any.whl

Algorithm    Hash digest
SHA256       80f6f49e980ecc2ec7bc06c680202aedabcdbf76a73e13e622383158cce98c0b
MD5          fe17c2fa8c4c82d9f0522f59ff761817
BLAKE2b-256  d8fa0fc52631efadd3eaeaf9fd98d49716892573d78a455bf734edc83d2ca012


File details

Details for the file tensordict_nightly-2022.12.22-py39-none-any.whl.

File metadata

  • Download URL: tensordict_nightly-2022.12.22-py39-none-any.whl
  • Size: 90.0 kB
  • Tags: Python 3.9
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.8.3 requests/2.27.1 setuptools/44.1.1 requests-toolbelt/0.10.1 tqdm/4.64.1 CPython/2.7.17

File hashes

Hashes for tensordict_nightly-2022.12.22-py39-none-any.whl

Algorithm    Hash digest
SHA256       4c758c99ada0d4fc33937bb576690cde34c1ccb08a819d4526f37438c4bf0dfa
MD5          5f499fd93da5036f2601a03744c01e16
BLAKE2b-256  db7378592ed7e3f6913d9cfb8b2deb416d0b8d77be195db510d5ee505fe62399


File details

Details for the file tensordict_nightly-2022.12.22-py38-none-any.whl.

File metadata

  • Download URL: tensordict_nightly-2022.12.22-py38-none-any.whl
  • Size: 90.0 kB
  • Tags: Python 3.8
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.8.3 requests/2.27.1 setuptools/44.1.1 requests-toolbelt/0.10.1 tqdm/4.64.1 CPython/2.7.17

File hashes

Hashes for tensordict_nightly-2022.12.22-py38-none-any.whl

Algorithm    Hash digest
SHA256       8484d2d4e3ba6c303c8f60bdebf0b64dd22b8c0a2edd236a289094f6e8ed9128
MD5          98c14816cf74171426e52acf870dc7ca
BLAKE2b-256  94c770de6b4d9b79ae6383df4b8440d2c69a9f7b6078abdfa23f30e47b0acbd6


File details

Details for the file tensordict_nightly-2022.12.22-py37-none-any.whl.

File metadata

  • Download URL: tensordict_nightly-2022.12.22-py37-none-any.whl
  • Size: 90.0 kB
  • Tags: Python 3.7
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.8.3 requests/2.27.1 setuptools/44.1.1 requests-toolbelt/0.10.1 tqdm/4.64.1 CPython/2.7.17

File hashes

Hashes for tensordict_nightly-2022.12.22-py37-none-any.whl

Algorithm    Hash digest
SHA256       5cf2d3c76a74f07f878be6962c26cbb750405fbce1dd078b832dc7d16e78f324
MD5          635b49562697c046af2cda250bfbf84a
BLAKE2b-256  f9182282df09d270560923b16f3cc819c3e42cc40b98b6d87035faef6dad975e

