Cache PyTorch module outputs on the fly

Project description

torchcache


Effortlessly cache PyTorch module outputs or PyTorch-heavy functions on-the-fly with torchcache.

Particularly useful for caching and serving the outputs of large, computationally expensive pre-trained PyTorch modules, such as vision transformers. Note that gradients will not flow through the cached outputs.

Features

  • Cache PyTorch module outputs or pure Python functions either in-memory or persistently to disk.
  • Simple decorator-based interface for easy usage.
  • Uses an MRU (most-recently-used) cache, which evicts the most recently used items first to manage memory/disk usage. This policy suits a training setting: with shuffled data, a just-seen sample is the least likely to be needed again soon.
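The MRU policy can be sketched in a few lines of plain Python. This is an illustrative toy, not torchcache's actual implementation (which also handles tensor hashing and optional persistence to disk): an ordered dict tracks access order, and when the cache is full the most recently touched entry is evicted.

```python
from collections import OrderedDict

class MRUCache:
    """Toy most-recently-used cache: when full, evict the item touched last."""

    def __init__(self, max_items):
        self.max_items = max_items
        self._store = OrderedDict()  # keys ordered oldest -> most recently used

    def get(self, key):
        value = self._store.pop(key)  # raises KeyError on a miss
        self._store[key] = value      # re-insert to mark as most recently used
        return value

    def put(self, key, value):
        if key in self._store:
            self._store.pop(key)
        elif len(self._store) >= self.max_items:
            # Evict the *most* recently used entry (last in the ordering)
            self._store.popitem(last=True)
        self._store[key] = value

cache = MRUCache(max_items=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" is now the most recently used entry
cache.put("c", 3)  # cache is full: evicts "a", keeps the older "b"
print(sorted(cache._store))  # ['b', 'c']
```

Contrast this with the more familiar LRU policy, which would have kept "a" (just used) and evicted "b" instead.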

Installation

pip install torchcache

Citation

If you use our work, please consider citing our paper:

@inproceedings{akbiyik2023routeformer,
    title={Leveraging Driver Field-of-View for Multimodal Ego-Trajectory Prediction},
    author={M. Eren Akbiyik and Nedko Savov and Danda Pani Paudel and Nikola Popovic and Christian Vater and Otmar Hilliges and Luc Van Gool and Xi Wang},
    booktitle={International Conference on Learning Representations},
    year={2025}
}

Basic usage

Quickly cache the output of your PyTorch module with a single decorator:

import torch
from torch import nn

from torchcache import torchcache

@torchcache()
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10)

    def forward(self, x):
        # This output will be cached
        return self.linear(x)

model = MyModule()
input_tensor = torch.ones(1, 10, dtype=torch.float32)
# Output is cached during the first call...
output = model(input_tensor)
# ...and is retrieved from the cache for the next one
output_cached = model(input_tensor)

You can also cache the output of any function, not just PyTorch modules:

from torchcache import torchcache
@torchcache()
def my_function(x):
    # This output will be cached
    return x * 2

See documentation at torchcache.readthedocs.io for more examples.

Assumptions

To ensure seamless operation, torchcache assumes the following:

  • Your module is a subclass of nn.Module.
  • The module's forward method accepts any number of positional or keyword arguments, each either a tensor of shape (B, *), where B is the batch size and * is any number of further dimensions, or a basic immutable Python value (int, float, str, bool). All tensors should be on the same device and share the same dtype.
  • The forward method returns a single tensor of shape (B, *).
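The (B, *) assumption exists because caching works per batch element: each sample maps to its own cache key, so cached and uncached samples can mix within one forward pass. The rough idea can be sketched in plain Python (the helper name and the use of blake2b are illustrative assumptions; torchcache's actual hashing operates on tensors and differs in detail):

```python
import hashlib
import struct

def sample_keys(batch):
    """Hash each sample of a (B, *) batch independently.

    `batch` is a list of per-sample float sequences, standing in for the
    rows of a (B, *) tensor. Identical samples yield identical keys, so a
    repeated sample is a cache hit regardless of which batch it arrives in.
    """
    keys = []
    for sample in batch:
        payload = struct.pack(f"{len(sample)}f", *sample)
        keys.append(hashlib.blake2b(payload, digest_size=8).hexdigest())
    return keys

batch = [[1.0, 2.0], [3.0, 4.0], [1.0, 2.0]]
keys = sample_keys(batch)
print(keys[0] == keys[2], keys[0] == keys[1])  # True False
```

This also explains the single-tensor (B, *) return assumption: the output must be splittable along the batch dimension so each sample's result can be stored under its own key.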

Contribution

  1. Ensure you have Python installed.
  2. Install poetry.
  3. Run poetry install to set up dependencies.
  4. Run poetry run pre-commit install to install pre-commit hooks.
  5. Create a branch, make your changes, and open a pull request.

