
torchcache


Effortlessly cache PyTorch module outputs on-the-fly with torchcache.

Particularly useful for caching and serving the outputs of large, computationally expensive pre-trained PyTorch modules, such as vision transformers. Note that gradients will not flow through the cached outputs.

Features

  • Cache PyTorch module outputs either in-memory or persistently to disk.
  • Simple decorator-based interface for easy usage.
  • Uses an MRU (most-recently-used) cache to limit memory/disk usage.
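To illustrate the MRU eviction policy: when the cache is full, the entry used most recently is evicted, while older entries stay cached. The sketch below is a toy in plain Python, not torchcache's actual implementation; the class name and `max_entries` parameter are made up for illustration:

```python
from collections import OrderedDict

class MRUCache:
    """Toy MRU cache: when full, evict the most recently used entry."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._store = OrderedDict()  # keys ordered oldest -> newest use

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        elif len(self._store) >= self.max_entries:
            # Evict the most recently used entry; older entries survive
            self._store.popitem(last=True)
        self._store[key] = value

cache = MRUCache(max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)  # "b" (most recently used) is evicted; "a" survives
```

MRU eviction suits workloads like multi-epoch training, where the oldest entries are exactly the ones about to be requested again.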

Installation

pip install torchcache

Basic usage

Quickly cache the output of your PyTorch module with a single decorator:

import torch
from torch import nn

from torchcache import torchcache

@torchcache()
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10)

    def forward(self, x):
        # This output will be cached
        return self.linear(x)

model = MyModule()
input_tensor = torch.ones(10, dtype=torch.float32)
# Output is cached during the first call...
output = model(input_tensor)
# ...and is retrieved from the cache for the next one
output_cached = model(input_tensor)

See the documentation at torchcache.readthedocs.io for more examples.

Assumptions

To ensure seamless operation, torchcache assumes the following:

  • Your module is a subclass of nn.Module.
  • The module's forward method accepts any number of positional arguments with shapes (B, *), where B is the batch size and * represents any number of dimensions. All tensors should be on the same device and have the same dtype.
  • The forward method returns a single tensor of shape (B, *).
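One reason for the (B, *) contract is that it allows each sample in a batch to be keyed independently, so a forward pass can serve cached rows and compute only the misses. The sketch below shows that idea in plain Python; the hash function, helper names, and data layout are illustrative assumptions, not torchcache's internals:

```python
import hashlib

def sample_key(sample):
    """Hash one sample (a flat tuple of floats) to a cache key."""
    data = ",".join(repr(x) for x in sample).encode()
    return hashlib.sha256(data).hexdigest()

def cached_forward(batch, cache, forward_fn):
    """Run forward_fn only on cache misses; reuse cached rows otherwise."""
    keys = [sample_key(s) for s in batch]
    misses = [i for i, k in enumerate(keys) if k not in cache]
    if misses:
        fresh = forward_fn([batch[i] for i in misses])
        for i, out in zip(misses, fresh):
            cache[keys[i]] = out
    return [cache[k] for k in keys]

calls = []
def double_all(samples):
    """Stand-in for an expensive forward pass; records batch sizes."""
    calls.append(len(samples))
    return [tuple(2 * x for x in s) for s in samples]

cache = {}
out1 = cached_forward([(1.0, 2.0), (3.0, 4.0)], cache, double_all)
out2 = cached_forward([(1.0, 2.0), (5.0, 6.0)], cache, double_all)
# The second call computes only the one new sample; (1.0, 2.0) is served
# from the cache
```

This is also why all input tensors must share a device and dtype: the per-sample keys are derived from the raw input bytes, which differ across dtypes even for equal values.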

Contribution

  1. Ensure you have Python installed.
  2. Install poetry.
  3. Run poetry install to set up dependencies.
  4. Run poetry run pre-commit install to install pre-commit hooks.
  5. Create a branch, make your changes, and open a pull request.

