PyTorch functions to improve performance, analyse models and make your life easier.

Project description

  • Improve and analyse performance of your neural network (e.g. Tensor Cores compatibility)
  • Record/analyse internal state of torch.nn.Module as data passes through it
  • Do the above based on external conditions (using a single Callable to specify them)
  • Handle day-to-day neural network duties (model size, seeding, time measurements etc.)
  • Get information about your host operating system, torch.nn.Module device, CUDA capabilities etc. (see the sketch below)
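As a taste of the last point, the same kind of report can be stitched together from plain torch and the standard library; the snippet below is only a sketch of the information involved, not torchfunc's actual API:

import platform

import torch

model = torch.nn.Linear(784, 10)  # Any module with parameters

print(f"OS: {platform.platform()}")
print(f"PyTorch: {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}, capability: {torch.cuda.get_device_capability(0)}")
print(f"Model device: {next(model.parameters()).device}")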

:bulb: Examples

Check documentation here: https://szymonmaszke.github.io/torchfunc

1. Getting performance tips

  • Get instant performance tips about your module. All of the problems described by the comments below will be reported by torchfunc.performance.tips:
import torch
import torchfunc

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.convolution = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3),
            torch.nn.ReLU(inplace=True),  # Inplace may harm kernel fusion
            torch.nn.Conv2d(32, 128, 3, groups=32),  # Depthwise is slower in PyTorch
            torch.nn.ReLU(inplace=True),  # Same as before
            torch.nn.Conv2d(128, 250, 3),  # Wrong output size for TensorCores
        )

        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(250, 64),  # Wrong input size for TensorCores
            torch.nn.ReLU(),  # Fine, no info about this layer
            torch.nn.Linear(64, 10),  # Wrong output size for TensorCores
        )

    def forward(self, inputs):
        convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten(start_dim=1)  # Keep the batch dimension when flattening
        return self.classifier(convolved)

# All you have to do
print(torchfunc.performance.tips(Model()))
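Most of the Tensor Cores hints above come down to a simple rule of thumb: for mixed-precision workloads, layer input and output sizes should be divisible by 8. A minimal manual check along those lines (just an illustration of what the tips look for, not part of torchfunc) could be:

# Rule-of-thumb check, not torchfunc's implementation
def tensor_cores_friendly(module: torch.nn.Module) -> bool:
    if isinstance(module, torch.nn.Linear):
        return module.in_features % 8 == 0 and module.out_features % 8 == 0
    if isinstance(module, torch.nn.Conv2d):
        return module.in_channels % 8 == 0 and module.out_channels % 8 == 0
    return True  # No opinion about other layer types

for name, submodule in Model().named_modules():
    if not tensor_cores_friendly(submodule):
        print(f"{name}: sizes not divisible by 8, Tensor Cores may be underused")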

2. Seeding, weight freezing and others

  • Seed globally (including numpy and cuda), freeze weights, check inference time and model size:
# MNIST-sized example, but you can use any module with these functions
model = torch.nn.Linear(784, 10)
torchfunc.seed(0)
frozen = torchfunc.module.freeze(model, bias=False)

with torchfunc.Timer() as timer:
    frozen(torch.randn(32, 784))
    print(timer.checkpoint())  # Time since the beginning
    frozen(torch.randn(128, 784))
    print(timer.checkpoint())  # Time since the last checkpoint

print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
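torchfunc.seed is advertised as a global seed; in plain PyTorch terms that usually means seeding every RNG the framework touches. A rough hand-rolled equivalent (an assumption about what the helper covers, not its source) looks like this:

import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Seed Python, NumPy and PyTorch (CPU and all CUDA devices)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)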

3. Record torch.nn.Module internal state

  • Record and sum per-layer activation statistics as data passes through the network:
# Still MNIST, but any module can be put in its place
model = torch.nn.Sequential(
    torch.nn.Linear(784, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)
# Recorder which sums all inputs to layers
recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x + y)
# Record only for torch.nn.Linear submodules
recorder.children(model, types=(torch.nn.Linear,))
# Train your network normally (or pass data through it)
...
# Accumulated inputs to the second Linear, i.e. activations of the first layer
print(recorder[1])  # You can also post-process this data easily with apply
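If you are curious what such a recorder does under the hood, the plain-PyTorch counterpart is a forward pre-hook; the sketch below is my approximation of the same accumulation, not torchfunc's implementation:

# Accumulate summed inputs per Linear layer using vanilla PyTorch hooks
accumulated = {}

def make_hook(index):
    def hook(module, inputs):
        # Sum over the batch dimension so batches of different sizes accumulate cleanly
        data = inputs[0].detach().sum(dim=0)
        accumulated[index] = data if index not in accumulated else accumulated[index] + data
    return hook

linear_layers = [m for m in model.children() if isinstance(m, torch.nn.Linear)]
for i, layer in enumerate(linear_layers):
    layer.register_forward_pre_hook(make_hook(i))

model(torch.randn(32, 784))
print(accumulated[1])  # Summed inputs to the second Linear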

For other examples (and how to use condition), see the documentation.

:wrench: Installation

:snake: pip

Latest release:

pip install --user torchfunc

Nightly:

pip install --user torchfunc-nightly

:whale2: Docker

A CPU standalone image and various versions of GPU-enabled images are available on Dockerhub.

For CPU quickstart, issue:

docker pull szymonmaszke/torchfunc:18.04

Nightly builds are also available; just prefix the tag with nightly_. If you are going for a GPU image, make sure you have nvidia/docker installed and its runtime set.
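For example, following the naming rule above, the nightly counterpart of the CPU quickstart image would be pulled with:

docker pull szymonmaszke/torchfunc:nightly_18.04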

:question: Contributing

If you find any issue, or you think some functionality may be useful to others and fits this library, please open a new Issue or create a Pull Request.

To get an overview of things one can do to help this project, see Roadmap.
