PyTorch functions to improve performance, analyse models and make your life easier.
Project description
torchfunc is a library revolving around PyTorch with the goal of helping you with:
- Improving and analysing performance of your neural network
- Daily neural network duties (model size, seeding, performance measurements etc.)
- Plotting and visualizing modules
- Recording neuron activity and tailoring it to your specific task or target
- Getting information about your host operating system, CUDA devices and others (see the short sketch after this list)
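As a rough sketch of the kind of host and CUDA device information meant by the last point, the snippet below uses only the standard library and plain PyTorch calls (not torchfunc's own helpers, which are covered in the documentation):

    import platform

    import torch

    # Host operating system, Python interpreter and PyTorch version
    print(platform.platform(), platform.python_version(), torch.__version__)

    # CUDA devices visible to PyTorch, if any
    if torch.cuda.is_available():
        for index in range(torch.cuda.device_count()):
            print(torch.cuda.get_device_name(index))
            print(torch.cuda.get_device_properties(index))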
Quick examples
- Seed globally, freeze weights, check inference time and model size:
    import torch
    import torchfunc

    torchfunc.seed(0)  # Seed all random number generators globally

    # Inb4 MNIST, you can use any module with those functions
    model = torch.nn.Linear(784, 10)
    frozen = torchfunc.module.freeze(model, bias=False)

    with torchfunc.Timer() as timer:
        frozen(torch.randn(32, 784))
        print(timer.checkpoint())  # Time since the beginning
        frozen(torch.randn(128, 784))
        print(timer.checkpoint())  # Time since the last checkpoint

    print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
- Record and sum per-layer activation statistics as data passes through the network:
    import pathlib

    import torch
    import torchfunc

    # MNIST classifier
    model = torch.nn.Sequential(
        torch.nn.Linear(784, 100),
        torch.nn.ReLU(),
        torch.nn.Linear(100, 50),
        torch.nn.ReLU(),
        torch.nn.Linear(50, 10),
    )

    # Recorder which sums layer inputs from consecutive forward calls
    recorder = torchfunc.record.ForwardPreRecorder(reduction=lambda x, y: x + y)
    # Record inputs going into Linear(100, 50) and Linear(50, 10)
    recorder.children(model, indices=(2, 3))
    # Train your network normally (or pass data through it)
    ...
    # Save tensors (of shape 100 and 50) in a folder, each named 1.pt and 2.pt respectively
    recorder.save(pathlib.Path("./analysis"))
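To inspect the recorded statistics afterwards, the saved tensors can be loaded back with plain torch.load; this short sketch continues the example above and assumes the folder and file names from the comment there:

    # Load the summed inputs recorded for the two registered layers
    first = torch.load(pathlib.Path("./analysis/1.pt"))
    second = torch.load(pathlib.Path("./analysis/2.pt"))
    print(first.shape, second.shape)  # Expected shapes per the comment above: 100 and 50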
For performance tips, plotting and other functionality, check the torchfunc documentation.
Installation
pip
Latest release:
pip install --user torchfunc
Nightly:
pip install --user torchfunc-nightly
Docker
A standalone CPU image and various GPU-enabled images are available on dockerhub.
For a CPU quickstart, issue:
docker pull szymonmaszke/torchfunc:18.04
Nightly builds are also available, just prefix the tag with nightly_. If you are going for a GPU image, make sure you have nvidia/docker installed and its runtime set.
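For example, the nightly counterpart of the CPU image above, and a GPU run assuming nvidia/docker's runtime is configured (the <gpu-tag> placeholder stands for one of the GPU tags listed on dockerhub), would look like this:

    # Nightly CPU image: same tag as above, prefixed with nightly_
    docker pull szymonmaszke/torchfunc:nightly_18.04

    # GPU image: replace <gpu-tag> with a GPU tag from dockerhub
    docker run --runtime=nvidia -it szymonmaszke/torchfunc:<gpu-tag>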
Contributing
If you find an issue or think some functionality may be useful to others and fits this library, please open a new Issue or create a Pull Request.
To get an overview of what can be done to help this project, see the Roadmap.
File details
Details for the file torchfunc-nightly-1569222144.tar.gz.
File metadata
- Download URL: torchfunc-nightly-1569222144.tar.gz
- Upload date:
- Size: 19.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 17a2dc404b387a8a0e0770d444486257894f94d5a12531b233fe732a645bc9dc
MD5 | 22fd8c8a366e7130989c1635c80be22d
BLAKE2b-256 | 59fdc9834194a14423f950a2f519e663897849e23cd5f4a91e6a53ce497e1e47
File details
Details for the file torchfunc_nightly-1569222144-py3-none-any.whl.
File metadata
- Download URL: torchfunc_nightly-1569222144-py3-none-any.whl
- Upload date:
- Size: 25.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.15.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 68d072f6da8471aabb64c370db741d276824abbe7a28993dce260e4c07b706be
MD5 | d0f6dbd19dc4a590d3ee329739734cc7
BLAKE2b-256 | 2d603ba516b5a98a35a6e8ec5a0ac458946d4f66717d374afdb3489529399146