PyTorch functions to improve performance, analyse models and make your life easier.
torchfunc is a library revolving around PyTorch with the goal of helping you with:
- Improving and analysing the performance of your neural network
- Daily neural network duties (model size, seeding, performance measurements etc.)
- Plotting and visualizing modules
- Recording neuron activity and tailoring it to your specific task or target
- Getting information about your host operating system, CUDA devices and more
Quick examples
- Seed globally, freeze weights, check inference time and model size:

```python
import torch
import torchfunc

# Inb4 MNIST, you can use any module with those functions
torchfunc.seed(0)
model = torch.nn.Linear(784, 10)
frozen = torchfunc.module.freeze(model, bias=False)

with torchfunc.Timer() as timer:
    frozen(torch.randn(32, 784))
    print(timer.checkpoint())  # Time since the beginning
    frozen(torch.randn(128, 784))
    print(timer.checkpoint())  # Time since last checkpoint

print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
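The timer/checkpoint pattern above can be sketched with the standard library alone. This is a minimal illustration of the idea, not torchfunc's actual implementation; the class below is hypothetical:

```python
import time


class Timer:
    """Minimal sketch of a context-manager timer with checkpoints.

    Illustrative only -- not torchfunc's actual implementation.
    """

    def __enter__(self):
        # First checkpoint measures from the very beginning
        self.start = self.last = time.perf_counter()
        return self

    def checkpoint(self):
        # Return seconds elapsed since the last checkpoint (or since start)
        now = time.perf_counter()
        elapsed = now - self.last
        self.last = now
        return elapsed

    def __exit__(self, *args):
        self.end = time.perf_counter()
        return False

    def __str__(self):
        # Overall time between __enter__ and __exit__
        return f"{self.end - self.start:.6f}s"


with Timer() as timer:
    sum(range(100_000))          # stand-in for a forward pass
    print(timer.checkpoint())    # time since the beginning
    sum(range(100_000))
    print(timer.checkpoint())    # time since last checkpoint
print(f"Overall time {timer}")
```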
- Record and sum per-layer activation statistics as data passes through the network:

```python
import pathlib

import torch
import torchfunc

# MNIST classifier
model = torch.nn.Sequential(
    torch.nn.Linear(784, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)
# Recorder which sums layer inputs from consecutive forward calls
recorder = torchfunc.record.ForwardPreRecorder(reduction=lambda x, y: x + y)
# Record inputs going into Linear(100, 50) and Linear(50, 10)
recorder.children(model, indices=(2, 3))
# Train your network normally (or pass data through it)
...
# Save tensors (of shape 100 and 50) in the folder, named 1.pt and 2.pt respectively
recorder.save(pathlib.Path("./analysis"))
```
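The reduction-based recording idea can be sketched without PyTorch: keep one accumulator per registered layer and fold each new input into it with the supplied reduction. The class and method names below are hypothetical (torchfunc's real recorder hooks into module forward calls):

```python
class ReductionRecorder:
    """Sketch: accumulate per-layer values with a user-supplied reduction."""

    def __init__(self, reduction):
        self.reduction = reduction
        self.data = {}  # layer index -> accumulated value

    def record(self, index, value):
        # First value for a layer initializes the accumulator;
        # later values are folded in with the reduction.
        if index not in self.data:
            self.data[index] = value
        else:
            self.data[index] = self.reduction(self.data[index], value)


recorder = ReductionRecorder(reduction=lambda x, y: x + y)
# Simulate inputs reaching layer 2 on two forward passes, layer 3 on one
recorder.record(2, 10)
recorder.record(2, 5)
recorder.record(3, 1)
print(recorder.data)  # {2: 15, 3: 1}
```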
For performance tips, plotting and more, check the torchfunc documentation.
Installation
pip
Latest release:

```shell
pip install --user torchfunc
```

Nightly:

```shell
pip install --user torchfunc-nightly
```
Docker
A standalone CPU image and various GPU-enabled images are available at dockerhub.

For a CPU quickstart, issue:

```shell
docker pull szymonmaszke/torchfunc:18.04
```

Nightly builds are also available, just prefix the tag with `nightly_`. If you are going for a GPU image, make sure you have nvidia/docker installed and its runtime set.
Contributing
If you find an issue, or think some functionality would be useful to others and fits this library, please open a new Issue or create a Pull Request.
To get an overview of what one can do to help this project, see the Roadmap.