pytorch_runstats
Running/online statistics for PyTorch.
torch_runstats implements memory-efficient online reductions on tensors.
Notable features:
- Arbitrary sample shapes beyond single scalars
- Reduction over arbitrary dimensions of each sample
- "Batched"/"binned" reduction into multiple running tallies using a per-sample bin index. This can be useful, for example, for accumulating statistics over samples by some kind of "type" index, or for accumulating statistics per-graph in a pytorch_geometric-like batching scheme. (This feature uses, and is similar to, torch_scatter.)
Note: the implementations currently make heavy use of in-place operations for performance and memory efficiency. This probably doesn't play nice with the autograd engine — this is currently likely the wrong library for accumulating running statistics you want to backward through. (See TorchMetrics for a possible alternative.)
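To illustrate the binned-reduction idea described above, here is a minimal sketch of a per-bin running mean in NumPy. The class and method names here are hypothetical and chosen for illustration only; they are not torch_runstats' actual API, and the library's real implementation operates on PyTorch tensors with torch_scatter.

```python
import numpy as np

class BinnedRunningMean:
    """Sketch of a binned running mean: each sample carries a bin index,
    and one running tally is kept per bin. Illustrative only; not the
    torch_runstats API."""

    def __init__(self, n_bins: int, sample_dim: int):
        # Per-bin accumulators: running sums and sample counts.
        self._sum = np.zeros((n_bins, sample_dim))
        self._count = np.zeros(n_bins, dtype=np.int64)

    def accumulate_batch(self, batch: np.ndarray, bins: np.ndarray) -> None:
        # Scatter-add each sample into its bin's tally (the role
        # torch_scatter plays in the real library). np.add.at is the
        # unbuffered in-place equivalent of a scatter-add.
        np.add.at(self._sum, bins, batch)
        np.add.at(self._count, bins, 1)

    def current_result(self) -> np.ndarray:
        # Guard against division by zero for bins with no samples yet.
        count = np.maximum(self._count, 1)[:, None]
        return self._sum / count

# Accumulate two batches of 3-dimensional samples into 2 bins.
stats = BinnedRunningMean(n_bins=2, sample_dim=3)
stats.accumulate_batch(np.ones((4, 3)), np.array([0, 0, 1, 1]))
stats.accumulate_batch(3 * np.ones((2, 3)), np.array([0, 1]))
means = stats.current_result()  # each bin saw samples 1, 1, 3 -> mean 5/3
```

Because only the per-bin sums and counts are stored, memory use is independent of how many samples have been accumulated, which is the point of an online reduction.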
For more information, please see the docs.
Install
torch_runstats requires PyTorch and torch_scatter, but neither is specified in install_requires for pip, since both require manual installation for correct CUDA compatibility.
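Concretely, assuming PyTorch and torch_scatter have already been installed by hand for your CUDA setup, the package itself can then be installed from PyPI:

```shell
# Install PyTorch and torch_scatter first, following their own
# installation instructions for your CUDA version; then:
pip install torch_runstats
```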
License
pytorch_runstats is distributed under an MIT license.
Hashes for torch_runstats-0.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 22b0dfbb8fa9b1758602006875696b2a50fa0518ede9dc47f0e86e659807107c
MD5 | 3ed7bdde1c7012b8957f365ad20843ca
BLAKE2b-256 | e4a766bb7fc5be47d58f8ac2da627d1350cc8e0202a54e7ef38ccffc1a0fcd08