❤️ Lovely Tensors
Read full docs here
Install
pip install lovely-tensors
How to use
How often do you find yourself debugging PyTorch code? You dump a tensor to the cell output, and see this:
numbers
tensor([[[-0.3541, -0.3369, -0.4054, ..., -0.5596, -0.4739, 2.2489],
[-0.4054, -0.4226, -0.4911, ..., -0.9192, -0.8507, 2.1633],
[-0.4739, -0.4739, -0.5424, ..., -1.0390, -1.0390, 2.1975],
...,
[-0.9020, -0.8335, -0.9363, ..., -1.4672, -1.2959, 2.2318],
[-0.8507, -0.7822, -0.9363, ..., -1.6042, -1.5014, 2.1804],
[-0.8335, -0.8164, -0.9705, ..., -1.6555, -1.5528, 2.1119]],
[[-0.1975, -0.1975, -0.3025, ..., -0.4776, -0.3725, 2.4111],
[-0.2500, -0.2325, -0.3375, ..., -0.7052, -0.6702, 2.3585],
[-0.3025, -0.2850, -0.3901, ..., -0.7402, -0.8102, 2.3761],
...,
[-0.4251, -0.2325, -0.3725, ..., -1.0903, -1.0203, 2.4286],
[-0.3901, -0.2325, -0.4251, ..., -1.2304, -1.2304, 2.4111],
[-0.4076, -0.2850, -0.4776, ..., -1.2829, -1.2829, 2.3410]],
[[-0.6715, -0.9853, -0.8807, ..., -0.9678, -0.6890, 2.3960],
[-0.7238, -1.0724, -0.9678, ..., -1.2467, -1.0201, 2.3263],
[-0.8284, -1.1247, -1.0201, ..., -1.2641, -1.1596, 2.3786],
...,
[-1.2293, -1.4733, -1.3861, ..., -1.5081, -1.2641, 2.5180],
[-1.1944, -1.4559, -1.4210, ..., -1.6476, -1.4733, 2.4308],
[-1.2293, -1.5256, -1.5081, ..., -1.6824, -1.5256, 2.3611]]])
Was it really useful for you, as a human, to see all these numbers?
What is the shape?
What are the statistics?
Are any of the values nan or inf?
Is it an image of a man holding a tench?
import lovely_tensors as lt
lt.monkey_patch()
This patches the tensor __repr__:
numbers
tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073
Better, huh?
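Under the hood, this kind of patching is conceptually simple. Here is a minimal toy sketch of the general idea - simplified, and not the library's actual implementation:
import torch
# Toy sketch: replace the default __repr__ with a one-line summary.
# (No handling of empty tensors, nan/inf, non-float dtypes, etc. - just the idea.)
def summary_repr(t: torch.Tensor) -> str:
    f = t.float()
    return (f"tensor{list(t.shape)} n={t.numel()} "
            f"x∈[{f.min():.3f}, {f.max():.3f}] μ={f.mean():.3f} σ={f.std():.3f}")
torch.Tensor.__repr__ = summary_repr  # this is what a monkey-patch boils down to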
numbers[1,:6,1] # Still shows values if there are not too many.
tensor[6] x∈[-0.443, -0.197] μ=-0.311 σ=0.091 [-0.197, -0.232, -0.285, -0.373, -0.443, -0.338]
spicy = numbers.flatten()[:12].clone()
spicy[0] *= 10000
spicy[1] /= 10000
spicy[2] = float('inf')
spicy[3] = float('-inf')
spicy[4] = float('nan')
spicy = spicy.reshape((2,6))
spicy # Spicy stuff
tensor[2, 6] n=12 x∈[-3.541e+03, -3.369e-05] μ=-393.776 σ=1.180e+03 +inf! -inf! nan!
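If you want the same checks in plain PyTorch, the stock predicates are torch.isnan, torch.isposinf and torch.isneginf:
def special_flags(t):
    # Reproduce the +inf! -inf! nan! markers with stock PyTorch calls.
    flags = []
    if torch.isposinf(t).any(): flags.append("+inf!")
    if torch.isneginf(t).any(): flags.append("-inf!")
    if torch.isnan(t).any():    flags.append("nan!")
    return " ".join(flags)
special_flags(spicy) # '+inf! -inf! nan!'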
torch.zeros(10, 10) # A zero tensor - make it obvious
tensor[10, 10] n=100 all_zeros
spicy.verbose
tensor[2, 6] n=12 x∈[-3.541e+03, -3.369e-05] μ=-393.776 σ=1.180e+03 +inf! -inf! nan!
[[-3.5405e+03, -3.3693e-05, inf, -inf, nan, -4.0543e-01],
[-4.2255e-01, -4.9105e-01, -5.0818e-01, -5.5955e-01, -5.4243e-01, -5.0818e-01]]
spicy.plain # The plain old way
[[-3.5405e+03, -3.3693e-05, inf, -inf, nan, -4.0543e-01],
[-4.2255e-01, -4.9105e-01, -5.0818e-01, -5.5955e-01, -5.4243e-01, -5.0818e-01]]
Going .deeper
numbers.deeper
tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073
tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036
tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973
tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178
# You can go deeper if you need to
dt = numbers[0,:45,0].view(3,3,5)
dt.deeper(2)
tensor[3, 3, 5] n=45 x∈[-0.988, -0.337] μ=-0.538 σ=0.152
tensor[3, 5] n=15 x∈[-0.611, -0.354] μ=-0.526 σ=0.086
tensor[5] x∈[-0.474, -0.354] μ=-0.419 σ=0.044 [-0.354, -0.405, -0.474, -0.440, -0.423]
tensor[5] x∈[-0.611, -0.542] μ=-0.590 σ=0.028 [-0.611, -0.611, -0.594, -0.594, -0.542]
tensor[5] x∈[-0.611, -0.542] μ=-0.570 σ=0.038 [-0.542, -0.542, -0.542, -0.611, -0.611]
tensor[3, 5] n=15 x∈[-0.988, -0.337] μ=-0.597 σ=0.233
tensor[5] x∈[-0.988, -0.560] μ=-0.748 σ=0.199 [-0.560, -0.560, -0.714, -0.919, -0.988]
tensor[5] x∈[-0.971, -0.405] μ=-0.662 σ=0.235 [-0.971, -0.816, -0.645, -0.474, -0.405]
tensor[5] x∈[-0.474, -0.337] μ=-0.381 σ=0.056 [-0.388, -0.371, -0.337, -0.337, -0.474]
tensor[3, 5] n=15 x∈[-0.679, -0.423] μ=-0.490 σ=0.069
tensor[5] x∈[-0.491, -0.423] μ=-0.457 σ=0.027 [-0.457, -0.423, -0.440, -0.474, -0.491]
tensor[5] x∈[-0.491, -0.440] μ=-0.464 σ=0.020 [-0.491, -0.457, -0.440, -0.457, -0.474]
tensor[5] x∈[-0.679, -0.457] μ=-0.549 σ=0.094 [-0.474, -0.457, -0.525, -0.611, -0.679]
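A .deeper-style summary is essentially a recursive walk along dim 0. A hypothetical sketch, reusing summary_repr from the toy above (not the library's real code):
def deeper(t, depth=1, indent=0):
    # Print a summary line, then recurse over the first dimension.
    print(" " * indent + summary_repr(t))
    if depth > 0 and t.dim() > 1:
        for sub in t:
            deeper(sub, depth - 1, indent + 2)
deeper(dt, depth=2) # same 1 + 3 + 9 lines as above, just indented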
Now in .rgb color
The important question - is it our man?
numbers.rgb
Maaaaybe? Looks like someone normalized him.
in_stats = ( (0.485, 0.456, 0.406), # mean
(0.229, 0.224, 0.225) ) # std
numbers.rgb(in_stats)
It’s indeed our hero, the Tenchman!
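.rgb(in_stats) presumably un-normalizes with the given stats (x*std + mean) before displaying. Without the library, a rough matplotlib equivalent would be:
import matplotlib.pyplot as plt
mean = torch.tensor(in_stats[0])[:, None, None]
std = torch.tensor(in_stats[1])[:, None, None]
img = (numbers * std + mean).clamp(0, 1) # undo the channel-wise normalization
plt.imshow(img.permute(1, 2, 0)) # CHW -> HWC for matplotlib
plt.axis("off")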
.plt the statistics
(numbers+3).plt
(numbers+3).plt(center="mean", max_s=1000)
(numbers+3).plt(center="range")
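.plt draws the value distribution. A rough plain-matplotlib stand-in, minus the polish:
import matplotlib.pyplot as plt
# Histogram of all values - similar in spirit to .plt.
plt.hist((numbers + 3).flatten().numpy(), bins=100)
plt.xlabel("value"); plt.ylabel("count")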
See the .chans
# .chans will map values between [0,1] to colors.
# Make our values fit into that range to avoid clipping.
mean = torch.tensor(in_stats[0])[:,None,None]
std = torch.tensor(in_stats[1])[:,None,None]
numbers_01 = (numbers*std + mean)
numbers_01
tensor[3, 196, 196] n=115248 x∈[0., 1.000] μ=0.361 σ=0.248
numbers_01.chans
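Roughly the same effect with bare matplotlib - one panel per channel (the library maps values through a proper colormap; this sketch just uses grayscale):
import matplotlib.pyplot as plt
# One grayscale panel per channel; values are already in [0, 1].
fig, axs = plt.subplots(1, numbers_01.shape[0], figsize=(12, 4))
for ax, chan in zip(axs, numbers_01):
    ax.imshow(chan, cmap="gray", vmin=0, vmax=1)
    ax.axis("off")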
Let’s try with a Convolutional Neural Network
from torchvision.models import vgg11, VGG11_Weights
features = vgg11().features
# Note: I only saved the first 5 layers in "features.pt"
_ = features.load_state_dict(torch.load("../features.pt"), strict=False)
# Activations of the second max pool layer of VGG11
print(features[5])
MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
acts = (features[:6](numbers[None])[0]/2) # /2 to reduce clipping
acts
tensor[128, 49, 49] n=307328 x∈[0., 12.508] μ=0.367 σ=0.634 grad DivBackward0
acts.chans
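Slicing features[:6] re-runs the first six layers by hand. Another common way to grab intermediate activations - shown here as an alternative sketch, not something the library requires - is a forward hook:
# Capture the pooling layer's output as the full model runs.
store = {}
def save_output(module, inputs, output):
    store["pool2"] = output.detach()
handle = features[5].register_forward_hook(save_output)
_ = features(numbers[None])
handle.remove()
store["pool2"] # the same activations, captured in flight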
Without .monkey_patch
lt.lovely(spicy)
tensor[2, 6] n=12 x∈[-3.541e+03, -3.369e-05] μ=-393.776 σ=1.180e+03 +inf! -inf! nan!
lt.lovely(spicy, verbose=True)
tensor[2, 6] n=12 x∈[-3.541e+03, -3.369e-05] μ=-393.776 σ=1.180e+03 +inf! -inf! nan!
[[-3.5405e+03, -3.3693e-05, inf, -inf, nan, -4.0543e-01],
[-4.2255e-01, -4.9105e-01, -5.0818e-01, -5.5955e-01, -5.4243e-01, -5.0818e-01]]
lt.lovely(numbers, depth=1)
tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073
tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036
tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973
tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178
lt.rgb(numbers, in_stats)
lt.plot(numbers, center="mean")
lt.chans(numbers_01)
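Since these are plain functions, they slot nicely into logging. A small usage sketch (using the features model from above):
# Summarize every parameter without patching anything.
for name, p in features.named_parameters():
    print(name, lt.lovely(p))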