Machine Learning
Project description
Table of Contents
- 1 metrics
- 2 model
- 3 multi_transforms
- 3.1 BGR2GRAY : MultiTransform
- 3.2 BGR2RGB : MultiTransform
- 3.3 Brightness : MultiTransform
- 3.4 CenterCrop : MultiTransform
- 3.5 Color : MultiTransform
- 3.6 Compose : builtins.object
- 3.7 GaussianNoise : MultiTransform
- 3.8 MultiTransform : builtins.object
- 3.9 Normalize : MultiTransform
- 3.10 RGB2BGR : BGR2RGB
- 3.11 RandomCrop : MultiTransform
- 3.12 RandomHorizontalFlip : MultiTransform
- 3.13 RandomVerticalFlip : MultiTransform
- 3.14 Resize : MultiTransform
- 3.15 Rotate : MultiTransform
- 3.16 Satturation : MultiTransform
- 3.17 Scale : MultiTransform
- 3.18 Stack : MultiTransform
- 3.19 ToCVImage : MultiTransform
- 3.20 ToNumpy : MultiTransform
- 3.21 ToPILImage : MultiTransform
- 3.22 ToTensor : MultiTransform
- 4 run
1 metrics
1.1 F1_Score
Description
F1 Score. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
F1 Score : float
Equations
$precision = \frac{TP}{TP + FP}$
$recall = \frac{TP}{TP + FN}$
$F_1 = \frac{2 \cdot precision \cdot recall}{precision + recall} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}$
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
f1score = m.F1_Score(Y, T)
print(f1score) --> 0.8333333333333334
1.2 FN
Description
False negatives. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
False negatives : int
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
fn = m.FN(Y, T)
print(fn) -> 1
1.3 FP
Description
False positives. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
False positives : int
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
fp = m.FP(Y, T)
print(fp) -> 1
1.4 FPR
Description
False positive rate. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
False positive rate : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
fpr = m.FPR(Y, T)
print(fpr) -> 0.08333333333333333
1.5 TN
Description
True negatives. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
True negatives : int
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
tn = m.TN(Y, T)
print(tn) -> 11
1.6 TP
Description
True positives. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
True positives : int
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
tp = m.TP(Y, T)
print(tp) -> 5
1.7 TPR
Description
True positive rate. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
True positive rate : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
tpr = m.TPR(Y, T)
print(tpr) -> 0.8333333333333334
1.8 confusion_matrix
Description
Calculates the confusion matrix. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Confusion matrix : torch.Tensor
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
conf_mat = m.confusion_matrix(Y, T)
print(conf_mat) -> tensor([
[1, 1, 0],
[0, 2, 0],
[0, 0, 2]
])
1.9 plot_confusion_matrix
Description
Plot the confusion matrix
Parameters
Name | Type | Description |
---|---|---|
confusion_matrix | torch.Tensor | Confusion matrix |
labels | str, optional, default = None | Class labels. If None, classes are labeled automatically as C000, ..., CXXX |
cmap | str, optional, default = 'Blues' | Seaborn cmap, see https://r02b.github.io/seaborn_palettes/ |
xlabel | str, optional, default = 'Predicted label' | X-Axis label |
ylabel | str, optional, default = 'True label' | Y-Axis label |
title | str, optional, default = 'Confusion Matrix' | Title of the plot |
plt_show | bool, optional, default = False | Set to True to show the plot |
save_file_name | str, optional, default = None | If not None, the plot is saved under the specified save_file_name. |
Returns
Image of the confusion matrix : np.array
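Example
A minimal sketch based on the parameters listed above; the keyword values other than the confusion matrix are optional and shown only for illustration.
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
conf_mat = m.confusion_matrix(Y, T)
# Without labels, classes are labeled automatically as C000, C001, ...
img = m.plot_confusion_matrix(conf_mat, title='Confusion Matrix', plt_show=True)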
1.10 precision
Description
Precision. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Precision : float
Equations
$precision = \frac{TP}{TP + FP}$
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
precision = m.precision(Y, T)
print(precision) -> 0.8333333333333334
1.11 recall
Description
Recall. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Recall : float
Equations
$recall = \frac{TP}{TP + FN}$
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
recall = m.recall(Y, T)
print(recall) -> 0.8333333333333334
1.12 top_10_accuracy
Description
Top 10 accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top 10 accuracy (top k accuracy with k = 10) : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_10_accuracy = m.top_10_accuracy(Y, T, k = 3)
print(top_10_accuracy) --> 1.0
1.13 top_1_accuracy
Description
Top 1 accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top 1 accuracy (top k accuracy with k = 1) : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_1_accuracy = m.top_1_accuracy(Y, T, k = 3)
print(top_1_accuracy) --> 0.8333333333333334
1.14 top_2_accuracy
Description
Top 2 accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top 2 accuracy (top k accuracy with k = 2) : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_2_accuracy = m.top_2_accuracy(Y, T, k = 3)
print(top_2_accuracy) --> 1.0
1.15 top_3_accuracy
Description
Top 3 accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top 3 accuracy (top k accuracy with k = 3) : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_3_accuracy = m.top_3_accuracy(Y, T, k = 3)
print(top_3_accuracy) --> 1.0
1.16 top_5_accuracy
Description
Top 5 accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top 5 accuracy (top k accuracy with k = 5) : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_5_accuracy = m.top_5_accuracy(Y, T, k = 3)
print(top_5_accuracy) --> 1.0
1.17 top_k_accuracy
Description
Top k accuracy. Expected input shape: (batch_size, num_classes)
Parameters
Name | Type | Description |
---|---|---|
Y | torch.Tensor | Prediction |
T | torch.Tensor | True values |
Returns
Top k accuracy : float
Example
import rsp.ml.metrics as m
import torch
Y = torch.tensor([
[0.1, 0.1, 0.8],
[0.03, 0.95, 0.02],
[0.05, 0.9, 0.05],
[0.01, 0.87, 0.12],
[0.04, 0.03, 0.93],
[0.94, 0.02, 0.06]
])
T = torch.tensor([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]
])
top_k_accuracy = m.top_k_accuracy(Y, T, k = 3)
print(top_k_accuracy) --> 1.0
2 model
2.1 Constants
Name | Value | Description |
---|---|---|
TUC_ActionPrediction_model004 | TUC/ActionPrediction/Model4 | TUC Action prediction model 4 CNN with Multihead-Self-Attention Input - batch size - sequence length = 30 - channels = 3 - width = 200 - height = 200 Output - batch size - number of classes = 10 |
TUC_ActionPrediction_model005 | TUC/ActionPrediction/Model5 | TUC Action prediction model 5 CNN with Multihead-Self-Attention Input - batch size - sequence length = 30 - channels = 3 - width = 300 - height = 300 Output - batch size - number of classes = 10 |
URL | https://drive.google.com/drive/folders/1ulNnPqg-5wvenRl2CuJMxMMcaiYfHjQ9?usp=share_link | Google Drive URL |
2.2 load_model
Description
Loads a pretrained PyTorch model from an external source into memory.
See Constants for available models
Parameters
Name | Type | Description |
---|---|---|
model_id | str | ID of the model |
force_reload | bool | Overwrite local file -> forces download. |
Returns
Pretrained PyTorch model : torch.nn.Module
Example
import rsp.ml.model as model
model004 = model.load_model(model.TUC_ActionPrediction_model004)
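A minimal inference sketch, assuming the input ordering (batch size, sequence length, channels, width, height) listed in the Constants table; the batch size of 1 and the random frames are placeholders.
import rsp.ml.model as model
import torch
model004 = model.load_model(model.TUC_ActionPrediction_model004)
X = torch.rand(1, 30, 3, 200, 200)   # batch_size x sequence_length=30 x channels=3 x width=200 x height=200
model004.eval()
with torch.no_grad():
    Y = model004(X)                  # expected output shape: (batch_size, 10)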
3 multi_transforms
3.1 BGR2GRAY : MultiTransform
Description
Converts a sequence of BGR images to grayscale images
3.1.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.1.2 __init__
Description
Initializes a new instance.
3.2 BGR2RGB : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.2.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.2.2 __init__
Description
Initializes a new instance.
3.3 Brightness : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.3.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.3.2 __init__
Description
Initializes a new instance.
3.4 CenterCrop : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.4.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.4.2 __init__
Description
Initializes a new instance.
3.5 Color : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.5.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.5.2 __init__
Description
Initializes a new instance.
3.6 Compose : builtins.object
Description
Composes several MultiTransforms together.
Example
import rsp.ml.multi_transforms as t
transforms = t.Compose([
t.BGR2GRAY(),
t.Scale(0.5)
])
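A minimal usage sketch: the composed transform is applied to a whole image sequence at once, so every frame receives the same transformation parameters. The dummy frames below are placeholders.
import rsp.ml.multi_transforms as t
import numpy as np
transforms = t.Compose([
    t.BGR2GRAY(),
    t.Scale(0.5)
])
frames = [np.random.randint(0, 255, (200, 200, 3), dtype=np.uint8) for _ in range(30)]   # dummy BGR frames
transformed = transforms(frames)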
3.6.1 __call__
Description
Call self as a function.
3.6.2 __init__
Description
Initializes a new instance.
Parameters
Name | Type | Description |
---|---|---|
children | List[MultiTransform] | List of MultiTransforms to compose. |
3.7 GaussianNoise : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.7.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.7.2 __init__
Description
Initializes a new instance.
3.8 MultiTransform : builtins.object
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.8.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.8.2 __init__
Description
Initializes a new instance.
3.9 Normalize : MultiTransform
Description
Normalize images with mean and standard deviation. Given mean: (mean[1],...,mean[n]) and std: (std[1],...,std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel]
Based on torchvision.transforms.Normalize
3.9.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.9.2 __init__
Description
Initializes a new instance.
Parameters
Name | Type | Description |
---|---|---|
mean | List[float] | Sequence of means for each channel. |
std | List[float] | Sequence of standard deviations for each channel. |
inplace | bool | Set to True to make this operation in-place. |
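Example
A minimal sketch, assuming the frames are converted to tensors before normalization; the mean and std values are the common ImageNet statistics and serve only as placeholders.
import rsp.ml.multi_transforms as t
transforms = t.Compose([
    t.ToTensor(),
    t.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])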
3.10 RGB2BGR : BGR2RGB
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.10.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.10.2 __init__
Description
Initializes a new instance.
3.11 RandomCrop : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.11.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.11.2 __init__
Description
Test
3.12 RandomHorizontalFlip : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.12.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.12.2 __init__
Description
Initializes a new instance.
3.13 RandomVerticalFlip : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.13.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.13.2 __init__
Description
Initializes a new instance.
3.14 Resize : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.14.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.14.2 __init__
Description
Initializes a new instance.
3.15 Rotate : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.15.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.15.2 __init__
Description
Initializes a new instance.
3.16 Satturation : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.16.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.16.2 __init__
Description
Initializes a new instance.
3.17 Scale : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.17.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.17.2 __init__
Description
Initializes a new instance.
3.18 Stack : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.18.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.18.2 __init__
Description
Initializes a new instance.
3.19 ToCVImage : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.19.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.19.2 __init__
Description
Initializes a new instance.
3.20 ToNumpy : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.20.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.20.2 __init__
Description
Initializes a new instance.
3.21 ToPILImage : MultiTransform
Description
MultiTransform is an extension to keep the same transformation over a sequence of images instead of initializing a new transformation for every single image. It is inspired by torchvision.transforms
and could be used for video augmentation. Use rsp.ml.multi_transforms.Compose
to combine multiple image sequence transformations.
Note
rsp.ml.multi_transforms.MultiTransform
is a base class and should be inherited.
3.21.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.21.2 __init__
Description
Initializes a new instance.
3.22 ToTensor : MultiTransform
Description
Converts a sequence of images to torch.Tensor.
3.22.1 __call__
Description
Call self as a function.
Parameters
Name | Type | Description |
---|---|---|
input | torch.Tensor, List[PIL.Image], List[numpy.array] | Sequence of images |
3.22.2 __init__
Description
Initializes a new instance.
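Example
A minimal sketch, assuming a list of PIL images as input (see __call__ above); the blank frames are placeholders.
import rsp.ml.multi_transforms as t
import numpy as np
from PIL import Image
images = [Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8)) for _ in range(8)]   # dummy frames
to_tensor = t.ToTensor()
sequence = to_tensor(images)   # torch.Tensor holding the whole sequence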
4 run
4.1 Run : builtins.object
4.1.1 __init__
Description
Initialize self. See help(type(self)) for accurate signature.
4.1.2 append
4.1.3 get_avg
4.1.4 get_val
4.1.5 len
4.1.6 load_best_state_dict
4.1.7 load_state_dict
4.1.8 pickle_dump
4.1.9 pickle_load
4.1.10 plot
4.1.11 recalculate_moving_average
4.1.12 save
4.1.13 save_best_state_dict
4.1.14 save_state_dict