SelfProjection Module for PyTorch
Overview
The SelfProjection module is a PyTorch-based neural network layer designed to transform and project high-dimensional data. It is particularly useful in contexts requiring sophisticated analysis and representation of feature relationships, such as outputs from Transformer models.
Approach
The SelfProjection module employs a dual projection mechanism to process input tensors, capturing different perspectives of the data. Key aspects include:
- Original and Permuted Projections: The module processes the input tensor in its original form and a permuted form, creating two distinct representations.
- Relational Interference: By computing relational matrices, the module captures the interplay between different projections, emphasizing the relationships between data dimensions.
- Normalization: Custom normalization steps, which involve mean subtraction and variance scaling similar to Layer Normalization, are applied to the projections, ensuring stable feature scaling and potentially improved model performance.
- Trainable Parameters: The module includes several trainable parameters, allowing it to learn optimal feature transformations during training.
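The mechanism above can be sketched roughly as follows. Note that this is an illustrative reimplementation of the listed ideas, not the actual SelfProjection source: the class name, the weight shapes, and the way the relational matrix is combined with the projections are all assumptions.

```python
import torch

# Illustrative sketch only -- NOT the actual SelfProjection implementation.
class DualProjectionSketch(torch.nn.Module):
    def __init__(self, size_input, size_projection):
        super().__init__()
        rows, cols = size_input
        p = size_projection
        # Trainable parameters: one pair of projections per view (assumed layout).
        self.w_rows_o = torch.nn.Parameter(torch.randn(p, rows) * 0.02)
        self.w_cols_o = torch.nn.Parameter(torch.randn(cols, p) * 0.02)
        self.w_rows_p = torch.nn.Parameter(torch.randn(p, cols) * 0.02)
        self.w_cols_p = torch.nn.Parameter(torch.randn(rows, p) * 0.02)

    @staticmethod
    def _normalize(x, eps=1e-5):
        # Mean subtraction and variance scaling, akin to Layer Normalization.
        mean = x.mean(dim=(-2, -1), keepdim=True)
        var = x.var(dim=(-2, -1), keepdim=True, unbiased=False)
        return (x - mean) / torch.sqrt(var + eps)

    def forward(self, x):
        # Original view: [batch, rows, cols] -> [batch, p, p]
        original = self._normalize(self.w_rows_o @ x @ self.w_cols_o)
        # Permuted view: transpose the two trailing dims, then project.
        permuted = self._normalize(self.w_rows_p @ x.transpose(-2, -1) @ self.w_cols_p)
        # Relational matrix capturing the interplay between the two projections.
        relations = self._normalize(original @ permuted)
        projected = self._normalize(original + relations)
        return projected, relations


x = torch.randn(4, 16, 32)
projected, relations = DualProjectionSketch((16, 32), 8)(x)
print(projected.shape, relations.shape)
# torch.Size([4, 8, 8]) torch.Size([4, 8, 8])
```

The key point is that both outputs live in the `size_projection x size_projection` space regardless of the input's trailing dimensions, which matches the shapes shown in the usage example below.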
Installation
From source:
To install the SelfProjection module, clone this repository and install it in editable mode:

```shell
git clone https://github.com/Sombressoul/self-projection ./self_projection
python -m pip install -e ./self_projection
```
Usage
Here's a simple example of how to use SelfProjection with PyTorch:

```python
import torch

from self_projection import SelfProjection

# Define the input tensor dimensions and projection size
batch_size, sequence_length, embedding_dim = 32, 64, 256
input_tensor = torch.randn((batch_size, sequence_length, embedding_dim))
size_projection = 128

# Initialize the SelfProjection module
self_projection = SelfProjection(
    size_input=input_tensor.size()[1::],
    size_projection=size_projection,
)

# Apply the module to the input tensor
projected, relations = self_projection(input_tensor)

print(projected.shape)
# >>> torch.Size([32, 128, 128])
print(relations.shape)
# >>> torch.Size([32, 128, 128])
```
Evaluation
The SelfProjection module has been evaluated using the MNIST dataset under various conditions to test its efficiency in spatial feature extraction and overall performance.
eval_mnist.py contains evaluation code for the MNIST dataset. By default, it is set to extreme conditions, with heavy projection reduction (4x4) and a high dropout rate (0.75).
Experimental Setup
Two key experimental setups were employed:
- Heavy Reduction with High Dropout: Initial tests with a projection size of 4x4 and a high dropout rate of 0.75 demonstrated the module's capability to still achieve an accuracy of 53% on the MNIST test set. This setup was particularly challenging due to the substantial dimensionality reduction and high dropout rate, putting a stress test on the feature extraction abilities of SelfProjection.
- Standard Conditions: Further evaluation under more conventional conditions, with the projection size increased to 8x8 and a moderate dropout rate of 0.25, showed a significant improvement. The model achieved 95% accuracy on the MNIST test set, aligning well with standard benchmarks. This result highlights the effectiveness of the SelfProjection module when integrated into a neural network architecture under typical operating conditions.
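As a rough illustration of how these two setups differ in the feature budget they leave the classifier, here is a hedged sketch of a projection-then-classify pipeline. `ProjectionStandIn` and `make_classifier` are hypothetical placeholders so the example is self-contained; the actual eval_mnist.py uses the real SelfProjection module and may be structured quite differently.

```python
import torch

# Stand-in for a SelfProjection-style layer (placeholder, not the real module):
# projects a [B, rows, cols] grid down to [B, p, p].
class ProjectionStandIn(torch.nn.Module):
    def __init__(self, size_input, p):
        super().__init__()
        rows, cols = size_input
        self.w_rows = torch.nn.Parameter(torch.randn(p, rows) * 0.02)
        self.w_cols = torch.nn.Parameter(torch.randn(cols, p) * 0.02)

    def forward(self, x):
        return self.w_rows @ x @ self.w_cols  # [B, p, p]

def make_classifier(size_projection, dropout_rate):
    # 28x28 MNIST images -> p x p projection -> 10 class logits.
    return torch.nn.Sequential(
        ProjectionStandIn((28, 28), size_projection),
        torch.nn.Flatten(),
        torch.nn.Dropout(dropout_rate),
        torch.nn.Linear(size_projection * size_projection, 10),
    )

# "Extreme" setup: a 4x4 projection with dropout 0.75 leaves only 16 features
# for the classifier head; the "standard" 8x8 / 0.25 setup leaves 64.
extreme = make_classifier(4, 0.75)
standard = make_classifier(8, 0.25)
logits = standard(torch.randn(32, 28, 28))
print(logits.shape)
# torch.Size([32, 10])
```

Seen this way, the 53% vs. 95% gap tracks how aggressively the projection compresses the spatial information available to the classification head.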
Insights
These evaluations indicate that the SelfProjection module is capable of extracting meaningful and robust features from the input data, even under stringent constraints. The improvement in performance with larger projection size and lower dropout rate suggests the module's potential in various application scenarios, especially in tasks requiring sophisticated data representation and processing.

Further experiments, including comparisons with baseline models and testing on more complex datasets, are planned to continue exploring the capabilities and optimizing the performance of the SelfProjection module.
Contribution
Contributions to the SelfProjection module are welcome. Please submit a pull request or open an issue if you have suggestions or improvements.