Memory Wrap: an extension for image classification models

Project description

Description

Memory Wrap is an extension to image classification models that improves both data efficiency and model interpretability by adopting a sparse content-based attention mechanism between the input and a set of memories of past training samples.

Installation

This is a PyTorch implementation of Memory Wrap. To install it, run the following command:

pip install memorywrap

The library contains two main classes:

  • MemoryWrapLayer: the Memory Wrap variant described in the paper, which uses both the input encoding and the memory encoding to compute the output;
  • BaselineMemory: the baseline variant, which uses only the memory encoding to compute the output.

Usage

Instantiate the layer

memorywrap = MemoryWrapLayer(encoder_output_dim, output_dim, head=None, classifier=None, distance='cosine')

or, for the baseline that uses only the memory encoding to compute the prediction:

memorywrap = BaselineMemory(encoder_output_dim, output_dim, head=None, classifier=None, distance='cosine')

where

  • encoder_output_dim (int) is the output dimension of the last layer of the encoder

  • output_dim (int) is the desired output dimension. In the paper, output_dim is equal to the number of classes;

  • head (torch.nn.Module): read head used to project the key and query. It can be a linear or non-linear layer. Its input dimension must be equal to encoder_output_dim. If None, it defaults to a linear layer with input and output dimensions equal to encoder_output_dim. (See https://www.nature.com/articles/nature20101 for further information.)

  • classifier (torch.nn.Module): classifier on top of Memory Wrap. Its input dimension must be equal to encoder_output_dim*2 for MemoryWrapLayer and to encoder_output_dim for BaselineMemory. By default it is the MLP described in the paper. An alternative is a linear layer (e.g. torch.nn.Linear(encoder_output_dim*2, output_dim)). Default: torch.nn.Sequential(torch.nn.Linear(encoder_output_dim*2, encoder_output_dim*4), torch.nn.ReLU(), torch.nn.Linear(encoder_output_dim*4, output_dim))

  • distance (str): distance used to compute the similarity between the input and the memory set. Allowed values are cosine, l2, and dot, for cosine similarity, L2 distance, and dot-product distance, respectively. Default: cosine. A minimal instantiation sketch follows this list.
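As a concrete illustration, here is a minimal instantiation sketch. The 1280-dimensional encoder output and the class count are assumptions made for this example, not library defaults:

import torch
from memorywrap import MemoryWrapLayer, BaselineMemory

encoder_output_dim = 1280  # illustrative: the feature size of your encoder
num_classes = 10           # illustrative: output_dim set to the number of classes

# Default configuration: linear head, MLP classifier, cosine similarity
memorywrap = MemoryWrapLayer(encoder_output_dim, num_classes)

# Variant with a plain linear classifier. MemoryWrapLayer concatenates the
# input and memory encodings, so the classifier input is encoder_output_dim*2.
memorywrap_linear = MemoryWrapLayer(
    encoder_output_dim,
    num_classes,
    classifier=torch.nn.Linear(encoder_output_dim * 2, num_classes),
)

# Baseline that uses only the memory encoding; its classifier input
# dimension is encoder_output_dim (not doubled).
baseline = BaselineMemory(encoder_output_dim, num_classes)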

Forward call

Add the forward call to your forward function.

output_memorywrap = memorywrap(input_encoding, memory_encoding, return_weights=False)

where input_encoding and memory_encoding are the encoder outputs for the current input and for the memory set, respectively.
The last argument of Memory Wrap's call function is a boolean flag controlling the number of outputs returned: if the flag is True, the layer returns both the output and the sparse attention weights associated with each memory sample; if the flag is False, the layer returns only the output.
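As a self-contained example, the sketch below wraps a toy encoder with Memory Wrap and runs a full forward pass. The WrappedClassifier module, the toy encoder, and the tensor shapes are hypothetical illustrations, not part of the library:

import torch
import torch.nn as nn
from memorywrap import MemoryWrapLayer

class WrappedClassifier(nn.Module):
    # Hypothetical wrapper: an encoder followed by a Memory Wrap layer.
    def __init__(self, encoder, encoder_output_dim, num_classes):
        super().__init__()
        self.encoder = encoder
        self.memorywrap = MemoryWrapLayer(encoder_output_dim, num_classes)

    def forward(self, x, memory_set, return_weights=False):
        input_encoding = self.encoder(x)            # (batch, encoder_output_dim)
        memory_encoding = self.encoder(memory_set)  # (memory_size, encoder_output_dim)
        return self.memorywrap(input_encoding, memory_encoding, return_weights)

# Toy encoder for illustration: flattens 3x32x32 images into 128 features.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = WrappedClassifier(encoder, encoder_output_dim=128, num_classes=10)

x = torch.randn(8, 3, 32, 32)             # current input batch
memory_set = torch.randn(100, 3, 32, 32)  # stand-in for training samples

logits, weights = model(x, memory_set, return_weights=True)
# logits has shape (8, 10); weights holds one sparse attention weight per
# memory sample for each input, i.e. shape (8, 100)

Here the memory set is a random tensor only so that the snippet runs on its own; in practice it is a batch of past training samples, consistent with the description above.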

Additional information

Here you can find links to additional sources of information about Memory Wrap:

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

memorywrap-1.1.5.tar.gz (4.4 kB)

Uploaded Source

Built Distribution

memorywrap-1.1.5-py3-none-any.whl (4.8 kB)

Uploaded Python 3

File details

Details for the file memorywrap-1.1.5.tar.gz.

File metadata

  • Download URL: memorywrap-1.1.5.tar.gz
  • Size: 4.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.12

File hashes

Hashes for memorywrap-1.1.5.tar.gz
  • SHA256: c71a6a541152b685ff265d4bd58bddf971ce3cf9c2de77298d57946709df13e8
  • MD5: 3b7a06679b131dd5e0f0fefb5286f1a8
  • BLAKE2b-256: cab55d23976760e3cc0d5fa3a6001447b1a9b0069ba9fbcd949c6bb0c42d0148


File details

Details for the file memorywrap-1.1.5-py3-none-any.whl.

File metadata

  • Download URL: memorywrap-1.1.5-py3-none-any.whl
  • Size: 4.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.12

File hashes

Hashes for memorywrap-1.1.5-py3-none-any.whl
  • SHA256: e8987a151ba1ba98e1649acd2ae950c8a6dd34997d8a74722ca3746dfed5c99b
  • MD5: 5e32d2765c6fa9d4b4670e2f80037e72
  • BLAKE2b-256: a91e6685d4bd98de47a679870159bac82843b0033ea4ed1a00fac07c44d0cef9

