
Torch modules and utilities of equivariant/invariant learning


Symmetric Learning


A lightweight Python package for geometric deep learning built on ESCNN. This package provides:

  • Generic equivariant torch models and modules that are not present in ESCNN.
  • Linear algebra utilities when working with symmetric vector spaces.
  • Statistics utilities for symmetric random variables.

Installation

pip install symm-learning
# or
git clone https://github.com/Danfoa/symmetric_learning
cd symmetric_learning
pip install -e .

Structure


Linear Algebra

  • lstsq: Symmetry-aware computation of the least-squares solution to a linear system of equations with symmetric input-output data.
  • invariant_orthogonal_projector: Computes the orthogonal projection to the invariant subspace of a symmetric vector space.
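As a concept sketch of what `invariant_orthogonal_projector` computes (plain PyTorch, not the library's API): for an orthogonal representation $\rho$, the orthogonal projector onto the invariant subspace is the group average of the representation matrices.

```python
import torch

# Example group: C2 acting on R^2 by swapping the two coordinates.
rho = [torch.eye(2), torch.tensor([[0.0, 1.0], [1.0, 0.0]])]  # rho(e), rho(g)

# P = (1/|G|) * sum_g rho(g) is the orthogonal projector onto the
# invariant subspace (here, the line spanned by [1, 1]).
P = torch.stack(rho).mean(dim=0)

x = torch.tensor([3.0, 1.0])
x_inv = P @ x  # invariant component of x: [2., 2.]
```

Since `P` is a projector, `P @ P == P`, and `P @ x` is fixed by every group element.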

Statistics

  • var_mean: Symmetry-aware computation of the variance and mean of a symmetric random variable.
  • cov: Symmetry-aware computation of the covariance / cross-covariance of two symmetric random variables.
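A minimal sketch of the idea behind `var_mean` (plain PyTorch, not the library's API): for a random variable whose law is invariant under a group G, the true mean must lie in the invariant subspace, so the empirical mean can be symmetrized by averaging over the group orbit.

```python
import torch

# Example group: C2 acting on R^2 by swapping the two coordinates.
G_elements = [torch.eye(2), torch.tensor([[0.0, 1.0], [1.0, 0.0]])]

x = torch.tensor([[1.0, 3.0], [2.0, 6.0]])  # (n_samples, dim)
mean = x.mean(dim=0)                        # plain empirical mean: [1.5, 4.5]

# Symmetrize: mean_sym = (1/|G|) * sum_g rho(g) @ mean, which lies in the
# invariant subspace, as the true mean must.
mean_sym = torch.stack([g @ mean for g in G_elements]).mean(dim=0)  # [3., 3.]
```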

Models

  • iMLP: Invariant MLP for learning invariant functions.
  • eMLP: Equivariant MLP for learning equivariant functions.
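To illustrate the constraint each linear layer of an equivariant MLP must satisfy (a concept sketch in plain PyTorch, not the library's implementation): a linear map $W$ is G-equivariant iff $\rho_{out}(g) W = W \rho_{in}(g)$ for all $g$, and an arbitrary matrix can be projected onto the space of such maps by group averaging.

```python
import torch

# Example: C2 acting on both input and output R^2 by swapping coordinates.
rho = [torch.eye(2), torch.tensor([[0.0, 1.0], [1.0, 0.0]])]

W0 = torch.tensor([[1.0, 2.0], [3.0, 5.0]])  # unconstrained weight
# Project onto equivariant maps: W = (1/|G|) sum_g rho(g)^T @ W0 @ rho(g).
W = torch.stack([g.T @ W0 @ g for g in rho]).mean(dim=0)

# Equivariance check: rho(g) @ W == W @ rho(g) for every group element.
for g in rho:
    assert torch.allclose(g @ W, W @ g)
```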

Torch Modules

Change2DisentangledBasis

Module for changing the basis of a tensor to a disentangled / isotypic basis.
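As a toy illustration of a disentangled/isotypic basis (hand-computed, not the module's API): for C2 permuting two coordinates, the change of basis $Q$ block-diagonalizes the group action into its irreducible components.

```python
import torch

# Isotypic basis for C2 swapping two coordinates: the trivial irrep lives on
# span{[1, 1]} and the sign irrep on span{[1, -1]}.
Q = torch.tensor([[1.0, 1.0], [1.0, -1.0]]) / 2 ** 0.5

g = torch.tensor([[0.0, 1.0], [1.0, 0.0]])  # the coordinate swap
# In the disentangled basis the action is block-diagonal: diag(1, -1).
g_disentangled = Q @ g @ Q.T
```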

IrrepSubspaceNormPooling

Module for extracting invariant features from a geometric tensor, producing one invariant feature per irreducible subspace/representation.
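The idea can be sketched by hand (plain PyTorch, not the module's API): project the tensor onto each irreducible subspace and keep the norm of each projection, which is invariant under the group action.

```python
import torch

# C2 swapping two coordinates: irrep subspaces span{[1, 1]} and span{[1, -1]}.
Q = torch.tensor([[1.0, 1.0], [1.0, -1.0]]) / 2 ** 0.5

x = torch.tensor([3.0, 1.0])
feats = (Q @ x).abs()  # one (norm) feature per 1-dimensional irrep subspace

g = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
feats_g = (Q @ (g @ x)).abs()  # the same features for the transformed input
```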

eConv1D

Equivariant 1D convolutional layer for processing an array of multiple symmetric signals (e.g., a time series of a symmetric random variable). Given the feature spaces $\mathcal{X}$ and $\mathcal{Y}$, this layer takes an array of symmetric signals $x \in \mathcal{X}$ of shape $(\text{batch size}, |\mathcal{X}|, \text{H})$ and outputs an array of symmetric signals $y \in \mathcal{Y}$ of shape $(\text{batch size}, |\mathcal{Y}|, \text{H}_ {\text{out}})$, where $\text{H}$ is the 1D/time dimension of the input signals and $\text{H}_ {\text{out}}$ is the resulting 1D dimension after the convolution operation (see torch.nn.Conv1d for details).

To use it, follow the example below:

import torch
from escnn.group import DihedralGroup
from escnn.nn import FieldType
from symm_learning.nn import eConv1D, GSpace1D

G = DihedralGroup(10)
# Custom (hacky) 1D G-space needed to use `GeometricTensor`.
gspace = GSpace1D(G)  # Note: G does not act on points of the 1D base space.
in_type = FieldType(gspace, [G.regular_representation])
out_type = FieldType(gspace, [G.regular_representation] * 2)

H, kernel_size, batch_size = 10, 3, 5
# Inputs to Conv1d/eConv1D have shape (B, in_type.size, T), where B is the
# batch size and T is the time dimension.
x = in_type(torch.randn(batch_size, in_type.size, H))
# Instance of eConv1D
conv_layer = eConv1D(in_type, out_type, kernel_size=kernel_size, stride=1, padding=0, bias=True)
# Forward pass
y = conv_layer(x)  # (B, out_type.size, H_out)
# After training, you can export this `EquivariantModule` to a plain `torch.nn.Module`:
conv1d = conv_layer.export()
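The equivariance property the layer guarantees can also be checked numerically in plain PyTorch (a concept sketch with a hand-constrained kernel, not the library's implementation): when every kernel tap commutes with the channel representation, the convolution commutes with the group action.

```python
import torch
import torch.nn.functional as F

# Example: C2 permuting 2 channels; rho(g) is symmetric and its own inverse.
g = torch.tensor([[0.0, 1.0], [1.0, 0.0]])

torch.manual_seed(0)
W0 = torch.randn(2, 2, 3)  # (C_out, C_in, kernel_size)
# Constrain each tap k so that g @ W[:, :, k] == W[:, :, k] @ g.
W = 0.5 * (W0 + torch.einsum('im,mck,cj->ijk', g, W0, g))

x = torch.randn(1, 2, 10)  # (B, C_in, T)
y1 = F.conv1d(torch.einsum('ij,bjt->bit', g, x), W)  # act on input, then conv
y2 = torch.einsum('ij,bjt->bit', g, F.conv1d(x, W))  # conv, then act on output
```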

EquivMultivariateNormal

Utility layer to parameterize a G-equivariant multivariate Gaussian/Normal distribution:

$$
\begin{aligned}
y &\sim \mathcal{N}\bigl(\mu(x), \Sigma(x)\bigr) \\
\text{s.t.}\quad
\rho_Y(g)\,\mu(x) &= \mu\bigl(\rho_X(g)\, x\bigr) \\
\rho_Y(g)\,\Sigma(x)\,\rho_Y(g)^{\top} &= \Sigma\bigl(\rho_X(g)\, x\bigr),
\quad \forall\, g \in G.
\end{aligned}
$$

This ensures that the conditional probability distribution of $y$ given $x$ is invariant under the simultaneous group action on $\mathcal{X}$ and $\mathcal{Y}$:

$$ P(y \mid x) = P(\rho_Y(g)\, y \mid \rho_X(g)\, x) \quad \forall\, g \in G. $$

This means that if you want to parameterize a $G$-equivariant stochastic function $y = f(x)$ using neural networks, you can use any backbone architecture whose output provides the parameters of an EquivMultivariateNormal distribution, as shown below:

import torch
from escnn import gspaces
from escnn.group import CyclicGroup
from escnn.nn import FieldType
from symm_learning.models import EMLP
from symm_learning.nn import EquivMultivariateNormal

G = CyclicGroup(3)
x_type = FieldType(gspaces.no_base_space(G), representations=[G.regular_representation])
y_type = FieldType(gspaces.no_base_space(G), representations=[G.regular_representation])
# Instantiate the output equivariant multivariate normal distribution to get the NN output type
e_normal = EquivMultivariateNormal(y_type, diagonal=True)
# Instantiate your NN model to output the parameters of the distribution
nn = EMLP(in_type=x_type, out_type=e_normal.in_type)
# Sample from the distribution
x = x_type(torch.randn(10, x_type.size))
z = nn(x)  # (B, dim_y + n_dof_cov)
dist = e_normal.get_distribution(z)  # instance of torch.distributions.MultivariateNormal
y = dist.sample()  # (B, y_type.size)

Here, $z$ is a (batch_size, dim_y + n_dof_cov) tensor whose first dim_y entries define the mean $\mu(x)$ of the distribution, and whose remaining n_dof_cov entries define the free degrees of freedom of the symmetry-constrained covariance matrix.
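As a toy illustration of why the covariance has fewer free degrees of freedom than entries (hand-worked, not the library's parameterization): for C2 permuting the two components of $y$, an invariant diagonal covariance must share a single variance, so n_dof_cov = 1.

```python
import torch

g = torch.tensor([[0.0, 1.0], [1.0, 0.0]])  # C2 swapping the components of y

raw = torch.tensor(0.3)                  # single unconstrained NN output
var = torch.nn.functional.softplus(raw)  # map to a positive variance
Sigma = var * torch.eye(2)               # shared variance across the orbit

# The symmetry constraint rho(g) @ Sigma @ rho(g).T == Sigma holds by construction.
assert torch.allclose(g @ Sigma @ g.T, Sigma)
```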
