Torch modules and utilities of equivariant/invariant learning

Symmetric Learning

A lightweight Python package for geometric deep learning built on ESCNN. The package provides:

  • Generic equivariant torch models and modules that are not present in ESCNN.
  • Linear algebra utilities when working with symmetric vector spaces.
  • Statistics utilities for symmetric random variables.

Installation

pip install symm-learning
# or
git clone https://github.com/Danfoa/symmetric_learning
cd symmetric_learning
pip install -e .

Structure


Linear Algebra

  • lstsq: Symmetry-aware computation of the least-squares solution to a linear system of equations with symmetric input-output data.
  • invariant_orthogonal_projector: Computes the orthogonal projection to the invariant subspace of a symmetric vector space.
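For intuition, the orthogonal projector onto the invariant subspace of a finite-group representation is simply the group average of the representation matrices, $P = \frac{1}{|G|}\sum_{g \in G} \rho(g)$. Below is a minimal NumPy sketch of this idea; `invariant_projector` is a hypothetical helper for illustration, not the library's `invariant_orthogonal_projector`:

```python
import numpy as np

# Illustrative only: the orthogonal projector onto the invariant subspace
# of a finite-group representation is the group average of rho(g).
def invariant_projector(reps):
    return sum(reps) / len(reps)

# C3 acting on R^3 by cyclic permutation of coordinates.
rho = [np.roll(np.eye(3), k, axis=0) for k in range(3)]
P = invariant_projector(rho)

x = np.array([1.0, 2.0, 3.0])
x_inv = P @ x   # projection onto span{(1,1,1)}: every entry becomes the mean
print(x_inv)    # [2. 2. 2.]
```

Note that `P` is idempotent (`P @ P == P`), as any orthogonal projector must be.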

Statistics

  • var_mean: Symmetry-aware computation of the variance and mean of a symmetric random variable.
  • cov: Symmetry-aware computation of the covariance / cross-covariance of two symmetric random variables.
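The idea behind a symmetry-aware mean can be sketched in a few lines of NumPy. This shows the underlying math only, under the assumption that the estimator enforces the symmetry constraint by group averaging; it is not the library's `var_mean` implementation:

```python
import numpy as np

# If x and rho(g) x are identically distributed for all g in G, then E[x]
# lies in the invariant subspace. An empirical mean can therefore be
# group-averaged to enforce the constraint exactly.
rng = np.random.default_rng(0)
rho = [np.roll(np.eye(3), k, axis=0) for k in range(3)]  # C3 permutations

x = rng.normal(size=(100, 3))
naive_mean = x.mean(axis=0)
symm_mean = sum(R @ naive_mean for R in rho) / len(rho)

# The symmetry-aware mean is exactly invariant under every group element.
assert all(np.allclose(R @ symm_mean, symm_mean) for R in rho)
```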

Models

  • iMLP: Invariant MLP for learning invariant functions.
  • eMLP: Equivariant MLP for learning equivariant functions.

Torch Modules

Change2DisentangledBasis

Module for changing the basis of a tensor to a disentangled / isotypic basis.
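As a conceptual sketch (not the library code): for the regular representation of a cyclic group, the disentangled/isotypic basis is the Fourier basis, in which every representation matrix becomes block-diagonal. The unitary DFT matrix plays the role of the change-of-basis:

```python
import numpy as np

# For the regular representation of C_n, the unitary DFT matrix Q
# block-diagonalizes every rho(g), decoupling the irreducible subspaces.
n = 4
Q = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT change of basis
shift = np.roll(np.eye(n), 1, axis=0)       # a generator of C_4

D = Q @ shift @ Q.conj().T                  # representation in the new basis
assert np.allclose(D, np.diag(np.diag(D)))  # diagonal (1-dim irreps over C)
```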

IrrepSubspaceNormPooling

Module for extracting invariant features from a geometric tensor, giving one feature per irreducible subspace/representation.
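A toy version of this pooling idea in NumPy (assumed behavior for illustration, not the library code): for the regular representation of $C_n$, the isotypic subspaces are the Fourier modes, so taking the norm of each mode yields one invariant feature per irreducible subspace:

```python
import numpy as np

# One nonnegative feature per frequency / irreducible subspace of C_n.
def irrep_norms(x):
    return np.abs(np.fft.rfft(x))

x = np.array([1.0, 2.0, 3.0, 4.0])
feats = irrep_norms(x)
shifted = irrep_norms(np.roll(x, 1))  # apply a group element (cyclic shift)
assert np.allclose(feats, shifted)    # the features are G-invariant
```

A cyclic shift only multiplies each Fourier coefficient by a unit-modulus phase, so the per-mode norms are unchanged.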

eConv1D

Equivariant 1D convolutional layer for processing an array of symmetric signals (e.g., a time series of a symmetric random variable). Given the feature spaces $\mathcal{X}$ and $\mathcal{Y}$, this layer takes an array of symmetric signals $x \in \mathcal{X}$ of shape $(\text{batch size}, |\mathcal{X}|, \text{H})$ and outputs an array of symmetric signals $y \in \mathcal{Y}$ of shape $(\text{batch size}, |\mathcal{Y}|, \text{H}_{\text{out}})$, where $\text{H}$ is the 1D/time dimension of the input signals and $\text{H}_{\text{out}}$ is the resulting 1D dimension after the convolution (see torch.nn.Conv1d for details).

To use it follow the example below:

>>> import torch
>>> from escnn.group import DihedralGroup
>>> from escnn.nn import FieldType
>>> from symm_learning.nn import eConv1D, GSpace1D
>>> G = DihedralGroup(10)
>>> # Custom (hacky) 1D G-space needed to use `GeometricTensor`
>>> gspace = GSpace1D(G)  # Note: G does not act on points in the 1D space.
>>> in_type = FieldType(gspace, [G.regular_representation])
>>> out_type = FieldType(gspace, [G.regular_representation] * 2)
>>>
>>> H, kernel_size, batch_size = 10, 3, 5
>>> # Inputs to Conv1d/eConv1D have shape (B, in_type.size, T), where B is the batch size and T is the time dimension.
>>> x = in_type(torch.randn(batch_size, in_type.size, H))
>>> # Instance of eConv1D
>>> conv_layer = eConv1D(in_type, out_type, kernel_size=kernel_size, stride=1, padding=0, bias=True)
>>> # Forward pass
>>> y = conv_layer(x)  # (B, out_type.size, H_out)
>>> # After training, you can export this `EquivariantModule` to a plain `torch.nn.Module`:
>>> conv1d = conv_layer.export()

EquivMultivariateNormal

Utility layer to parameterize a G-equivariant multivariate Gaussian/Normal distribution:

\begin{aligned}
y &\sim \mathcal{N}\bigl(\mu(x), \Sigma(x)\bigr) \\
\text{s.t.} \quad
&\rho_Y(g)\mu(x) = \mu\bigl(\rho_X(g)\, x\bigr) \\
&\rho_Y(g)\Sigma(x)\rho_Y(g)^{\top} = \Sigma\bigl(\rho_X(g)\, x\bigr),
\quad \forall\, g \in G.
\end{aligned}

The conditional probability distribution of $y$ given $x$ is therefore invariant to the simultaneous group action on $\mathcal{X}$ and $\mathcal{Y}$:

$$ P(y \mid x) = P(\rho_Y(g)\, y \mid \rho_X(g)\, x) \quad \forall g \in G. $$

This means that to parameterize a $G$-equivariant stochastic function $y = f(x)$ with neural networks, you can use any backbone architecture whose output provides the input parameters of an EquivMultivariateNormal distribution, as shown below:

import torch
import escnn
from escnn.group import CyclicGroup
from escnn.nn import FieldType
from symm_learning.models import EMLP
from symm_learning.nn import EquivMultivariateNormal

G = CyclicGroup(3)
x_type = FieldType(escnn.gspaces.no_base_space(G), representations=[G.regular_representation])
y_type = FieldType(escnn.gspaces.no_base_space(G), representations=[G.regular_representation])
# Instantiate the equivariant multivariate normal distribution to obtain the required NN output type
e_normal = EquivMultivariateNormal(y_type, diagonal=True)
# Instantiate the NN model that outputs the parameters of the distribution
nn = EMLP(in_type=x_type, out_type=e_normal.in_type)
# Sample from the distribution
x = x_type(torch.randn(10, x_type.size))
z = nn(x)  # (B, dim_y + n_dof_cov)
dist = e_normal.get_distribution(z)  # instance of torch.distributions.MultivariateNormal
y = dist.sample()  # (B, y_type.size)

Here, $z$ is a (batch_size, dim_y + n_dof_cov) tensor whose first dim_y entries define the mean of the distribution $\mu(x)$, and whose remaining n_dof_cov entries define the free degrees of freedom of the symmetry-constrained covariance matrix.
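To see why the covariance has fewer free degrees of freedom than an unconstrained one, consider the constraint $\rho_Y(g)\Sigma\rho_Y(g)^{\top} = \Sigma$ in a small concrete case. The NumPy sketch below illustrates the counting only (it is not the library's parameterization): with $C_3$ permuting 3 coordinates, an equivariant diagonal covariance must have equal variances, leaving 1 free degree of freedom instead of 3.

```python
import numpy as np

# C3 acting on R^3 by cyclic permutation of coordinates.
rho = [np.roll(np.eye(3), k, axis=0) for k in range(3)]

sigma2 = 0.7                    # the single free degree of freedom
Sigma = sigma2 * np.eye(3)      # equal variances => equivariant
assert all(np.allclose(R @ Sigma @ R.T, Sigma) for R in rho)

bad = np.diag([1.0, 2.0, 3.0])  # unconstrained diagonal is not equivariant
assert not all(np.allclose(R @ bad @ R.T, bad) for R in rho)
```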
