
High-order layers in PyTorch


Functional Layers in PyTorch

This is a PyTorch implementation of my TensorFlow repository, and it is more complete thanks to the flexibility of PyTorch.

Lagrange polynomial, piecewise Lagrange polynomial, discontinuous piecewise Lagrange polynomial, Fourier series, sum, and product layers in PyTorch. Piecewise polynomial layers are sparse: adding new segments increases the representational power of your network, while the time for a forward step stays constant. The implementation includes simple fully connected layers and convolutional layers using these models. More details to come. This is a PyTorch implementation of this paper, including extensions to Fourier series and convolutional neural networks.

The layers used here do not require additional activation functions; a simple sum or product takes the place of the activation. The product is formed by adding 1 to each function output before multiplying, so that every sub-product of the individual outputs is computed as well. The linear part of the result is controlled by the alpha parameter.
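As a reading of that description, the per-neuron product might be computed along these lines (a sketch, not the library's actual code; the exact formula is an assumption based on the parameter description below):

import torch

def product_activation(f: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # f: (..., k) tensor of the k function outputs feeding one neuron.
    # Expanding prod_i(1 + f_i) - 1 yields the sum of all sub-products,
    # whose first-order term is the linear part sum_i f_i.
    full = (1.0 + f).prod(dim=-1) - 1.0
    linear = f.sum(dim=-1)
    # alpha = 1 keeps the linear part; alpha = 0 subtracts it out.
    return full - (1.0 - alpha) * linear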

Fully Connected Layer Types

All polynomials are Lagrange polynomials with Chebyshev interpolation points.
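For reference, Chebyshev interpolation points of the first kind on [-1, 1] are x_k = cos((2k + 1)π / (2n)); whether the library uses first- or second-kind points is not stated in this README, so the snippet below is illustrative:

import math

def chebyshev_points(n: int) -> list[float]:
    # Chebyshev points of the first kind on [-1, 1]
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

print(chebyshev_points(4))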

A helper function is provided for selecting and switching between these layers:

from high_order_layers_torch.layers import *

layer1 = high_order_fc_layers(
    layer_type=layer_type,   # e.g. "continuous"; see the table below
    n=n,                     # interpolation points per segment, or frequencies
    in_features=784,
    out_features=100,
    segments=segments,       # number of segments for piecewise types
    alpha=linear_part        # linear-part weighting for product types
)

where layer_type is one of

layer_type          representation
continuous          piecewise polynomial using sum at the neuron
continuous_prod     piecewise polynomial using product at the neuron
discontinuous       discontinuous piecewise polynomial with sum at the neuron
discontinuous_prod  discontinuous piecewise polynomial with product at the neuron
polynomial          single polynomial (non-piecewise) with sum at the neuron
polynomial_prod     single polynomial (non-piecewise) with product at the neuron
product             product layer
fourier             Fourier series with sum at the neuron

Here n is the number of interpolation points per segment for polynomials, or the number of frequencies for Fourier series; segments is the number of segments for piecewise polynomials. alpha is used in product layers: set to 1 it keeps the linear part of the product, and set to 0 it subtracts the linear part from the product.
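For example, switching between a piecewise layer and a Fourier layer is just a change of layer_type, and the resulting module is called like any other torch.nn module. The values below are illustrative, and the [-1, 1] input domain is an assumption based on the Chebyshev interpolation points:

import torch

piecewise = high_order_fc_layers(
    layer_type="continuous", n=3, segments=4,
    in_features=10, out_features=10)
fourier = high_order_fc_layers(
    layer_type="fourier", n=5,  # 5 frequencies
    in_features=10, out_features=10)

x = torch.rand(32, 10) * 2 - 1              # batch scaled to [-1, 1]
print(piecewise(x).shape, fourier(x).shape)  # expected: (32, 10) each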

Product Layers

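A product layer can presumably be created through the same helper; a minimal sketch, assuming the product layer_type accepts alpha as described above:

product_layer = high_order_fc_layers(
    layer_type="product",
    in_features=10,
    out_features=10,
    alpha=1.0)  # alpha = 1 keeps the linear part of the product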

Convolutional Layer Types

conv_layer = high_order_convolution_layers(
    layer_type=layer_type,          # see the table below
    n=n,
    in_channels=3,
    out_channels=6,
    kernel_size=5,
    segments=segments,
    rescale_output=rescale_output,
    periodicity=periodicity
)

All polynomials are Lagrange polynomials with Chebyshev interpolation points.

layer_type     representation
continuous     piecewise continuous polynomial
discontinuous  piecewise discontinuous polynomial
polynomial     single polynomial
fourier        Fourier series convolution
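Assuming the returned module follows torch.nn.Conv2d conventions for input and output shapes (an assumption; only the constructor appears in this README), usage would look like:

import torch

x = torch.rand(8, 3, 32, 32)  # e.g. a CIFAR-sized batch
y = conv_layer(x)             # -> (8, 6, 28, 28) with kernel_size=5 and no padding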

Installing

Installing locally

This repo uses poetry, so run

poetry install

and then

poetry shell

Installing from PyPI

pip install high-order-layers-torch

or

poetry add high-order-layers-torch

Examples

Simple function approximation

Approximate a simple function with a single input and a single output (a single layer, no hidden layers) using continuous and discontinuous piecewise polynomials (with 5 pieces), plain polynomials, and Fourier series. The standard approach using ReLU is not competitive. For more complex examples, see the implicit representation page here.

[Plots: piecewise continuous polynomial, piecewise discontinuous polynomial, polynomial, Fourier series]
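A one-layer approximator of this kind might be constructed as follows (a sketch using the helper shown above; the training loop from the example scripts is not reproduced):

import torch
from high_order_layers_torch.layers import *

# Single input, single output, 5-piece discontinuous piecewise polynomial
model = high_order_fc_layers(
    layer_type="discontinuous", n=3, segments=5,
    in_features=1, out_features=1)

x = torch.linspace(-1, 1, 100).unsqueeze(1)  # sample points in [-1, 1]
y = model(x)                                 # shape (100, 1)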

mnist (convolutional)

python mnist.py max_epochs=1 train_fraction=0.1 layer_type=continuous n=4 segments=2

cifar100 (convolutional)

python cifar100.py -m max_epochs=20 train_fraction=1.0 layer_type=polynomial segments=2 n=7 nonlinearity=False rescale_output=False periodicity=2.0 lr=0.001 linear_output=False

invariant mnist (fully connected)

python invariant_mnist.py max_epochs=100 train_fraction=1 layer_type=polynomial n=5

Constructing the network

# Inside the model's __init__; cfg holds the hyperparameters passed on the command line
self.layer1 = high_order_fc_layers(
    layer_type=cfg.layer_type, n=cfg.n, in_features=784,
    out_features=100, segments=cfg.segments, alpha=cfg.linear_part)
self.layer2 = nn.LayerNorm(100)
self.layer3 = high_order_fc_layers(
    layer_type=cfg.layer_type, n=cfg.n, in_features=100,
    out_features=10, segments=cfg.segments, alpha=cfg.linear_part)
self.layer4 = nn.LayerNorm(10)
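The forward pass then simply chains these layers; a sketch, assuming flattened 28x28 inputs and no extra activations (the high-order layers replace them):

def forward(self, x):
    x = self.layer2(self.layer1(x.flatten(1)))  # high-order layer + LayerNorm
    x = self.layer4(self.layer3(x))             # -> (batch, 10) class scores
    return x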

Implicit Representation

An example of implicit representation can be found here.

Reference

@misc{Loverich2020,
  author = {Loverich, John},
  title = {High Order Layers Torch},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jloveric/high-order-layers-torch}},
}
