A meta-language to build PyTorch networks dynamically

Project description

MODTORCH

MODTORCH is a powerful meta-language for building PyTorch networks on the fly, without having to write custom PyTorch classes manually.

This is currently a beta version. Contributions, suggestions, and forks are welcome.

MoE (Mixture of Experts) model creation already works, but the documentation is still in progress.

Overview

In MODTORCH, a network is defined as a list of dictionaries. Each dictionary represents a layer or a tensor operation.

This makes it possible to define complex architectures dynamically, including custom tensor manipulations, multiple inputs, saved intermediate tensors, and reusable named outputs.

Installation

pip install modtorch

Rules

Each dictionary in the list describes one step of the network.

  • Each dictionary can contain a 'module' key naming the module where the layer is defined.

  • If omitted, the default is 'module': 'basic', which refers to basic.py and includes standard PyTorch layers.

  • You can define your own custom layers (see modlib.py) and register them in MODTORCH (see custom.py).

  • To use custom layers, set 'module': 'custom' or another registered module name.

  • Each dictionary can also contain a 'layer' key naming the layer to execute.

  • If omitted, the default is 'layer': 'Identity'.

  • Any additional key-value pairs in the dictionary are passed as arguments to the selected layer.

  • For standard PyTorch layers, use the same argument names as in PyTorch.
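For example, the following two dictionaries describe the same step, since 'module': 'basic' is the default:

{'module': 'basic', 'layer': 'Linear', 'in_features': 10, 'out_features': 5}
{'layer': 'Linear', 'in_features': 10, 'out_features': 5}

Here 'in_features' and 'out_features' are passed straight through to torch.nn.Linear.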

Inputs

The model definition must begin with layers used to load the inputs.

Given a list of input tensors, you can load them in two ways:

  • Use 'add_input': True to load inputs sequentially.
  • Use 'sel_input': N to select the input at position N in the input list.
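For example, given the input list [a, b], the two definitions below load the same tensors; the second selects them explicitly by position (assuming zero-based indices, as in a Python list):

# sequential loading
nn_layers = [
    {'add_input': True},   # loads a
    {'add_input': True},   # loads b
]

# explicit selection by position
nn_layers = [
    {'sel_input': 0},
    {'sel_input': 1},
]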

Saving and reusing tensors

  • Use 'save': 'xyz' to store the input or output of a layer under the name 'xyz'.
  • You can also save a list of names (for example 'save': ['out1', 'out2'] with the 'Split' layer in the advanced example) when a step produces multiple tensors.
  • Use 'name': 'xyz' to load a previously saved tensor and use it as the input of the current layer.
  • If 'name' is not specified, the output of the previous layer is used.
  • If a layer requires multiple inputs, use 'name_list': ['abc', 'xyz'].
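Putting these rules together, a short routing sketch (the names 'branch_a' and 'branch_b' are arbitrary, and 'Add' is the custom layer used in the advanced example below):

{'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'save': 'branch_a'},   # output stored as 'branch_a'
{'layer': 'SiLU', 'name': 'branch_a', 'save': 'branch_b'},                       # reads 'branch_a' instead of the previous output
{'module': 'custom', 'layer': 'Add', 'name_list': ['branch_a', 'branch_b']},     # feeds both saved tensors to one layer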

Defining custom layers

To extend modtorch with custom layers, create an importable Python module that the library can link to, similar to custom.py.
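As a rough sketch, a custom layer can be an ordinary torch.nn.Module whose constructor receives the extra key-value pairs from its dictionary; the Scale layer below is purely hypothetical and must be registered the way custom.py does:

import torch

class Scale(torch.nn.Module):
    """Hypothetical custom layer that multiplies its input by a fixed factor."""
    def __init__(self, factor=1.0):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

Once registered, it would be selected with {'module': 'custom', 'layer': 'Scale', 'factor': 2.0}.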

Custom flags

You can add custom flags to any dictionary for later use during training or post-processing.

For example:

  • 'encoder': True can be used to mark layers involved in the encoder path.
  • These flags can then be used by helper methods or custom workflows.
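Because each step is a plain dictionary, flags can be read back with ordinary dictionary access; the snippet below only filters the definition list and does not rely on any MODTORCH API:

encoder_layers = [step for step in nn_layers if step.get('encoder', False)]
print(f'{len(encoder_layers)} steps are flagged as part of the encoder')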

Basic example

from modtorch import NN_Model
import torch

static = torch.rand(1, 10)

nn_layers = [
    {'add_input': True},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 10},
    {'layer': 'LayerNorm', 'normalized_shape': 10},
    {'layer': 'SiLU'},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 1},
]

NN = NN_Model(nn_layers)
prev = NN([static])
print(f'Output: {prev.detach()}')
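Here the (1, 10) input passes through a 10-unit linear layer, layer normalization, and SiLU, and is finally projected to a single value, so prev has shape (1, 1).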

Encoder example

from modtorch import NN_Model
import torch

static_0 = torch.rand(1, 10)
static_1 = torch.rand(1, 4)

nn_layers = [
    {'add_input': True, 'save': 'static_0', 'encoder': True},
    {'add_input': True, 'save': 'static_1', 'encoder': True},

    {'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'name': 'static_0', 'encoder': True},
    {'layer': 'SiLU', 'save': 'static_0->feat', 'encoder': True},

    {'layer': 'Linear', 'in_features': 4, 'out_features': 2, 'name': 'static_1', 'encoder': True},
    {'layer': 'SiLU', 'save': 'static_1->feat', 'encoder': True},

    {'module': 'custom', 'layer': 'Concatenate', 'dim': 1, 'name_list': ['static_0->feat', 'static_1->feat'], 'encoder': True},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 6, 'encoder': True},
    {'layer': 'SiLU', 'save': 'encoder_out', 'encoder': True},

    {'layer': 'Linear', 'in_features': 6, 'out_features': 10, 'name': 'encoder_out', 'save': 'static_0->encoder'},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'name': 'encoder_out', 'save': 'static_1->encoder'},

    {'output_list': ['static_0->encoder', 'static_1->encoder']}
]

NN = NN_Model(nn_layers)
prev = NN([static_0, static_1])
enc = NN.encoder([static_0, static_1])

print(
    f'Output 1: {prev[0].detach()} - '
    f'Output 2: {prev[1].detach()} - '
    f'Encoded: {enc.detach()}'
)
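Since 'output_list' names two saved tensors, the forward call returns two outputs, here of shapes (1, 10) and (1, 4). The NN.encoder(...) call presumably uses the 'encoder': True flags to run only the encoder path up to 'encoder_out' (see Custom flags above).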

Advanced example

from modtorch import NN_Model
import torch

ts = torch.rand(1, 20, 6)
img = torch.rand(1, 3, 64, 64)
static = torch.rand(1, 10)

nn_layers = [
    {'add_input': True, 'save': 'time_series'},
    {'add_input': True, 'save': 'image'},
    {'add_input': True, 'save': 'static'},

    # time-series branch
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'name': 'time_series'},
    {'layer': 'LayerNorm', 'normalized_shape': 4, 'save': 'time_series->proj'},

    # image branch
    {'layer': 'Conv2d', 'in_channels': 3, 'out_channels': 6, 'kernel_size': 3, 'padding': 1, 'bias': False, 'name': 'image'},
    {'layer': 'BatchNorm2d', 'num_features': 6},
    {'layer': 'SiLU'},
    {'layer': 'StochasticDepth', 'p': 0.1, 'save': 'image->feat'},

    {'layer': 'AvgPool2d', 'kernel_size': 3, 'stride': 1, 'padding': 1, 'name': 'image'},
    {'layer': 'Conv2d', 'in_channels': 3, 'out_channels': 6, 'kernel_size': 1, 'bias': False},
    {'layer': 'BatchNorm2d', 'num_features': 6, 'save': 'image->res'},
    {'module': 'custom', 'layer': 'Add', 'name_list': ['image->feat','image->res']},
    {'module': 'custom', 'layer': 'GCBlock', 'channels': 6, 'reduction': 4, 'activation': 'SiLU'},
    {'layer': 'AdaptiveAvgPool2d', 'output_size': 1},
    {'layer': 'Flatten', 'start_dim': 2, 'end_dim': 3},
    {'module': 'custom', 'layer': 'Transpose', 'dim0': 2, 'dim1': 1},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'save': 'image->feat'},
    {'module': 'custom', 'layer': 'Multiply', 'name_list': ['time_series->proj','image->feat'], 'save': 'ts+image'},

    {'module': 'custom', 'layer': 'TSMixer', 'n_lag': 20, 'n_features': 4, 'n_output': 4, 'n_mixer': 1,
        'activation': 'fastglu', 'dropout': 0.1, 'normalization': 'BatchNorm', 'name': 'ts+image', 'save': 'ts_mixer->out'},

    # static branch: gating weights applied to the mixer output
    {'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'name': 'static'},
    {'layer': 'LayerNorm', 'normalized_shape': 4},
    {'layer': 'Softmax', 'dim': 1, 'save': 'gate'},
    {'module': 'custom', 'layer': 'Multiply', 'name_list': ['ts_mixer->out','gate']},
    {'module': 'custom', 'layer': 'Split', 'indices': [2], 'dim': 1, 'save': ['out1','out2']},
    {'output_list': ['out1','out2', 'gate']}
]

NN = NN_Model(nn_layers)
prev = NN([ts, img, static])
print(f'Output 1: {prev[0].detach()} - Output 2: {prev[1].detach()} - Gate: {prev[2].detach()}')

Additional standard PyTorch layers or custom layers can easily be added to the library.

Notes

  • MODTORCH is designed for flexibility and rapid experimentation.
  • It is especially useful when you want to define architectures dynamically from configuration files or Python dictionaries (see the sketch after this list).
  • Custom modules and tensor routing make it possible to build more advanced architectures without writing dedicated PyTorch model classes.
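As a minimal sketch of the configuration-file use case (the JSON string below is illustrative, not part of MODTORCH):

import json
from modtorch import NN_Model

# A MODTORCH definition is plain data, so it can be stored as JSON or YAML.
config = '''
[
    {"add_input": true},
    {"layer": "Linear", "in_features": 10, "out_features": 1}
]
'''

nn_layers = json.loads(config)
NN = NN_Model(nn_layers)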

Project status

MODTORCH is still under active development.

Some features, such as MoE model creation, are already functional but not yet fully documented.

Contributions, issues, and forks are welcome.
