A meta-language to build PyTorch networks dynamically
MODTORCH
MODTORCH is a powerful meta-language for building PyTorch networks on the fly, without having to write custom PyTorch classes manually.
This is currently a beta version. Contributions, suggestions, and forks are welcome.
MOE model creation is already working, but the documentation is still in progress.
Overview
In MODTORCH, a network is defined as a list of dictionaries. Each dictionary represents a layer or a tensor operation.
This makes it possible to define complex architectures dynamically, including custom tensor manipulations, multiple inputs, saved intermediate tensors, and reusable named outputs.
Rules
- Each dictionary in the list describes one step of the network.
- Each dictionary can contain the name of the module where the layer is defined.
  - If omitted, the default is `'module': 'basic'`, which refers to `basic.py` and includes standard PyTorch layers.
  - You can define your own custom layers (see `modlib.py`) and register them in MODTORCH (see `custom.py`).
  - To use custom layers, set `'module': 'custom'` or another registered module name.
- Each dictionary can also contain the name of the layer to execute.
  - If omitted, the default is `'layer': 'Identity'`.
- Any additional key-value pairs in the dictionary are passed as arguments to the selected layer.
  - For standard PyTorch layers, use the same argument names as in PyTorch.
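For intuition, a dictionary such as `{'layer': 'Linear', 'in_features': 10, 'out_features': 4}` can be turned into a layer by looking the name up in `torch.nn` and passing the remaining keys as constructor arguments. The `build_layer` helper below is a hypothetical sketch of this convention, not MODTORCH's actual internals:

```python
import torch.nn as nn

def build_layer(spec):
    # Hypothetical sketch of the dictionary convention; not MODTORCH internals.
    spec = dict(spec)                     # don't mutate the caller's dict
    spec.pop('module', None)              # 'basic' would map to torch.nn here
    name = spec.pop('layer', 'Identity')  # default layer is Identity
    return getattr(nn, name)(**spec)      # remaining keys become layer args

layer = build_layer({'layer': 'Linear', 'in_features': 10, 'out_features': 4})
```

An empty dictionary `{}` falls through both defaults and yields an `nn.Identity()`, which is why `{'add_input': True}` steps work as pass-throughs.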
Inputs
The model definition must begin with layers used to load the inputs.
Given a list of input tensors, you can load them in two ways:
- Use `'add_input': True` to load inputs sequentially.
- Use `'sel_input': N` to select the input at position `N` in the input list.
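The two loading modes can be pictured as a cursor over the input list: `'add_input'` consumes the next input in order, while `'sel_input'` indexes the list directly. A hypothetical sketch of this behavior (`resolve_input` is illustrative, not MODTORCH's API):

```python
def resolve_input(spec, inputs, cursor):
    # Illustrative: 'add_input' consumes inputs in order and advances the
    # cursor; 'sel_input' picks one by position without moving the cursor.
    if spec.get('add_input'):
        return inputs[cursor], cursor + 1
    if 'sel_input' in spec:
        return inputs[spec['sel_input']], cursor
    return None, cursor

inputs = ['ts', 'img', 'static']
x0, cur = resolve_input({'add_input': True}, inputs, 0)    # first input
x1, cur = resolve_input({'add_input': True}, inputs, cur)  # second input
x2, _ = resolve_input({'sel_input': 0}, inputs, cur)       # first input again
```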
Saving and reusing tensors
- Use `'save': 'xyz'` to store the input or output of a layer under the name `'xyz'`. You can also save lists when needed.
- Use `'name': 'xyz'` to load a previously saved tensor and use it as the input of the current layer.
- If `'name'` is not specified, the output of the previous layer is used.
- If a layer requires multiple inputs, use `'name_list': ['abc', 'xyz']`.
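This save/name mechanism amounts to a dictionary of named tensors maintained alongside the forward pass. A hypothetical sketch of the routing (`pick_input` and the string values are illustrative only):

```python
def pick_input(spec, saved, prev):
    # Illustrative: 'name' loads one saved tensor, 'name_list' loads several,
    # otherwise the previous layer's output is used.
    if 'name_list' in spec:
        return [saved[n] for n in spec['name_list']]
    if 'name' in spec:
        return saved[spec['name']]
    return prev

saved = {}
saved['xyz'] = 'out_of_layer_1'  # effect of 'save': 'xyz' on layer 1

x = pick_input({'name': 'xyz'}, saved, 'out_of_layer_2')       # saved tensor
xs = pick_input({'name_list': ['xyz', 'xyz']}, saved, None)    # several tensors
y = pick_input({}, saved, 'out_of_layer_2')                    # previous output
```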
Custom flags
You can add custom flags to any dictionary for later use during training or post-processing.
For example:
- For example, `'encoder': True` can be used to mark layers involved in the encoder path.
- These flags can then be used by helper methods or custom workflows.
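Because the model definition is plain Python data, such flags can be filtered with an ordinary list comprehension, e.g. to collect the encoder-path steps from a definition:

```python
nn_layers = [
    {'add_input': True, 'encoder': True},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'encoder': True},
    {'layer': 'Linear', 'in_features': 4, 'out_features': 1},
]

# Keep only the steps flagged as part of the encoder path.
encoder_steps = [d for d in nn_layers if d.get('encoder')]
```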
Basic example
```python
from modtorch import NN_Model
import torch

static = torch.rand(1, 10)

nn_layers = [
    {'add_input': True},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 10},
    {'layer': 'LayerNorm', 'normalized_shape': 10},
    {'layer': 'SiLU'},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 1},
]

NN = NN_Model(nn_layers)
prev = NN([static])
print(f'Output: {prev.detach()}')
```
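For comparison, the same four layers written by hand in plain PyTorch; the MODTORCH list above replaces this kind of hand-written module:

```python
import torch
import torch.nn as nn

# Hand-written equivalent of the nn_layers list above.
model = nn.Sequential(
    nn.Linear(10, 10),
    nn.LayerNorm(10),
    nn.SiLU(),
    nn.Linear(10, 1),
)
out = model(torch.rand(1, 10))
```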
Encoder example
```python
from modtorch import NN_Model
import torch

static_0 = torch.rand(1, 10)
static_1 = torch.rand(1, 4)

nn_layers = [
    {'add_input': True, 'save': 'static_0', 'encoder': True},
    {'add_input': True, 'save': 'static_1', 'encoder': True},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'name': 'static_0', 'encoder': True},
    {'layer': 'SiLU', 'save': 'static_0->feat', 'encoder': True},
    {'layer': 'Linear', 'in_features': 4, 'out_features': 2, 'name': 'static_1', 'encoder': True},
    {'layer': 'SiLU', 'save': 'static_1->feat', 'encoder': True},
    {'module': 'custom', 'layer': 'Concatenate', 'dim': 1, 'name_list': ['static_0->feat', 'static_1->feat'], 'encoder': True},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 6, 'encoder': True},
    {'layer': 'SiLU', 'save': 'encoder_out', 'encoder': True},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 10, 'name': 'encoder_out', 'save': 'static_0->encoder'},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'name': 'encoder_out', 'save': 'static_1->encoder'},
    {'output_list': ['static_0->encoder', 'static_1->encoder']},
]

NN = NN_Model(nn_layers)
prev = NN([static_0, static_1])
enc = NN.encoder([static_0, static_1])
print(
    f'Output 1: {prev[0].detach()} - '
    f'Output 2: {prev[1].detach()} - '
    f'Encoded: {enc.detach()}'
)
```
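The `Concatenate` step is the shape-critical point in the example above: the two SiLU branches produce 4 and 2 features, so joining them along dimension 1 yields 6 features, matching `'in_features': 6` in the following `Linear`. Assuming `Concatenate` wraps `torch.cat` (its exact behavior is defined in `custom.py`), the shapes check out:

```python
import torch

feat_0 = torch.rand(1, 4)  # after Linear(10 -> 4) + SiLU
feat_1 = torch.rand(1, 2)  # after Linear(4 -> 2) + SiLU
joined = torch.cat([feat_0, feat_1], dim=1)  # 4 + 2 = 6 features
```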
Advanced example
```python
from modtorch import NN_Model
import torch

ts = torch.rand(1, 20, 6)
img = torch.rand(1, 3, 64, 64)
static = torch.rand(1, 10)

nn_layers = [
    {'add_input': True, 'save': 'time_series'},
    {'add_input': True, 'save': 'image'},
    {'add_input': True, 'save': 'static'},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'name': 'time_series'},
    {'layer': 'LayerNorm', 'normalized_shape': 4, 'save': 'time_series->proj'},
    {'layer': 'Conv2d', 'in_channels': 3, 'out_channels': 6, 'kernel_size': 3, 'padding': 1, 'bias': False, 'name': 'image'},
    {'layer': 'BatchNorm2d', 'num_features': 6},
    {'layer': 'SiLU'},
    {'layer': 'StochasticDepth', 'p': 0.1, 'save': 'image->feat'},
    {'layer': 'AvgPool2d', 'kernel_size': 3, 'stride': 1, 'padding': 1, 'name': 'image'},
    {'layer': 'Conv2d', 'in_channels': 3, 'out_channels': 6, 'kernel_size': 1, 'bias': False},
    {'layer': 'BatchNorm2d', 'num_features': 6, 'save': 'image->res'},
    {'module': 'custom', 'layer': 'Add', 'name_list': ['image->feat', 'image->res']},
    {'module': 'custom', 'layer': 'GCBlock', 'channels': 6, 'reduction': 4, 'activation': 'SiLU'},
    {'layer': 'AdaptiveAvgPool2d', 'output_size': 1},
    {'layer': 'Flatten', 'start_dim': 2, 'end_dim': 3},
    {'module': 'custom', 'layer': 'Transpose', 'dim0': 2, 'dim1': 1},
    {'layer': 'Linear', 'in_features': 6, 'out_features': 4, 'save': 'image->feat'},
    {'module': 'custom', 'layer': 'Multiply', 'name_list': ['time_series->proj', 'image->feat'], 'save': 'ts+image'},
    {'module': 'custom', 'layer': 'TSMixer', 'n_lag': 20, 'n_features': 4, 'n_output': 4, 'n_mixer': 1,
     'activation': 'fastglu', 'dropout': 0.1, 'normalization': 'BatchNorm', 'name': 'ts+image', 'save': 'ts_mixer->out'},
    {'layer': 'Linear', 'in_features': 10, 'out_features': 4, 'name': 'static'},
    {'layer': 'LayerNorm', 'normalized_shape': 4},
    {'layer': 'Softmax', 'dim': 1, 'save': 'gate'},
    {'module': 'custom', 'layer': 'Multiply', 'name_list': ['ts_mixer->out', 'gate']},
    {'module': 'custom', 'layer': 'Split', 'indices': [2], 'dim': 1, 'save': ['out1', 'out2']},
    {'output_list': ['out1', 'out2', 'gate']},
]

NN = NN_Model(nn_layers)
prev = NN([ts, img, static])
print(f'Output 1: {prev[0].detach()} - Output 2: {prev[1].detach()} - Gate: {prev[2].detach()}')
```
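The final `Split` step with `'indices': [2]` and `'dim': 1` divides its input into two parts at position 2, which are then saved under the two names in the `'save'` list. Assuming it behaves like `torch.tensor_split` (an assumption; the layer's exact semantics are defined in `custom.py`), a small illustration on a `(1, 4)` tensor:

```python
import torch

x = torch.rand(1, 4)
# Split at index 2 along dim 1: two halves of 2 features each.
out1, out2 = torch.tensor_split(x, [2], dim=1)
```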
More standard PyTorch layers and custom layers can easily be added to these libraries.
Notes
- MODTORCH is designed for flexibility and rapid experimentation.
- It is especially useful when you want to define architectures dynamically from configuration files or Python dictionaries.
- Custom modules and tensor routing make it possible to build more advanced architectures without writing dedicated PyTorch model classes.
Project status
MODTORCH is still under active development.
Some features, such as MOE model creation, are already functional but not yet fully documented.
Contributions, issues, and forks are welcome.