torchblocks-vp

Typed pluggable Transformer blocks with registries for attention, norm, feedforward, adapters, and position layers.

torchblocks-vp is a typed PyTorch building-block library with a registry-driven API for assembling neural architectures out of interchangeable modules.

PyPI package name (used for installation):

pip install torchblocks-vp

Import name:

import torchblocks

The package is meant for projects that want pluggable model components without copying the same attention, norm, feedforward, or adapter code into every application.

Design goals

torchblocks-vp is built around a few principles:

  • reusable blocks instead of one fixed architecture
  • explicit registration and lookup
  • publishable, typed interfaces
  • easy extension from downstream packages

It works especially well in codebases where model components should be configurable from TOML or Python configs.

Registry API

The central API is:

  • register(category, name)
  • get(category, name)
  • list_modules(category=None)

Example:

import torch
import torch.nn as nn

from torchblocks import get, register


@register("feedforward", "my_ff")
class MyFeedForward(nn.Module):
    def __init__(self, d_model: int, dim_ff: int, dropout: float = 0.1) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, dim_ff),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(dim_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


ff_cls = get("feedforward", "my_ff")
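
The retrieved object is the class itself, so it can be instantiated directly (the sizes here are illustrative):

ff = ff_cls(d_model=512, dim_ff=2048)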

This allows applications to select blocks dynamically without importing implementation modules directly everywhere.

Included module families

Attention

Registered attention modules:

  • gqa
  • mha
  • cross

These cover grouped-query attention, standard multi-head self-attention, and cross-attention.
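Looking one of these up follows the same pattern as any other category; the registered names resolve to the classes listed under "Included implementations" below:

from torchblocks import get

# "gqa" resolves to the bundled GroupedQueryAttention class.
gqa_cls = get("attention", "gqa")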

Feedforward

Registered feedforward modules:

  • swiglu
  • gelu

Normalization

Registered norm modules:

  • rmsnorm
  • layernorm

Adapters

Registered adapter modules:

  • language_conditioned
  • bottleneck
  • none

Convolution

Registered convolution modules:

  • local
  • none

Position

Registered positional module:

  • rope

Installation

Requirements:

  • Python >=3.14
  • PyTorch >=2.0

Install from PyPI:

pip install torchblocks-vp

Common usage pattern

Most downstream code uses the registry as a factory layer:

from torchblocks import get

norm_cls = get("norm", "rmsnorm")
ff_cls = get("feedforward", "swiglu")

norm = norm_cls(768)
ff = ff_cls(768, 3072, dropout=0.1)

Applications can keep architectural choices in config files while the runtime maps names to modules.
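
As a sketch of that pattern (the TOML file, its keys, and the sizes are illustrative, not part of the package):

import tomllib

from torchblocks import get

# arch.toml (illustrative):
#   [model]
#   norm = "rmsnorm"
#   feedforward = "swiglu"
with open("arch.toml", "rb") as f:
    cfg = tomllib.load(f)["model"]

norm = get("norm", cfg["norm"])(768)
ff = get("feedforward", cfg["feedforward"])(768, 3072, dropout=0.1)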

Included implementations

The current package exports the registry and auto-registers the bundled modules on import.
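
Because registration happens at import time, the registry can be inspected right away. A small check, assuming list_modules is exported at the top level like get and register:

from torchblocks import list_modules

# Should include the bundled names documented above, e.g. "gqa", "mha", "cross".
print(list_modules("attention"))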

Implemented classes include:

  • GroupedQueryAttention
  • MultiHeadAttention
  • CrossAttention
  • SwiGLUFeedForward
  • GeLUFeedForward
  • RMSNorm
  • LayerNorm
  • LanguageConditionedAdapter
  • BottleneckAdapter
  • NoAdapter
  • LocalConvModule
  • NoConv
  • RotaryEmbedding

Helper functions:

  • rotate_half
  • apply_rotary_emb
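
The package does not document these helpers beyond their names; if they follow the usual RoPE convention, they behave roughly like this sketch (an assumption, not the package's exact code):

import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the last dimension in half and rotate pairs: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_emb(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # Rotate channel pairs by position-dependent angles from precomputed
    # cos/sin tables, as in rotary position embeddings.
    return x * cos + rotate_half(x) * sin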

Extending the package

You do not need to modify torchblocks-vp itself to add new blocks. A downstream package can register its own modules at startup:

import torch.nn as nn

from torchblocks import register


@register("attention", "my_attention")
class MyAttention(nn.Module):
    ...  # implement __init__ and forward to match the consuming model

As long as the constructor and forward signature match what the consuming model expects, the registry is enough.
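
Once registered, downstream code resolves the new block by name exactly like a bundled one:

from torchblocks import get

attn_cls = get("attention", "my_attention")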

Why the package is split out

Keeping these blocks in their own package helps with:

  • reuse across multiple models
  • cleaner application packages
  • separate publishing and versioning
  • stronger type boundaries between architecture code and task code

That matters in this repository because the libraries are published independently rather than bundled into one monolithic distribution.
