
Typed pluggable Transformer blocks with registries for attention, norm, feedforward, adapters, and position layers.

Project description

torchblocks-vp

torchblocks-vp is a typed PyTorch building-block library with a registry-driven API for assembling neural architectures out of interchangeable modules.

PyPI package name:

pip install torchblocks-vp

Import name:

import torchblocks

The package is meant for projects that want pluggable model components without copying the same attention, norm, feedforward, or adapter code into every application.

Design goals

torchblocks-vp is built around a few principles:

  • reusable blocks instead of one fixed architecture
  • explicit registration and lookup
  • publishable, typed interfaces
  • easy extension from downstream packages

It works especially well in codebases where model components should be configurable from TOML or Python configs.

Registry API

The central API is:

  • register(category, name)
  • get(category, name)
  • list_modules(category=None)

Example:

import torch
import torch.nn as nn

from torchblocks import get, register


@register("feedforward", "my_ff")
class MyFeedForward(nn.Module):
    def __init__(self, d_model: int, dim_ff: int, dropout: float = 0.1) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, dim_ff),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(dim_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Look the registered class up by name; instantiate it like any nn.Module.
ff_cls = get("feedforward", "my_ff")

This lets applications select blocks by name at runtime without importing implementation modules throughout the codebase.
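list_modules is useful for discovering what is registered. Its exact return type is not documented on this page, so the prints below are purely for inspection:

from torchblocks import list_modules

# Names registered under a single category...
print(list_modules("feedforward"))

# ...or everything across categories when no category is given.
print(list_modules())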

Included module families

Attention

Registered attention modules:

  • gqa
  • mha
  • cross

These cover grouped-query attention, standard multi-head self-attention, and cross-attention.

Feedforward

Registered feedforward modules:

  • swiglu
  • gelu

Normalization

Registered norm modules:

  • rmsnorm
  • layernorm

Adapters

Registered adapter modules:

  • language_conditioned
  • bottleneck
  • none

Convolution

Registered convolution modules:

  • local
  • none

Position

Registered positional module:

  • rope
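The families are designed to compose into full blocks. Below is a rough sketch of the idea; the constructor arguments and call signatures used for the looked-up classes are illustrative assumptions, not documented interfaces of the bundled modules:

import torch
import torch.nn as nn

from torchblocks import get


class Block(nn.Module):
    """Pre-norm Transformer block wired from registry-selected classes.

    NOTE: the constructor arguments and call signatures assumed for the
    attention/norm/feedforward classes below are illustrative guesses.
    """

    def __init__(self, d_model: int, n_heads: int, dim_ff: int) -> None:
        super().__init__()
        norm_cls = get("norm", "rmsnorm")
        attn_cls = get("attention", "mha")
        ff_cls = get("feedforward", "swiglu")
        self.norm1 = norm_cls(d_model)
        self.attn = attn_cls(d_model, n_heads)
        self.norm2 = norm_cls(d_model)
        self.ff = ff_cls(d_model, dim_ff)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.norm1(x))
        return x + self.ff(self.norm2(x))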

Installation

Requirements:

  • Python >=3.14
  • PyTorch >=2.0

Install from PyPI:

pip install torchblocks-vp

Common usage pattern

Most downstream code uses the registry as a factory layer:

from torchblocks import get

# Resolve block classes by (category, name) at runtime...
norm_cls = get("norm", "rmsnorm")
ff_cls = get("feedforward", "swiglu")

# ...then instantiate them like any other nn.Module.
norm = norm_cls(768)
ff = ff_cls(768, 3072, dropout=0.1)

Applications can keep architectural choices in config files while the runtime maps names to modules.
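A minimal sketch of that pattern, assuming a hypothetical TOML layout (the config keys are made up for illustration; the constructor calls mirror the example above):

import tomllib

from torchblocks import get

# Hypothetical config; in a real project this would live in a .toml file.
CONFIG = """
[model]
norm = "rmsnorm"
feedforward = "swiglu"
d_model = 768
dim_ff = 3072
"""

cfg = tomllib.loads(CONFIG)["model"]

# The registry maps configured names to classes at runtime.
norm = get("norm", cfg["norm"])(cfg["d_model"])
ff = get("feedforward", cfg["feedforward"])(cfg["d_model"], cfg["dim_ff"], dropout=0.1)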

Included implementations

The current package exports the registry and auto-registers the bundled modules on import.

Implemented classes include:

  • GroupedQueryAttention
  • MultiHeadAttention
  • CrossAttention
  • SwiGLUFeedForward
  • GeLUFeedForward
  • RMSNorm
  • LayerNorm
  • LanguageConditionedAdapter
  • BottleneckAdapter
  • NoAdapter
  • LocalConvModule
  • NoConv
  • RotaryEmbedding

Helper functions:

  • rotate_half
  • apply_rotary_emb
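These helpers presumably follow the standard rotary-embedding convention. For reference, the usual formulation looks roughly like this; treat it as a sketch of the convention, not necessarily this package's exact signatures:

import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the last dimension in half and rotate the pairs: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_emb(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # Rotate query/key features by the precomputed cos/sin tables.
    return x * cos + rotate_half(x) * sin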

Extending the package

You do not need to modify torchblocks-vp itself to add new blocks. A downstream package can register its own modules at startup:

import torch.nn as nn

from torchblocks import register


@register("attention", "my_attention")
class MyAttention(nn.Module):
    ...

As long as the constructor and forward signature match what the consuming model expects, the registry is enough.
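Downstream code then resolves the new block exactly like a bundled one:

from torchblocks import get

attn_cls = get("attention", "my_attention")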

Why the package is split out

Keeping these blocks in their own package helps with:

  • reuse across multiple models
  • cleaner application packages
  • separate publishing and versioning
  • stronger type boundaries between architecture code and task code

That matters in this repository because the libraries are published independently rather than bundled into one monolithic distribution.



Download files


Source Distribution

torchblocks_vp-2.1.1.tar.gz (8.7 kB)


Built Distribution


torchblocks_vp-2.1.1-py3-none-any.whl (9.0 kB)


File details

Details for the file torchblocks_vp-2.1.1.tar.gz.

File metadata

  • Download URL: torchblocks_vp-2.1.1.tar.gz
  • Upload date:
  • Size: 8.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for torchblocks_vp-2.1.1.tar.gz:

  • SHA256: 0ca1e561e52bfd0d22b6dc55c7b76e70ac1f4f591f8b85ca569be6f6faceb90b
  • MD5: 8657524272121de1d16b9440671127bf
  • BLAKE2b-256: 8a2e3e1467d8d4a1d71e754fb4b8952b1773350b7ea5ad827cd346e5f98dc552


File details

Details for the file torchblocks_vp-2.1.1-py3-none-any.whl.

File metadata

  • Download URL: torchblocks_vp-2.1.1-py3-none-any.whl
  • Upload date:
  • Size: 9.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for torchblocks_vp-2.1.1-py3-none-any.whl:

  • SHA256: 1b789e3df778955ed985edefb047ff56c534d870cddbd1acfe7c73d0a2b6d499
  • MD5: 7acfacc434ba30f3e32daabab31a6c18
  • BLAKE2b-256: 2c6a9c4f396d8cfdd1fb0e8d3af4ce641f397f1da8ec812eedf1aa09b6b4beae

