
Typed pluggable Transformer blocks with registries for attention, norm, feedforward, adapters, and position layers.

Project description

torchblocks-vp

torchblocks-vp is a typed PyTorch building-block library with a registry-driven API for assembling neural architectures out of interchangeable modules.

PyPI package name:

pip install torchblocks-vp

Import name:

import torchblocks

The package is meant for projects that want pluggable model components without copying the same attention, norm, feedforward, or adapter code into every application.

Design goals

torchblocks-vp is built around a few principles:

  • reusable blocks instead of one fixed architecture
  • explicit registration and lookup
  • publishable, typed interfaces
  • easy extension from downstream packages

It works especially well in codebases where model components should be configurable from TOML or Python configs.

Registry API

The central API is:

  • register(category, name)
  • get(category, name)
  • list_modules(category=None)

Example:

import torch
import torch.nn as nn

from torchblocks import get, register


@register("feedforward", "my_ff")
class MyFeedForward(nn.Module):
    def __init__(self, d_model: int, dim_ff: int, dropout: float = 0.1) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, dim_ff),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(dim_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Look up the registered class by category and name.
ff_cls = get("feedforward", "my_ff")

This allows applications to select blocks dynamically without importing implementation modules directly everywhere.
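list_modules complements this for discovery. A minimal sketch, assuming it returns the registered names for a category (the exact return type is not pinned down here):

from torchblocks import list_modules

# Names registered under one category, e.g. "swiglu", "gelu", "my_ff".
for name in list_modules("feedforward"):
    print(name)

# With no category argument, list_modules(category=None) covers all categories.
print(list_modules())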

Included module families

Attention

Registered attention modules:

  • gqa
  • mha
  • cross

These cover grouped-query attention, standard multi-head self-attention, and cross-attention.

Feedforward

Registered feedforward modules:

  • swiglu
  • gelu

Normalization

Registered norm modules:

  • rmsnorm
  • layernorm

Adapters

Registered adapter modules:

  • language_conditioned
  • bottleneck
  • none

Convolution

Registered convolution modules:

  • local
  • none

Position

Registered positional module:

  • rope
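The families above are meant to be mixed and matched. The sketch below wires a minimal pre-norm block together from registry lookups; the constructor calls mirror the usage shown later in this page, but the 4×d_model feedforward width and the residual wiring are illustrative assumptions, not the package's one true block:

import torch
import torch.nn as nn

from torchblocks import get


class TinyBlock(nn.Module):
    """Illustrative pre-norm feedforward block assembled from registry lookups."""

    def __init__(self, d_model: int = 768) -> None:
        super().__init__()
        norm_cls = get("norm", "rmsnorm")
        ff_cls = get("feedforward", "swiglu")
        self.norm = norm_cls(d_model)
        # Feedforward width chosen for the sketch only.
        self.ff = ff_cls(d_model, 4 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A full block would also look up an attention module,
        # e.g. get("attention", "gqa"), with its own arguments.
        return x + self.ff(self.norm(x))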

Installation

Requirements:

  • Python >=3.14
  • PyTorch >=2.0

Install from PyPI:

pip install torchblocks-vp

Common usage pattern

Most downstream code uses the registry as a factory layer:

from torchblocks import get

# Resolve classes by name, then construct them with model dimensions.
norm_cls = get("norm", "rmsnorm")
ff_cls = get("feedforward", "swiglu")

norm = norm_cls(768)
ff = ff_cls(768, 3072, dropout=0.1)

Applications can keep architectural choices in config files while the runtime maps names to modules.
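A sketch of that pattern with a small hypothetical TOML config (the config keys are illustrative; the constructor calls follow the usage shown above):

import tomllib

from torchblocks import get

# Hypothetical config contents; in practice this would be read from a file.
config_text = """
[model]
norm = "rmsnorm"
feedforward = "swiglu"
d_model = 768
dim_ff = 3072
"""

cfg = tomllib.loads(config_text)["model"]

# Map names from the config to registered classes at runtime.
norm = get("norm", cfg["norm"])(cfg["d_model"])
ff = get("feedforward", cfg["feedforward"])(cfg["d_model"], cfg["dim_ff"])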

Included implementations

The current package exports the registry and auto-registers the bundled modules on import.

Implemented classes include:

  • GroupedQueryAttention
  • MultiHeadAttention
  • CrossAttention
  • SwiGLUFeedForward
  • GeLUFeedForward
  • RMSNorm
  • LayerNorm
  • LanguageConditionedAdapter
  • BottleneckAdapter
  • NoAdapter
  • LocalConvModule
  • NoConv
  • RotaryEmbedding

Helper functions:

  • rotate_half
  • apply_rotary_emb
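Both helpers follow the standard rotary-embedding convention. As a minimal sketch of what such helpers conventionally compute (the package's exact signatures and tensor layout may differ):

import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the last dimension in half and swap the halves with a sign flip.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_emb(
    x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor
) -> torch.Tensor:
    # Rotate channel pairs by position-dependent angles:
    # out = x * cos + rotate_half(x) * sin
    return x * cos + rotate_half(x) * sin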

Extending the package

You do not need to modify torchblocks-vp itself to add new blocks. A downstream package can register its own modules at startup:

import torch.nn as nn

from torchblocks import register


@register("attention", "my_attention")
class MyAttention(nn.Module):
    ...

As long as the constructor and forward signature match what the consuming model expects, the registry is enough.
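Once the downstream package has run its registrations (for example at import time), consumers resolve the new block by name exactly like the bundled ones:

from torchblocks import get

attn_cls = get("attention", "my_attention")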

Why the package is split out

Keeping these blocks in their own package helps with:

  • reuse across multiple models
  • cleaner application packages
  • separate publishing and versioning
  • stronger type boundaries between architecture code and task code

That matters in this repository because the libraries are published independently rather than bundled into one monolithic distribution.

Download files

Download the file for your platform.

Source Distribution

torchblocks_vp-2.1.5.tar.gz (8.7 kB)


Built Distribution


torchblocks_vp-2.1.5-py3-none-any.whl (8.9 kB)


File details

Details for the file torchblocks_vp-2.1.5.tar.gz.

File metadata

  • Download URL: torchblocks_vp-2.1.5.tar.gz
  • Upload date:
  • Size: 8.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for torchblocks_vp-2.1.5.tar.gz:

  • SHA256: 2968a711f1252ba6e3bba6943ff2c00b41e026b8a80077f3bfac96d692520e81
  • MD5: c730bc8b89963bb92de84e28d44d29b6
  • BLAKE2b-256: f175327659c4e6859ef6d915ca0ae187f98ea36fb4b5f03f4671cb84b57656fa


File details

Details for the file torchblocks_vp-2.1.5-py3-none-any.whl.

File metadata

  • Download URL: torchblocks_vp-2.1.5-py3-none-any.whl
  • Upload date:
  • Size: 8.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for torchblocks_vp-2.1.5-py3-none-any.whl:

  • SHA256: b8796cfe3c11e173bc4f1d8d5b79dfec193114dc596a65f8518b51271b4f429c
  • MD5: 27d9846f8e8ed0bd613a382f879818a9
  • BLAKE2b-256: 5ed103ab5b5d0dd7347e4b65203ea02f220685225bfae226cc4f38d8b1bde2ba

