ArchIt: A framework for base-and-head language models, and a toolkit for converting in-place modifications of PyTorch objects into class code.

ArchIt: Automatic PyTorch architectures

ArchIt lets you put heads on top of models without having to write dedicated task classes again and again. It also helps you rewrite PyTorch code for base models augmented at runtime.

NOTICE: from Transformers v4.50.0 onwards, the implementation of from_pretrained changed so significantly that ArchIt is broken on all transformers versions beyond v4.49.0. This will be fixed eventually. See the table below.

archit.instantiation: Add heads to a base model, without needing to write YourModelForThatTask classes

Why in heaven's name do we need separate classes for RobertaForTokenClassification and DebertaForTokenClassification? The base model encodes tokens into embeddings, and the head, which only cares about the resulting embeddings, converts them into logits. Separation of concerns. There is no need to rewrite "model-with-head" classes over and over again for each model augmentation.
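To make the separation concrete, here is a minimal sketch of a base-model-agnostic head (illustrative only: this class is hypothetical and not part of ArchIt's API):

import torch

class TokenClassificationHead(torch.nn.Module):
    """A head that only ever sees embeddings, never the base model that produced them."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = torch.nn.Linear(hidden_size, num_labels)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, sequence_length, hidden_size)
        return self.classifier(token_embeddings)  # (batch, sequence_length, num_labels)

Such a head works unchanged on top of RoBERTa, DeBERTa, or any other encoder whose output embeddings have the expected size.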

This part of ArchIt is the backbone behind the LaMoTO package.

archit.declaration: Convert a PyTorch instance into PyTorch architecture classes.

Recursively rewrite a class hierarchy (i.e. generate Python code of PyTorch architectures) so that in-place modifications are now defined explicitly.

As an example: I'm involved in two projects where I replace the embedding matrix of a RobertaForMaskedLM with a new class. If I want to load a checkpoint of that model, I need to write a new class definition for RobertaEmbeddings that uses my replacement embedding matrix, a new RobertaModel that uses the new embeddings, and a new RobertaForMaskedLM that uses that new model. ArchIt writes that code for you.

Installation

Due to severe implementation changes in the transformers package, the version of ArchIt you need depends on which version of transformers you are using:

| Transformers version | ArchIt version |
|----------------------|----------------|
| <= v4.49.0           | <= 2026.3.1    |
| >= v4.50.0           | > 2026.3.1     |

To install a specific version, replace YOUR_VERSION_HERE below according to the table:

pip install "archit[all] == YOUR_VERSION_HERE"
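For example, on transformers v4.49.0 or older, the newest compatible release according to the table is 2026.3.1:

pip install "archit[all] == 2026.3.1"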

Usage

Instantiation

You have some kind of model architecture that generates token embeddings (e.g. some variant of RobertaModel) and you want to put a head on it for fine-tuning. Because you are a sane individual, you'd prefer not to write code for a head that has been defined hundreds of times before by others, and you also don't want to write a class for every model-head combination.

With ArchIt, you can just build the architecture at runtime:

from transformers import RobertaConfig

from archit.instantiation.basemodels import RobertaBaseModel
from archit.instantiation.tasks import ForDependencyParsing
from archit.instantiation.heads import DependencyParsingHeadConfig

model_with_head = ForDependencyParsing.fromModelAndHeadConfig(
    RobertaBaseModel(RobertaConfig()),  # wrapper around the core embedding model
    DependencyParsingHeadConfig()       # configuration for the dependency-parsing head
)

"What if I have a pre-trained checkpoint of my core model?" No problem! The from_pretrained of these predefined model-with-head architectures will read your checkpoint and put the weights into the right parts of the model:

from archit.instantiation.basemodels import RobertaBaseModel
from archit.instantiation.tasks import ForDependencyParsing
from archit.instantiation.heads import DependencyParsingHeadConfig

model_with_head = ForDependencyParsing.from_pretrained(
    "path/to/core-checkpoint",     # checkpoint of the base model
    RobertaBaseModel,              # wrapper class for the core embedding model
    DependencyParsingHeadConfig()  # configuration for the freshly initialised head
)

All you need to supply is your checkpoint, a wrapper class to put around the specific implementation of your core embedding model, and -- if the checkpoint is not already a checkpoint of ForDependencyParsing -- a config for the head you put on top.
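Conversely, if your checkpoint was saved from a ForDependencyParsing model itself, the head's weights live in the checkpoint too, so the head config should not be needed. A hedged sketch, assuming the head config argument can be omitted in that case (consult the API if in doubt):

model_with_head = ForDependencyParsing.from_pretrained(
    "path/to/task-checkpoint",  # hypothetical path to a full model-with-head checkpoint
    RobertaBaseModel
)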

Declaration

You've defined a PyTorch architecture in code. Now you reassign one of its fields. This new PyTorch architecture exists in memory, but not in code.

With ArchIt, the code for the modified architecture can be generated automatically:

from transformers import RobertaForMaskedLM
import torch

class CoolNewEmbeddingMatrix(torch.nn.Module):
    def forward(self, input_ids):
        pass  # Stub: a real implementation would map input_ids to embedding vectors.

model_with_head = RobertaForMaskedLM.from_pretrained("roberta-base")
model_with_head.roberta.embeddings.word_embeddings = CoolNewEmbeddingMatrix()
# ^--- This works, but there is no class definition declaring word_embeddings as a CoolNewEmbeddingMatrix.

from archit.declaration.compiler import printDifference
printDifference(model_with_head, RobertaForMaskedLM)  # Outputs Python code for 3 new classes.
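The printed code will look roughly like the following sketch, reusing the CoolNewEmbeddingMatrix defined above (the class names and bodies here are my guess at the shape of the output, not ArchIt's actual output):

from transformers.models.roberta.modeling_roberta import (
    RobertaEmbeddings, RobertaModel, RobertaForMaskedLM
)

class RobertaEmbeddingsWithCoolNewEmbeddingMatrix(RobertaEmbeddings):
    def __init__(self, config):
        super().__init__(config)
        self.word_embeddings = CoolNewEmbeddingMatrix()  # the in-place replacement, now declared in code

class RobertaModelWithCoolNewEmbeddingMatrix(RobertaModel):
    def __init__(self, config):
        super().__init__(config)
        self.embeddings = RobertaEmbeddingsWithCoolNewEmbeddingMatrix(config)

class RobertaForMaskedLMWithCoolNewEmbeddingMatrix(RobertaForMaskedLM):
    def __init__(self, config):
        super().__init__(config)
        self.roberta = RobertaModelWithCoolNewEmbeddingMatrix(config)

With such class definitions, a checkpoint of the modified model can be loaded again through an ordinary from_pretrained call on the outermost class.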
