Reverse engineer transformer models.

Project description


Experiments with modular neural networks by patching pre-trained LLMs.

Original Model

All experiments use the open-source model qwen/qwen3.5-9b.

Relevant configuration:

  • hidden size: 4096
  • number of layers: 32
  • vocabulary size: 248320
  • embedding size: 4096
  • positional encoding: rotary
  • embedding weights are not tied with the output head

The embedding layer and output head each contain approximately:

4096 × 248320 ~ 1.02B parameters
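The figure above can be checked directly (a back-of-the-envelope script, not part of the package):

```python
# Parameter counts for the frozen embedding layer and output head,
# using the configuration values listed above.
HIDDEN = 4096
VOCAB = 248_320

embedding_params = VOCAB * HIDDEN   # token embedding matrix
head_params = HIDDEN * VOCAB        # output head (weights are not tied)

print(embedding_params)             # 1017118720, i.e. ~1.02B
print(embedding_params + head_params)
```

Since the weights are not tied, the two layers together account for roughly 2B of the model's parameters.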

Patching Layers

Composite Embedding (Prefix Patch)

Frozen components

  • tokenization (Qwen BPE tokenizer)
  • transformer trunk (all the hidden transformer layers)
  • positional encoding
  • output head

Replaced component

The original token embedding Embedding(V=248320, D=4096) is replaced with a composite embedding layer with:

  • a group dimension of 32
  • an input dimension of 256 (byte values)
  • an embedding dimension of 128
  • for a total of 256 * 128 = 32768 parameters

A tensor of byte ids with shape (B, 32 * S) is thus mapped to (B, S, 32 * 128) = (B, S, 4096).
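A minimal NumPy sketch of this lookup-and-reshape, assuming a single 256 × 128 byte-embedding table shared across all 32 groups (names and the sharing scheme are illustrative, not the package's actual API):

```python
import numpy as np

GROUPS, BYTE_VALUES, EMBED_DIM = 32, 256, 128  # from the configuration above

# One shared lookup table: 256 * 128 = 32768 parameters.
rng = np.random.default_rng(0)
byte_table = rng.normal(size=(BYTE_VALUES, EMBED_DIM)).astype(np.float32)

def composite_embed(byte_ids: np.ndarray) -> np.ndarray:
    """Map byte ids of shape (B, 32 * S) to embeddings of shape (B, S, 4096)."""
    B, total = byte_ids.shape
    S = total // GROUPS
    vecs = byte_table[byte_ids]                    # (B, 32 * S, 128)
    return vecs.reshape(B, S, GROUPS * EMBED_DIM)  # (B, S, 32 * 128)

x = rng.integers(0, BYTE_VALUES, size=(2, GROUPS * 5))  # batch of 2, 5 tokens
print(composite_embed(x).shape)  # (2, 5, 4096)
```

The reshape works because the 32 bytes of each token are laid out consecutively, so each token's byte vectors concatenate into one 4096-dimensional embedding.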

A regular transformer block then maps these composite embeddings onto the original embeddings of the Qwen model.

Input representation

Input text is tokenized using the original Qwen tokenizer.

Each token string is encoded as UTF-8 bytes.

Each byte sequence is then padded to a fixed 32-byte block; longer tokens are truncated.
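A plausible sketch of this padding scheme (the padding byte and helper name are assumptions, not the package's actual implementation):

```python
BLOCK = 32  # bytes per token, matching the group dimension above

def token_to_block(token: str, pad_byte: int = 0) -> list[int]:
    """Encode a token string as UTF-8 and pad/truncate to a 32-byte block."""
    raw = list(token.encode("utf-8"))[:BLOCK]      # truncate long tokens
    return raw + [pad_byte] * (BLOCK - len(raw))   # pad short ones

block = token_to_block(" hello")
print(len(block))  # 32
print(block[:6])   # [32, 104, 101, 108, 108, 111]
```

Concatenating S such blocks yields the (B, 32 * S) byte-id tensor consumed by the composite embedding.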

Patch Training

Training was performed on a multilingual corpus with a custom loss:

$$L_{k} = \| H_{k, patch}(x) - H_{k, qwen}(x) \|^{2}$$

Where:

  • $k$ is the depth inside the original Qwen 3.5 model
  • $H_{k, qwen}$ is the hidden state at depth $k$ in the original model
  • $H_{k, patch}$ is the hidden state at depth $k$ when the embedding layer is replaced
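This loss amounts to layerwise feature distillation against the frozen model; a NumPy sketch with illustrative shapes and names:

```python
import numpy as np

def hidden_state_loss(h_patch: np.ndarray, h_qwen: np.ndarray) -> float:
    """Squared L2 distance between hidden states at the same depth k."""
    return float(np.sum((h_patch - h_qwen) ** 2))

rng = np.random.default_rng(0)
h_qwen = rng.normal(size=(2, 5, 4096))   # (batch, seq, hidden) from the frozen trunk
h_patch = h_qwen + 0.01 * rng.normal(size=h_qwen.shape)  # patched model's states

print(hidden_state_loss(h_qwen, h_qwen))  # 0.0 when the states match exactly
```

Driving this distance to zero at every depth $k$ makes the patched embedding a drop-in replacement from the trunk's point of view.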

Hierarchical Softmax Head

Frozen components

  • tokenizer
  • embedding layer
  • positional encoding
  • transformer trunk (all the hidden layers)

Replaced component

The original output head Linear(4096 => 248320) is replaced by a hierarchical softmax tree.

Tokens are organized in a binary tree with a depth of 18 = ⌈log_2(248320)⌉.

Each token corresponds to a unique path from root to leaf.
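One common way to realize such a tree is to use each token id's 18-bit binary representation as its root-to-leaf path; a hedged sketch of that idea (the package may organize its tree differently, e.g. by token frequency, and a full tree has one classifier per internal node rather than per depth as simplified here):

```python
import numpy as np

VOCAB, DEPTH, HIDDEN = 248_320, 18, 4096  # 2**18 = 262144 >= 248320 leaves

def token_path(token_id: int) -> list[int]:
    """Left/right decisions (0/1) from the root down to the token's leaf."""
    return [(token_id >> (DEPTH - 1 - d)) & 1 for d in range(DEPTH)]

def log_prob(hidden: np.ndarray, node_w: np.ndarray, token_id: int) -> float:
    """Sum of log-sigmoid branch probabilities along the token's path."""
    logp = 0.0
    for d, bit in enumerate(token_path(token_id)):
        z = hidden @ node_w[d]
        p = 1.0 / (1.0 + np.exp(-z))       # P(branch right) at depth d
        logp += np.log(p if bit else 1.0 - p)
    return float(logp)

rng = np.random.default_rng(0)
node_w = rng.normal(size=(DEPTH, HIDDEN)) * 0.01  # toy branch classifiers
h = rng.normal(size=HIDDEN)                       # a final hidden state
print(token_path(5))           # 18 decisions; id 5 = 0b101 ends in 1, 0, 1
print(log_prob(h, node_w, 5))  # a negative log-probability
```

The point of the tree is cost: scoring one token takes 18 dot products instead of the 248320 required by the dense head.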

Patch Training

Here, a simple cross-entropy loss is enough.

License

Licensed under the AGPLv3.
