Reverse engineer transformer models.
Project description
Deformers 
Experiments with modular neural networks by patching pre-trained LLMs.
Original Model
All experiments use the open-source model qwen/qwen3.5-9b.
Relevant configuration:
- hidden size: 4096
- number of layers: 32
- vocabulary size: 248320
- embedding size: 4096
- positional encoding: rotary
- embedding weights are not tied to the output head
The embedding layer and output head each contain approximately:
4096 × 248320 ≈ 1.02B parameters
Patching Layers
Composite Embedding (Prefix Patch)
Frozen components
- tokenization (Qwen BPE tokenizer)
- transformer trunk (all the hidden transformer layers)
- positional encoding
- output head
Replaced component
The original token embedding Embedding(V=248320, D=4096) is replaced with a composite embedding layer with:
- a group dimension of 32
- an input dimension of 256 (byte values)
- an embedding dimension of 128
- for a total of 256 × 128 = 32768 parameters (a single byte table shared across all 32 groups)
A byte tensor of shape (B, 32 × S) is thus reshaped into (B, S, 32 × 128) = (B, S, 4096).
A regular transformer block then maps these composite embeddings onto the original embedding space of the Qwen model, as sketched below.
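As a rough illustration, a minimal PyTorch sketch of such a patch could look like this (the class name, the shared byte table, and the adapter hyper-parameters are assumptions; only the dimensions come from the description above):

```python
import torch
import torch.nn as nn

class CompositeEmbedding(nn.Module):
    """Byte-level replacement for the original Embedding(248320, 4096)."""

    def __init__(self, group_dim: int = 32, num_bytes: int = 256, byte_dim: int = 128):
        super().__init__()
        self.group_dim = group_dim
        # One 256 x 128 byte table shared across the 32 groups: 32768 parameters.
        self.byte_table = nn.Embedding(num_bytes, byte_dim)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (B, 32 * S) integer byte values in [0, 256)
        B, L = byte_ids.shape
        S = L // self.group_dim
        h = self.byte_table(byte_ids)   # (B, 32 * S, 128)
        return h.view(B, S, -1)         # (B, S, 32 * 128) = (B, S, 4096)

# A standard transformer block then aligns these vectors with the original
# Qwen embedding space (the hyper-parameters below are assumptions):
adapter = nn.TransformerEncoderLayer(d_model=4096, nhead=32, batch_first=True)
```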
Input representation
Input text is tokenized using the original Qwen tokenizer.
Each token string is encoded as UTF-8 bytes.
Tokens shorter than 32 bytes are padded to a 32-byte block; longer tokens are truncated.
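Concretely, the byte blocks could be built as follows (a minimal sketch; the helper name is hypothetical and `tokenizer` stands for the frozen Qwen tokenizer):

```python
def token_to_bytes(token: str, block: int = 32) -> list[int]:
    """Encode one token string as a fixed 32-byte block (truncate, then zero-pad)."""
    raw = token.encode("utf-8")[:block]          # truncate tokens longer than 32 bytes
    return list(raw) + [0] * (block - len(raw))  # zero-pad shorter tokens

# Example: "hello" -> [104, 101, 108, 108, 111, 0, 0, ..., 0] (32 values)
#
# Byte blocks for a whole text, assuming a Hugging Face tokenizer that
# exposes convert_ids_to_tokens:
# tokens = tokenizer.convert_ids_to_tokens(tokenizer(text)["input_ids"])
# byte_ids = [b for t in tokens for b in token_to_bytes(t)]  # length 32 * S
```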
Patch Training
Training was performed on a multilingual corpus with a custom loss:
$$L_{k} = \| H_{k, patch}(x) - H_{k, qwen}(x) \|^{2}$$
Where:
- $k$ is the depth inside the original Qwen 3.5 model
- $H_{k, qwen}$ is the hidden state at depth $k$ in the original model
- $H_{k, patch}$ is the hidden state at depth $k$ when the original embedding layer is replaced with the patch
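A minimal sketch of this objective (the reduction over batch and sequence positions is an assumption; only the squared distance comes from the formula):

```python
import torch

def patch_loss(h_patch: torch.Tensor, h_qwen: torch.Tensor) -> torch.Tensor:
    """Squared L2 distance between hidden states at depth k.

    h_patch: H_{k, patch}(x) from the patched model, shape (B, S, 4096)
    h_qwen:  H_{k, qwen}(x) from the frozen original, shape (B, S, 4096)
    """
    # The original model is frozen, so its hidden states carry no gradient.
    return (h_patch - h_qwen.detach()).pow(2).sum(dim=-1).mean()
```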
Hierarchical Softmax Head
Frozen components
- tokenizer
- embedding layer
- positional encoding
- transformer trunk (all the hidden layers)
Replaced component
The original output head Linear(4096 → 248320) is replaced by a hierarchical softmax tree.
Tokens are organized in a binary tree of depth 18 = ⌈log₂(248320)⌉.
Each token corresponds to a unique path from root to leaf.
Patch Training
Here a simple cross-entropy loss is enough: the negative log-likelihood of the binary left/right decisions along the target token's root-to-leaf path, as sketched below.
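A minimal sketch of such a head (the parameterization below shares one decision vector per tree level, which is a simplifying assumption; a full hierarchical softmax would give every internal node its own vector):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalSoftmaxHead(nn.Module):
    """Binary-tree output head: one left/right decision per tree level."""

    def __init__(self, hidden: int = 4096, depth: int = 18):
        super().__init__()
        self.depth = depth
        # Simplification: one shared decision vector per level (18 x 4096).
        self.decisions = nn.Parameter(torch.randn(depth, hidden) * 0.02)

    def path_logprob(self, h: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # h: (B, S, hidden); token_ids: (B, S). Bit i of a token id encodes
        # the left/right choice at tree level i.
        levels = torch.arange(self.depth, device=h.device)
        bits = (token_ids.unsqueeze(-1) >> levels) & 1  # (B, S, 18)
        logits = h @ self.decisions.T                   # (B, S, 18)
        # Sum of log-probabilities of the 18 binary decisions on the path.
        return -F.binary_cross_entropy_with_logits(
            logits, bits.float(), reduction="none"
        ).sum(dim=-1)                                   # (B, S)

# Training: cross-entropy is the negative log-likelihood of the target path.
# loss = -head.path_logprob(hidden_states, target_ids).mean()
```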
License
Licensed under the AGPLv3.
Download files
Source Distribution: deformers-0.4.1.tar.gz (20.9 kB)
Built Distribution: deformers-0.4.1-py3-none-any.whl (24.0 kB)
File details
Details for the file deformers-0.4.1.tar.gz.
File metadata
- Download URL: deformers-0.4.1.tar.gz
- Upload date:
- Size: 20.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.7 {"installer":{"name":"uv","version":"0.11.7","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bd9f92950a896408518240d71141263225e03e977e0a155eae9cf68302a28291 |
| MD5 | 13484b3d93ca71506a0fbb33d42659e5 |
| BLAKE2b-256 | 31c48b60d14e34d00294f435e778cdc41163e0ca974afa7da008ca0c537c5e1c |
File details
Details for the file deformers-0.4.1-py3-none-any.whl.
File metadata
- Download URL: deformers-0.4.1-py3-none-any.whl
- Upload date:
- Size: 24.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.7 {"installer":{"name":"uv","version":"0.11.7","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 23df828ed8d0ec784e6deff8eb0f7c83dc89a8347cf37c788b9c45a6f15f7fbf |
| MD5 | 1e5525f8ff17ba399a9c7af73924f5ce |
| BLAKE2b-256 | 6d2f79f75bcb4729b97d97c83680ee2633c401897efb73327717285d4ca12f54 |