
Reverse engineer transformer models.


Experiments with modular neural networks by patching pre-trained LLMs.

Original Model

All experiments use the open-source model qwen/qwen3.5-9b.

Relevant configuration:

  • hidden size: 4096
  • number of layers: 32
  • vocabulary size: 248320
  • embedding size: 4096
  • positional encoding: rotary
  • embedding weights are not tied with the output head

The embedding layer and output head each contain approximately:

4096 × 248320 ~ 1.02B parameters
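As a quick check of that figure:

```python
# parameter count shared by the embedding layer and the output head
hidden_size = 4096
vocab_size = 248320
print(f"{hidden_size * vocab_size:,}")  # 1,017,118,720 ~ 1.02B
```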

Patching Layers

Composite Embedding (Prefix Patch)

Frozen components

  • tokenization (Qwen BPE tokenizer)
  • transformer trunk (all the hidden transformer layers)
  • positional encoding
  • output head

Replaced component

The original token embedding Embedding(V=248320, D=4096) is replaced with a composite embedding layer that has:

  • a group dimension of 32
  • an input dimension of 256 (byte values)
  • an embedding dimension of 128
  • for a total of 256 * 128 = 32768 parameters

A tensor of shape (B, 32 * S) is thus processed into (B, S, 32 * 128) = (B, S, 4096).

A regular transformer block then maps these composite embeddings onto the original embeddings of the Qwen model.
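A minimal PyTorch sketch of the composite layer (the class name and the byte grouping order are assumptions, not the package's actual API):

```python
import torch

class CompositeEmbedding(torch.nn.Module):
    """Byte-level replacement for the token embedding (illustrative sketch)."""

    def __init__(self, group_dim: int = 32, input_dim: int = 256, embed_dim: int = 128):
        super().__init__()
        self.group_dim = group_dim
        # a single shared byte table: 256 * 128 = 32768 parameters
        self.byte_embed = torch.nn.Embedding(input_dim, embed_dim)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (B, 32 * S), with the 32 bytes of each token stored contiguously
        B, GS = byte_ids.shape
        embedded = self.byte_embed(byte_ids)  # (B, 32 * S, 128)
        # concatenate the 32 byte embeddings of each token: (B, S, 32 * 128) = (B, S, 4096)
        return embedded.view(B, GS // self.group_dim, -1)
```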

Input representation

Input text is tokenized using the original Qwen tokenizer.

Each token string is encoded as UTF-8 bytes.

Shorter tokens are padded to a 32-byte block and longer tokens are truncated.
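A rough sketch of this preprocessing, assuming the tokenizer is loaded through transformers and that padding uses zero bytes (the helper name is hypothetical):

```python
from transformers import AutoTokenizer

# model id copied from the description above; zero-byte padding is an assumption
TOKENIZER = AutoTokenizer.from_pretrained("qwen/qwen3.5-9b")

def text_to_byte_blocks(text: str, block_size: int = 32) -> list[list[int]]:
    blocks = []
    for token in TOKENIZER.tokenize(text):
        data = token.encode("utf-8")[:block_size]  # truncate tokens longer than 32 bytes
        blocks.append(list(data) + [0] * (block_size - len(data)))  # pad shorter ones
    return blocks
```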

Patch Training

Training was performed on a multilingual corpus with a custom loss:

$$L_{k} = \| H_{k, patch}(x) - H_{k, qwen}(x) \|^{2}$$

Where:

  • $k$ is the depth inside the original Qwen 3.5 model
  • $H_{k, qwen}$ is the hidden state at depth $k$ in the original model
  • $H_{k, patch}$ is the hidden state obtained when replacing the embedding layer
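A sketch of how this loss could be computed, assuming both models expose their hidden states via output_hidden_states; the function and argument names are illustrative, and the mean-squared form matches $L_k$ up to a constant factor:

```python
import torch

def patch_loss(patched_model, frozen_model, byte_inputs, token_inputs, depth):
    # L_k = || H_k,patch(x) - H_k,qwen(x) ||^2, here averaged over elements
    with torch.no_grad():  # the original model only provides the targets
        target = frozen_model(token_inputs, output_hidden_states=True).hidden_states[depth]
    hidden = patched_model(byte_inputs, output_hidden_states=True).hidden_states[depth]
    return torch.nn.functional.mse_loss(hidden, target)
```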

Hierarchical Softmax Head

Frozen components

  • tokenizer
  • embedding layer
  • positional encoding
  • transformer trunk (all the hidden layers)

Replaced component

The original output head Linear(4096 => 248320) is replaced by a hierarchical softmax tree.

Tokens are organized in a binary tree with a depth of 18 ~ log_2(248320).

Each token corresponds to a unique path from root to leaf.
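A minimal sketch of such a head, assuming a complete binary tree stored in heap order with the token id's bits as the root-to-leaf path (the actual tree construction is not specified here):

```python
import torch

class HierarchicalSoftmaxHead(torch.nn.Module):
    """Illustrative sketch: complete binary tree in heap order, token id bits as path."""

    def __init__(self, hidden_size: int = 4096, depth: int = 18):
        super().__init__()
        self.depth = depth
        # one decision vector per internal node (2^18 - 1 of them)
        self.nodes = torch.nn.Embedding(2 ** depth - 1, hidden_size)

    def log_prob(self, hidden: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # hidden: (N, 4096), tokens: (N,) target ids; returns (N,) log-probabilities
        node = torch.zeros_like(tokens)  # every path starts at the root (heap index 0)
        total = torch.zeros(hidden.shape[0], device=hidden.device)
        for level in range(self.depth - 1, -1, -1):
            bit = (tokens >> level) & 1  # 0 = go left, 1 = go right
            logit = (self.nodes(node) * hidden).sum(dim=-1)
            sign = bit.float() * 2.0 - 1.0  # +logit going right, -logit going left
            total = total + torch.nn.functional.logsigmoid(sign * logit)
            node = node * 2 + 1 + bit  # descend to the chosen child
        return total
```

Each next-token prediction then costs 18 dot products along the path instead of a full 248320-way projection.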

Patch Training

Here a simple cross-entropy loss is enough.
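With the path formulation above, the cross-entropy is just the negative log-likelihood of the target token's root-to-leaf path; reusing the sketch from the previous section (all inputs below are stand-ins):

```python
import torch

# hypothetical training step; `hidden` stands for the frozen trunk's last hidden states
head = HierarchicalSoftmaxHead(hidden_size=4096, depth=18)
hidden = torch.randn(8, 4096)                  # (N, D)
targets = torch.randint(0, 248320, (8,))       # (N,) next-token ids
loss = -head.log_prob(hidden, targets).mean()  # cross entropy over the paths
loss.backward()
```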

License

Licensed under the AGPLv3.
