
A small TinyStories LM with SAEs and transcoders

Project description

TinyModel

TinyModel is a 4 layer, 44M parameter model trained on TinyStories V2 for mechanistic interpretability. It uses ReLU activations and no layernorms. It comes with trained SAEs and transcoders.

It can be installed with pip install tinymodel and requires Python 3.11 or higher.

This library is in an alpha state and probably has some bugs. Please let me know if you find any, or if you're having any trouble with the library: I can be emailed at my full name @ gmail.com or messaged on Twitter. You can also open GitHub issues.

from tinymodel import TinyModel, tokenizer

lm = TinyModel()

# for inference
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts
# See 'SAEs/Transcoders' section for more information.
feature_acts = lm['M1N123'](tok_ids)
all_feat_acts = lm['M2'](tok_ids)

# Generation
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids you can use
tokenizer.decode(tok_ids)

The model was trained for 3 epochs on a preprocessed version of TinyStoriesV2. A pre-tokenized dataset is available here; I recommend using it for getting SAE/transcoder activations.
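As a minimal sketch, getting feature activations over a batch of pre-tokenized token ids looks roughly like this (the random tensor is only a stand-in for a real batch from the dataset, and tok_ids is assumed to be a (batch, seq) LongTensor of token ids):

import torch
from tinymodel import TinyModel

lm = TinyModel()

# Stand-in for one batch of token ids from the pre-tokenized dataset
# (a real batch would be loaded from the dataset linked above).
tok_ids = torch.randint(1, 10_000, (8, 256))

with torch.no_grad():
    feature_acts = lm['M2'](tok_ids)  # sparse MLP/transcoder acts at layer 2

print(feature_acts.shape)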

SAEs/transcoders

Some sparse SAEs/transcoders are provided along with the model.

For example, acts = lm['M2N100'](tok_ids)

To get sparse acts, choose which part of the transformer block you want to look at. Currently sparse MLPs/transcoders and SAEs on the attention output are available, under the tags 'M' and 'A' respectively. Residual-stream and MLP-out SAEs exist but haven't been added yet; bug me (e.g. on Twitter) if you want this to happen fast.

Then add the layer: a sparse MLP at layer 2 would be 'M2'. Finally, you can optionally add a particular neuron, for example 'M0N10000'.
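Putting the tag format together, a minimal sketch (return shapes not shown):

from tinymodel import TinyModel, tokenizer

lm = TinyModel()
tok_ids, attn_mask = tokenizer(['Once upon a time'])

mlp_acts = lm['M2'](tok_ids)        # all sparse MLP/transcoder features at layer 2
attn_acts = lm['A0'](tok_ids)       # all attention-out SAE features at layer 0
one_feat = lm['M0N10000'](tok_ids)  # a single feature: layer-0 sparse MLP, neuron 10000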

Tokenization

Tokenization is done as follows (a rough sketch of the procedure is shown after the list):

  • The top 10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
  • To tokenize a document, first tokenize it with the GPT-NeoX tokenizer, then replace tokens that are not in the top 10K with a special [UNK] token id. All token ids are then mapped to values between 1 and 10K, roughly ordered from most frequent to least.
  • Finally, prepend the document with a [BEGIN] token id.
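For illustration only, here is a rough sketch of that scheme; the helper names and special token ids below are hypothetical and not the package's actual preprocessing code:

from collections import Counter
from transformers import AutoTokenizer

neox_tok = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

VOCAB_SIZE = 10_000
UNK_ID = 0                 # hypothetical id for the [UNK] token
BEGIN_ID = VOCAB_SIZE + 1  # hypothetical id for the [BEGIN] token

def build_vocab(corpus: list[str]) -> dict[int, int]:
    # Map the VOCAB_SIZE most frequent GPT-NeoX token ids to new ids 1..VOCAB_SIZE,
    # roughly ordered from most frequent to least.
    counts = Counter(tid for doc in corpus for tid in neox_tok(doc)['input_ids'])
    most_common = [tid for tid, _ in counts.most_common(VOCAB_SIZE)]
    return {tid: new_id for new_id, tid in enumerate(most_common, start=1)}

def tokenize_document(text: str, vocab: dict[int, int]) -> list[int]:
    neox_ids = neox_tok(text)['input_ids']
    remapped = [vocab.get(tid, UNK_ID) for tid in neox_ids]  # out-of-vocab -> [UNK]
    return [BEGIN_ID] + remapped                             # prepend [BEGIN]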

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tinymodel-0.1.2.2.post7.tar.gz (77.4 kB)

Uploaded Source

Built Distribution

tinymodel-0.1.2.2.post7-py3-none-any.whl (76.1 kB)

Uploaded Python 3

File details

Details for the file tinymodel-0.1.2.2.post7.tar.gz.

File metadata

  • Download URL: tinymodel-0.1.2.2.post7.tar.gz
  • Upload date:
  • Size: 77.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for tinymodel-0.1.2.2.post7.tar.gz
  • SHA256: 5187291a39a8ed6a06ab0eba5615e9a4937512f951e477d8d3a648b7e72aedd0
  • MD5: faa0d9b0b9dd4c2034d4d5bac17e1bf2
  • BLAKE2b-256: e87347cbe04bc077e9c3401afa1cc91251a5aa436c49f3300ce8884e36c9457e

See more details on using hashes here.

File details

Details for the file tinymodel-0.1.2.2.post7-py3-none-any.whl.

File hashes

Hashes for tinymodel-0.1.2.2.post7-py3-none-any.whl
  • SHA256: 5b1c3223c9f2e7cb38bd72a3c163a8221d27456cbb6dd70f3f6dcb7b5164784e
  • MD5: 82f6ef5d9a627faabf43a34fbc2ff45b
  • BLAKE2b-256: 8be2d311b8165964b4c267bd8ddb04dbda9963070b9b437470e510ed28913b79

See more details on using hashes here.
