A small TinyStories LM with SAEs and transcoders

Project description

TinyModel

TinyModel is a 4-layer, 44M-parameter language model trained on TinyStories V2 for mechanistic interpretability. It uses ReLU activations and no layernorms, and it comes with trained SAEs and transcoders.
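For intuition, one block of this architecture looks roughly like the sketch below (the hidden width and head count are assumptions, not the model's actual config; only the ReLU MLPs and the absence of layernorms are as described above):

import torch
import torch.nn as nn

class Block(nn.Module):
    # One transformer block in this style: causal attention + ReLU MLP,
    # residual connections, and no layernorms anywhere.
    def __init__(self, d_model=768, n_heads=12):  # assumed sizes
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),  # ReLU activations, as stated
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        T = x.size(1)
        # Boolean causal mask: True marks positions a token may not attend to.
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=causal, need_weights=False)
        x = x + attn_out         # residual, no layernorm
        return x + self.mlp(x)   # residual, no layernorm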

It can be installed with:

pip install tinymodel

from tiny_model import TinyModel, tokenizer

lm = TinyModel()

# Inference: tokenize a batch of prompts, then get per-token logprobs.
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts by indexing the model with a feature tag.
# See the 'SAEs/transcoders' section for the tag format.
feature_acts = lm['M1N123'](tok_ids)    # activations of a single feature
all_feat_acts = lm['M2'](tok_ids)       # activations of all features at one layer

# Generation
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids back into text, use
tokenizer.decode(tok_ids)

The model was trained for 3 epochs on a preprocessed version of TinyStoriesV2; a pre-tokenized dataset is available here. I recommend using this dataset when getting SAE/transcoder activations.

SAEs/transcoders

Some sparse SAEs/transcoders are provided along with the model.

For example, acts = lm['M2N100'](tok_ids)

To get sparse acts, choose which part of the transformer block you want to look at. Currently, sparse MLPs/transcoders and SAEs on the attention output are available, under the tags 'M' and 'A' respectively. Residual-stream and MLP-out SAEs exist but haven't been added yet; bug me on e.g. Twitter if you want this to happen fast.

Then, add the layer index: the sparse MLP at layer 2 is 'M2'. Finally, you can optionally append a particular neuron, e.g. 'M0N10000'.
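Putting that together (a minimal sketch: the 'M' tags match the usage example above, while indexing attention-out SAEs through an 'A' tag is my assumption from the naming scheme, not a documented call):

# 'M<layer>' / 'A<layer>' select every feature at a layer;
# appending 'N<idx>' narrows to a single feature.
mlp_acts = lm['M2'](tok_ids)        # all transcoder features at MLP layer 2
one_act = lm['M0N10000'](tok_ids)   # one transcoder feature at layer 0
attn_acts = lm['A1'](tok_ids)       # attention-out SAE features at layer 1 (assumed tag)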

Tokenization

Tokenization is done as follows:

  • The top-10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
  • To tokenize a document, first tokenize it with the GPT-NeoX tokenizer, then replace any token not in the top 10K with a special [UNK] token id. All token ids are then mapped into the range 1 to 10K, roughly ordered from most frequent to least.
  • Finally, prepend the document with a [BEGIN] token id.
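A minimal sketch of that procedure (the real vocabulary ships with the package; the frequency counting and the exact [UNK]/[BEGIN] ids below are assumptions for illustration):

from collections import Counter
from transformers import AutoTokenizer

neox = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

BEGIN_ID = 0      # assumed id for [BEGIN]
UNK_ID = 10_000   # assumed id for [UNK]

def build_vocab(corpus, vocab_size=10_000):
    # Count GPT-NeoX token frequencies over the corpus and keep the top 10K.
    counts = Counter(t for doc in corpus for t in neox.encode(doc))
    # Map NeoX token ids to 1..10K, most frequent first.
    return {tok: rank + 1 for rank, (tok, _) in enumerate(counts.most_common(vocab_size))}

def tokenize(doc, vocab):
    # Replace out-of-vocabulary tokens with [UNK] and prepend [BEGIN].
    return [BEGIN_ID] + [vocab.get(t, UNK_ID) for t in neox.encode(doc)]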

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tinymodel-0.1.1.post15.tar.gz (92.0 kB, Source)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

tinymodel-0.1.1.post15-py3-none-any.whl (91.0 kB, Python 3)

File details

Details for the file tinymodel-0.1.1.post15.tar.gz.

File metadata

  • Download URL: tinymodel-0.1.1.post15.tar.gz
  • Upload date:
  • Size: 92.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for tinymodel-0.1.1.post15.tar.gz:

  • SHA256: 3998716430ef92ea2963f13ddc650cc5c72772484e1e160f11787651a8157679
  • MD5: daa1e333e29b7d9484fd8988489341fc
  • BLAKE2b-256: dfd7669483369d8e200b7974f5bf4238b662127fb68a7659ad1a949b6062dc40

See more details on using hashes here.
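To check a downloaded file against these values, something like the following works (a minimal sketch using Python's standard library; the filename assumes the sdist listed above):

import hashlib

EXPECTED = '3998716430ef92ea2963f13ddc650cc5c72772484e1e160f11787651a8157679'

# Hash the downloaded archive and compare against the published SHA256.
with open('tinymodel-0.1.1.post15.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED, 'SHA256 mismatch'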

File details

Details for the file tinymodel-0.1.1.post15-py3-none-any.whl.

File metadata

  • Download URL: tinymodel-0.1.1.post15-py3-none-any.whl
  • Upload date:
  • Size: 91.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for tinymodel-0.1.1.post15-py3-none-any.whl:

  • SHA256: f432142e4a34d760d080add8855fc0a77b3465a09f44e70c2626496c371eefe6
  • MD5: 6176543e8e37bfa28ecc24d5bb006395
  • BLAKE2b-256: c44c62df7ed1dd8172b6915575a24e8a8639f4cd61d3716aa37fb490ae4d74c3

See more details on using hashes here.
