
A small TinyStories LM with SAEs and transcoders

Project description

TinyModel

TinyModel is a 4-layer, 44M-parameter model trained on TinyStories V2 for mechanistic interpretability. It uses ReLU activations and no layer norms. It comes with trained SAEs and transcoders.

It can be installed with pip install tinymodel.

from tiny_model import TinyModel, tokenizer

lm = TinyModel()

# for inference
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts
# See 'SAEs/Transcoders' section for more information.
feature_acts = lm['M1N123'](tok_ids)
all_feat_acts = lm['M2'](tok_ids)

# Generation
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids you can use
tokenizer.decode(tok_ids)

The model was trained for 3 epochs on a preprocessed version of TinyStoriesV2. The pre-tokenized dataset is available here. I recommend using this dataset when getting SAE/transcoder activations.
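For example, here is a minimal sketch of collecting feature acts over that dataset, assuming it is hosted on the Hugging Face Hub; the dataset path and the tok_ids column name below are placeholders, not the real identifiers:

import torch
from datasets import load_dataset
from tiny_model import TinyModel

lm = TinyModel()

# Placeholder path: substitute the pre-tokenized dataset linked above.
ds = load_dataset('USER/tinystories-v2-pretokenized', split='train')

# Column name is an assumption; check the dataset card.
tok_ids = torch.tensor(ds[:8]['tok_ids'])

# Sparse MLP/transcoder acts at layer 2 (see the next section for tags).
feature_acts = lm['M2'](tok_ids)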

SAEs/Transcoders

Some sparse SAEs/transcoders are provided along with the model.

For example, acts = lm['M2N100'](tok_ids)

To get sparse acts, choose which part of the transformer block you want to look at. Currently sparse MLPs/transcoders and SAEs on attention out are available, under the tags 'M' and 'A' respectively. Residual-stream and MLP-out SAEs exist; they just haven't been added yet, so bug me on e.g. Twitter if you want this to happen fast.

Then, add the layer index: a sparse MLP at layer 2 would be 'M2'. Finally, optionally add a particular neuron, e.g. 'M0N10000'.
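Putting the naming scheme together, a small sketch (the tags follow the convention above; the commented shapes are my assumptions, not guaranteed by the library):

from tiny_model import TinyModel, tokenizer

lm = TinyModel()
tok_ids, attn_mask = tokenizer(['Once upon a time'])

# 'M' tag: sparse MLP/transcoder. Layer 2, all features.
mlp_acts = lm['M2'](tok_ids)       # assumed shape: (batch, seq, n_features)

# 'A' tag: SAE on attention out. Layer 0, all features.
attn_acts = lm['A0'](tok_ids)

# Optional 'N' suffix picks a single feature: layer 1, neuron 123.
one_feat = lm['M1N123'](tok_ids)   # assumed shape: (batch, seq)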

Tokenization

Tokenization is done as follows:

  • The top-10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
  • To tokenize a document, first tokenize with the GPT-NeoX tokenizer, then replace tokens not in the top 10K with a special [UNK] token id. All token ids are then mapped to be between 1 and 10K, roughly sorted from most frequent to least.
  • Finally, prepend the document with a [BEGIN] token id (the whole procedure is sketched below).
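
As a sketch of that procedure (the top-10K rank table ships with the package, so this is illustrative only; the [UNK] and [BEGIN] ids are assumptions):

from transformers import AutoTokenizer

# The GPT-NeoX tokenizer used for the initial pass.
neox = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

def tokenize_doc(text, rank_of_neox_id, unk_id=0, begin_id=1):
    # rank_of_neox_id: dict from a top-10K GPT-NeoX token id to its
    # frequency rank (1..10K). unk_id/begin_id are assumed placeholders.
    neox_ids = neox(text)['input_ids']
    ids = [rank_of_neox_id.get(t, unk_id) for t in neox_ids]
    return [begin_id] + ids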

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tinymodel-0.1.1.post11.tar.gz (83.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

tinymodel-0.1.1.post11-py3-none-any.whl (76.0 kB)

Uploaded Python 3

File details

Details for the file tinymodel-0.1.1.post11.tar.gz.

File metadata

  • Download URL: tinymodel-0.1.1.post11.tar.gz
  • Upload date:
  • Size: 83.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for tinymodel-0.1.1.post11.tar.gz
  • SHA256: 04d5dd3af0ce4b47b68abe10ab1b7f8a0fe6849378e4ed85984eb9ea3a43a1d9
  • MD5: 9da26416686fe3f3cd0b7906e9691847
  • BLAKE2b-256: a3d1c5415d8c8a396899157c43aa4cc87416960c8680fd79612f2f0dbd4696b5

See more details on using hashes here.

File details

Details for the file tinymodel-0.1.1.post11-py3-none-any.whl.

File metadata

  • Download URL: tinymodel-0.1.1.post11-py3-none-any.whl
  • Upload date:
  • Size: 76.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for tinymodel-0.1.1.post11-py3-none-any.whl
  • SHA256: ec413e8de8a1c849b890f5095a9a383253d3a919a12cb1876a4e19489a7fc11d
  • MD5: eaad71de29f58e21d797b21101793af7
  • BLAKE2b-256: 5ce803b15f51f32bb9a9d459c5c485b513019c565586dbed69c9a64591cd90eb

See more details on using hashes here.
