A small TinyStories LM with SAEs and transcoders

TinyModel

TinyModel is a 4-layer, 44M-parameter model trained on TinyStories V2 for mechanistic interpretability. It uses ReLU activations and no layernorms, and it comes with trained SAEs and transcoders.

It can be installed with `pip install tinymodel`.

```python
from tiny_model import TinyModel, tokenizer

lm = TinyModel()

# For inference
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts
# (see the 'SAEs/transcoders' section below for more information)
feature_acts = lm['M1N123'](tok_ids)
all_feat_acts = lm['M2'](tok_ids)

# Generation
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids you can use
tokenizer.decode(tok_ids)
```

The model was trained for 3 epochs on a preprocessed version of TinyStoriesV2. A pre-tokenized dataset is available here; I recommend using it for getting SAE/transcoder activations.
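
As a rough sketch of that workflow (the dataset id and the `tok_ids` column name below are placeholders, not the real ones; substitute the pre-tokenized dataset linked above):

```python
# Rough sketch: run the model on a batch from the pre-tokenized dataset.
# The dataset id and the 'tok_ids' column name are placeholders.
import torch
from datasets import load_dataset
from tiny_model import TinyModel

lm = TinyModel()

ds = load_dataset("user/tinystories-v2-pretokenized", split="train")  # placeholder id
tok_ids = torch.tensor(ds[:8]["tok_ids"])  # assumes fixed-length rows of token ids

logprobs = lm(tok_ids)         # next-token log-probabilities
feat_acts = lm['M2'](tok_ids)  # layer-2 transcoder acts on the same batch
```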

SAEs/transcoders

Some sparse SAEs/transcoders are provided along with the model.

For example, `acts = lm['M2N100'](tok_ids)`.

To get sparse acts, choose which part of the transformer block you want to look at. Currently sparse MLPs/transcoders and SAEs on the attention output are available, under the tags 'M' and 'A' respectively. Residual-stream and MLP-out SAEs exist; they just haven't been added yet, so bug me on e.g. Twitter if you want this to happen fast.

Then add the layer: a sparse MLP at layer 2 would be 'M2'. Finally, you can optionally add a particular neuron, for example 'M0N10000'.
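
Putting the naming scheme together (the feature indices here are arbitrary examples):

```python
from tiny_model import TinyModel, tokenizer

lm = TinyModel()
tok_ids, attn_mask = tokenizer(['Once upon a time'])

mlp_acts = lm['M2'](tok_ids)        # all sparse-MLP/transcoder features at layer 2
attn_acts = lm['A0'](tok_ids)       # all attention-out SAE features at layer 0
one_feat = lm['M0N10000'](tok_ids)  # one feature: layer-0 sparse MLP, neuron 10000
```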

Tokenization

Tokenization is done as follows:

  • The top 10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
  • To tokenize a document, first tokenize it with the GPT-NeoX tokenizer. Then replace any token not in the top 10K with a special [UNK] token id. All token ids are then mapped to lie between 1 and 10K, roughly sorted from most frequent to least frequent.
  • Finally, prepend the document with a [BEGIN] token id.
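
For concreteness, here is a rough sketch of that scheme. The packaged tokenizer already does all of this internally; the toy corpus, the special-token ids, and the use of transformers below are illustrative assumptions rather than the package's actual implementation.

```python
from collections import Counter
from transformers import AutoTokenizer

neox_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Toy stand-in corpus; the real vocabulary was computed over TinyStories.
corpus = ["Once upon a time, there was a tiny robot.",
          "In the forest, a fox slept."]

# Select the top-10K most frequent GPT-NeoX tokens, sorted by frequency.
counts = Counter(t for doc in corpus for t in neox_tok(doc)["input_ids"])
top_10k = [tok for tok, _ in counts.most_common(10_000)]

# Map NeoX token ids to 1..10K (the real mapping is only roughly
# frequency-sorted); the ids for [UNK] and [BEGIN] are illustrative.
rank = {tok: i + 1 for i, tok in enumerate(top_10k)}
UNK_ID, BEGIN_ID = 0, 10_001

def tokenize(doc: str) -> list[int]:
    neox_ids = neox_tok(doc)["input_ids"]
    return [BEGIN_ID] + [rank.get(t, UNK_ID) for t in neox_ids]
```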
