Project description
TinyModel
TinyModel is a 4-layer, 44M-parameter transformer trained on TinyStories V2 for mechanistic interpretability. It uses ReLU activations and no layer norms, and it comes with trained SAEs and transcoders.
It can be installed with pip install tinystoriesmodel.
from tiny_model import TinyModel, tokenizer

lm = TinyModel()

# Inference: tokenize a batch of prompts and get per-position logprobs
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts
# (see the 'SAEs/transcoders' section below for more information)
sae_acts = lm['A1N123'](tok_ids)
transcoder_acts = lm['M2'](tok_ids)

# Or generate text directly from a prompt
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids back into text, use
tokenizer.decode(tok_ids)
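For example, the logprobs can be used for next-token prediction (a minimal sketch, assuming lm(tok_ids) returns a [batch, seq, vocab] tensor of log-probabilities, as the variable name suggests):

tok_ids, attn_mask = tokenizer(['Once upon a time'])
logprobs = lm(tok_ids)

# Greedy next-token prediction from the final position
next_tok_id = logprobs[0, -1].argmax().item()
print(tokenizer.decode([next_tok_id]))  # assumes decode accepts a list of token ids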
The model was trained for 3 epochs on a preprocessed version of TinyStoriesV2. I recommend using this same dataset when collecting SAE/transcoder activations.
SAEs/transcoders
Some sparse SAEs/transcoders are provided along with the model.
For example, acts = lm['M2N100'](tok_ids)
To get sparse acts, build the tag as follows:
- Choose which part of the transformer block you want to look at. Currently sparse MLPs/transcoders and SAEs on attention out are available, under the tags 'M' and 'A' respectively. (Residual-stream and MLP-out SAEs exist, they just haven't been added yet; bug me on e.g. Twitter if you want this to happen fast.)
- Then add the layer: a sparse MLP at layer 2 would be 'M2'.
- Finally, optionally add a particular neuron, for example 'A0N10000'. (A short sketch follows this list.)
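Putting those pieces together (an illustrative sketch; the layer and neuron indices are arbitrary examples of the tag scheme above):

# Sparse MLP / transcoder at layer 2, all neurons
mlp_acts = lm['M2'](tok_ids)

# Attention-out SAE at layer 0, all neurons
attn_acts = lm['A0'](tok_ids)

# A single neuron from the attention-out SAE at layer 0
neuron_acts = lm['A0N10000'](tok_ids)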
Tokenization
Tokenization is done as follows (a short sketch follows this list):
- The top 10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
- To tokenize a document, first tokenize it with the GPT-NeoX tokenizer, then replace any token not in the top 10K with a special [UNK] token id. All token ids are then remapped to lie between 1 and 10K, roughly ordered from most frequent to least.
- Finally, a [BEGIN] token id is prepended to the document.
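A minimal sketch of this scheme, assuming the GPT-NeoX tokenizer from Hugging Face transformers; the special token ids and the helper name here are hypothetical, and the package's actual choices may differ:

from transformers import AutoTokenizer

neox = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

BEGIN_ID = 0       # hypothetical id for [BEGIN]
UNK_ID = 10_001    # hypothetical id for [UNK]

def tokenize_document(text, top_10k_ids):
    # top_10k_ids: GPT-NeoX token ids sorted from most to least frequent
    # remapped id 1 = most frequent token, ..., 10_000 = least frequent kept token
    remap = {tok: rank + 1 for rank, tok in enumerate(top_10k_ids)}
    neox_ids = neox.encode(text)
    return [BEGIN_ID] + [remap.get(t, UNK_ID) for t in neox_ids]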
Download files
Source Distribution: tinystoriesmodel-0.1.3.tar.gz (76.6 kB)
Built Distribution: tinystoriesmodel-0.1.3-py3-none-any.whl (75.7 kB)
File details
Details for the file tinystoriesmodel-0.1.3.tar.gz.
File metadata
- Download URL: tinystoriesmodel-0.1.3.tar.gz
- Size: 76.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.5 Linux/6.5.0-35-generic
File hashes
Algorithm | Hash digest
---|---
SHA256 | 17e60f1fef3f385ff76090745229ef50a95a78cf68cfb3452b67d87f2d5a0ca7
MD5 | 02e6a52d692c037cf62a4bf1e64ed361
BLAKE2b-256 | ba40a355ae4aec79b7ef8c9875a05e0e773bad962224504b56ebf6121494272d
File details
Details for the file tinystoriesmodel-0.1.3-py3-none-any.whl.
File metadata
- Download URL: tinystoriesmodel-0.1.3-py3-none-any.whl
- Size: 75.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.5 Linux/6.5.0-35-generic
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9dcd1264d0e8deddd56932a44bf18444703196ced58d26170fe8c3880e9eaea9
MD5 | 0c650809fb2a5ee6f2d9883b003c4756
BLAKE2b-256 | 42d11c3570818ecd17a36f8389d13793b2eb1dc86584f7b01ba82979d99d64e9