
Project description

TinyModel

TinyModel is a 44M-parameter language model trained on TinyStories V2 for mechanistic interpretability research. It ships with trained SAEs and transcoders.

It can be installed with pip install tinystoriesmodel.

from tiny_model import TinyModel, tokenizer

lm = TinyModel()

# For inference: tokenize prompts, then get next-token logprobs.
tok_ids, attn_mask = tokenizer(['Once upon a time', 'In the forest'])
logprobs = lm(tok_ids)

# Get SAE/transcoder acts.
# See the 'SAE/transcoders' section below for more information.
sae_acts = lm['A1N123'](tok_ids)

# Or generate text directly from a prompt.
lm.generate('Once upon a time, Ada was happily walking through a magical forest with')

# To decode tok_ids back into text:
tokenizer.decode(tok_ids)

The model has 4 layers, uses ReLU activations, and has no layernorms.

It was trained for 3 epochs on a preprocessed version of TinyStories V2.

SAE/transcoders

Some sparse SAEs/transcoders are provided along with the model.

For example, acts = lm['M2N100'](tok_ids)

To get sparse acts, choose which part of the transformer block you want to look at. Currently, sparse MLPs/transcoders and SAEs on the attention output are available, under the tags 'M' and 'A' respectively. Residual-stream and MLP-out SAEs exist but haven't been added yet; bug me on e.g. Twitter if you want this to happen fast.

Then add the layer index: a sparse MLP at layer 2 would be 'M2'. Finally, optionally append a particular neuron, e.g. 'A0N10000'.
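
Putting the tag grammar together, here is a minimal sketch. It assumes a tag without a neuron index (e.g. lm['M2']) returns activations for all features at that hook, which the examples above suggest but do not state explicitly:

# Tag grammar: <hook><layer>[N<neuron>]
#   'M' = sparse MLP / transcoder, 'A' = attention-out SAE
mlp_acts = lm['M2'](tok_ids)           # assumed: all feature acts at layer 2's MLP
one_feature = lm['A0N10000'](tok_ids)  # acts of feature 10000 of the layer-0 attention SAE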

Tokenization

Tokenization is done as follows:

  • The top 10K most frequent tokens under the GPT-NeoX tokenizer are selected and sorted by frequency.
  • To tokenize a document, first tokenize it with the GPT-NeoX tokenizer, then replace any token not in the top 10K with a special [UNK] token id. All token ids are then remapped to lie between 1 and 10K, roughly ordered from most frequent to least (see the sketch after this list).
  • Finally, prepend the document with a [BEGIN] token id.
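
The following is a minimal sketch of this scheme, not the package's actual preprocessing code. It assumes the GPT-NeoX tokenizer loaded from the Hugging Face hub as 'EleutherAI/gpt-neox-20b'; the ids chosen for [UNK] and [BEGIN] are illustrative placeholders, since the real ids aren't documented here.

from collections import Counter
from transformers import AutoTokenizer

neox = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

def build_vocab(corpus, k=10_000):
    # Count NeoX token ids across the corpus and keep the k most frequent.
    counts = Counter(t for doc in corpus for t in neox.encode(doc))
    # New ids run from 1 (most frequent) to k (least frequent kept).
    return {tok: i + 1 for i, (tok, _) in enumerate(counts.most_common(k))}

UNK_ID = 0         # placeholder id for [UNK]; the real id is an assumption
BEGIN_ID = 10_001  # placeholder id for [BEGIN]; likewise an assumption

def tokenize(doc, vocab):
    # NeoX-tokenize, map out-of-vocabulary tokens to [UNK], prepend [BEGIN].
    return [BEGIN_ID] + [vocab.get(t, UNK_ID) for t in neox.encode(doc)]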

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tinystoriesmodel-0.1.2.tar.gz (76.5 kB)

Uploaded Source

Built Distribution

tinystoriesmodel-0.1.2-py3-none-any.whl (75.6 kB)

Uploaded Python 3

File details

Details for the file tinystoriesmodel-0.1.2.tar.gz.

File metadata

  • Download URL: tinystoriesmodel-0.1.2.tar.gz
  • Upload date:
  • Size: 76.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.5 Linux/6.5.0-35-generic

File hashes

Hashes for tinystoriesmodel-0.1.2.tar.gz

  • SHA256: 9b1952e28bc3e83c6c703b3974e78e5009b4e8a010bdeffd8ff27832c7549e10
  • MD5: f8dbaa69d94512b4f056c0d09cc4c95d
  • BLAKE2b-256: 71fc9765817985776bdf26c80483b5d013332304aa631ed4721137ab3e2015d3

See more details on using hashes here.
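
As an illustration, a minimal way to verify the SHA256 digest of the downloaded archive with Python's standard hashlib (the file path is assumed; adjust it to wherever you saved the archive):

import hashlib

# Expected SHA256 for tinystoriesmodel-0.1.2.tar.gz, copied from the table above.
expected = '9b1952e28bc3e83c6c703b3974e78e5009b4e8a010bdeffd8ff27832c7549e10'

with open('tinystoriesmodel-0.1.2.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, 'hash mismatch: the download may be corrupted'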

File details

Details for the file tinystoriesmodel-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: tinystoriesmodel-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 75.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.5 Linux/6.5.0-35-generic

File hashes

Hashes for tinystoriesmodel-0.1.2-py3-none-any.whl

  • SHA256: 45cdf27de64af80e14e1d6d0f93768d56e67f8baef6b4422ecad11192b66efe7
  • MD5: 6743f4a23c856a365d6cee2f8f12a161
  • BLAKE2b-256: 7b2c3f9ac9c7d5b93f0c23c4eef17d92a9a9c90aac40cb65ff608b055ac3889d

See more details on using hashes here.
