LatentSAE: Training and inference for SAEs on embeddings
latent-sae
This is essentially a fork of EleutherAI/sae focused on training Sparse Autoencoders on Sentence Transformers embeddings. The main differences are:
- Focus on training a single model on a single set of inputs
- Load training data (embeddings) quickly from disk
Inference
# !pip install latentsae
from latentsae import Sae
sae_model = Sae.load_from_hub("enjalot/sae-nomic-text-v1.5-FineWeb-edu-100BT", "64_32")
# or from disk
sae_model = Sae.load_from_disk("models/sae_64_32.3mq7ckj7")
# Get some embeddings
texts = ["Hello world", "Will I ever halt?", "Goodbye world"]
from sentence_transformers import SentenceTransformer
emb_model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
embeddings = emb_model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)
features = sae_model.encode(embeddings)
print(features.top_indices)
print(features.top_acts)
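The encoder returns only the top-k feature indices and activations. If you want a dense feature vector (e.g. for clustering), you can scatter them back out. This is a minimal sketch; the latent count is an assumption based on the "64_32" checkpoint name (expansion factor 32 on 768-dim embeddings = 24,576 latents).
import torch
num_latents = 768 * 32  # assumption: expansion factor 32 x 768-dim nomic embeddings
dense = torch.zeros(features.top_acts.shape[0], num_latents,
                    dtype=features.top_acts.dtype, device=features.top_acts.device)
dense.scatter_(1, features.top_indices.long(), features.top_acts)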
See notebooks/eval.ipynb for an example of how to use the model for extracting features from an embedding dataset.
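If you don't want to open the notebook, here is a minimal sketch of batched extraction over a dataset of precomputed embeddings (the random tensor is just a placeholder for your data):
import torch
all_embeddings = torch.randn(10_000, 768)  # stand-in for your precomputed embedding dataset
top_indices_chunks, top_acts_chunks = [], []
with torch.no_grad():
    for batch in torch.split(all_embeddings, 512):
        out = sae_model.encode(batch)
        top_indices_chunks.append(out.top_indices)
        top_acts_chunks.append(out.top_acts)
top_indices = torch.cat(top_indices_chunks)  # (N, k) feature ids per sample
top_acts = torch.cat(top_acts_chunks)        # (N, k) activation strengths per sample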
Training
I focused on training with Modal Labs GPUs; I found an A10G to be sufficiently fast and cheap.
modal run train_modal.py --batch-size 512 --grad-acc-steps 4 --k 64 --expansion-factor 128
You can also train locally with a CPU or GPU.
python train_local.py --batch-size 512 --grad-acc-steps 4 --k 64 --expansion-factor 128
Data Preparation
I wrote a detailed article on the methodology behind the data, training and analysis of the SAEs trained with this repo: Latent Taxonomy Methodology
I used Modal Labs to rent VMs and GPUs for the data preprocessing and training. See enjalot/fineweb-modal for the scripts used to preprocess the FineWeb-EDU 10BT and 100BT samples and embed them with nomic-embed-text-v1.5.
I first trained on the 10BT sample, chunked into 500-token chunks, which is available on HuggingFace. This gave 25 million embeddings to train on. From the wandb charts it looked like the model could improve further with more data, so I then prepared 10x the embeddings with the 100BT sample. I'm still working on uploading that to HF.
To test the code locally, I downloaded a single parquet file from the dataset. For the full training run, I downloaded the whole dataset to disk in a Modal volume, then processed it into sharded torch .pt files using this script: torched.py
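torched.py is the script that actually does this; the snippet below is only a rough sketch of the idea, with illustrative paths and shard sizes: write fixed-size shards so training can stream one .pt file at a time instead of holding everything in memory.
import glob
import torch

def write_shards(embeddings, out_dir, shard_size=1_000_000):
    # Save fixed-size slices of the (N, 768) embedding tensor as numbered shard files.
    for i, start in enumerate(range(0, len(embeddings), shard_size)):
        torch.save(embeddings[start:start + shard_size].clone(), f"{out_dir}/shard_{i:05d}.pt")

def iter_shards(out_dir):
    # Yield one shard at a time so training never needs the full dataset in memory.
    for path in sorted(glob.glob(f"{out_dir}/shard_*.pt")):
        yield torch.load(path)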
Parameters
The main parameters I tried to change were (see the arithmetic sketch after this list):
- batch-size: how many embeddings per batch (bigger seems better, within memory limits); settled on 512 as a performance tradeoff
- grad-acc-steps: how many batches to accumulate before updating gradients, which simulates a bigger batch size. Not sure of the penalty for making this really big; settled on 4 with a batch size of 512
- k: sparsity; how many top features are kept per sample. Fewer is sparser and more interpretable, but gives higher reconstruction error. Tried 64 and 128, but unsure how to measure the quality differences yet
- expansion-factor: multiplier on the input embedding dimension (768 in the case of nomic). Chose 32 and 128 to give ~25k and ~100k features respectively
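Illustrative arithmetic for how these knobs combine (not code from the repo):
embedding_dim = 768                    # nomic-embed-text-v1.5 output dimension
batch_size, grad_acc_steps = 512, 4
expansion_factor = 32

effective_batch = batch_size * grad_acc_steps     # 2048 embeddings per optimizer step
num_features = embedding_dim * expansion_factor   # 24,576 latents (~25k); 128 gives 98,304 (~100k)
print(effective_batch, num_features)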
Open questions
One thing I might try is processing the data into even smaller chunks. At 500 tokens the samples are quite large, and I believe we are essentially aggregating many features across those tokens. Chunking at something like 100 tokens would make each sample much more granular and would also give 5x more training data. Again, I'm not sure yet how I'd evaluate the quality tradeoff.
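For reference, a hypothetical sketch of what that re-chunking could look like, using the embedding model's tokenizer (assuming the HF repo ships one); the repo's actual chunking lives in enjalot/fineweb-modal:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v1.5")

def chunk_text(text, chunk_tokens=100):
    # Split a document into ~100-token pieces before embedding.
    ids = tokenizer.encode(text, add_special_tokens=False)
    return [tokenizer.decode(ids[i:i + chunk_tokens]) for i in range(0, len(ids), chunk_tokens)]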
Part of the motivation with this repo and the fineweb-modal repo is to make it easier to train SAEs on other datasets. FineWeb-EDU has certain desirable properties for some downstream tasks, but I can imagine training on a large dataset of code or a more general corpus like RedPajama v2.