# SAE Probes Benchmark

A sparse probing benchmark for Sparse Autoencoders (SAEs), derived from the paper "Are Sparse Autoencoders Useful? A Case Study in Sparse Probing".
This repository contains the code for the paper Are Sparse Autoencoders Useful? A Case Study in Sparse Probing, but has been reformatted into a Python package that will work with any SAE that can be loaded in SAELens. This makes it easy to use the sparse probing tasks from the paper as a standalone SAE benchmark.
## Installation

```bash
pip install sae-probes
```
## Running evaluations
You can run benchmarks directly; any missing model activations are generated on demand. If you don't pass a model_cache_path, a temporary directory is used and cleaned up when the function completes. To persist activations across runs (recommended for repeated experiments), provide a model_cache_path.
## Training Probes

Probes can be trained directly on model activations (baselines) or on SAE activations. In both cases, three data-regime settings are available: "normal", "scarcity", and "imbalance"; see the original paper for details on each. For the most standard sparse-probing benchmark, use the "normal" setting.
### SAE Probes
The most standard use of this library is as a sparse probing benchmark for SAEs using the normal setting. This is demonstrated below:
```python
from sae_lens import SAE

from sae_probes import run_sae_evals

# run the benchmark on a Gemma Scope SAE
release = "gemma-scope-2b-pt-res-canonical"
sae_id = "layer_12/width_16k/canonical"
sae = SAE.from_pretrained(release, sae_id)

run_sae_evals(
    sae=sae,
    model_name="gemma-2-2b",
    hook_name="blocks.12.hook_resid_post",
    reg_type="l1",
    setting="normal",
    results_path="/results/output/path",
    # model_cache_path is optional; if omitted, a temp dir is used and cleared after
    model_cache_path="/path/to/saved/activations",
    ks=[1, 16],
)
```
The sparse probing results for each dataset will be saved to results_path as a JSON file per dataset.
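Once a run completes, the per-dataset JSON files can be inspected with plain standard-library code. The helper below is a convenience sketch (not part of the `sae_probes` API) that counts the records in each results file:

```python
import json
from pathlib import Path


def summarize_results(results_dir: str) -> dict[str, int]:
    """Map each results JSON filename under results_dir to its record count."""
    summary = {}
    for f in sorted(Path(results_dir).rglob("*.json")):
        records = json.loads(f.read_text())
        summary[f.name] = len(records)
    return summary
```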
### Baseline Probes

Baseline probes can be run with a unified API that matches the SAE evaluation interface:
```python
from sae_probes import run_baseline_evals

# Run baseline probes with the same interface as run_sae_evals
run_baseline_evals(
    model_name="gemma-2-2b",
    hook_name="blocks.12.hook_resid_post",
    setting="normal",  # or "scarcity", "imbalance"
    results_path="/results/output/path",
    # model_cache_path is optional; if omitted, a temp dir is used and cleared after
    model_cache_path="/path/to/saved/activations",
)
```
## Output Format

Both SAE and baseline probes save results as JSON files with a consistent structure:

- SAE results: `sae_probes_{model_name}/{setting}_setting/{dataset}_{hook_name}_{reg_type}.json`
- Baseline results: `baseline_results_{model_name}/{setting}_setting/{dataset}_{hook_name}_{method}.json`
Each JSON file contains a list with metrics and metadata for easy comparison between SAE and baseline approaches.
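The path templates above can be turned into small helpers when comparing runs programmatically. These are a convenience sketch, not part of the package API; they simply substitute into the documented templates:

```python
def sae_results_file(model_name: str, setting: str, dataset: str,
                     hook_name: str, reg_type: str) -> str:
    """Relative path of one SAE probing result, per the documented template."""
    return (f"sae_probes_{model_name}/{setting}_setting/"
            f"{dataset}_{hook_name}_{reg_type}.json")


def baseline_results_file(model_name: str, setting: str, dataset: str,
                          hook_name: str, method: str) -> str:
    """Relative path of one baseline probing result, per the documented template."""
    return (f"baseline_results_{model_name}/{setting}_setting/"
            f"{dataset}_{hook_name}_{method}.json")
```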
## Optional: Pre-generating model activations

Pre-generating activations can speed up repeated runs and lets you inspect the saved tensors. It is optional: benchmarks auto-generate any missing activations on their first run.
```python
from sae_probes import generate_dataset_activations

generate_dataset_activations(
    model_name="gemma-2-2b",  # the TransformerLens name of the model
    hook_names=["blocks.12.hook_resid_post"],  # any TransformerLens hook names
    batch_size=64,
    device="cuda",
    model_cache_path="/path/to/save/activations",
)
```
If you skip pre-generation, the benchmarks will create any missing activations automatically. Passing a model_cache_path persists them; if omitted, activations will be written to a temporary directory that is deleted after the run.
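The temporary-directory behavior described above follows a standard pattern, sketched below for illustration. This is a generic example of the idiom, not the library's internals; `resolve_cache_path` is a hypothetical helper name:

```python
import tempfile
from contextlib import ExitStack
from pathlib import Path


def resolve_cache_path(model_cache_path, stack: ExitStack) -> Path:
    """Use the caller's cache dir if given; otherwise create a temp dir
    that the ExitStack deletes when the run finishes."""
    if model_cache_path is not None:
        path = Path(model_cache_path)
        path.mkdir(parents=True, exist_ok=True)  # persists across runs
        return path
    # TemporaryDirectory is registered on the stack, so it is removed on exit
    return Path(stack.enter_context(tempfile.TemporaryDirectory()))
```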
## Citation

If you use this code in your research, please cite:

```bibtex
@inproceedings{kantamnenisparse,
  title={Are Sparse Autoencoders Useful? A Case Study in Sparse Probing},
  author={Kantamneni, Subhash and Engels, Joshua and Rajamanoharan, Senthooran and Tegmark, Max and Nanda, Neel},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025}
}
```