SAE Bench

A framework for evaluating sparse autoencoders

CURRENT REPO STATUS: SAE Bench is currently in beta. This repo is still under development as we clean up rough edges left over from the research process, but it is already usable with both SAE Lens SAEs and custom SAEs.

Overview

SAE Bench is a comprehensive suite of eight evaluations for Sparse Autoencoder (SAE) models.

For more information, refer to our blog post.

Supported Models and SAEs

  • SAE Lens Pretrained SAEs: Supports evaluations on any SAE Lens SAE.
  • dictionary_learning SAEs: We support evaluations on any SAE trained with the dictionary_learning repo (see Custom SAE Usage).
  • Custom SAEs: Supports any general SAE object with encode() and decode() methods (see Custom SAE Usage).

Installation

Set up a virtual environment with Python >= 3.10.

git clone https://github.com/adamkarvonen/SAEBench.git
cd SAEBench
pip install -e .

Alternatively, you can install from PyPI:

pip install sae-bench

All evals can be run at the current batch sizes on Gemma-2-2B with a 24 GB VRAM GPU (e.g., an RTX 3090). By default, some evals cache LLM activations, which can require up to 100 GB of disk space; this caching can be disabled.

Autointerp requires an openai_api_key.txt file. Unlearning requires requesting access to the WMDP bio dataset (refer to unlearning/README.md).
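
For example, the key file can be created like this (a minimal sketch; the repo-root location is an assumption, so check the autointerp documentation for the expected path):

# Minimal sketch: write your OpenAI API key where autointerp can find it.
# The repo-root location is an assumption; verify against the autointerp docs.
from pathlib import Path
Path("openai_api_key.txt").write_text("sk-...your-key-here...")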

Getting Started

We recommend getting started by working through the sae_bench_demo.ipynb notebook. In this notebook, we load both a custom SAE and an SAE Lens SAE, run both on multiple evaluations, and plot graphs of the results.

Running Evaluations

Each evaluation has an example command located in its respective main.py file. To run all evaluations on a selection of SAE Lens SAEs, refer to shell_scripts/README.md. Here's an example of how to run a sparse probing evaluation on a single SAE Bench Pythia-70M SAE:

python -m sae_bench.evals.sparse_probing.main \
    --sae_regex_pattern "sae_bench_pythia70m_sweep_standard_ctx128_0712" \
    --sae_block_pattern "blocks.4.hook_resid_post__trainer_10" \
    --model_name pythia-70m-deduped

The results will be saved to the eval_results/sparse_probing directory.
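
To inspect those results programmatically, a sketch like the following works; the exact filenames are produced at run time, so we simply glob for JSON (the schema is eval-specific, so only top-level keys are printed):

# Hedged sketch: list whatever result files the sparse probing run produced.
import json
from pathlib import Path

for path in Path("eval_results/sparse_probing").glob("*.json"):
    results = json.loads(path.read_text())
    print(path.name, list(results.keys()))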

We use regex patterns to select SAE Lens SAEs. For more examples of regex patterns, refer to sae_regex_selection.ipynb.
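
As a purely illustrative sketch of how such a pattern selects releases (the names below are hypothetical, and whether SAE Bench matches the whole string or a substring should be checked in the repo):

# Illustrative only: selecting hypothetical SAE release names with a regex.
import re

pattern = re.compile(r"sae_bench_pythia70m_sweep_.*_ctx128_0712")
releases = [
    "sae_bench_pythia70m_sweep_standard_ctx128_0712",
    "sae_bench_pythia70m_sweep_topk_ctx128_0712",
    "some_other_release",
]
print([name for name in releases if pattern.fullmatch(name)])  # first two match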

Every eval folder contains an eval_config.py, which contains all relevant hyperparameters for that evaluation. The values are currently set to the recommended defaults.
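
As a purely hypothetical sketch of what overriding a config value might look like (the real class and field names, and whether dataclasses are used, live in each eval's eval_config.py):

# Hypothetical sketch of an eval config; the class and field names are
# invented for illustration -- check the real eval_config.py.
from dataclasses import dataclass

@dataclass
class ExampleEvalConfig:
    random_seed: int = 42       # hypothetical field
    batch_size: int = 32        # hypothetical field
    context_length: int = 128   # hypothetical field

config = ExampleEvalConfig(batch_size=16)  # override a default before running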

For a tutorial on using SAE Lens SAEs, including calculating L0 and Loss Recovered and obtaining a set of tokens from The Pile, refer to this notebook: https://github.com/jbloomAus/SAELens/blob/main/tutorials/basic_loading_and_analysing.ipynb
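
For reference, L0 here is the average number of nonzero SAE latents per token; a minimal sketch, assuming a dense activations tensor:

# Minimal sketch: L0 = mean number of nonzero SAE latents per token.
import torch

def average_l0(feature_acts: torch.Tensor) -> float:
    # feature_acts: (num_tokens, num_latents) SAE feature activations.
    return (feature_acts != 0).float().sum(dim=-1).mean().item()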

Custom SAE Usage

Our goal is to have first-class support for custom SAEs, as the field is rapidly evolving. Our evaluations can run on any SAE object that implements encode() and decode() and exposes a few config values. We recommend referring to sae_bench_demo.ipynb. In this notebook, we load a custom SAE and an SAE Bench baseline SAE, run them on two evals, and graph the results. There is additional information about custom SAE usage in custom_saes/README.md.
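
As a minimal sketch of the duck-typed interface the evals expect (the standard ReLU architecture below is an assumption for illustration; exactly which config values are required is documented in custom_saes/README.md):

# Sketch of a custom SAE exposing the required encode()/decode() methods.
import torch
import torch.nn as nn

class TinySAE(nn.Module):
    def __init__(self, d_in: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse feature activations for a batch of model activations.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, feature_acts: torch.Tensor) -> torch.Tensor:
        # Reconstruct the input from the feature activations.
        return feature_acts @ self.W_dec + self.b_dec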

If your SAEs are trained with the dictionary_learning repo, you can evaluate your SAEs by passing in the name of the HuggingFace repo containing your SAEs. Refer to SAEBench/custom_saes/run_all_evals_dictionary_learning_saes.py.

There are two ways to evaluate custom SAEs:

  1. Using Evaluation Templates:

    • Use the secondary if __name__ == "__main__" block in each main.py
    • Results are saved in SAE Bench format for easy visualization
    • Compatible with provided plotting tools
  2. Direct Function Calls:

    • Use run_eval_single_sae() in each main.py (a hedged sketch follows this list)
    • Simpler interface requiring only model, SAE, and config values
    • Graphing will require manual formatting
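
A sketch of the direct route, using sparse probing as the example; the exact signature of run_eval_single_sae() differs between evals, and config, sae, and model are assumed to be already constructed, so treat the keyword arguments below as placeholders to check against the relevant main.py:

# Hypothetical call shape; verify the signature in the eval's main.py.
from sae_bench.evals.sparse_probing.main import run_eval_single_sae

results = run_eval_single_sae(
    config=config,  # the eval's config object
    sae=sae,        # any object with encode()/decode() and the config values
    model=model,    # the subject language model
)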

We currently have a suite of SAE Bench SAEs on layers 3 and 4 of Pythia-70M and layers 5, 12, and 19 of Gemma-2-2B, each trained on 200M tokens with checkpoints at various points. These SAEs can serve as baselines for any new custom SAEs. We also have baseline eval results, saved at TODO.

Training Your Own SAEs

You can deterministically replicate the training of our SAEs using the scripts provided here, implement your own SAE, or modify one of our SAE implementations. Once you train your new version, you can benchmark it against our existing SAEs for a true apples-to-apples comparison.

Graphing Results

If evaluating your own SAEs, we recommend using the graphing cells in sae_bench_demo.ipynb. To replicate all SAE Bench plots, refer to graphing.ipynb. In this notebook, we download all SAE Bench data and create a variety of plots.

Development

This project uses Poetry for dependency management and packaging.

To install the development dependencies, run:

poetry install

Unit tests can be run with:

poetry run pytest tests/unit

These tests will run automatically on every PR in CI.

There are also acceptance tests that can be run with:

poetry run pytest tests/acceptance

These tests are expensive and will not be run automatically in CI, but are worth running manually before large changes.
