Toolkit for analyzing unstructured datasets with sparse autoencoders
InterpEmbed
interp_embed is a toolkit for analyzing unstructured (e.g., text) datasets with sparse autoencoders (SAEs). It can quickly compute and efficiently store feature activations for data analysis. Given a dataset of documents, interp_embed creates sparse, high-dimensional, interpretable embeddings in which each dimension maps to a concept such as a syntactic pattern or topic. These embeddings support a variety of downstream analysis tasks, such as dataset diffing, concept correlations, and directed clustering.
Setup
With uv (recommended):
```bash
# To install uv, see https://docs.astral.sh/uv/getting-started/installation/
uv sync
```
Without uv (using pip):
```bash
pip install -r requirements.txt
```
Create a .env file containing OPENROUTER_API_KEY and OPENAI_KEY. These API keys are used to generate feature labels when the SAE does not already provide them.
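For example, a minimal .env might look like the following (the values are placeholders, not real keys):

```
# .env — replace the placeholder values with your own keys
OPENROUTER_API_KEY=your-openrouter-key
OPENAI_KEY=your-openai-key
```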
Quickstart
First, create a dataset object. We currently support SAEs from SAELens (`LocalSAE`) and Goodfire (`GoodfireSAE`).
```python
from interp_embed import Dataset
from interp_embed.saes import GoodfireSAE
import pandas as pd

# 1. Load a Goodfire SAE or an SAE supported through the SAELens package
sae = GoodfireSAE(
    variant_name="Llama-3.1-8B-Instruct-SAE-l19",  # or "Llama-3.3-70B-Instruct-SAE-l50" for higher-quality features
    device="cuda:0",  # optional
    quantize=True,    # optional
)

# 2. Prepare your data as a DataFrame
df = pd.DataFrame({
    "text": ["Good morning!", "Hello there!", "Good afternoon."],
    "date": ["2022-01-10", "2021-08-23", "2023-03-14"],  # Metadata column
})

# 3. Create the dataset - this computes and saves feature activations
dataset = Dataset(
    data=df,
    sae=sae,
    field="text",  # Optional. Column containing the text to analyze
    save_path="my_dataset.pkl",  # Optional. Auto-saves progress, which enables recovery if computation fails
)

# 4. Later, load the saved dataset to skip expensive recomputation.
# If some activations failed, pass resume=True to continue where it left off.
dataset = Dataset.load_from_file("my_dataset.pkl")
```
Here are some commonly used methods:

```python
# Get feature activations as a sparse matrix of shape (N = # documents, F = # features)
embeddings = dataset.latents()

# Get the feature labels, if the SAE provides them
labels = dataset.feature_labels()

# Pass in a feature index to generate a more accurate label.
# label_feature is async, so await it inside an async function (or use asyncio.run).
new_label = await dataset.label_feature(feature=65478)  # example: "Friendly greetings"

# Annotate a document for a given feature, marking activating tokens with << >>
annotated_document = dataset[0].token_activations(feature=65478)

# Get a list of the top documents for a given feature
top_documents = dataset.top_documents_for_feature(feature=65478)
```
For analyses (e.g., dataset diffing, correlations) performed on example datasets, see the examples/ folder.
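As an illustration, here is a minimal dataset-diffing sketch built on the `latents()` and `feature_labels()` methods shown above. The two datasets (`dataset_a`, `dataset_b`) and the top-10 cutoff are hypothetical choices for this example, not part of the library API:

```python
import numpy as np

# Hypothetical setup: dataset_a and dataset_b are two Dataset objects built
# with the same SAE, e.g. from two corpora you want to compare.
mean_a = np.asarray(dataset_a.latents().mean(axis=0)).ravel()  # shape (F,)
mean_b = np.asarray(dataset_b.latents().mean(axis=0)).ravel()

# Features whose average activation differs most between the two corpora
diff = mean_a - mean_b
top = np.argsort(-np.abs(diff))[:10]

labels = dataset_a.feature_labels()  # assumed indexable by feature id
for f in top:
    print(f"feature {f} ({labels[f]}): {diff[f]:+.4f}")
```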
How does this work?
To embed a document, we pass it through a "reader" LLM and use a sparse autoencoder (SAE) to decompose the model's internal representation into interpretable concepts known as "features". The number of features ranges from roughly 1,000 to 100,000 depending on the SAE. An SAE produces a sparse, high-dimensional vector of feature activations per token, which we aggregate into a single document embedding.
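Concretely, the aggregation step can be pictured as follows. This is a simplified sketch, not the library's internal code: the toy array shapes and the max-pooling choice are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Suppose the SAE produced per-token feature activations for one document:
# a (T tokens x F features) array that is mostly zeros.
T, F = 5, 16  # tiny toy sizes; real SAEs have 1,000 - 100,000 features
rng = np.random.default_rng(0)
token_acts = rng.random((T, F)) * (rng.random((T, F)) < 0.1)  # ~90% zeros

# Aggregate per-token activations into a single document embedding,
# here by max-pooling over tokens (mean-pooling is another common choice).
doc_embedding = token_acts.max(axis=0)  # shape (F,)

# Stacking one such row per document yields a sparse (N x F) matrix
# analogous to what dataset.latents() returns.
latents = csr_matrix(doc_embedding[None, :])
print(latents.nnz, "active features out of", F)
```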
Download files
File details
Details for the file interp_embed-0.1.0.tar.gz.
File metadata
- Download URL: interp_embed-0.1.0.tar.gz
- Upload date:
- Size: 22.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `1e3753fba0c0605e86f24d86c557b58e3dcc05aab2c6e2c3348daae4337baffe` |
| MD5 | `6db11a7eb88c9bc364eeefb6ad6b5508` |
| BLAKE2b-256 | `6ee9be4d8aa4d005853ce3e264289c80d692c35e6bc9ab5e74cf82441b66a0eb` |
File details
Details for the file interp_embed-0.1.0-py3-none-any.whl.
File metadata
- Download URL: interp_embed-0.1.0-py3-none-any.whl
- Upload date:
- Size: 24.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `5fdfb1ebc2e457ddabda20383efbe5a2a330cafc7e350a4c7951f98d8698503b` |
| MD5 | `98c1d97e40dee60f6c4c88221635c995` |
| BLAKE2b-256 | `5228ec7e33341713ffb2b65e871768789791973f0394b80ce577583344e452a4` |