Llamascopium
A library for the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. Open-source and constantly updated.
[!NOTE] This package was previously published as lm-saes (project name: Language-Model-SAEs) and has now been renamed to llamascopium.
llamascopium is a comprehensive, fully-distributed framework designed for training, analyzing and visualizing Sparse Autoencoders (SAEs), empowering scalable and systematic Mechanistic Interpretability research.
News
- 2026.2.12 We introduce Complete Replacement Models (CRMs), which combine transcoders and Lorsas to fully sparsify language models. Link: Bridging the Attention Gap: Complete Replacement Models for Complete Circuit Tracing.
- 2025.9.23 We leverage Crosscoders to track feature evolution across pre-training snapshots. Link: Evolution of Concepts in Language Model Pre-Training (ICLR 2026).
- 2025.8.23 We identify a prevalent low-rank structure in attention outputs as the key cause of dead features, and propose Active Subspace Initialization to improve sparse dictionary learning on these low-rank activations. Link: Attention Layers Add Into Low-Dimensional Residual Subspaces.
- 2025.4.29 We introduce Low-Rank Sparse Attention (Lorsa) to attack attention superposition, extracting tens of thousands of true attention units from LLM attention layers. Link: Towards Understanding the Nature of Attention with Low-Rank Sparse Decomposition (ICLR 2026).
- 2024.10.29 We introduce Llama Scope, our first contribution to the open-source Sparse Autoencoder ecosystem. Stay tuned! Link: Llama Scope: Extracting Millions of Features from Llama-3.1-8B with Sparse Autoencoders.
- 2024.10.9 Transformers and Mambas are mechanistically similar at both the feature and circuit levels. Can we follow this line and find universal motifs and fundamental differences between language model architectures? Link: Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures (ICLR 2025).
- 2024.5.22 We propose hierarchical tracing, a promising method to scale sparse feature circuit analysis up to industrial-sized language models! Link: Automatically Identifying Local and Global Circuits with Linear Computation Graphs (ICML 2024 MI Workshop).
- 2024.2.19 Our first attempt at SAE-based circuit analysis for Othello-GPT leads us to an example of attention superposition in the wild! Link: Dictionary learning improves patch-free circuit discovery in mechanistic interpretability: A case study on othello-gpt.
Features
- Scalability: Our framework is fully distributed with arbitrary combinations of data, model, and head parallelism for both training and analysis. Enjoy training SAEs with millions of features!
- Flexibility: We support a wide range of SAE variants, including vanilla SAEs, Lorsa (Low-Rank Sparse Attention), CLT (Cross-Layer Transcoder), MOLT (Mixture of Linear Transforms), Crosscoder, and more. Each variant can be combined with different activation functions (e.g., ReLU, JumpReLU, TopK, BatchTopK) and sparsity penalties (e.g., L1, Tanh); see the sketch after this list for the basic idea.
- Easy to Use: We provide high-level runners APIs to quickly launch experiments with simple configurations. Check our examples for verified hyperparameters.
- Visualization: We provide a unified web interface to visualize learned SAE variants and their features.
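To make the design space above concrete, the following is a minimal, self-contained sketch of a vanilla SAE with a TopK activation, written in plain PyTorch. It illustrates the general technique only; it is not llamascopium's implementation or API, and the dimensions are made up.

```python
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    """Minimal vanilla SAE with a TopK activation (illustration only).

    Encodes activations x into a wide sparse feature vector and decodes
    them back: recon = topk((x - b_dec) @ W_enc + b_enc) @ W_dec + b_dec.
    """

    def __init__(self, d_model: int, d_sae: int, k: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.k = k

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        pre = (x - self.b_dec) @ self.W_enc + self.b_enc
        # TopK sparsity: keep the k largest pre-activations per sample,
        # zero out everything else.
        top = torch.topk(pre, self.k, dim=-1)
        feats = torch.zeros_like(pre).scatter_(-1, top.indices, top.values)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats


# Train on a batch of cached activations with a plain MSE loss; with TopK,
# sparsity is enforced structurally, so no extra L1/Tanh penalty is needed.
sae = TopKSAE(d_model=4096, d_sae=65536, k=64)
x = torch.randn(8, 4096)
recon, feats = sae(x)
loss = (recon - x).pow(2).mean()
```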
Installation
Use pip to install Llamascopium:
pip install llamascopium==2.0.0b34
We also highly recommend using uv to manage your own project dependencies. You can use
uv add llamascopium==2.0.0b34
to add Llamascopium as your project dependency.
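To confirm the installation succeeded, you can query the installed version via the standard library (this assumes nothing about llamascopium's own API):

```python
from importlib.metadata import version

# Prints the installed distribution version, e.g. 2.0.0b34.
print(version("llamascopium"))
```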
Development
We use uv to manage the dependencies; it is an alternative to poetry or pdm. To install the required packages, just install uv and run the following command:
uv sync
This will install all the required packages for the codebase into the .venv directory. For Ascend NPU support, run
uv sync --extra npu
If you want to use the visualization tools, you also need to install the required packages for the frontend, which uses bun for dependency management. Follow the instructions on the bun website to install it, and then run the following commands:
cd ui
bun install
Launch an Experiment
Explore the examples to see the basic usage of training and analyzing SAEs under different configurations. Note that a MongoDB instance is recommended for recording the model/dataset/SAE configurations and is required for storing analyses. For more advanced usage, you may explore the src/llamascopium/runners folder, which exposes the interfaces for generating activations and for training and analyzing SAE variants, and write your own training/analysis scripts at the runner level, as sketched below.
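As a rough picture of what a runner-level script can look like, here is a hypothetical sketch. Every function and argument name below is a placeholder invented for illustration, not llamascopium's real API; the actual interfaces live in src/llamascopium/runners, and the examples contain verified configurations.

```python
# Hypothetical sketch: generate_activations, train_sae, and all arguments
# below are placeholders, NOT llamascopium's real API. Consult
# src/llamascopium/runners for the actual runner interfaces.
from llamascopium.runners import generate_activations, train_sae  # placeholders

# Step 1: cache activations from the subject model at a chosen hook point.
generate_activations(
    model_name="meta-llama/Llama-3.1-8B",
    hook_point="blocks.16.hook_resid_post",
    output_dir="activations/",
)

# Step 2: train an SAE variant on the cached activations, recording the
# run configuration in MongoDB (recommended, and required for analyses).
train_sae(
    activation_dir="activations/",
    d_sae=65536,
    act_fn="topk",  # e.g. relu / jumprelu / topk / batchtopk
    mongo_uri="mongodb://localhost:27017",
)
```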
Visualizing the Learned Dictionary
Analysis results are stored in MongoDB, and you can use the provided visualization tools to explore the learned dictionary. First, start the FastAPI server by running the following command:
uvicorn server.app:app --port 24577 --env-file server/.env
Then, copy the ui/.env.example file to ui/.env and modify BACKEND_URL to match your server settings.
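For example, with the default port used above, ui/.env needs only the backend address:

```
BACKEND_URL=http://localhost:24577
```

With the backend URL configured, start the frontend by running the following command: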
cd ui
bun dev --port 24576
That's it! You can now go to http://localhost:24576 to visualize the learned dictionary and its features.
Contributing
We warmly welcome contributions to this project. If you have any questions or suggestions, feel free to open an issue or a pull request. We look forward to hearing from you!
TODO: Add development guidelines
Acknowledgement
The design of the pipeline (including the configuration and some training details) is highly inspired by the mats_sae_training project (now known as SAELens) and heavily relies on the TransformerLens library. We thank the authors for their great work.
Citation
Please cite this library as:
@misc{Ge2024OpenMossSAEs,
  title  = {OpenMoss Language Model Sparse Autoencoders},
  author = {Xuyang Ge and Wentao Shu and Junxuan Wang and Guancheng Zhou and Jiaxing Wu and Fukang Zhu and Lingjie Chen and Zhengfu He},
  url    = {https://github.com/OpenMOSS/Llamascopium},
  year   = {2024}
}