Training and Analyzing Sparse Autoencoders (SAEs)
Project description
SAE Lens
SAELens exists to help researchers:
- Train sparse autoencoders.
- Analyse sparse autoencoders and conduct mechanistic interpretability research.
- Generate insights which make it easier to create safe and aligned AI systems.
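To make the object of study concrete, here is a minimal sketch of what a standard (ReLU) sparse autoencoder computes, written in plain numpy rather than SAE Lens itself. All dimensions and coefficients below are illustrative: activations are encoded into an overcomplete set of features, reconstructed, and trained against a reconstruction loss plus an L1 sparsity penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): model activations of size d_in,
# an overcomplete dictionary of d_sae latent features.
d_in, d_sae = 16, 64

# Encoder/decoder weights and biases of a standard ReLU sparse autoencoder.
W_enc = rng.normal(scale=0.1, size=(d_in, d_sae))
W_dec = rng.normal(scale=0.1, size=(d_sae, d_in))
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_in)

def sae_forward(x):
    """Encode activations into sparse features, then reconstruct them."""
    feats = np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)  # ReLU feature activations
    recon = feats @ W_dec + b_dec                          # linear decode
    return feats, recon

# A batch of stand-in "model activations".
x = rng.normal(size=(8, d_in))
feats, recon = sae_forward(x)

# Training objective: reconstruction MSE plus an L1 sparsity penalty on features.
l1_coeff = 1e-3
loss = np.mean((recon - x) ** 2) + l1_coeff * np.abs(feats).mean()
print(feats.shape, recon.shape)  # (8, 64) (8, 16)
```

In practice the weights are trained with gradient descent (SAE Lens handles this, along with activation collection from a host model); the sketch only shows the forward pass and objective.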
Please refer to the documentation for information on how to:
- Download and analyse pre-trained sparse autoencoders.
- Train your own sparse autoencoders.
- Generate feature dashboards with the SAE-Vis Library.
SAE Lens is the result of many contributors working collectively to improve humanity's understanding of neural networks, many of whom are motivated by a desire to safeguard humanity from risks posed by artificial intelligence.
This library is maintained by Joseph Bloom and David Chanin.
Loading Pre-trained SAEs
Pre-trained SAEs for various models can be imported via SAE Lens. See the documentation for a list of all available SAEs.
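As a rough sketch of what loading looks like with SAE Lens (the release name and hook-point identifier below are illustrative; consult the documentation's SAE tables for valid values):

```python
from sae_lens import SAE

# Hypothetical release/sae_id pair; real identifiers are listed in the docs.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",          # which family of pre-trained SAEs
    sae_id="blocks.8.hook_resid_pre",     # which hook point / layer
    device="cpu",
)
```

The returned `sae` can then encode and decode activations collected from the host model.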
Tutorials
- Loading and Analysing Pre-Trained Sparse Autoencoders
- Understanding SAE Features with the Logit Lens
- Training a Sparse Autoencoder
Join the Slack!
Feel free to join the Open Source Mechanistic Interpretability Slack for support!
Citations and References
Research:
Reference Implementations:
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Hashes for sae_lens-3.11.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 3f8eadf23375c947c683dbcd46f698e9ae775f9cc98df7f3f24852fcb5a47889
MD5 | 363d24c367a7181f99b378740eca21d8
BLAKE2b-256 | a3fcc92cb55b698e29d3c7378ae06a3b0c04ae70f5979bab25caf6e9a3868d2b