Training and Analyzing Sparse Autoencoders (SAEs)
Project description
SAE Lens
SAELens exists to help researchers:
- Train sparse autoencoders.
- Analyse sparse autoencoders and conduct mechanistic interpretability research.
- Generate insights that make it easier to create safe and aligned AI systems.
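At their core, sparse autoencoders are trained to reconstruct model activations through an overcomplete, sparsely-activating feature dictionary. The following is a minimal NumPy sketch of that objective (a generic illustration, not SAELens's actual implementation; the dimensions, weight names, and `l1_coeff` value are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64  # hypothetical activation and dictionary sizes

# Randomly initialized weights stand in for trained SAE parameters.
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps only positive pre-activations, giving sparse feature activations.
    return np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)

def decode(f):
    # Reconstruct the original activation from the feature activations.
    return f @ W_dec + b_dec

def sae_loss(x, l1_coeff=1e-3):
    f = encode(x)
    x_hat = decode(f)
    recon = np.mean((x - x_hat) ** 2)                # reconstruction error
    sparsity = l1_coeff * np.abs(f).sum(-1).mean()   # L1 sparsity penalty
    return recon + sparsity

x = rng.normal(size=(8, d_model))  # stand-in for a batch of model activations
loss = sae_loss(x)
```

Training minimizes this combined loss so that a small number of features reconstructs each activation; the trade-off between reconstruction fidelity and sparsity is controlled by the L1 coefficient.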
Please refer to the documentation for information on how to:
- Download and analyse pre-trained sparse autoencoders.
- Train your own sparse autoencoders.
- Generate feature dashboards with the SAE-Vis Library.
SAE Lens is the result of many contributors working collectively to improve humanity's understanding of neural networks, many of whom are motivated by a desire to safeguard humanity from risks posed by artificial intelligence.
This library is maintained by Joseph Bloom and David Chanin.
Join the Slack!
Feel free to join the Open Source Mechanistic Interpretability Slack for support!
Citations and References
Research:
Reference Implementations:
Download files
Download the file for your platform.
- Source Distribution: sae_lens-0.5.0.tar.gz (46.1 kB)
- Built Distribution: sae_lens-0.5.0-py3-none-any.whl (56.4 kB)