SAELens exists to help researchers:
- Train sparse autoencoders.
- Analyse sparse autoencoders / research mechanistic interpretability.
- Generate insights which make it easier to create safe and aligned AI systems.
SAELens inference works with any PyTorch-based model, not just TransformerLens. While we provide deep integration with TransformerLens via HookedSAETransformer, SAEs can be used with Hugging Face Transformers, NNsight, or any other framework by extracting activations and passing them to the SAE's encode() and decode() methods.
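As a rough sketch (not taken verbatim from the docs), using an SAE with a plain Hugging Face model looks something like the following. The release and SAE IDs are examples, and the exact return value of `SAE.from_pretrained` has changed between SAELens versions (older releases return a `(sae, cfg_dict, sparsity)` tuple), so treat this as a starting point rather than a definitive recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sae_lens import SAE

# Load a pre-trained SAE (example release / SAE ID for GPT-2 small's layer-8
# residual stream; see the pretrained SAEs page for what is available).
# Note: older SAELens versions return a (sae, cfg_dict, sparsity) tuple here.
sae = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.8.hook_resid_pre",
    device="cpu",
)

# Any framework that exposes activations will do; here we use Hugging Face
# Transformers and take hidden_states[8], which (up to implementation details)
# corresponds to the residual stream entering block 8.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Sparse autoencoders decompose activations.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
acts = out.hidden_states[8]  # shape [batch, seq, d_model]

# Pass the activations through the SAE.
feature_acts = sae.encode(acts)            # sparse feature activations
reconstruction = sae.decode(feature_acts)  # reconstructed residual stream
print(feature_acts.shape, reconstruction.shape)
```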
Please refer to the documentation for information on how to:
- Download and analyse pre-trained sparse autoencoders.
- Train your own sparse autoencoders.
- Generate feature dashboards with the SAE-Vis library.
SAELens is the result of many contributors working collectively to improve humanity's understanding of neural networks, many of whom are motivated by a desire to safeguard humanity from risks posed by artificial intelligence.
This library is maintained by Joseph Bloom, Curt Tigges, Anthony Duong and David Chanin.
Pre-trained SAEs for various models can be imported via SAELens. See this page for a list of all SAEs.
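For a quick programmatic look at what is available, something like the sketch below should work. The `get_pretrained_saes_directory` helper and the field names used here are written from memory and may differ slightly between SAELens versions; the SAEs page linked above is the authoritative list:

```python
# Browse the pretrained SAE directory (field names may vary by version).
from sae_lens.toolkit.pretrained_saes_directory import get_pretrained_saes_directory

directory = get_pretrained_saes_directory()  # dict: release name -> metadata
for release, info in list(directory.items())[:5]:
    print(release, info.model, len(info.saes_map), "SAEs")
```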
The new v6 update is a major refactor of SAELens that changes how training code is structured. Check out the migration guide for more details.
- SAE Lens + Neuronpedia
- Loading and Analysing Pre-Trained Sparse Autoencoders
- Understanding SAE Features with the Logit Lens
- Training a Sparse Autoencoder
- Training SAEs on Synthetic Data
Feel free to join the Open Source Mechanistic Interpretability Slack for support!
- dictionary-learning: An SAE training library that focuses on having hackable code.
- Sparsify: A lean SAE training library focused on TopK SAEs.
- Overcomplete: SAE training library focused on vision models.
- SAE-Vis: A library for visualizing SAE features that works with SAELens.
- SAEBench: A suite of LLM SAE benchmarks that works with SAELens.
Please cite the package as follows:
@misc{bloom2024saetrainingcodebase,
title = {SAELens},
author = {Bloom, Joseph and Tigges, Curt and Duong, Anthony and Chanin, David},
year = {2024},
howpublished = {\url{https://github.com/decoderesearch/SAELens}},
}