SAE Lens

SAELens exists to help researchers:

  • Train sparse autoencoders.
  • Analyse sparse autoencoders and conduct mechanistic interpretability research.
  • Generate insights that make it easier to create safe and aligned AI systems.

SAELens inference works with any PyTorch-based model, not just TransformerLens. While we provide deep integration with TransformerLens via HookedSAETransformer, SAEs can be used with Hugging Face Transformers, NNsight, or any other framework by extracting activations and passing them to the SAE's encode() and decode() methods.
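
For example, here is a minimal sketch of that framework-agnostic workflow using Hugging Face Transformers. The release and SAE id follow the GPT-2 example from the SAELens docs; the mapping from hidden_states index to the SAE's hook point is an assumption you should verify for your model.

# A minimal sketch, assuming SAELens v6 (older versions return a
# (sae, cfg_dict, sparsity) tuple from SAE.from_pretrained, so unpack
# accordingly).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sae_lens import SAE

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Release and SAE id follow the GPT-2 example in the SAELens docs.
sae = SAE.from_pretrained("gpt2-small-res-jb", "blocks.8.hook_resid_pre")

inputs = tokenizer("SAEs decompose activations.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
    # Assumption: hidden_states[8] roughly corresponds to the residual
    # stream entering block 8. TransformerLens applies some weight
    # processing by default, so activations may differ slightly; check
    # this mapping against the SAE's training setup.
    acts = out.hidden_states[8]
    feature_acts = sae.encode(acts)   # sparse feature activations
    recon = sae.decode(feature_acts)  # reconstructed activations

print(feature_acts.shape, recon.shape)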

Please refer to the documentation for information on how to:

  • Download and analyse pre-trained sparse autoencoders.
  • Train your own sparse autoencoders.
  • Generate feature dashboards with the SAE-Vis Library.

SAE Lens is the result of many contributors working collectively to improve humanity's understanding of neural networks, many of whom are motivated by a desire to safeguard humanity from risks posed by artificial intelligence.

This library is maintained by Joseph Bloom, Curt Tigges, Anthony Duong and David Chanin.

Loading Pre-trained SAEs

Pre-trained SAEs for various models can be imported via SAE Lens. See this page for a list of all SAEs.
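
As a hedged sketch of the deep TransformerLens integration mentioned above: the release, SAE id, and cache key below follow the docs' GPT-2 tutorial and will differ for other SAEs or SAELens versions.

from sae_lens import SAE, HookedSAETransformer

model = HookedSAETransformer.from_pretrained("gpt2")
sae = SAE.from_pretrained("gpt2-small-res-jb", "blocks.8.hook_resid_pre")

# run_with_cache_with_saes splices the SAE into the forward pass and
# caches its feature activations alongside the model's own activations.
logits, cache = model.run_with_cache_with_saes(
    "The quick brown fox", saes=[sae]
)
# Cache key pattern from the tutorials: "<hook_name>.hook_sae_acts_post".
feature_acts = cache["blocks.8.hook_resid_pre.hook_sae_acts_post"]
print(feature_acts.shape)  # [batch, seq, d_sae]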

Migrating to SAELens v6

The v6 release is a major refactor of SAELens that changes how training code is structured. Check out the migration guide for more details.

Tutorials

Join the Slack!

Feel free to join the Open Source Mechanistic Interpretability Slack for support!

Other SAE Projects

  • dictionary-learning: An SAE training library that focuses on hackable code.
  • Sparsify: A lean SAE training library focused on TopK SAEs.
  • Overcomplete: An SAE training library focused on vision models.
  • SAE-Vis: A library for visualizing SAE features; works with SAELens.
  • SAEBench: A suite of LLM SAE benchmarks; works with SAELens.

Citation

Please cite the package as follows:

@misc{bloom2024saetrainingcodebase,
   title = {SAELens},
   author = {Bloom, Joseph and Tigges, Curt and Duong, Anthony and Chanin, David},
   year = {2024},
   howpublished = {\url{https://github.com/decoderesearch/SAELens}},
}
