
Sparse Autoencoder for Steering Mistral 7B

This repository contains a Sparse Autoencoder (SAE) designed to interpret and steer the Mistral 7B language model. By training the SAE on the residual activations of Mistral 7B, we aim to understand the internal representations of the model and manipulate its outputs in a controlled manner.

Overview

Large Language Models (LLMs) like Mistral 7B have complex internal mechanisms that are not easily interpretable. This project leverages a Sparse Autoencoder to:

  • Decode internal activations: transform dense, high-dimensional activations into sparse, interpretable features.
  • Steer model behavior: manipulate specific features to influence the model's output.

This approach is based on the hypothesis that internal features are superimposed in the model's activations and can be disentangled using sparse representations.
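
As a minimal sketch of the idea (illustrative only; the actual implementation lives in the mistral_sae/ package and may differ), an SAE here is a linear encoder into an overcomplete hidden layer with a ReLU, a linear decoder back to the residual dimension, and an L1 penalty that pushes most feature activations to zero:

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Overcomplete autoencoder: d_model -> d_hidden -> d_model, d_hidden >> d_model."""

        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, x: torch.Tensor):
            f = torch.relu(self.encoder(x))   # sparse, non-negative feature activations
            x_hat = self.decoder(f)           # reconstruction of the residual activation
            return x_hat, f

    def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
        # Reconstruction error plus an L1 sparsity penalty on the features.
        return ((x_hat - x) ** 2).mean() + l1_coeff * f.abs().mean()

Making the hidden layer much wider than the residual dimension gives superposed features room to separate into individual, sparsely firing units.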

Personal Work

I have written several articles that provide the foundational insights guiding the development of this project.

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/mistral-sae.git
    cd mistral-sae
    
  2. Install dependencies:

    pip install -r requirements.txt
    

    Ensure you have the appropriate version of PyTorch installed, preferably with CUDA support for GPU acceleration.
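
    A quick way to confirm that your PyTorch build can see a GPU (standard PyTorch API):

    python -c "import torch; print(torch.cuda.is_available())"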

Usage

Training the Sparse Autoencoder

The train.py script trains the SAE on activations from a specified layer of the Mistral 7B model.

    python train.py

  • Adjust hyperparameters such as D_MODEL, D_HIDDEN, BATCH_SIZE, and lr within the script.
  • Set MISTRAL_MODEL_PATH and target_layer to specify which model and layer to use; the sketch after this list shows how the activations are typically captured.
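
For orientation, capturing residual activations for training generally looks like the following sketch (the MISTRAL_MODEL_PATH and target_layer values are hypothetical; the hook path follows the Hugging Face transformers Mistral implementation):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MISTRAL_MODEL_PATH = "mistralai/Mistral-7B-v0.1"  # assumption: a HF checkpoint id
    target_layer = 16                                 # which decoder block to tap

    model = AutoModelForCausalLM.from_pretrained(
        MISTRAL_MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(MISTRAL_MODEL_PATH)

    captured = []

    def hook(module, inputs, output):
        # Mistral decoder layers return a tuple; the hidden states come first.
        captured.append(output[0].detach())

    handle = model.model.layers[target_layer].register_forward_hook(hook)

    batch = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
    with torch.no_grad():
        model(**batch)
    handle.remove()

    acts = captured[0]  # (batch, seq_len, d_model) activations to train the SAE on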

Generating Feature Explanations

Use explain.py to generate natural language explanations for the features learned by the SAE.

    python explain.py

  • Ensure you have access to the required datasets (e.g., The Pile) and APIs.
  • Configure parameters such as batch_size, data_path, and target_layer; the sketch after this list illustrates the general recipe.
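
A common recipe for this step (a sketch with hypothetical helper names, not necessarily what explain.py does internally) is to collect the snippets on which a feature activates most strongly and prompt an LLM to name what they have in common:

    def top_activating_examples(feature_acts, texts, k=10):
        # feature_acts: one 1-D tensor of per-token activations per text, for one SAE feature.
        scored = sorted(zip(texts, feature_acts),
                        key=lambda pair: pair[1].max().item(), reverse=True)
        return [text for text, _ in scored[:k]]

    def explanation_prompt(examples):
        # Build a prompt asking an LLM (via whichever API you use) to name the shared concept.
        joined = "\n".join(f"- {e}" for e in examples)
        return ("These text snippets all strongly activate the same feature "
                f"of a language model:\n{joined}\n"
                "In one sentence, what concept does this feature represent?")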

Steering the Model Output

The demo.py script demonstrates how to steer the Mistral 7B model by manipulating specific features.

    python demo.py

  • Set FEATURE_INDEX to the index of the feature you wish to manipulate.
  • Toggle STEERING_ON to True to enable steering.
  • Adjust the coeff variable to control the strength of the manipulation; the sketch after this list shows the underlying mechanism.
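
Mechanically, steering adds a multiple of the feature's decoder direction back into the residual stream during the forward pass. A hedged sketch, reusing model, sae, and target_layer from the training sketch above (the hook path is an assumption):

    import torch

    FEATURE_INDEX = 0   # which SAE feature to amplify
    STEERING_ON = True
    coeff = 8.0         # strength of the intervention

    # A feature's decoder column is its direction in residual space.
    direction = sae.decoder.weight[:, FEATURE_INDEX].detach()  # shape: (d_model,)

    def steering_hook(module, inputs, output):
        if not STEERING_ON:
            return output
        # Shift the residual stream along the feature direction; returning a value
        # from a forward hook replaces the module's output.
        hidden = output[0] + coeff * direction.to(output[0].dtype)
        return (hidden,) + output[1:]

    handle = model.model.layers[target_layer].register_forward_hook(steering_hook)
    # ... call model.generate(...) here to produce steered text ...
    handle.remove()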

Project Structure

  • config.py: Contains model configurations and helper functions.
  • train.py: Script for training the Sparse Autoencoder.
  • explain.py: Generates explanations for the features identified by the SAE.
  • demo.py: Demonstrates how to steer the Mistral 7B model using the SAE.
  • mistral_sae/: Directory containing the SAE implementation and related utilities.
  • requirements.txt: Lists the Python dependencies required for the project.

Background

Understanding the internal workings of LLMs is crucial for both interpretability and control. By applying a Sparse Autoencoder to the activations of Mistral 7B, we can:

  • Identify monosemantic neurons that correspond to specific concepts or features.
  • Test the superposition hypothesis by examining how multiple features are represented within the same neurons.
  • Enhance our ability to steer the model's outputs towards desired behaviors by manipulating these features.

Acknowledgments

This project is inspired by and builds upon several key works in sparse autoencoder research and LLM interpretability.
