
A package to visualize the neural network activation regions of LLMs

Project description

LLM-MRI: a brain scanner for LLMs

As the everyday use of large language models (LLMs) expands, so does the need to understand how these models arrive at their outputs. While many interpretability approaches focus on visualizing attention mechanisms or explaining the model's architecture, LLM-MRI focuses on the activations of the feed-forward layers in a transformer-based LLM.

By adopting this approach, the library examines the neuron activations produced by the model for each distinct label. Through a series of steps, such as dimensionality reduction and representing each layer as a grid, the tool provides various visualization methods for the activation patterns in the feed-forward layers. Accordingly, the objective of this library is to contribute to LLM interpretability research, enabling users to explore visualization methods, such as heatmaps and graph representations of the hidden layers' activations in transformer-based LLMs.

This library allows users to explore questions such as:

  • How do different categories of text in the corpus activate different neural regions?
  • What are the differences between the properties of graphs formed by activations from two distinct categories?
  • Are there regions of activation in the model more related to specific aspects of a category?

We encourage you to not only use this toolkit but also to extend it as you see fit.

Online Example

The link below runs an online example of the library in a Jupyter notebook hosted on the Binder server:

Binder

Installation

To see LLM-MRI in action on your own data, install the package with pip:

pip install llm_mri

Usage

First, the user needs to import the LLM_MRI class and matplotlib.pyplot:

from llm_mri import LLM_MRI
import matplotlib.pyplot as plt

The user also needs to specify the Hugging Face Dataset that will be used to process the model's activations. Both options below rely on the Hugging Face datasets library:

from datasets import load_dataset, load_from_disk

There are two ways to load the dataset:

  • Load the Dataset from the Hugging Face Hub:
    dataset_url = "https://huggingface.co/datasets/dataset_link"
    dataset = load_dataset("csv", data_files=dataset_url)
    
  • If you already have the dataset saved on your machine, you can use the load_from_disk function:
    dataset = load_from_disk(dataset_path) # Specify the Dataset's path
    

Next, the user selects the model to be used, specified as a checkpoint string:

model_ckpt = "distilbert/distilbert-base-multilingual-cased"

Then, the user instantiates LLM_MRI to apply the methods described in the Functions section:

llm_mri = LLM_MRI(model=model_ckpt, device="cpu", dataset=dataset)
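
If a GPU is available, the same call can presumably be made with a CUDA device string. This is an assumption that the device argument is passed through to PyTorch; it is not confirmed by the documentation above:

llm_mri = LLM_MRI(model=model_ckpt, device="cuda", dataset=dataset) # assumption: torch-style device strings are accepted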

Functions

The library's functionality is divided into the following sections:

Activation Extraction:

Given the model and corpus provided by the user, the dimensionality of the model's hidden-layer activations is reduced, enabling visualization as an NxN grid.

llm_mri.process_activation_areas(map_dimension)
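
For instance, assuming map_dimension is the side length N of the grid (the value 10 below is only an illustrative choice), the call might look like:

llm_mri.process_activation_areas(map_dimension=10) # reduce each layer's activations to a 10x10 grid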

Heatmap representation of activations:

This section includes the get_layer_image function, which transforms the NxN grid for a selected layer into a heatmap. In this heatmap, each cell represents the number of activations its region received for the provided corpus. Additionally, users can visualize the activations for a specific label.

fig = llm_mri.get_layer_image(layer, category)

[Figure: heatmap of hidden layer 1 activations for the "true" category (hidden_state_1_true)]
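
For example, to render the heatmap shown above (hidden layer 1, label "true" from the fake-news example; both values are illustrative and depend on your dataset), the returned figure can be displayed with matplotlib. This sketch assumes the return value is a standard matplotlib figure:

fig = llm_mri.get_layer_image(layer=1, category="true") # heatmap of hidden layer 1 for the "true" label
plt.show() # display the figure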

Graph Representation of Activations:

Using the get_graph function, the module connects regions from neighboring layers based on co-activations to form a graph representing the entire network. The graph's edges can also be colored according to different labels, allowing the user to identify the specific category that activated each neighboring node.

graph = llm_mri.get_graph(category)
graph_image = llm_mri.get_graph_image(graph)

[Figure: graph of co-activations for a single category (graph-single-category)]
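
As a usage sketch, the same two calls can be repeated for each label in the corpus. The labels below come from the fake-news example and should be replaced with your dataset's labels; the sketch also assumes get_graph_image draws onto the current matplotlib figure, in line with the matplotlib import in the Usage section:

for category in ["true", "fake"]: # illustrative labels; replace with your own
    graph = llm_mri.get_graph(category)
    graph_image = llm_mri.get_graph_image(graph)
    plt.show() # display one graph per category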

The user can also obtain a composed visualization of two different categories using the get_composed_graph function. Each edge is colored according to its designated label, so the user can see which document category activated each region.

g_composed = llm_mri.get_composed_graph("true", "fake")
g_composed_img = llm_mri.get_graph_image(g_composed)

[Figure: composed graph of activations for two categories (graph-multi-categories)]
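
Putting the steps together, a minimal end-to-end session might look like the sketch below. The dataset path, grid size, layer index, and labels are placeholders; only the calls documented above are used:

from llm_mri import LLM_MRI
from datasets import load_from_disk
import matplotlib.pyplot as plt

# Load a labeled corpus saved locally (placeholder path)
dataset = load_from_disk("path/to/dataset")

# Instantiate the scanner on CPU with a multilingual DistilBERT checkpoint
llm_mri = LLM_MRI(model="distilbert/distilbert-base-multilingual-cased", device="cpu", dataset=dataset)

# Extract activations and reduce each layer to a 10x10 grid
llm_mri.process_activation_areas(map_dimension=10)

# Heatmap of one layer for one label
fig = llm_mri.get_layer_image(layer=1, category="true")
plt.show()

# Composed graph comparing two labels
g_composed = llm_mri.get_composed_graph("true", "fake")
g_composed_img = llm_mri.get_graph_image(g_composed)
plt.show()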

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_mri-0.1.5.tar.gz (9.7 kB)

Uploaded Source

Built Distribution

llm_mri-0.1.5-py3-none-any.whl (11.0 kB)

Uploaded Python 3

File details

Details for the file llm_mri-0.1.5.tar.gz.

File metadata

  • Download URL: llm_mri-0.1.5.tar.gz
  • Upload date:
  • Size: 9.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.13 Linux/6.8.0-40-generic

File hashes

Hashes for llm_mri-0.1.5.tar.gz:

  • SHA256: 92a37e34fc83ff40b64ccdaa79c4ea6750d7f347893bc491a5e8c1f5f368829a
  • MD5: 21cf413a02b6b833b2314b63247952da
  • BLAKE2b-256: 5137f4bcc19ecdb8943ce0c004a80252f18415d262ef07f912dd0ad04f72eb07


File details

Details for the file llm_mri-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: llm_mri-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 11.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.13 Linux/6.8.0-40-generic

File hashes

Hashes for llm_mri-0.1.5-py3-none-any.whl:

  • SHA256: 4aaf0f28e1fca08b8d8400729d1ab7b26ffa423382a2a0eb9731f9823d87b4ed
  • MD5: 43c0fcc91eec2028bf5269dc6f21bcfb
  • BLAKE2b-256: 8170fec093cf8091dac3c0750653f42cf47adc5cf479e9d06b24d919e763b1fd

