
An implementation of transformers tailored for mechanistic interpretability.


TransformerLens


This library is maintained by Joseph Bloom and was created by Neel Nanda.

Read the Docs Here

Installation

Install: pip install transformer_lens

import transformer_lens

# Load a model (e.g. GPT-2 Small)
model = transformer_lens.HookedTransformer.from_pretrained("gpt2-small")

# Run the model and get logits and activations
logits, activations = model.run_with_cache("Hello World")

Key Tutorials

Introduction to the Library and Mech Interp

Demo of Main TransformerLens Features

A Library for Mechanistic Interpretability of Generative Language Models

This is a library for doing mechanistic interpretability of GPT-2-style language models. The goal of mechanistic interpretability is to take a trained model and reverse engineer the algorithms the model learned during training from its weights. It is a fact about the world today that we have computer programs that can essentially speak English at a human level (GPT-3, PaLM, etc.), yet we have no idea how they work nor how to write one ourselves. This offends me greatly, and I would like to solve this!

TransformerLens lets you load in an open source language model, like GPT-2, and exposes the model's internal activations to you. You can cache any internal activation in the model, and add in functions to edit, remove or replace these activations as the model runs. The core design principle I've followed is to enable exploratory analysis. One of the most fun parts of mechanistic interpretability compared to normal ML is the extremely short feedback loops! The point of this library is to keep the gap between having an experiment idea and seeing the results as small as possible, to make it easy for research to feel like play and to enter a flow state. Part of what I aimed for is to make my own experience of doing research easier and more fun; hopefully this transfers to you!
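The caching and hook mechanism described above can be sketched in plain Python. This is a toy illustration of the pattern only, not TransformerLens's actual implementation: a hook point sits between two computation steps, and registered functions can read (cache) or overwrite (edit) the value flowing through it. All class and variable names here are invented for the example.

```python
# Toy sketch of the hook-point pattern used for caching and editing
# activations. Illustrative only; not TransformerLens's real code.

class HookPoint:
    """Identity step that exposes the value passing through it."""
    def __init__(self, name):
        self.name = name
        self.hooks = []  # functions: value -> (possibly edited) value

    def __call__(self, value):
        for fn in self.hooks:
            value = fn(value)
        return value

class TinyModel:
    """Two-step 'model' with a hook point between the steps."""
    def __init__(self):
        self.hook_mid = HookPoint("hook_mid")

    def forward(self, x):
        mid = x + 1               # "layer 1"
        mid = self.hook_mid(mid)  # intermediate activation exposed here
        return mid * 2            # "layer 2"

model = TinyModel()

# Cache the intermediate activation, in the spirit of run_with_cache.
cache = {}
def cache_hook(value):
    cache["hook_mid"] = value
    return value  # pass the value through unchanged
model.hook_mid.hooks.append(cache_hook)

out = model.forward(3)  # mid = 4 is cached, output = 8

# Edit the activation as the model runs, in the spirit of run_with_hooks:
# this hook ablates the intermediate value, so the output becomes 0.
model.hook_mid.hooks.append(lambda value: 0)
ablated_out = model.forward(3)
```

In the real library, hook points are named by their location in the model (layer and component), so you can target any activation the same way this sketch targets `hook_mid`.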

Gallery

Research done involving TransformerLens:

User-contributed examples of the library being used in action:

Check out our demos folder for more examples of TransformerLens in practice.

Getting Started in Mechanistic Interpretability

Mechanistic interpretability is a very young and small field, and there are a lot of open problems. This means there's both a lot of low-hanging fruit, and that the bar for entry is low - if you would like to help, please try working on one! The standard answer to "why has no one done this yet" is just that there aren't enough people! Key resources:

Support & Community

If you have issues, questions, feature requests or bug reports, please search the existing issues to check whether it's already been answered, and if not, please raise an issue!

You're also welcome to join the open source mech interp community on Slack! Please use issues for concrete discussions about the package, and Slack for higher-bandwidth discussions about, e.g., supporting important new use cases, or if you want to make substantial contributions to the library and want a maintainer's opinion. We'd also love for you to come and share your projects on the Slack!

We're particularly excited to support grad students and professional researchers using TransformerLens for their work; please have a low bar for reaching out if there are ways we could better support your use case!

Background

I (Neel Nanda) used to work for the Anthropic interpretability team, and I wrote this library because, after I left and tried doing independent research, I got extremely frustrated by the state of open source tooling. There's a lot of excellent infrastructure, like HuggingFace and DeepSpeed, for using or training models, but very little for digging into their internals and reverse engineering how they work. This library tries to solve that, and to make it easy to get into the field even if you don't work at an industry org with real infrastructure! One of the great things about mechanistic interpretability is that you don't need large models or tons of compute. There are lots of important open problems that can be solved with a small model in a Colab notebook!

The core features were heavily inspired by the interface to Anthropic's excellent Garcon tool. Credit to Nelson Elhage and Chris Olah for building Garcon and showing me the value of good infrastructure for enabling exploratory research!

Contributing

See https://neelnanda-io.github.io/TransformerLens/content/contributing.html

Citation

Please cite this library as:

@misc{nanda2022transformerlens,
    title = {TransformerLens},
    author = {Neel Nanda and Joseph Bloom},
    year = {2022},
    howpublished = {\url{https://github.com/neelnanda-io/TransformerLens}},
}
