VisionTransformerLens
This is an extension of TransformerLens that integrates other modalities using HuggingFace. It incorporates components of TransformerLens such as HookPoints and retains signature functionalities like `run_with_cache()`, making it straightforward for users to explore multimodal mechanistic interpretability.
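Conceptually, a HookPoint is an identity pass-through that exposes the activation flowing through it, and `run_with_cache()` registers a hook at every such point to collect activations into a dictionary keyed by hook name. A minimal, framework-free sketch of that caching pattern (the class and method names below are illustrative only, not the library's actual API):

```python
class HookPoint:
    """Identity pass-through that can invoke registered callbacks."""
    def __init__(self, name):
        self.name = name
        self.hooks = []

    def __call__(self, value):
        for hook in self.hooks:
            hook(self.name, value)
        return value  # identity: the activation itself is unchanged


class TinyModel:
    """Toy two-stage model with a HookPoint after each stage."""
    def __init__(self):
        self.hook_embed = HookPoint("hook_embed")
        self.hook_out = HookPoint("hook_out")

    def forward(self, x):
        h = self.hook_embed(x * 2)   # stage 1
        return self.hook_out(h + 1)  # stage 2

    def run_with_cache(self, x):
        cache = {}
        capture = lambda name, value: cache.__setitem__(name, value)
        hook_points = (self.hook_embed, self.hook_out)
        for hp in hook_points:
            hp.hooks.append(capture)
        try:
            out = self.forward(x)
        finally:  # always detach hooks, even if the forward pass raises
            for hp in hook_points:
                hp.hooks.remove(capture)
        return out, cache


model = TinyModel()
out, cache = model.run_with_cache(3)
print(out, cache)  # 7 {'hook_embed': 6, 'hook_out': 7}
```

The real library wraps PyTorch modules rather than plain callables, but the shape of the result is the same: the forward output plus a name-to-activation cache.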
Installation
git clone git@github.com:jannik-brinkmann/vision-transformerlens.git
Example
import requests
import torch
from PIL import Image
from transformers import CLIPProcessor

from lens import HookedVisionTransformer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load a model (e.g. CLIP)
model_name_or_path = "openai/clip-vit-base-patch32"
model = HookedVisionTransformer.from_pretrained(model_name_or_path, device)
processor = CLIPProcessor.from_pretrained(model_name_or_path)
# load an example image (COCO val2017)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# extract image features
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
inputs = inputs.to(device)
# run the model and get outputs and activations
outputs, activations = model.run_with_cache(**inputs)
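Assuming the wrapped CLIP model mirrors HuggingFace's `CLIPModel` output (with a `logits_per_image` field of image-to-prompt similarity scores), label probabilities follow from a softmax over the text prompts. A sketch on a stand-in logits tensor, since the actual values depend on the downloaded weights:

```python
import torch

# Stand-in for outputs.logits_per_image: one image scored against the two
# prompts ("a photo of a cat", "a photo of a dog"). Values are illustrative.
logits_per_image = torch.tensor([[24.5, 19.3]])

# Softmax across the prompts turns similarity scores into label probabilities.
probs = logits_per_image.softmax(dim=-1)
print(probs.tolist())
```

The `activations` cache can then be used to relate those predictions back to intermediate activations at individual hook points.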
Citation
@misc{brinkmann2023visiontransformerlens,
  title = {VisionTransformerLens},
  author = {Brinkmann, Jannik},
  howpublished = {\url{https://github.com/jannik-brinkmann/VisionTransformerLens}},
  year = {2023}
}