Project description
clipscope
usage
import PIL.Image
from clipscope import ConfiguredViT, TopKSAE

device = 'cpu'
filename_in_hf_repo = "725159424.pt"
sae = TopKSAE.from_pretrained(repo_id="lewington/CLIP-ViT-L-scope", filename=filename_in_hf_repo, device=device)
transformer_name='laion/CLIP-ViT-L-14-laion2B-s32B-b82K'
locations = [(22, 'resid')]
transformer = ConfiguredViT(locations, transformer_name, device=device)
input = PIL.Image.new("RGB", (224, 224), (0, 0, 0)) # black image for testing
# alternatively load an image
# input = PIL.Image.open("test.jpg")
# input = input.resize((224, 224)).convert("RGB")
activations = transformer.all_activations(input)[locations[0]] # (1, 257, 1024)
assert activations.shape == (1, 257, 1024)
activations = activations[:, 0, :] # just the cls token
# alternatively flatten the activations
# activations = activations.flatten(1)
print('activations shape', activations.shape) # (1, 1024)
output = sae(activations)
print('output keys', output.keys())
print('latent shape', output['latent'].shape) # (1, 65536)
print('reconstruction shape', output['reconstruction'].shape) # (1, 1024)
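Once you have the latent vector, a common next step is to inspect which SAE features fired most strongly. A minimal sketch, assuming `output['latent']` is a torch tensor of shape (1, 65536) as printed above (the zero tensor below is a hypothetical stand-in for a real forward pass):

```python
import torch

# Hypothetical stand-in for output['latent'] from the SAE forward pass above.
latent = torch.zeros(1, 65536)
latent[0, [7, 123, 4000]] = torch.tensor([3.0, 2.0, 1.0])

# A TopK SAE keeps only a small number of latents nonzero per input, so
# torch.topk surfaces the strongest features for inspection.
values, indices = torch.topk(latent, k=3, dim=-1)
print('top feature indices', indices.tolist())  # [[7, 123, 4000]]
print('top feature values', values.tolist())    # [[3.0, 2.0, 1.0]]
```

The returned indices identify candidate features to probe further, e.g. by running many images through the SAE and collecting the images that maximally activate each index.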
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
clipscope-0.1.2.tar.gz (3.8 kB)
Built Distribution
Hashes for clipscope-0.1.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 6f60fb69cc5e5a95a83243c919b264b5f5c3569f1ca04c31a4b23dfc3182a8d4
MD5 | daa8cb346dba349fcafdab2fb592423d
BLAKE2b-256 | c2be9d3cca39f88a380f170aa3afa1a305d47a6c516f8f34b1c871b771c89917