Easily map images (as `PIL.Image`s) to features (as `np.ndarray`s) from pretrained vision models.

enczoo: easily extract image features from pretrained vision models

enczoo is a Python library with a simple goal: to make it as easy as possible to map images (as `PIL.Image`s) to features (as `np.ndarray`s) from state-of-the-art vision models, such as ImageNet-pretrained ResNet50 and CLIP ViT-B/16.

Installation

enczoo requires Python 3.12 or above and is installed using the wonderful uv project manager. Once you have uv installed, just run the following command in your project:

uv add enczoo

Usage

import enczoo
from PIL import Image

image = Image.open('my-image.png')
model = enczoo.ResNet50(
    layer_name='avgpool',
    # device='cuda',  # optionally pin the model to a device
)
features = model.compute_features(images=[image])  # np.ndarray
# Want another layer? Check out: print(enczoo.ResNet50.layer_names)
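
Feature extraction can be slow over large image sets, so it is often worth caching results to disk. Below is a minimal sketch of such a cache; the `cached_features` helper and its on-disk layout are hypothetical conveniences, not part of enczoo — all it assumes is that the compute function returns an `np.ndarray`:

```python
import hashlib
from pathlib import Path

import numpy as np


def cached_features(key: str, compute_fn, cache_dir: str = '.feature-cache') -> np.ndarray:
    """Return features for `key`, invoking `compute_fn` only on a cache miss."""
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Hash the key (e.g. an image path) to get a stable, filesystem-safe name.
    path = cache / (hashlib.sha256(key.encode()).hexdigest() + '.npy')
    if path.exists():
        return np.load(path)
    features = compute_fn()  # e.g. lambda: model.compute_features(images=[image])
    np.save(path, features)
    return features
```

With enczoo, `compute_fn` would typically be a closure over `model.compute_features`; the same pattern works for any of the encoders listed below.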

Available models

Pixels
  • Family: raw pixels
  • Returns: float32 RGB pixels after preprocessing
  • Output shape: [B, 224, 224, 3]
  • Academic reference: none; this is an enczoo convenience encoder
AlexNet
  • Family: ImageNet-pretrained CNN
  • Returns: intermediate activations from the requested layer
  • Output shape: depends on layer_name
  • Layer selection: inspect enczoo.AlexNet.layer_names
  • Academic reference: AlexNet, "ImageNet Classification with Deep Convolutional Neural Networks" (Krizhevsky et al., 2012)
ResNet50
  • Family: ImageNet-pretrained CNN
  • Returns: intermediate activations from the requested layer
  • Output shape: depends on layer_name
  • Layer selection: inspect enczoo.ResNet50.layer_names
  • Academic reference: ResNet, "Deep Residual Learning for Image Recognition" (He et al., 2015)
ConvNeXtB
  • Family: ImageNet-pretrained CNN
  • Returns: intermediate activations from the requested layer
  • Output shape: depends on layer_name
  • Layer selection: inspect enczoo.ConvNeXtB.layer_names
  • Academic reference: ConvNeXt, "A ConvNet for the 2020s" (Liu et al., 2022)
CLIPResNet50
  • Family: CLIP ResNet visual encoder
  • Returns: intermediate activations from the requested visual layer
  • Output shape: depends on layer_name
  • Layer selection: inspect enczoo.CLIPResNet50.layer_names
  • Academic reference: CLIP, "Learning Transferable Visual Models From Natural Language Supervision" (Radford et al., 2021)
CLIPViTB16
  • Family: CLIP vision transformer
  • Returns: the model's pooled CLS-based image embedding
  • Output shape: [B, 768]
  • Academic reference: CLIP, "Learning Transferable Visual Models From Natural Language Supervision" (Radford et al., 2021)
DINOv2ViTB14
  • Family: self-supervised vision transformer
  • Returns: the model's pooled CLS-based image embedding
  • Output shape: [B, 768]
  • Academic reference: DINOv2, "DINOv2: Learning Robust Visual Features without Supervision" (Oquab et al., 2023)
AligNetViTB16
  • Family: AligNet-aligned vision transformer
  • Returns: the SavedModel feature tensor selected from the exported pre_logits output
  • Output shape: depends on the downloaded model
  • Weights: downloaded on first use and cached under ENCZOO_CACHE_DIR or the platform cache directory
  • Academic reference: Muttenthaler et al., 2025; weights come from the AligNet model release
UnaligNetViTB16
  • Family: unaligned vision transformer from the AligNet release
  • Returns: the SavedModel feature tensor selected from the exported pre_logits output
  • Output shape: depends on the downloaded model
  • Weights: downloaded on first use and cached under ENCZOO_CACHE_DIR or the platform cache directory
  • Academic reference: Muttenthaler et al., 2025; weights come from the AligNet model release
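
For the encoders with a fixed-width pooled embedding, the shapes above can be captured in a small lookup, which is handy for pre-allocating feature matrices before a batch run. A sketch under that assumption (the `EMBED_DIM` table below is transcribed from the list above; encoders whose shape depends on `layer_name` are deliberately omitted):

```python
import numpy as np

# Pooled-embedding widths, transcribed from the model list above.
EMBED_DIM = {
    'CLIPViTB16': 768,
    'DINOv2ViTB14': 768,
}


def preallocate(model_name: str, n_images: int) -> np.ndarray:
    """Pre-allocate a feature matrix for a fixed-width encoder."""
    return np.empty((n_images, EMBED_DIM[model_name]), dtype=np.float32)
```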

Why develop enczoo?

Under the hood, enczoo solves several small problems that make correctly computing image features more annoying and error-prone than it should be. For example, enczoo automatically:

  • performs model-specific image transforms ("was it -1 to 1, 0 to 1, or 0-255...?")
  • ensures images are in RGB format
  • puts the model in inference mode rather than training mode
  • turns off autograd
  • returns features as `np.ndarray` (no more `detach().cpu().numpy()`)
  • resizes images while preserving aspect ratio
  • and more!
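
Two of these chores — pixel-range normalization and aspect-preserving resizing — are especially easy to get subtly wrong by hand. A pure-NumPy sketch of what they typically involve (the exact transforms each encoder applies are model-specific, so this is illustrative rather than enczoo's actual pipeline):

```python
import numpy as np


def to_unit_range(pixels: np.ndarray) -> np.ndarray:
    """Map uint8 pixels in [0, 255] to float32 in [0, 1]."""
    return pixels.astype(np.float32) / 255.0


def to_signed_range(pixels: np.ndarray) -> np.ndarray:
    """Map uint8 pixels in [0, 255] to float32 in [-1, 1]."""
    return pixels.astype(np.float32) / 127.5 - 1.0


def resize_shape(width: int, height: int, shorter: int = 224) -> tuple[int, int]:
    """Target (width, height) after scaling the shorter side to `shorter`
    while preserving aspect ratio (a center crop usually follows)."""
    scale = shorter / min(width, height)
    return round(width * scale), round(height * scale)
```

Getting one of these conventions wrong rarely crashes anything; it just silently degrades the features, which is exactly why it is nice to have a library own it.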
