
Extracting image features from state-of-the-art neural networks for Computer Vision made easy




:star2: About the Project

thingsvision is a Python package that lets you easily extract image representations from many state-of-the-art neural networks for computer vision. In a nutshell, you feed thingsvision a directory of images and tell it which neural network you are interested in. thingsvision then gives you the representation from that network for each image, so you end up with one feature vector per image. You can use these feature vectors for further analyses. We use the word features as shorthand for "image representations".

:rotating_light: Note: some function calls mentioned in the paper have been deprecated. To use this package successfully, exclusively follow this README and the documentation. :rotating_light:

(back to top)

:mechanical_arm: Functionality

With thingsvision, you can:

  • extract features for any image set from many popular networks.
  • extract features for any image set from your custom networks.
  • extract features for >26,000 images from the THINGS image database.
  • optionally turn off the standard center cropping performed by many networks before extracting features.
  • extract features from HDF5 datasets directly (e.g. NSD stimuli)
  • conduct basic Representational Similarity Analysis (RSA) after feature extraction.
  • perform Centered Kernel Alignment (CKA) to compare image features across model-module combinations.
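thingsvision ships its own RSA and CKA utilities (see the Documentation). As a library-independent illustration of what these analyses compute, here is a minimal NumPy sketch of a correlation-distance representational dissimilarity matrix (RDM) and linear CKA; the feature matrices are random stand-ins, not the library's API:

```python
import numpy as np

def compute_rdm(features):
    """(n_images, n_features) feature matrix -> (n_images, n_images)
    RDM of correlation distances (1 - Pearson r) between image pairs."""
    return 1.0 - np.corrcoef(features)

def linear_cka(x, y):
    """Linear CKA similarity between two feature matrices that share
    the same rows (images); returns a scalar in [0, 1]."""
    x = x - x.mean(axis=0)  # center each feature dimension
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(x.T @ y, 'fro') ** 2
    return hsic / (np.linalg.norm(x.T @ x, 'fro') * np.linalg.norm(y.T @ y, 'fro'))

rng = np.random.default_rng(0)
feats_a = rng.random((10, 64))  # stand-in for features of model/module A
feats_b = rng.random((10, 64))  # stand-in for features of model/module B

rdm = compute_rdm(feats_a)              # dissimilarity structure of one model
cka_cross = linear_cka(feats_a, feats_b)  # similarity across two models
```

A model is maximally similar to itself under CKA (`linear_cka(feats_a, feats_a) == 1.0`), and the RDM diagonal is zero, since every image is identical to itself.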

(back to top)

:file_cabinet: Model collection

thingsvision supports neural networks from several sources. You can extract image representations for all models from:

  • torchvision
  • Keras
  • timm
  • vissl (Self-Supervised Learning Models)
    • Currently available: simclr-rn50, mocov2-rn50, jigsaw-rn50, rotnet-rn50, swav-rn50, pirl-rn50
  • OpenCLIP
  • both original CLIP variants (ViT-B/32 and RN50)
  • some custom models (VGG-16, Resnet50, Inception_v3 and Alexnet) trained on Ecoset
  • each of the many CORnet versions.

(back to top)

:running: Getting Started

:computer: Setting up your environment

Working locally.

First, create a new environment with Python version 3.8, 3.9, or 3.10 and activate it, e.g. using conda:

$ conda create -n thingsvision python=3.9
$ conda activate thingsvision

Then install thingsvision (and the original CLIP package) by running the following pip commands in your terminal:

$ pip install --upgrade thingsvision
$ pip install git+https://github.com/openai/CLIP.git

Google Colab.

Alternatively, you can use Google Colab to play around with thingsvision by uploading your image data to Google Drive (accessed via directory mounting). You can find the Jupyter notebook using PyTorch here and the TensorFlow example here.

(back to top)

:mag: Basic usage

thingsvision was designed to make extracting features as easy as possible. Start by importing all the necessary components and instantiating a thingsvision extractor. Here we use AlexNet from the torchvision library as the model to extract features from, and move the model to the GPU (if available) for faster inference:

import torch
from thingsvision import get_extractor
from thingsvision.utils.storing import save_features
from thingsvision.utils.data import ImageDataset, DataLoader

model_name = 'alexnet'
source = 'torchvision'
device = 'cuda' if torch.cuda.is_available() else 'cpu'

extractor = get_extractor(
  model_name=model_name,
  source=source,
  device=device,
  pretrained=True
)

Next, create the Dataset and DataLoader for your images. Here, all our images live in a single directory root, which can also contain subfolders (e.g., one per class), so we use the ImageDataset class.

root = 'path/to/root/img/directory'  # e.g., './images/'
batch_size = 32

dataset = ImageDataset(
  root=root,
  out_path='path/to/features',
  backend=extractor.get_backend(),
  transforms=extractor.get_transformations()
)

batches = DataLoader(
  dataset=dataset,
  batch_size=batch_size, 
  backend=extractor.get_backend()
)

Now all that is left is to extract the image features and store them to disk! Here we're extracting features from the last convolutional layer of AlexNet (features.10), but if you don't know which modules are available for a given model, just call extractor.show_model() to print all modules.

module_name = 'features.10'

features = extractor.extract_features(
  batches=batches,
  module_name=module_name,
  flatten_acts=True  # flatten 2D feature maps from convolutional layer
)

save_features(features, out_path='path/to/features', file_format='npy')
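With file_format='npy', the features end up on disk as a NumPy array that you can reload for downstream analyses. A minimal round-trip sketch (the array shape and file name here are illustrative stand-ins, not the exact output of save_features):

```python
import os
import tempfile
import numpy as np

# Stand-in for an extracted feature matrix: one flattened vector per image.
features = np.random.rand(4, 256).astype(np.float32)

with tempfile.TemporaryDirectory() as out_path:
    fname = os.path.join(out_path, 'features.npy')
    np.save(fname, features)   # what a .npy export boils down to
    reloaded = np.load(fname)  # load features back for further analyses

assert np.array_equal(features, reloaded)  # lossless round trip
```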

For more examples on the many models available in thingsvision and explanations of additional functionality like how to optionally turn off center cropping, how to use HDF5 datasets (e.g. NSD stimuli), how to perform RSA or CKA, or how to easily extract features for the THINGS image database, please refer to the Documentation.

(back to top)

:wave: How to contribute

If you come across problems or have suggestions, please submit an issue!

(back to top)

:warning: License

This GitHub repository is licensed under the MIT License - see the LICENSE.md file for details.

(back to top)

:page_with_curl: Citation

If you use this GitHub repository (or any modules associated with it), please cite our paper for the initial version of thingsvision as follows:

@article{Muttenthaler_2021,
	author = {Muttenthaler, Lukas and Hebart, Martin N.},
	title = {THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks},
	journal = {Frontiers in Neuroinformatics},
	volume = {15},
	pages = {45},
	year = {2021},
	url = {https://www.frontiersin.org/article/10.3389/fninf.2021.679838},
	doi = {10.3389/fninf.2021.679838},
	issn = {1662-5196},
}

(back to top)

:gem: Contributions

This library is based on the groundwork laid by Lukas Muttenthaler and Martin N. Hebart, who are both still actively involved, and has been extended and refined into its current form with the help of our many contributors, sorted alphabetically.

This is a joint open-source project between the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, and the Machine Learning Group at Technische Universität Berlin. Correspondence and requests for contributing should be addressed to Lukas Muttenthaler. Feel free to contact us if you want to become a contributor or have any suggestions/feedback. For the latter, you can also just post an issue or engage in discussions. We'll try to respond as fast as we can.

(back to top)

