Python library for model-based stimulus synthesis.

Project description

plenoptic

Project status: Active – the project has reached a stable, usable state and is being actively developed.

plenoptic is a Python library for model-based synthesis of perceptual stimuli. For plenoptic, models are those of visual[^1] information processing: they accept an image as input, perform some computations, and return an output that can be mapped to neuronal firing rate, fMRI BOLD response, behavior on some task, image category, etc. The intended audience is researchers in neuroscience, psychology, and machine learning. The generated stimuli enable interpretation of model properties through examination of the features they enhance, suppress, or discard. More importantly, these stimuli can facilitate the scientific process through their use in further perceptual or neural experiments aimed at validating or falsifying model predictions.

Getting started

  • If you are unfamiliar with stimulus synthesis, see the conceptual introduction for an in-depth introduction.
  • If you understand the basics of synthesis and want to get started using plenoptic quickly, see the Quickstart tutorial.

Installation

The best way to install plenoptic is via pip:

$ pip install plenoptic

or conda:

$ conda install plenoptic -c conda-forge

[!WARNING] We do not currently support conda installs on Windows, due to the lack of a Windows pytorch package on conda-forge. See here for the status of that issue.

Our dependencies include pytorch and pyrtools. Installation should take care of them (along with our other dependencies) automatically, but if you have an installation problem (especially on a non-Linux operating system), it is likely that the problem lies with one of those packages. Open an issue and we'll try to help you figure out the problem!

See the installation page for more details, including how to set up a virtual environment and jupyter.
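
For example, a typical pip-based setup in a fresh virtual environment might look like the following (the environment name .venv is an arbitrary choice; see the installation page for the authoritative steps):

$ python -m venv .venv
$ source .venv/bin/activate
$ pip install plenoptic jupyter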

ffmpeg and videos

Several methods in this package generate videos, and several backends are available for saving these animations to file; see the matplotlib documentation for more details. In order to convert them to HTML5 for viewing (and thus, to view them in a jupyter notebook), you'll need ffmpeg installed and on your path as well. Depending on your system, it might already be installed; if not, the easiest way to get it is probably through conda (https://anaconda.org/conda-forge/ffmpeg): conda install -c conda-forge ffmpeg.

To change the backend, run matplotlib.rcParams['animation.writer'] = writer before calling any of the animate functions. If you set that rcParam to an unrecognized string, matplotlib will tell you the available choices.
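
For instance, a minimal sketch of selecting the ffmpeg backend (this assumes ffmpeg is installed and on your path, and that anim stands in for an animation returned by one of the animate functions):

    import matplotlib
    from IPython.display import HTML

    # Use ffmpeg to write animations to file and to render HTML5 video.
    matplotlib.rcParams['animation.writer'] = 'ffmpeg'

    # In a jupyter notebook, a matplotlib animation can then be displayed
    # as HTML5 video:
    # HTML(anim.to_html5_video())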

Contents

Synthesis methods

  • Metamers: given a model and a reference image, stochastically generate a new image whose model representation is identical to that of the reference image. This method investigates what image features the model disregards entirely (a minimal usage sketch follows this list).
  • Eigendistortions: given a model and a reference image, compute the image perturbation that produces the smallest and largest changes in the model response space. This method investigates the image features the model considers the least and most important.
  • Maximal differentiation (MAD) competition: given two metrics that measure distance between images and a reference image, generate pairs of images that optimally differentiate the metrics. Specifically, synthesize a pair of images that the first metric says are equi-distant from the reference, while the second metric says they are maximally/minimally distant from the reference. Then synthesize a second pair with the roles of the two metrics reversed. This method allows for efficient comparison of two metrics, highlighting the aspects in which their sensitivities differ.
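
For example, here is a hedged sketch of metamer synthesis; the example image, model, and parameter values are illustrative assumptions, so see the Quickstart tutorial for a complete, up-to-date walkthrough:

    import plenoptic as po

    # Load an example image and instantiate a simple front-end model (any
    # torch.nn.Module that maps an image to a tensor should work here).
    image = po.data.einstein()
    model = po.simul.OnOff(kernel_size=(7, 7))
    model.eval()
    po.tools.remove_grad(model)  # synthesis optimizes the image, not the model

    # Stochastically generate an image whose model representation matches
    # that of the reference image.
    metamer = po.synth.Metamer(image, model)
    metamer.synthesize(max_iter=100)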

Models, Metrics, and Model Components

  • Portilla-Simoncelli texture model, which measures the statistical properties of visual textures, here defined as "repeating visual patterns."
  • Steerable pyramid, a multi-scale oriented image decomposition. Its basis functions are oriented (steerable) filters, localized in space and frequency. Among other uses, the steerable pyramid serves as a good representation from which to build a primary visual cortex model. See the pyrtools documentation for more details on image pyramids in general and the steerable pyramid in particular.
  • Structural Similarity Index (SSIM), a perceptual similarity metric that returns a number between -1 (totally different) and 1 (identical) reflecting how similar two images are. It is based on the images' luminance, contrast, and structure, which are computed convolutionally across the images.
  • Multiscale Structural Similarity Index (MS-SSIM), a perceptual similarity metric similar to SSIM, except that it operates at multiple scales (i.e., spatial frequencies).
  • Normalized Laplacian distance, a perceptual distance metric based on transformations associated with the early visual system: local luminance subtraction and local contrast gain control, at six scales (a usage sketch for the metrics follows this list).
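
As an example, a minimal sketch of computing these metrics between a reference image and a distorted version (the function names follow plenoptic's metric module; the example image and noise level are illustrative assumptions):

    import plenoptic as po
    import torch

    img_a = po.data.einstein()
    # A noisy copy of the reference, clamped to stay in the valid [0, 1] range.
    img_b = torch.clamp(img_a + 0.05 * torch.randn_like(img_a), 0, 1)

    # Perceptual similarity (near 1 for very similar images) and distance.
    similarity = po.metric.ssim(img_a, img_b)
    distance = po.metric.nlpd(img_a, img_b)  # normalized Laplacian distance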

Getting help

We communicate via several channels on GitHub:

  • Discussions is the place to ask usage questions, discuss issues too broad for a single issue, or show off what you've made with plenoptic.
  • If you've come across a bug, open an issue.
  • If you have an idea for an extension or enhancement, please post in the ideas section of discussions first. We'll discuss it there and, if we decide to pursue it, open an issue to track progress.
  • See the contributing guide for how to get involved.

In all cases, please follow our code of conduct.

Citing us

If you use plenoptic in a published academic article or presentation, please cite both the code (via its DOI) and the JOV paper. If you are not using the code, but just discussing the project, please cite the paper. You can click on Cite this repository on the right side of the GitHub page to get a copyable citation for the code, or use the following:

  • Code: DOI
  • Paper:
    @article{duong2023plenoptic,
      title={Plenoptic: A platform for synthesizing model-optimized visual stimuli},
      author={Duong, Lyndon and Bonnen, Kathryn and Broderick, William and Fiquet, Pierre-{\'E}tienne and Parthasarathy, Nikhil and Yerxa, Thomas and Zhao, Xinyuan and Simoncelli, Eero},
      journal={Journal of Vision},
      volume={23},
      number={9},
      pages={5822--5822},
      year={2023},
      publisher={The Association for Research in Vision and Ophthalmology}
    }
    

See the citation guide for more details, including citations for the different synthesis methods and computational models included in plenoptic.

Support

This package is supported by the Simons Foundation Flatiron Institute's Center for Computational Neuroscience.

[^1]: These methods also work with auditory models, such as in Feather et al., 2019, though we haven't yet implemented examples. If you're interested, please post in Discussions!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

plenoptic-1.1.0.tar.gz (30.4 MB)


Built Distribution

plenoptic-1.1.0-py3-none-any.whl (399.4 kB)


File details

Details for the file plenoptic-1.1.0.tar.gz.

File metadata

  • Download URL: plenoptic-1.1.0.tar.gz
  • Upload date:
  • Size: 30.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for plenoptic-1.1.0.tar.gz
  • SHA256: 733719f0bf63b6c68e35560a8bfa36aa8f065b82b038d7d8b84ccd4e6a5f3fec
  • MD5: 445306d12a9aa6b8c17038d6a0daa699
  • BLAKE2b-256: 79782e7ce43f2323dba7987a2b932c642b2cd14aaa228c5253f1ff1e78695e11

See more details on using hashes here.
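
For example, you can verify the downloaded archive against the SHA256 digest above using Python's standard library (the file is assumed to be in the current directory):

    import hashlib

    expected = "733719f0bf63b6c68e35560a8bfa36aa8f065b82b038d7d8b84ccd4e6a5f3fec"

    # Hash the downloaded archive in chunks and compare against the digest above.
    sha256 = hashlib.sha256()
    with open("plenoptic-1.1.0.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)

    assert sha256.hexdigest() == expected, "hash mismatch!"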

File details

Details for the file plenoptic-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: plenoptic-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 399.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for plenoptic-1.1.0-py3-none-any.whl
  • SHA256: 0be6fce389fce3c1b1caf9138e65a30942b8958dff5f2a17eaf95de27b6f3e81
  • MD5: 61942c756fd9a9ef189392d9bc827100
  • BLAKE2b-256: 3660cdcc7b7d125314690ebbe8d27a29d8c9560f2bcde3c943c50b50c2faac44

See more details on using hashes here.
