Two-photon imaging analysis tool with a napari interface.

twopy

Two-photon imaging analysis tool for the Clark Lab output format.

Getting Started

twopy lets you open two-photon recordings, draw ROIs, plot responses in real time, process and analyze them, and save them.

When you first load a recording, twopy converts it to a standardized HDF5 format. The converted format includes the aligned movie, mean image, stimulus tables, photodiode signals, and recording metadata. Analysis and the GUI both work from the converted files, so the original source files remain separate from twopy's outputs.

Install

Examples use micromamba, but any conda-compatible environment manager should work.

Run this once from the twopy folder to install everything you need:

micromamba env create -f environment.yml
micromamba activate twopy
micromamba run -n twopy pre-commit install
cp config.example.yml config.yml

Then edit config.yml so the paths match your computer. The example file explains each setting in plain language. config.yml stays local to your machine and is not tracked by git.
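As a rough sketch, a config.yml might look like the following. Only database_access and analysis_output are settings described in this README; the comments paraphrase that description, and config.example.yml remains the authoritative reference for the full set of options:

```yaml
# Illustrative sketch; see config.example.yml for the real settings and defaults.
database_access: copy     # "copy" caches the DB file locally; the alternative reads mounted files directly
analysis_output: source   # "source" writes into recording/twopy; a path mirrors recordings under that root
```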

If the twopy command is missing after setup, refresh the editable install:

micromamba run -n twopy python -m pip install -e .

Start The GUI

Start napari from the twopy environment:

micromamba activate twopy
twopy

Or run it without activating the environment first:

micromamba run -n twopy twopy

You can also open a recording directly:

twopy /path/to/source/recording

Or pass the path to a converted HDF5 file directly:

twopy /path/to/recording_data.h5

Inside napari, use the twopy panel to choose a recording folder or a recording_data.h5 file. If a source recording has not been converted yet, twopy converts it first, then opens the converted files.
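The open-or-convert decision can be sketched in plain Python. This is illustrative only: resolve_recording is a hypothetical name, not part of the twopy API, and the real launcher runs the actual conversion step rather than raising:

```python
from pathlib import Path


def resolve_recording(path: Path) -> Path:
    """Return the converted HDF5 file to open for ``path`` (illustrative sketch)."""
    if path.suffix == ".h5":
        return path  # already a converted recording_data.h5
    h5 = path / "recording_data.h5"
    if h5.exists():
        return h5  # this folder was converted earlier
    # A real launcher would convert the source recording here, then open the result.
    raise FileNotFoundError(f"no converted recording under {path}")
```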

Basic GUI flow:

  1. Start twopy.
  2. Choose a recording.
  3. Draw or edit ROIs in the rois Labels layer.
  4. Click Save ROIs.
  5. Use the response plot panel to update plots from the current ROIs.

Setup Details

The environment installs twopy as an editable package, so the twopy terminal command is available after activating the environment. If the environment already existed before the command was added, refresh the editable install:

micromamba run -n twopy python -m pip install -e .

Check

micromamba run -n twopy pre-commit run --all-files

The installed pre-commit hook runs ruff, ty, and the unit tests before each commit.

Release

PyPI publishing uses GitHub Actions Trusted Publishing. No PyPI API token secret is required.

One-time setup in PyPI:

  1. Add a trusted publisher for the twopy project.
  2. Use owner gumadeiras, repository twopy, workflow publish-to-pypi.yml, and environment pypi.
  3. In GitHub, create the pypi environment and require manual approval.

Release flow:

  1. Update project.version in pyproject.toml.
  2. Run micromamba run -n twopy pre-commit run --all-files.
  3. Commit the version change.
  4. Create a GitHub release whose tag is the same version, with or without a leading v.
  5. Publish the release.

The release workflow checks that the tag matches pyproject.toml, builds the wheel and source distribution, checks package metadata, and publishes to PyPI after the pypi environment approval.

Find Recordings

from twopy import find_recordings

recordings = find_recordings(
    year=2023,
    month=10,
    day=17,
    genotype="gh146",
    stimulus="combo_stim",
    sensor="g6f",
    cell_type="ALPN",
    hemisphere="right",
    person="Gustavo",
)

config.yml controls whether DB queries use mounted files directly or cached local copies. The default is database_access: copy because database searches over the network can be slow, while copying the DB file locally is usually fast.
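The query above behaves like an exact-match filter over recording metadata. A rough stand-in, where match and the sample records are hypothetical rather than twopy internals:

```python
def match(record: dict, **criteria) -> bool:
    """Keep a record only when every requested field equals the requested value."""
    return all(record.get(key) == value for key, value in criteria.items())


records = [
    {"genotype": "gh146", "sensor": "g6f", "hemisphere": "right"},
    {"genotype": "gh146", "sensor": "g6f", "hemisphere": "left"},
]
hits = [r for r in records if match(r, genotype="gh146", hemisphere="right")]
```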

Convert Recording

from pathlib import Path

from twopy import convert_recording_to_twopy

recording = Path("/path/to/recording")

converted = convert_recording_to_twopy(recording)
print(converted.path)
print(converted.movie_path)

Conversion writes recording_data.h5 for metadata, stimulus tables, photodiode signals, and the mean image. The large aligned movie is written separately to aligned_movie.h5. By default the mean image uses the full movie; pass mean_start_frame and mean_stop_frame to use a frame range. By default, conversion writes to the location configured by analysis_output; pass output_dir only when you want to override that for a specific call.

config.yml also controls analysis output routing. analysis_output: source writes into recording/twopy; a path mirrors the recording directory structure under that output root.
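That routing rule can be sketched as a pure function over paths. resolve_output_dir and data_root are illustrative names for this sketch, not part of the twopy API:

```python
from pathlib import Path


def resolve_output_dir(recording: Path, analysis_output: str, data_root: Path) -> Path:
    """Sketch: 'source' writes beside the recording; a path mirrors the tree under it."""
    if analysis_output == "source":
        return recording / "twopy"
    return Path(analysis_output) / recording.relative_to(data_root)


out = resolve_output_dir(Path("/data/2023/rec1"), "/scratch/outputs", Path("/data"))
# out is /scratch/outputs/2023/rec1
```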

Analyze Converted Data

from pathlib import Path

import numpy as np

from twopy import (
    classify_recording_photodiode_events,
    compute_roi_delta_f_over_f,
    detect_recording_photodiode_events,
    extract_background_corrected_roi_traces,
    load_converted_recording,
    make_roi_set,
    map_stimulus_epochs_to_frame_windows,
    select_epoch_frame_windows,
)

recording = load_converted_recording(Path("/path/to/recording_data.h5"))
mask_array = np.zeros((1, *recording.movie.shape[1:]), dtype=bool)
mask_array[0, :10, :10] = True
roi_set = make_roi_set(mask_array)
traces = extract_background_corrected_roi_traces(
    recording,
    roi_set,
    method="movie_global_percentile",
)
alignment = detect_recording_photodiode_events(recording)
timing = classify_recording_photodiode_events(recording, alignment)
epoch_windows = map_stimulus_epochs_to_frame_windows(recording, alignment)
interleave_windows = select_epoch_frame_windows(
    epoch_windows,
    epoch_name="Gray Interleave",
)
dff = compute_roi_delta_f_over_f(
    traces,
    interleave_windows,
    data_rate_hz=float(recording.acquisition_metadata["acq.frameRate"]),
    fit_mode="robust",
)

Analysis starts from converted HDF5 files. ROI masks are GUI-independent, trace extraction streams movie chunks, background correction stays explicit, and stimulus epoch windows come from classified photodiode events instead of nominal frame-rate assumptions. timing.events keeps the start, transition, and end classifications auditable.

Analysis trace extraction uses the saved alignment-valid spatial crop by default, including with method="none" when you want uncorrected traces through the analysis path. ROI masks remain full-frame; pass spatial_domain="full_frame" only when that is the intended audit path. The lower-level extract_roi_traces helper is a full-frame raw primitive.

ROI dF/F uses corrected ROI fluorescence plus gray interleave windows to fit one shared exponential tau and one amplitude per ROI. The default dF/F fit mode is robust; pass fit_mode="source_bounds" when you need the original source-bound behavior for audit comparisons.
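The core arithmetic can be sketched with plain numpy, independent of twopy's helpers: a boolean mask averages pixels into one trace per frame, a baseline F0 comes from interleave frames, and dF/F is (F - F0) / F0. All names and the random movie here are illustrative, and this sketch deliberately skips the exponential tau/amplitude fit that twopy's real pipeline performs:

```python
import numpy as np

rng = np.random.default_rng(0)
movie = rng.random((100, 32, 32)).astype(np.float32)  # frames x height x width

# Average the pixels inside a boolean ROI mask into one value per frame.
mask = np.zeros((32, 32), dtype=bool)
mask[:10, :10] = True
trace = movie[:, mask].mean(axis=1)

# Take baseline F0 from "interleave" frames (here, frames 0-19), then dF/F.
interleave = slice(0, 20)
f0 = trace[interleave].mean()
dff = (trace - f0) / f0
```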

Open In Napari

From a converted output directory that contains recording_data.h5, or from a source recording directory:

twopy

If converted files are missing and the selected folder has the expected source recording files, twopy runs conversion first and then opens the converted HDF5 files. If no recording is found, twopy still opens napari. Choose a recording folder or recording_data.h5 in the twopy dock panel; twopy loads it after selection.

Or pass a source folder or converted recording explicitly:

twopy /path/to/source/recording
twopy /path/to/recording_data.h5

By default the launcher opens the mean image, the full movie, an editable rois Labels layer, a top response-plot dock, a twopy loading dock, and a left Save ROIs dock. Use --no-movie to skip the movie preview, or --movie-start and --movie-end to choose a different preview range.

Save ROIs writes rois.h5 beside the current recording by default. The response dock can reload an existing analysis_outputs.h5 or update plots from the current Labels layer. Saving analysis writes rois.h5, analysis_outputs.h5, exports/csvs/response_summary_trials.csv, and exports/csvs/response_summary_grouped.csv beside the converted recording.

Response plots share one y-axis across epochs and, when gray interleave frames are available in the grouped responses, show two seconds before stimulus onset and two seconds after stimulus offset by default. Epoch plots are laid out horizontally. Each saved response trial includes its own time_seconds vector, so plots use direct response time values rather than inferring time from array indices.

from pathlib import Path

from twopy import (
    launch_napari,
    open_recording_in_napari,
    roi_label_image_from_layer,
    save_napari_label_rois,
)

launch_napari(Path("/path/to/recording_data.h5"))

view = open_recording_in_napari(
    Path("/path/to/recording_data.h5"),
    movie_frame_range=(0, 200),
)

# After drawing or editing the rois Labels layer:
label_image = roi_label_image_from_layer(view.roi_labels_layer)
roi_set = save_napari_label_rois(label_image, Path("/path/to/rois.h5"))

Napari code is a thin adapter. It loads converted twopy files, displays the mean image, optionally displays a bounded movie preview, creates an editable ROI Labels layer, and adds small dock widgets for loading folders, saving ROIs, and plotting responses. It does not read source MATLAB/TIFF files or own analysis decisions.

ROI saving writes the current Labels layer through the core ROI HDF5 helpers. Response plotting calls the core analysis workflow when updating from current ROIs. Pass roi_set=Path("/path/to/rois.h5") when reopening existing ROIs.
