
aind-ephys-utils


Helpful methods for exploring in vivo electrophysiology data.


Installation

pip install aind-ephys-utils

Level of support

This package is under active development, and breaking changes to the API may be introduced at any time. We recommend pinning the version for any code that must run reproducibly.
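
For example, to pin the version shown on this page:

pip install aind-ephys-utils==0.1.2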

Usage

There are three ways to use this library:

1. via the ephys accessor (recommended):

First, create an Xarray DataArray with labeled dimensions and coordinates from a Pandas or Polars DataFrame containing spike times or trials:

from aind_ephys_utils import from_dataframe

da = from_dataframe(spikes_df)      # spike times as a labeled DataArray
trials = from_dataframe(trials_df)  # trial times as a labeled DataArray
# or align spikes to trials in one step:
da = from_dataframe(spikes_df, trials_df, window=(-1, 1))

Then, all analysis happens on DataArrays via the ephys accessor:

  • da.ephys.bin(...)
  • da.ephys.reduce(...)
  • da.ephys.psth(...)
  • da.ephys.plot.raster(...)

This allows functions to be run in sequence and combined with built-in Xarray functions, e.g.:

da.ephys.align(...).sel(unit=1).mean('trial').ephys.smooth(...)

2. via the Xarray pipe method:

Alternatively, you can import individual functions and run them sequentially on appropriately formatted DataArrays:

from aind_ephys_utils.ops import smooth, baseline, psth

result = (
    da
    .pipe(smooth, method="gaussian", sigma=0.03)
    .pipe(baseline, window=(-0.5, 0.0))
    .pipe(psth, group_by="condition")
)

3. with NumPy arrays:

Many functions are also compatible with NumPy inputs:

from aind_ephys_utils.ops import align, bin

aligned_spikes = align(
    spikes,         # list of arrays of spike times
    events=T,       # list or ndarray of event times
    window=(-1, 1), # window around each event
)
binned_spikes = bin(aligned_spikes,
                    dt=0.01)  # bin width in seconds

CAUTION: Since NumPy arrays lack intrinsic labels, take extra care to ensure that input data is formatted correctly.
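
For example, a minimal toy input matching this format (all values hypothetical):

import numpy as np

# one array of spike times (in seconds) per unit
spikes = [
    np.array([0.10, 0.55, 2.20]),  # unit 0
    np.array([1.00, 1.45]),        # unit 1
]

# event times, in the same time base as the spikes
T = np.array([0.5, 1.5, 2.5])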

NWB example

Analysis usually starts from two DataFrames loaded from an NWB file, one for spikes and one for trials:

from aind_ephys_utils import from_dataframe
from pynwb import NWBHDF5IO

# open the file and read its contents
# (keep a reference to the IO object so the file stays open while data is accessed)
io = NWBHDF5IO('/path/to/file.nwb', 'r')
nwb = io.read()

# load units and trials dataframes
units = nwb.units.to_dataframe()
trials = nwb.trials.to_dataframe()

# align all units to all trials in a specific time window
spikes = from_dataframe(units, trials, window=(-0.5, 1.0))

# plot a spike raster for one unit, grouped by the value in the "choice" column:
ax = spikes.sel(unit=1).ephys.plot.raster(group_by="choice")

# bin the spikes in 0.01 s intervals and smooth
binned = spikes.ephys.bin(0.01).ephys.smooth(window=0.05)

# plot a PSTH for all units and conditions:
ax = binned.ephys.plot.psth()

Dimensionality reduction

One of the most powerful features is the reduce operation, which makes it straightforward to perform dimensionality reduction on neural population data:

ds = spikes.ephys.reduce(method='pca', n_components=10)

ds['projections'].shape  # (n_components, n_trials, n_timesteps)
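
Because the output is a standard Xarray object, it can be sliced and plotted with built-in methods. As a sketch (the dimension names "component" and "trial" are assumptions here, not guaranteed by the API):

# trial-averaged trajectory of the first component over time
ds['projections'].isel(component=0).mean('trial').plot()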

The reduce operation currently supports seven commonly used dimensionality reduction methods:

  • 'pca': Principal component analysis
  • 'gpfa': Gaussian process factor analysis (Yu et al., 2009)
  • 'dpca': Demixed principal component analysis (Kobak et al., 2016)
  • 'coding_direction': Coding direction
  • 'logistic': Logistic regression
  • 'lda': Linear discriminant analysis
  • 'rrr': Reduced rank regression

Other operations

DataArray objects with dimensions of spikes, trials, and/or time are compatible with the following operations, available via the ephys accessor:

  • align: Align a DataArray of spike times to a DataArray of trial times
  • bin: Transform a DataArray of spike times into a DataArray of binned firing rates
  • baseline: Subtract the firing rate in a baseline interval
  • normalize: Perform z-scoring across trials or time
  • psth: Compute the mean across conditions
  • restrict: Only keep data within a specified time window
  • smooth: Smooth firing rates over time

These operations also support NumPy inputs/outputs.
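
As a sketch, several of these can be chained via the accessor, reusing the parameter values shown above (which are illustrative, not recommendations):

rates = (
    spikes
    .ephys.bin(0.01)                     # 10 ms bins
    .ephys.baseline(window=(-0.5, 0.0))  # subtract pre-event firing rate
    .ephys.smooth(window=0.05)           # smooth over a 50 ms window
)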

Contributing

Developer installation

First, clone the repository. Then, from the aind-ephys-utils directory, run:

pip install -e .[dev]

Note: On macOS, you'll need to put the last argument in quotation marks: ".[dev]"
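
That is:

pip install -e ".[dev]"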

Linters and testing

Several libraries are used to run linters, check documentation coverage, and run tests.

  • Please test your changes using the coverage library, which will run the tests and print a coverage report:

    coverage run -m unittest discover && coverage report

  • Use interrogate to check that modules, methods, etc. have been documented thoroughly:

    interrogate .

  • Use black to automatically format the code to PEP 8 standards:

    black .

  • Use flake8 to check that code is up to standards (no unused imports, etc.):

    flake8 .

  • Use isort to automatically sort import statements:

    isort .

Pull requests

For internal members, please create a branch; for external contributors, please fork the repository and open a pull request from the fork. We primarily use Angular style for commit messages, which should roughly follow the pattern:

<type>(<scope>): <short summary>

where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:

  • build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
  • ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
  • docs: Documentation only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug nor adds a feature
  • test: Adding missing tests or correcting existing tests
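
For example, a hypothetical commit message:

feat(ops): add an exponential smoothing option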

Documentation

To generate the .rst source files for the documentation, run

sphinx-apidoc -f -e -H "API" -o docs/source/api src/aind_ephys_utils

Then, to build the HTML documentation files, run

sphinx-build -b html docs/source/ docs/build/html

More information on installing Sphinx can be found in the Sphinx documentation.

Developing in Code Ocean

Members of the Allen Institute for Neural Dynamics can follow these steps to create a Code Ocean capsule from this repository:

  1. Click the ⨁ New Capsule button and select "Clone from AllenNeuralDynamics"
  2. Type in aind-ephys-utils and click "Clone" (this step requires that your GitHub credentials are configured properly)
  3. Select a Python base image, and optionally change the compute resources
  4. Attach data to the capsule and any dependencies needed to load it (e.g. pynwb, hdmf-zarr)
  5. Add plotting dependencies (e.g. ipympl, plotly)
  6. Launch a Visual Studio Code cloud workstation

Inside Visual Studio Code, select "New Terminal" from the "Terminal" menu and run the following commands:

$ pip install -e .[dev]
$ git checkout -b <name of feature branch>

Now, you can create Jupyter notebooks in the "code" directory to test out new functions before updating the library. When prompted, install the "Python" extension to be able to execute notebook cells.

Once you've finished writing your code and tests, run the following commands:

$ coverage run -m unittest discover && coverage report
$ interrogate .
$ black .
$ flake8 .
$ isort .

Assuming all of these pass, you're ready to push your changes:

$ git add <files to add>
$ git commit -m "Commit message"
$ git push -u origin <name of feature branch>

After doing this, you can open a pull request on GitHub.

Note that git will only track files inside the aind-ephys-utils directory, and will ignore everything else in the capsule. You will no longer be able to commit changes to the capsule itself, which is why this workflow should only be used for developing a library, and not for performing any type of data analysis.

When you're done working, it's recommended to put the workstation on hold rather than shutting it down, in order to keep Visual Studio Code in the same state.
