aind-ephys-utils
Helpful methods for exploring in vivo electrophysiology data.
Installation
pip install aind-ephys-utils
Level of support
This package is under active development, and breaking changes to the API may be introduced at any time. We recommend pinning the version for any code that must run reproducibly.
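For example, a requirements file entry pinning an exact version might look like this (the version number shown is illustrative):

```
aind-ephys-utils==0.1.0
```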
Usage
There are three ways to use this library:
1. via ephys accessor (recommended):
First, create an Xarray DataArray with labeled dimensions and coordinates from a Pandas or Polars DataFrame containing spike times or trials:
from aind_ephys_utils import from_dataframe
da = from_dataframe(spikes_df)
trials = from_dataframe(trials_df)
# or
da = from_dataframe(spikes_df, trials_df, window=(-1,1))
Then, all analysis happens on DataArrays via the ephys accessor:
da.ephys.bin(...)
da.ephys.reduce(...)
da.ephys.psth(...)
da.ephys.plot.raster(...)
This allows functions to be run in sequence and combined with built-in Xarray functions, e.g.:
da.ephys.align(...).sel(unit=1).mean('trial').ephys.smooth(...)
2. via Xarray pipe method:
Alternatively, you can import individual functions and run them sequentially on appropriately formatted DataArrays:
from aind_ephys_utils.ops import smooth, baseline, psth
result = (
da
.pipe(smooth, method='gaussian', sigma=0.03)
.pipe(baseline, window=(-0.5, 0.0))
.pipe(psth, group_by="condition")
)
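Under the hood, Xarray's `.pipe(f, **kwargs)` simply calls `f(da, **kwargs)`, so a chain of pipes is equivalent to nested function calls. A minimal stand-in illustrating the pattern (the `Pipeable` class and the `scale`/`offset` functions here are hypothetical, not part of the library):

```python
class Pipeable:
    """Toy container mimicking xarray's .pipe chaining."""

    def __init__(self, data):
        self.data = data

    def pipe(self, func, *args, **kwargs):
        # .pipe(f, **kw) is just f(self, **kw)
        return func(self, *args, **kwargs)


def scale(obj, factor):
    return Pipeable([x * factor for x in obj.data])


def offset(obj, amount):
    return Pipeable([x + amount for x in obj.data])


result = Pipeable([1, 2, 3]).pipe(scale, factor=2).pipe(offset, amount=1)
print(result.data)  # [3, 5, 7]
```

Because each function takes the wrapped object as its first argument and returns a new one, steps can be reordered or inserted without restructuring the expression.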
3. with Numpy arrays:
Many functions are also compatible with Numpy inputs:
from aind_ephys_utils.ops import align, bin
aligned_spikes = align(
spikes, # list of arrays of spike times
events=T, # list or ndarray of event times
window=(-1, 1), # window around each event
)
binned_spikes = bin(aligned_spikes, dt=0.01)
CAUTION: Since Numpy arrays lack intrinsic labels, extra care must be taken to make sure input data is formatted correctly.
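To make the expected formats concrete, here is a pure-Python sketch of what alignment and binning do conceptually. These are illustrative stand-ins, not the library's implementations, and the function names are chosen only for clarity:

```python
def align_spikes(spike_times, events, window):
    """Collect spike times relative to each event, within a window."""
    lo, hi = window
    return [
        [t - e for t in spike_times if lo <= t - e < hi]
        for e in events
    ]


def bin_counts(aligned, window, dt):
    """Count spikes per time bin for each trial."""
    lo, hi = window
    n_bins = round((hi - lo) / dt)
    counts = []
    for trial in aligned:
        row = [0] * n_bins
        for t in trial:
            row[min(int((t - lo) / dt), n_bins - 1)] += 1
        counts.append(row)
    return counts


spikes = [0.05, 0.12, 0.95, 1.02, 2.07]   # spike times (s)
events = [0.0, 1.0, 2.0]                  # event times (s)
aligned = align_spikes(spikes, events, window=(-0.1, 0.2))
counts = bin_counts(aligned, window=(-0.1, 0.2), dt=0.1)
print(counts)  # [[0, 1, 1], [1, 1, 0], [0, 1, 0]]
```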
NWB example
Analysis usually starts from two DataFrames loaded from an NWB file, one for spikes and one for trials:
from aind_ephys_utils import from_dataframe
from pynwb import NWBHDF5IO
# read the file (keep the io object in scope while using the data)
io = NWBHDF5IO('/path/to/file.nwb', 'r')
nwb = io.read()
# load units and trials dataframes
units = nwb.units.to_dataframe()
trials = nwb.trials.to_dataframe()
# align all units to all trials in a specific time window
spikes = from_dataframe(units, trials, window=(-0.5, 1.0))
# plot a spike raster for one unit, grouped by the value in the "choice" column:
ax = spikes.sel(unit=1).ephys.plot.raster(group_by="choice")
# bin the spikes in 0.01 s intervals and smooth
binned = spikes.ephys.bin(0.01).ephys.smooth(window=0.05)
# plot a PSTH for all units and conditions:
ax = binned.ephys.plot.psth()
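Conceptually, a PSTH is just the mean of the binned rates across trials, optionally grouped by a condition label. An illustrative stand-in using plain lists (not the library's implementation):

```python
def psth(binned, conditions=None):
    """Mean rate per time bin across trials, optionally per condition.

    binned: list of trials, each a list of per-bin firing rates.
    conditions: optional per-trial labels to group by.
    """
    if conditions is None:
        groups = {"all": binned}
    else:
        groups = {}
        for trial, cond in zip(binned, conditions):
            groups.setdefault(cond, []).append(trial)
    return {
        cond: [sum(col) / len(trials) for col in zip(*trials)]
        for cond, trials in groups.items()
    }


binned = [[0.0, 2.0, 4.0], [2.0, 2.0, 2.0], [4.0, 0.0, 0.0]]
means = psth(binned, conditions=["left", "right", "left"])
print(means["left"])   # [2.0, 1.0, 2.0]
print(means["right"])  # [2.0, 2.0, 2.0]
```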
Dimensionality reduction
One of the most powerful features is the reduce operation, which makes it straightforward to perform dimensionality reduction on neural population data:
ds = spikes.ephys.reduce(method='pca', n_components=10)
ds['projections'].shape # (n_components, n_trials, n_timesteps)
The reduce operation currently supports seven commonly used dimensionality reduction methods:
- 'pca': Principal component analysis
- 'gpfa': Gaussian process factor analysis (Yu et al., 2009)
- 'dpca': Demixed principal component analysis (Kobak et al., 2016)
- 'coding_direction': Coding direction
- 'logistic': Logistic regression
- 'lda': Linear discriminant analysis
- 'rrr': Reduced rank regression
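For intuition, the simplest of these, coding direction, is just the normalized difference between the condition-mean population vectors; a hypothetical pure-Python sketch (toy numbers, not the library's code):

```python
import math


def coding_direction(mean_a, mean_b):
    """Unit vector from condition B's mean firing rates to condition A's."""
    diff = [a - b for a, b in zip(mean_a, mean_b)]
    norm = math.sqrt(sum(d * d for d in diff))
    return [d / norm for d in diff]


# mean rates per unit for two conditions (two-unit toy population)
cd = coding_direction([3.0, 4.0], [0.0, 0.0])
print(cd)  # [0.6, 0.8]
```

Projecting single-trial activity onto this axis yields a one-dimensional readout of how strongly each trial expresses the difference between the two conditions.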
Other operations
DataArray objects with dimensions of spikes, trials, and/or time are compatible with the following operations, available via the ephys accessor:
- align: Align a DataArray of spike times to a DataArray of trial times
- bin: Transform a DataArray of spike times into a DataArray of binned firing rates
- baseline: Subtract the firing rate in a baseline interval
- normalize: Perform z-scoring across trials or time
- psth: Compute the mean across conditions
- restrict: Only keep data within a specified time window
- smooth: Smooth firing rates over time
These operations also support Numpy inputs/outputs.
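With plain arrays, for example, baseline subtraction reduces to simple arithmetic over a time window. An illustrative stand-in with lists (the library itself operates on labeled DataArrays):

```python
def baseline_subtract(rates, times, window):
    """Subtract each trial's mean rate within the baseline window."""
    lo, hi = window
    idx = [i for i, t in enumerate(times) if lo <= t < hi]
    out = []
    for trial in rates:
        base = sum(trial[i] for i in idx) / len(idx)
        out.append([r - base for r in trial])
    return out


times = [-0.5, -0.25, 0.0, 0.25]   # bin centers (s)
rates = [[1.0, 3.0, 5.0, 7.0]]     # one trial of binned rates
corrected = baseline_subtract(rates, times, window=(-0.5, 0.0))
print(corrected)  # [[-1.0, 1.0, 3.0, 5.0]]
```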
Contributing
Developer installation
First, clone the repository. Then, from the aind-ephys-utils directory, run:
pip install -e .[dev]
Note: On macOS, where the default shell is zsh, you'll need to put the last argument in quotation marks: ".[dev]"
Linters and testing
There are several libraries used to run linters, check documentation, and run tests.
- Please test your changes using the coverage library, which will run the tests and log a coverage report:
coverage run -m unittest discover && coverage report
- Use interrogate to check that modules, methods, etc. have been documented thoroughly:
interrogate .
- Use black to automatically format the code into PEP standards:
black .
- Use flake8 to check that code is up to standards (no unused imports, etc.):
flake8 .
- Use isort to automatically sort import statements:
isort .
Pull requests
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
<type>(<scope>): <short summary>
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
- build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- docs: Documentation only changes
- feat: A new feature
- fix: A bug fix
- perf: A code change that improves performance
- refactor: A code change that neither fixes a bug nor adds a feature
- test: Adding missing tests or correcting existing tests
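For instance, a new feature in an ops scope and a docs-only change might be committed as (scopes and summaries hypothetical):

```
feat(ops): add restrict operation
docs: fix typo in usage example
```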
Documentation
To generate the rst source files for the documentation, run
sphinx-apidoc -f -e -H "API" -o docs/source/api src/aind_ephys_utils
Then to create the documentation HTML files, run
sphinx-build -b html docs/source/ docs/build/html
More info on installation can be found in the Sphinx documentation.
Developing in Code Ocean
Members of the Allen Institute for Neural Dynamics can follow these steps to create a Code Ocean capsule from this repository:
- Click the ⨁ New Capsule button and select "Clone from AllenNeuralDynamics"
- Type in aind-ephys-utils and click "Clone" (this step requires that your GitHub credentials are configured properly)
- Select a Python base image, and optionally change the compute resources
- Attach data to the capsule and any dependencies needed to load it (e.g. pynwb, hdmf-zarr)
- Add plotting dependencies (e.g. ipympl, plotly)
- Launch a Visual Studio Code cloud workstation
Inside Visual Studio Code, select "New Terminal" from the "Terminal" menu and run the following commands:
$ pip install -e .[dev]
$ git checkout -b <name of feature branch>
Now, you can create Jupyter notebooks in the "code" directory that can be used to test out new functions before updating the library. When prompted, install the "Python" extensions to be able to execute notebook cells.
Once you've finished writing your code and tests, run the following commands:
$ coverage run -m unittest discover && coverage report
$ interrogate .
$ black .
$ flake8 .
$ isort .
Assuming all of these pass, you're ready to push your changes:
$ git add <files to add>
$ git commit -m "Commit message"
$ git push -u origin <name of feature branch>
After doing this, you can open a pull request on GitHub.
Note that git will only track files inside the aind-ephys-utils directory, and will ignore everything else in the capsule. You will no longer be able to commit changes to the capsule itself, which is why this workflow should only be used for developing a library, and not for performing any type of data analysis.
When you're done working, it's recommended to put the workstation on hold rather than shutting it down, in order to keep Visual Studio Code in the same state.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file aind_ephys_utils-0.1.0.tar.gz.
File metadata
- Download URL: aind_ephys_utils-0.1.0.tar.gz
- Upload date:
- Size: 1.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8d6a44eff33d49b0c7bbc99c9493902afff4043d1047d579e054e6204ec307b5 |
| MD5 | 3eec8de7ed670b6e20763a8d4737c140 |
| BLAKE2b-256 | e8e3c448c04539be39b53ba1e38efbaf74194d31011f18228863e456c9ab9f66 |
File details
Details for the file aind_ephys_utils-0.1.0-py3-none-any.whl.
File metadata
- Download URL: aind_ephys_utils-0.1.0-py3-none-any.whl
- Upload date:
- Size: 125.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6213cfb4d23ed130caf174169f3dd901da47d036d884fe2fe78efb4ed01a1842 |
| MD5 | 478b38742d0708293148c4e58dad8ec9 |
| BLAKE2b-256 | 7b1b25428ad2919662968b017be4206a4871dff2400fba2d23ec07e48a62cee4 |