
hu-neuro-pipeline

Single trial EEG pipeline at the Neurocognitive Psychology Lab, Humboldt-Universität zu Berlin


Based on Frömer, R., Maier, M., & Abdel Rahman, R. (2018). Group-level EEG-processing pipeline for flexible single trial-based analyses including linear mixed models. Frontiers in Neuroscience, 12, 48. https://doi.org/10.3389/fnins.2018.00048

Usage

For Python users

1. Install the pipeline

Install as usual from the Python Package Index (PyPI):

python3 -m pip install hu-neuro-pipeline

2. Run the pipeline

The group_pipeline() function processes the EEG data of multiple participants in parallel.

from pipeline import group_pipeline

trials, evokeds, config = group_pipeline(
    vhdr_files='Results/EEG/raw',
    log_files='Results/RT',
    ocular_correction='fastica',
    triggers={'standard': 101, 'target': 102},
    components={'name': ['P3'], 'tmin': [0.3], 'tmax': [0.5],
                'roi': [['C1', 'C2', 'Cz', 'CP1', 'CP2', 'CPz']]},
    condition_cols=['Stim_freq'],
    export_dir='Results/EEG/export',
)

See help(group_pipeline) for documentation of the input and output arguments.
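To make the components argument more concrete: each named component is scored per trial as the mean amplitude across its ROI channels within the [tmin, tmax] time window. Here is a minimal numpy sketch of that idea; the array shapes, sampling, and channel indices are illustrative assumptions, not the pipeline's internals:

```python
import numpy as np

# Hypothetical single epoch: 6 channels x 11 time points (microvolts).
# All ones, so the expected windowed mean is trivially 1.0.
times = np.linspace(0.25, 0.55, 11)   # seconds
epoch = np.ones((6, times.size))

tmin, tmax = 0.3, 0.5                 # P3 window from the example above
window = (times >= tmin) & (times <= tmax)
roi = [0, 1, 2]                       # indices standing in for e.g. C1, C2, Cz

# Mean over ROI channels and window samples -> one value per trial
p3_amplitude = epoch[roi][:, window].mean()
print(p3_amplitude)  # 1.0 for this constant epoch
```

The pipeline repeats this per trial and per component, which is what produces the single-trial component columns (e.g. P3) in the trials output.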

For R users

1. Install reticulate and Miniconda

Python packages can be installed and used directly from R via the reticulate package. You will also need a Python installation; reticulate can provide one in the form of the Miniconda distribution.

install.packages("reticulate")
reticulate::install_miniconda()

2. Install the pipeline

Reticulate can install the pipeline from the Python Package Index (PyPI).

reticulate::py_install("hu_neuro_pipeline", pip = TRUE, python_version = "3.8")

3. Run the pipeline from R

You are now ready to import and use the pipeline in your R scripts. Here is an example for running the group-level pipeline on a fictional N400/P600 experiment. The experiment has two experimental factors: Semantics ("related" vs. "unrelated") and emotional Context ("negative" vs. "neutral").

pipeline <- reticulate::import("pipeline")
res <- pipeline$group_pipeline(
    vhdr_files = "Results/EEG/raw",
    log_files = "Results/RT",
    ocular_correction = "Results/EEG/cali",
    triggers = list(
        "related/negative" = 201,
        "related/neutral" = 202,
        "unrelated/negative" = 211,
        "unrelated/neutral" = 212
    ),
    skip_log_conditions = list("Semantics" = "filler"),
    components = list(
        "name" = c("N400", "P600"),
        "tmin" = c(0.3, 0.5),
        "tmax" = c(0.5, 0.9),
        "roi" = list(
            c("C1", "Cz", "C2", "CP1", "CPz", "CP2"),
            c("Fz", "FC1", "FC2", "C1", "Cz", "C2")
        )
    ),
    condition_cols = c("Semantics", "Context"),
    export_dir = "Results/EEG/export"
)

For documentation of the input and output arguments, see the source code or:

reticulate::py_help(pipeline$group_pipeline)

4. Use the results

The group_pipeline() function returns a list with three elements (here "res"):

  • trials: A data frame with the single-trial behavioral and ERP component data. It can be used, e.g., to fit a linear mixed model (LMM) predicting the mean amplitude of the N400 component:

library(lme4)
form <- N400 ~ Semantics * Context + (Semantics * Context | participant_id)
trials <- res[[1]]  # First output is the single trial data frame
mod <- lmer(form, trials)
summary(mod)
  • evokeds: The by-participant averages for each condition (or combination of conditions) in condition_cols. Unlike trials, these are averaged across trials, but not across EEG channels or time points. They can be used, e.g., to plot the time course of the Semantics * Context interaction (incl. standard errors). The eegUtils package could be used to plot the corresponding scalp topographies (example to be added).

library(dplyr)
library(ggplot2)
evokeds <- res[[2]]  # The second output is the evokeds data frame
evokeds %>%
    filter(average_by == "Semantics * Context") %>%
    ggplot(aes(x = time, y = N400, color = Semantics)) +
    facet_wrap(~ Context) +
    stat_summary(geom = "linerange", fun.data = mean_se, alpha = 0.1) +
    stat_summary(geom = "line", fun = mean)
  • config: A list of the options that were used by the pipeline. It can be used to check which default options were applied in addition to the inputs you provided. You can also extract the number of channels that were interpolated for each participant (when using bad_channels = "auto"):

config <- res[[3]]  # The third output is the pipeline config
num_bad_chans <- lengths(config$bad_channels)
print(mean(num_bad_chans))
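Readers working in Python rather than R can do the equivalent kinds of analyses with pandas. A minimal sketch, assuming the trials data frame has the column names used in the example above (participant_id, Semantics, Context, N400); the toy values are made up purely for illustration:

```python
import pandas as pd

# Made-up single-trial data mimicking the structure of the pipeline's
# trials output (one row per trial, one column per ERP component).
trials = pd.DataFrame({
    "participant_id": ["s01", "s01", "s01", "s01"],
    "Semantics": ["related", "related", "unrelated", "unrelated"],
    "Context": ["negative", "neutral", "negative", "neutral"],
    "N400": [-1.0, -0.5, -3.0, -2.5],  # mean amplitudes in microvolts
})

# Mean N400 amplitude per Semantics x Context cell
cell_means = trials.groupby(["Semantics", "Context"])["N400"].mean()
print(cell_means)
```

For a full mixed-model analysis in Python, a package such as statsmodels (MixedLM) could play the role that lme4 plays in the R example, though the lme4 route above is the one shown by the pipeline's authors.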
