
hu-neuro-pipeline


Single trial EEG pipeline at the Abdel Rahman Lab for Neurocognitive Psychology, Humboldt-Universität zu Berlin

Based on Frömer, R., Maier, M., & Abdel Rahman, R. (2018). Group-level EEG-processing pipeline for flexible single trial-based analyses including linear mixed models. Frontiers in Neuroscience, 12, 48. https://doi.org/10.3389/fnins.2018.00048

1. Installation

1.1 For Python users

Install the pipeline via pip from the Python Package Index (PyPI):

python3 -m pip install hu-neuro-pipeline

To install the latest development version from GitHub:

python3 -m pip install git+https://github.com/alexenge/hu-neuro-pipeline.git

1.2 For R users

First install reticulate and Miniconda so that Python packages can be imported into R:

install.packages("reticulate")
reticulate::install_miniconda()

Then install the pipeline via pip from the Python Package Index (PyPI):

reticulate::py_install("hu-neuro-pipeline", pip = TRUE)

To install the latest development version from GitHub:

reticulate::py_install("git+https://github.com/alexenge/hu-neuro-pipeline.git", pip = TRUE)

2. Usage

2.1 For Python users

Here is a fairly minimal example for a (fictional) N400/P600 experiment with two experimental factors: semantics (e.g., related versus unrelated words) and emotional context (e.g., emotionally negative versus neutral).

from pipeline import group_pipeline

trials, evokeds, config = group_pipeline(
    raw_files='Results/EEG/raw',
    log_files='Results/RT',
    output_dir='Results/EEG/export',
    besa_files='Results/EEG/cali',
    triggers=[201, 202, 211, 212],
    skip_log_conditions={'semantics': 'filler'},
    components={'name': ['N400', 'P600'],
                'tmin': [0.3, 0.5],
                'tmax': [0.5, 0.9],
                'roi': [['C1', 'Cz', 'C2', 'CP1', 'CPz', 'CP2'],
                        ['Fz', 'FC1', 'FC2', 'C1', 'Cz', 'C2']]},
    average_by={'related': 'semantics == "related"',
                'unrelated': 'semantics == "unrelated"'})

In this example we have specified:

  • The paths to the raw EEG data, to the behavioral log files, to the desired output directory, and to the BESA files for ocular correction

  • Four different EEG triggers corresponding to each of the four cells in the 2 × 2 design

  • The fact that log files contain additional trials from a semantic 'filler' condition (which we want to skip because they don't have corresponding EEG triggers)

  • The a priori defined time windows and regions of interest for the N400 and P600 components

  • The conditions (average_by), defined as queries on the log file columns, for which we want to obtain by-participant averaged waveforms (here: the two levels of the semantics factor)
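The returned trials object is a single-trial data frame (one row per trial, with the mean amplitude of each component in its time window and region of interest), which is the input for flexible single trial-based analyses such as linear mixed models. A minimal sketch of downstream use with pandas, using a mock data frame whose column names (participant_id, semantics, N400) are assumptions for illustration:

```python
import pandas as pd

# Mock single-trial data frame standing in for the `trials` output of
# group_pipeline(); the column names here are hypothetical
trials = pd.DataFrame({
    'participant_id': ['01', '01', '02', '02'],
    'semantics': ['related', 'unrelated', 'related', 'unrelated'],
    'N400': [1.2, -0.8, 0.9, -1.1],  # mean amplitude in µV
})

# By-participant condition means, e.g. as a sanity check before
# fitting a mixed model on the single-trial amplitudes
means = trials.groupby(['participant_id', 'semantics'])['N400'].mean()
print(means)
```

In a real analysis one would keep the single-trial rows (rather than averaging) and pass them to a mixed-model package, with participants and items as random effects, as described in Frömer et al. (2018).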

2.2 For R users

Here is the same example as above, this time using the pipeline from R:

# Import Python module
pipeline <- reticulate::import("pipeline")

# Run the group level pipeline
res <- pipeline$group_pipeline(
    raw_files = "Results/EEG/raw",
    log_files = "Results/RT",
    output_dir = "Results/EEG/export",
    besa_files = "Results/EEG/cali",
    triggers = c(201, 202, 211, 212),
    skip_log_conditions = list("semantics" = "filler"),
    components = list(
        "name" = list("N400", "P600"),
        "tmin" = list(0.3, 0.5),
        "tmax" = list(0.5, 0.9),
        "roi" = list(
            c("C1", "Cz", "C2", "CP1", "CPz", "CP2"),
            c("Fz", "FC1", "FC2", "C1", "Cz", "C2")
        )
    ),
    average_by = list(
        related = "semantics == 'related'",
        unrelated = "semantics == 'unrelated'"
    )
)

# Extract results
trials <- res[[1]]
evokeds <- res[[2]]
config <- res[[3]]

3. Processing details

See the documentation for more details about how to use the pipeline and how it works under the hood.
