
Preprocessing and Extraction of Linguistic Information for Computational Analysis



pelican_nlp stands for “Preprocessing and Extraction of Linguistic Information for Computational Analysis - Natural Language Processing”. This package enables the creation of standardized and reproducible language processing pipelines, extracting linguistic features from various tasks like discourse, fluency, and image descriptions.


Installation

Create conda environment

conda create --name pelican-nlp --channel defaults python=3.10

Activate environment

conda activate pelican-nlp

Install the package using pip:

pip install pelican-nlp

Usage

To run pelican_nlp, you need a configuration.yml file in your main project directory. This file defines the settings and parameters used for your project.

Sample configuration files are available here: https://github.com/ypauli/pelican_nlp/tree/main/examples

  1. Adapt a sample configuration to your needs.

  2. Save your personalized configuration.yml in the root of your project directory (a minimal loading check is sketched below).
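Before running the pipeline, it can help to confirm that the file parses as valid YAML. The snippet below is a minimal sketch using PyYAML; it only loads the file and lists its top-level keys (the actual settings are defined by the sample configurations linked above), and the project path is a placeholder.

# Minimal sanity check: confirm configuration.yml parses as valid YAML.
# Illustrative only; this does not replicate pelican_nlp's own validation.
from pathlib import Path
import yaml  # requires PyYAML

config_path = Path("/path/to/my_project") / "configuration.yml"  # placeholder path
with config_path.open(encoding="utf-8") as f:
    config = yaml.safe_load(f)

print(f"{len(config)} top-level settings: {sorted(config)}")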

Running pelican_nlp

You can run pelican_nlp via the command line or a Python script.

From the command line:

Navigate to your project directory (must contain your participants/ folder and configuration.yml), then run:

conda activate pelican-nlp
pelican-run
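From a Python script, one simple option (a sketch only; pelican_nlp's Python API is not documented here) is to invoke the same pelican-run entry point through subprocess, with your project directory as the working directory:

# Invoke the documented pelican-run entry point from Python.
# Run this inside the activated pelican-nlp environment so pelican-run is on PATH.
# The project directory is a placeholder; it must contain configuration.yml
# and the participants/ folder.
import subprocess

project_dir = "/path/to/my_project"  # placeholder path
subprocess.run(["pelican-run"], cwd=project_dir, check=True)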

To optimize performance, close other programs and avoid running other GPU-intensive workloads during language processing.

Data Format Requirements: LPDS

For reliable operation, your data must follow the Language Processing Data Structure (LPDS), which is inspired by neuroimaging standards such as the Brain Imaging Data Structure (BIDS).

Main Concepts (Quick Guide)

  • Project Root: Contains a participants/ folder plus optional files like participants.tsv, dataset_description.json, and README.

  • Participants: Each participant has a folder named part-<ID> (e.g., part-01).

  • Sessions (Optional): For longitudinal studies, use ses-<ID> subfolders inside each participant folder.

  • Tasks/Contexts: Each session folder (or the participant folder itself in non-longitudinal studies) contains subfolders for specific tasks (e.g., interview, fluency, image-description).

  • Data Files: Named with structured metadata, e.g.: part-01_ses-01_task-fluency_cat-semantic_acq-baseline_transcript.txt

Filename Structure

Filenames follow this format (a small parsing sketch follows the entity list below):

part-<id>[_ses-<id>]_task-<label>[_<key>-<value>...][_suffix].<extension>

  • Required Entities: part, task

  • Optional Entities Examples: ses, cat, acq, proc, metric, model, run, group, param

  • Suffix Examples: transcript, audio, embeddings, logits, annotations
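As a rough illustration of how these names decompose (a hypothetical helper, not part of pelican_nlp), each underscore-separated element is a key-value entity, with the trailing element acting as the suffix:

# Hypothetical helper: split an LPDS-style filename into its entities.
# Not part of pelican_nlp; shown only to illustrate the naming scheme.
from pathlib import Path

def parse_lpds_filename(filename: str) -> dict:
    path = Path(filename)
    entities, suffix = {}, None
    for element in path.stem.split("_"):
        if "-" in element:
            key, value = element.split("-", 1)
            entities[key] = value
        else:
            suffix = element  # trailing element without a key is the suffix
    return {"entities": entities, "suffix": suffix, "extension": path.suffix.lstrip(".")}

print(parse_lpds_filename("part-01_ses-01_task-fluency_cat-semantic_acq-baseline_transcript.txt"))
# {'entities': {'part': '01', 'ses': '01', 'task': 'fluency', 'cat': 'semantic', 'acq': 'baseline'},
#  'suffix': 'transcript', 'extension': 'txt'}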

Example Project Structure

my_project/
├── participants/
│   ├── part-01/
│   │   └── ses-01/
│   │       └── interview/
│   │           └── part-01_ses-01_task-interview_transcript.txt
│   └── part-02/
│       └── fluency/
│           └── part-02_task-fluency_audio.wav
├── configuration.yml
├── dataset_description.json
├── participants.tsv
└── README.md
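To bootstrap this layout programmatically (a convenience sketch, not a pelican_nlp command), the folders and top-level files can be created with pathlib:

# Create the example LPDS layout shown above (sketch only).
from pathlib import Path

root = Path("my_project")
task_dirs = [
    root / "participants" / "part-01" / "ses-01" / "interview",
    root / "participants" / "part-02" / "fluency",
]
for task_dir in task_dirs:
    task_dir.mkdir(parents=True, exist_ok=True)

for name in ["configuration.yml", "dataset_description.json", "participants.tsv", "README.md"]:
    (root / name).touch()  # top-level files expected at the project root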

Features

  • Feature 1: Cleaning text files
    • Handles whitespace, timestamps, punctuation, special characters, and case sensitivity.

  • Feature 2: Linguistic Feature Extraction
    • Extracts semantic embeddings, logits, distance from optimality, and semantic similarity.

  • Feature 3: Acoustic Feature Extraction
    • Extracts prosogram and openSMILE features (see the sketch after this list).
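The snippet below illustrates generic openSMILE feature extraction with the opensmile Python package; it shows the kind of acoustic features involved but is not pelican_nlp's internal call, and the feature set, feature level, and audio path are assumptions made for the example.

# Generic openSMILE extraction with the opensmile Python package (sketch only).
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,        # example acoustic feature set
    feature_level=opensmile.FeatureLevel.Functionals,   # one row of summary features per file
)
features = smile.process_file("part-02_task-fluency_audio.wav")  # placeholder audio path
print(features.shape)  # (1, 88) for eGeMAPSv02 functionals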

Examples

You can find example setups in the examples folder of the GitHub repository: https://github.com/ypauli/pelican_nlp/tree/main/examples

Contributing

Contributions are welcome! Please check out the contributing guide.

License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). See the LICENSE file for details.
