
Decomposition of Neurophysiological Time Series Signals with a Particle Swarm Optimised Independence Estimator


Swarm-Contrastive Decomposition 🧠

PyPI · Python 3.10+ · License: CC BY-NC 4.0

A Python package for decomposition of neurophysiological time series signals using a Particle Swarm Optimised Independence Estimator for Blind Source Separation.

(Figure: decomposition pipeline overview)

Table of Contents 📚

  • Installation 🛠️
  • Quick Start 🚀
  • Usage
  • Configuration ⚙️
  • Test Data 🧪
  • Contributing 🤝
  • License 📜
  • Citation
  • Contact

Installation 🛠️

From PyPI (Recommended)

pip install swarm-contrastive-decomposition

From GitHub (Latest Development Version)

pip install git+https://github.com/AgneGris/swarm-contrastive-decomposition.git

From Source

git clone https://github.com/AgneGris/swarm-contrastive-decomposition
cd swarm-contrastive-decomposition
pip install -e .

Verify Installation

python -c "import scd; print(f'SCD version: {scd.__version__}')"

Quick Start 🚀

import scd

# Train with default configuration
dictionary, timestamps = scd.train("data/input/emg.npy")

# Save results
scd.save_results("data/output/emg.pkl", dictionary)
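The saved file can be reloaded later for analysis. A minimal sketch, assuming save_results writes a standard pickle (the dictionary structure shown is a placeholder, not the real output format):

```python
import pickle

# Placeholder structure: one motor unit with a few spike times.
# The actual contents of `dictionary` depend on your scd version.
dictionary = {"MU1": [120, 540, 1010]}

# Round-trip through a pickle file, as scd.save_results is assumed to do
with open("emg.pkl", "wb") as f:
    pickle.dump(dictionary, f)

with open("emg.pkl", "rb") as f:
    loaded = pickle.load(f)

assert loaded == dictionary
```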

Usage

Basic Usage

import scd

# Use a predefined configuration
dictionary, timestamps = scd.train(
    "path/to/your/data.mat",
    config_name="surface"  # or "default", "intramuscular"
)

scd.save_results("output.pkl", dictionary)

With Configuration Overrides

import scd

# Override specific parameters
dictionary, timestamps = scd.train(
    "data/input/emg.npy",
    config_name="surface",
    max_iterations=100,  # override for quick testing
    output_final_source_plot=True
)

Step-by-Step Control

import scd

# Load configuration
config = scd.load_config("surface")

# Load data
neural_data = scd.load_data("data/input/emg.npy", device=config.device)

# Preprocess
neural_data = scd.preprocess_data(neural_data, config)

# Train model
dictionary, timestamps = scd.train_model(neural_data, config)

# Save results
scd.save_results("output.pkl", dictionary)

Supported Data Formats

  • .mat — MATLAB files (specify the variable name with the key parameter)
  • .npy — NumPy arrays

# For .mat files with a custom variable name
dictionary, timestamps = scd.train("data.mat", key="emg_data")

# For .npy files
dictionary, timestamps = scd.train("data.npy")

Data should have shape (time, channels) or (channels, time) — the loader will automatically transpose if needed.
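As an illustration, a synthetic recording can be saved in either orientation; the channel count and signal values below are placeholders:

```python
import numpy as np

# Synthetic multichannel signal: 40 channels, 1 s at 10240 Hz,
# stored as (channels, time) — (time, channels) would work equally well,
# since the loader transposes to a consistent internal orientation
fs = 10240
n_channels = 40
data = np.random.randn(n_channels, fs)

np.save("synthetic_emg.npy", data)
print(data.shape)  # (40, 10240)
```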

Configuration ⚙️

Configurations are defined in scd/configs.json. Available presets:

| Config Name | Use Case | Sampling Rate | Description |
|---|---|---|---|
| default | General purpose | 10240 Hz | Balanced settings for most EMG data |
| surface | Surface EMG | 10240 Hz | Optimized for surface recordings |
| intramuscular | Intramuscular EMG | 10240 Hz | Higher iterations for fine-wire recordings |

Configuration Parameters

| Parameter | Description | Default |
|---|---|---|
| device | "cuda" for GPU or "cpu" | "cuda" |
| acceptance_silhouette | Quality threshold for source acceptance | 0.85 |
| extension_factor | Typically 1000 / num_channels; higher values may improve results | 25 |
| low_pass_cutoff | Low-pass filter cutoff frequency (Hz) | 4400 |
| high_pass_cutoff | High-pass filter cutoff frequency (Hz) | 10 |
| sampling_frequency | Sampling frequency of your signal (Hz) | 10240 |
| start_time | Start time for signal trimming (s); use 0 for the beginning | 0 |
| end_time | End time for signal trimming (s); use -1 for the entire signal | -1 |
| max_iterations | Maximum decomposition iterations | 200 |
| peel_off_window_size_ms | Window size for the spike-triggered average (ms) | 20 |
| output_final_source_plot | Generate a plot of the final sources | false |
| use_coeff_var_fitness | Use coefficient-of-variation fitness; true for EMG, false for intracortical | true |
| remove_bad_fr | Filter out sources with firing rates < 2 Hz or > 100 Hz | true |
| clamp_percentile | Percentile for amplitude clamping | 0.999 |
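The extension_factor rule of thumb from the table can be computed directly. For instance, with a 40-channel grid:

```python
# Rule of thumb: extension_factor ≈ 1000 / num_channels
num_channels = 40
extension_factor = round(1000 / num_channels)
print(extension_factor)  # 25, matching the default
```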

Custom Configuration

Add your own configuration to scd/configs.json:

{
    "my_experiment": {
        "device": "cuda",
        "acceptance_silhouette": 0.80,
        "extension_factor": 30,
        "sampling_frequency": 2048,
        ...
    }
}

Then use it:

dictionary, timestamps = scd.train("data.mat", config_name="my_experiment")
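The preset file can also be written programmatically with the standard library's json module. This sketch mirrors the entry above; note that a real preset must include every parameter listed in the configuration table, not just the four shown here:

```python
import json

# A partial custom preset, following the configs.json layout shown above
configs = {
    "my_experiment": {
        "device": "cuda",
        "acceptance_silhouette": 0.80,
        "extension_factor": 30,
        "sampling_frequency": 2048,
    }
}

# Write to a local file; merge into scd/configs.json to register the preset
with open("configs.json", "w") as f:
    json.dump(configs, f, indent=4)

# Verify the round trip
with open("configs.json") as f:
    assert json.load(f)["my_experiment"]["sampling_frequency"] == 2048
```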

Test Data 🧪

The repository includes test data to verify your installation:

  • File: data/input/emg.npy
  • Type: Surface EMG
  • Sampling rate: 10240 Hz
  • Configuration: Use the "surface" config

import scd

# Run with test data
dictionary, timestamps = scd.train(
    "data/input/emg.npy",
    config_name="surface"
)

print(f"Found {len(dictionary)} motor units")
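Assuming the returned timestamps are spike times in samples at the configured sampling rate (an assumption — check this against your scd version), a mean firing rate can be estimated per unit, which is also what the remove_bad_fr filter (2–100 Hz) operates on:

```python
import numpy as np

# Hypothetical spike times for one motor unit, in samples at 10240 Hz
fs = 10240
spikes = np.array([1024, 6144, 11264, 16384])

# Interspike-interval-based estimate of the mean firing rate
duration_s = (spikes[-1] - spikes[0]) / fs
mean_rate_hz = (len(spikes) - 1) / duration_s
print(f"{mean_rate_hz:.1f} Hz")  # 2.0 Hz
```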

Contributing 🤝

We welcome contributions! To get started:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/newfeature)
  3. Commit your changes (git commit -m 'Add some newfeature')
  4. Push to the branch (git push origin feature/newfeature)
  5. Open a pull request

License 📜

This project is licensed under the CC BY-NC 4.0 License.

Citation

If you use this code in your research, please cite our paper:

@article{grison2024particle,
  author={Grison, Agnese and Clarke, Alexander Kenneth and Muceli, Silvia and Ibáñez, Jaime and Kundu, Aritra and Farina, Dario},
  journal={IEEE Transactions on Biomedical Engineering}, 
  title={A Particle Swarm Optimised Independence Estimator for Blind Source Separation of Neurophysiological Time Series}, 
  year={2024},
  volume={},
  number={},
  pages={1-11},
  doi={10.1109/TBME.2024.3446806},
  keywords={Recording; Time series analysis; Sorting; Vectors; Measurement; Electrodes; Probes; Independent component analysis; particle swarm optimisation; blind source separation; intramuscular electromyography; intracortical recording}
}

Contact

For questions or inquiries:

Agnese Grison
📧 agnese.grison16@imperial.ac.uk
