A python package for the curation and interpretation of dielectric barrier discharge ionisation mass spectrometric datasets.

DBDIpy (Version 0.8.3)

DBDIpy is an open-source Python library for the curation and interpretation of dielectric barrier discharge ionisation mass spectrometric datasets.

tl;dr

  1. Installation
  2. User Tutorial

Introduction

Mass spectrometric data from direct injection analysis are hard to interpret, as the lack of chromatographic separation complicates the identification of fragments and adducts generated during the ionization process.

Here we present an in-silico approach to putatively identify multiple ion species arising from one analyte compound, specially tailored for time-resolved datasets from dielectric barrier discharge ionization (DBDI). DBDI is a relatively young technology which is rapidly gaining popularity in applications such as breath analysis, process control and food research.

DBDIpy's core functionality relies on the putative identification of in-source fragments (e.g. [M-H2O+H]+) and in-source generated adducts (e.g. [M+On+H]+). Custom adduct species can be defined by the user and passed to this open-search algorithm. The identification is performed in a two-step procedure:

  • calculation of pointwise correlation identifies features with matching temporal intensity profiles throughout the experiment.
  • (exact) mass differences are used to refine the nature of potential candidates.
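The two steps above can be sketched in plain pandas/NumPy. This is a hypothetical minimal illustration of the idea, not DBDIpy's actual implementation; all feature values below are made up:

```python
import pandas as pd

# Hypothetical XIC table: one row per feature, one column per scan;
# the index holds illustrative feature m/z values.
xics = pd.DataFrame(
    [[100, 200, 150, 50], [10, 20, 15, 5], [1, 9, 2, 8]],
    index=[100.0757, 116.0706, 130.0499],
)

# Step 1: pointwise correlation between all XIC pairs.
corr = xics.T.corr(method="spearman")

# Step 2: for highly correlated pairs, check whether the m/z difference
# matches a known adduct/fragment mass shift within a tolerance.
O_SHIFT = 15.994915  # exact mass of atomic oxygen
candidates = []
for i, mz_i in enumerate(xics.index):
    for j, mz_j in enumerate(xics.index):
        if i < j and corr.iloc[i, j] > 0.9:
            if abs(abs(mz_j - mz_i) - O_SHIFT) < 0.005:
                candidates.append((mz_i, mz_j))

print(candidates)  # only the first two features correlate AND differ by one O
```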

These putative identifications can then be further validated by the user, e.g. based on tandem MS fragment data.

DBDIpy also includes functions optimized for preprocessing of experimental data and for visualization of identified adducts. The library is integrated into the matchms ecosystem so that DBDIpy's functionality fits into existing workflows.

For details, we invite you to read the tutorial or to try out the functions with our demonstration dataset or your own data!

Badges

  • License: PyPI license
  • Status: test
  • Updated: GitHub latest commit
  • Language: made-with-python
  • Version: Python 3.7, 3.8, 3.9, 3.10
  • Operating Systems: macOS, Windows
  • Documentation: Documentation Status
  • Supporting Data: DOI
  • Further Reads: ResearchGate

Latest Changes (since 0.6.0)

  • updated descriptions.
  • improved help pages.
  • finished tutorial.

User guide

Installation

Prerequisites:

  • Anaconda (recommended)
  • Python 3.7, 3.8, 3.9 or 3.10

DBDIpy can be installed from PyPI with:

# we recommend installing DBDIpy in a new virtual environment
conda create --name DBDIpy python=3.9
conda activate DBDIpy
python3 -m pip install DBDIpy

Known installation issues: Apple M1 chip users might encounter issues with the automatic installation of matchms. Manual installation of the dependency as described on the library's official site resolves the issue.

Tutorial

The following tutorial showcases an ordinary data analysis workflow, going through all functions of DBDIpy from data loading through to visualization of correlation results. For this purpose, we supply a demo dataset which is publicly available here.

The demo data is from an experiment in which wheat bread was roasted for 20 min and monitored by DBDI coupled to FT-ICR-MS. It consists of 500 randomly selected features.

Fig.1 - Schematic DBDIpy workflow for in-source adduct and fragment detection: imported MS1 data are aligned, imputed and parsed to combined correlation and mass difference analysis.

1. Importing MS data

DBDIpy's core functions utilize 2D tabular data. Raw mass spectra containing m/z-intensity pairs first need to be aligned into a DataFrame of features. We build features using the align_spectra() function, which is the interface for loading data from open file formats such as .mgf, .mzML or .mzXML via matchms.importing.

If your data is already formatted accordingly, you can skip this step.

##loading libraries for the tutorial
import os
import feather
import numpy as np
import pandas as pd
import DBDIpy as dbdi
from matchms.importing import load_from_mgf
from matchms.exporting import save_as_mgf

##importing the downloaded .mgf files from demo data by matchms
demo_path = ""                                                #enter path to demo dataset
demo_mgf = os.path.join(demo_path, "example_dataset.mgf")
spectrums = list(load_from_mgf(demo_mgf))

##align the listed Spectra
specs_aligned = dbdi.align_spectra(spec = spectrums, ppm_window = 2) 

We first imported the demo MS1 data into a list of matchms.Spectrum objects. At this point you can run your personal matchms preprocessing pipelines or manually apply filters such as noise reduction. By applying align_spectra(), we transformed the list of spectrum objects into a two-dimensional pandas.DataFrame: each mass spectrometric scan occupies a column and features are aligned to rows. The first column holds the mean m/z of each feature. If a signal was not detected in a scan, the corresponding field is set to np.nan.

Remember to set the ppm_window parameter according to the resolution of your mass spectrometric system.
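As a rule of thumb, a ppm window translates into an absolute m/z tolerance of mz × ppm / 10⁶; a quick sketch:

```python
def ppm_tolerance(mz: float, ppm: float) -> float:
    """Absolute m/z tolerance corresponding to a ppm window."""
    return mz * ppm / 1e6

# At m/z 300, a 2 ppm window corresponds to +/- 0.0006 Da.
print(ppm_tolerance(300.0, 2.0))  # 0.0006
```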

We now can inspect the aligned data, e.g. by running:

specs_aligned.describe()
specs_aligned.info()

Likewise, specs_aligned.isnull().values.any() tells us whether there are missing values in the data. These cannot be handled by subsequent DBDIpy functions or by most machine learning algorithms, so we need to impute them.

2. Imputation of missing values

impute_intensities() ensures that after imputation we have a set of uniform-length extracted ion chromatograms (XICs) in our DataFrame. This is an important prerequisite for pointwise correlation calculation and for many tools handling time series data.

Missing values in our feature table will be imputed by a two-stage imputation algorithm.

  • First, missing values within the detected signal region are interpolated.
  • Second, a noisy baseline is generated so that all XICs are of uniform length, namely the length of the longest XIC in the dataset.

The function lets the user decide which imputation method to use. The default is linear, but several others are available.
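The two-stage idea can be illustrated with plain pandas. This is a hedged sketch, not DBDIpy's internal code; the baseline level is made up:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# One XIC with internal gaps (NaN) and missing scans at the edges.
xic = pd.Series([np.nan, 5.0, np.nan, 9.0, np.nan, np.nan])

# Stage 1: interpolate gaps inside the detected signal region only.
filled = xic.interpolate(method="linear", limit_area="inside")

# Stage 2: replace remaining edge NaNs with a low noisy baseline so all
# XICs reach uniform length. The baseline range here is illustrative.
baseline = pd.Series(rng.uniform(0.0, 1.0, size=len(xic)))
imputed = filled.fillna(baseline)

print(imputed.isna().any())  # False
```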

feature_mz = specs_aligned["mean"]
specs_aligned = specs_aligned.drop("mean", axis = 1)

##impute the dataset
specs_imputed = dbdi.impute_intensities(df = specs_aligned, method = "linear")

Now specs_imputed no longer contains any missing values and is ready for adduct and in-source fragment detection.

##check if NaN are present in DataFrame
specs_imputed.isnull().values.any()
Out[]: False

3. Detection of adducts and in-source fragments

Based on specs_imputed, we compute the pointwise correlation of XIC traces to identify in-source adducts or in-source fragments generated during the DBD ionization process. The identification is performed in a two-step procedure:

  • First, calculation of pointwise intensity correlation identifies feature groups with matching temporal intensity profiles throughout the experiment.
  • Second, (exact) mass differences are used to refine the nature of potential candidates.

By default, identify_adducts() searches for [M-H2O+H]+, [M+O1+H]+ and [M+O2+H]+. For demonstration purposes we also want to search for [M+O3+H]+ in this example. Note that identify_adducts() has a variety of other parameters which allow extensive customization. See the functions' help pages for details.
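The deltamz value for a custom adduct can be derived from exact atomic masses; for three oxygen atoms:

```python
O_MASS = 15.9949146221  # exact mass of atomic oxygen (u)
delta_O3 = 3 * O_MASS
print(round(delta_O3, 6))  # 47.984744
```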

##prepare a DataFrame to search for O3-adducts
adduct_rule = pd.DataFrame({'deltamz': [47.984744],'motive': ["O3"]})

##identify in-source fragments and adducts
search_res = dbdi.identify_adducts(df = specs_imputed, masses = feature_mz, custom_adducts = adduct_rule,
                                   method = "spearman", threshold = 0.9, mass_error = 2)

The function will return a dictionary holding one DataFrame for each adduct type that was defined. A typical output looks like the following:

##output search results
search_res
Out[24]: 
{'O':   base_mz    base_index  match_mz  match_index    mzdiff      corr
 19     215.11789          24  231.11280        ID40  15.99491  0.963228
 310    224.10699          33  240.10191        ID51  15.99492  0.939139
 605    231.11280          39  215.11789        ID25  15.99491  0.963228
 1413   240.10191          50  224.10699        ID34  15.99492  0.939139
 1668   244.13321          55  260.12812        ID67  15.99491  0.976541,
                                 ...
 'O2':  base_mz    base_index  match_mz  match_index    mzdiff      corr
 1437   240.10191          50  272.09174        ID77  31.98983  0.988866
 1677   244.13321          55  276.12304        ID84  31.98983  0.972251
 2362   260.12812          66  292.11795       ID100  31.98983  0.964096
 3024   272.09174          76  240.10191        ID51  31.98983  0.988866
 3354   276.12304          83  244.13321        ID56  31.98983  0.972251,
                                 ...
 'H2O': base_mz    base_index  match_mz  match_index    mzdiff      corr
 621    231.11280          39  249.12337        ID60  18.01057  0.933640
 1883   249.12337          59  231.11280        ID40  18.01057  0.933640
 3263   275.13902          82  293.14958       ID102  18.01056  0.948774
 4775   293.14958         101  275.13902        ID83  18.01056  0.948774
 5573   300.08665         112  318.09722       ID140  18.01057  0.905907
                                  ...
 'O3':  base_mz    base_index  match_mz  match_index    mzdiff      corr
 320    224.10699          33  272.09174        ID77  47.98475  0.924362
 1688   244.13321          55  292.11795       ID100  47.98474  0.964896
 3013   272.09174          76  224.10699        ID34  47.98475  0.924362
 4631   292.11795          99  244.13321        ID56  47.98474  0.964896
 13597  438.28502         308  486.26976       ID356  47.98474  0.935359
                                  ...

The base_mz and base_index columns give the m/z and index of a feature that correlates with the partner specified in match_mz and match_index. The mass difference between the two is given for validation purposes, along with the correlation coefficient between both features.
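Since each entry of the returned dictionary is a plain pandas.DataFrame, standard filtering applies; for example, to keep only matches above a stricter (illustrative) correlation threshold:

```python
import pandas as pd

# Illustrative slice of a result table in the format shown above.
res_O = pd.DataFrame({
    "base_mz": [215.11789, 224.10699],
    "base_index": [24, 33],
    "match_mz": [231.11280, 240.10191],
    "match_index": ["ID40", "ID51"],
    "mzdiff": [15.99491, 15.99492],
    "corr": [0.963228, 0.939139],
})

# Keep only very strong correlations and sort by coefficient.
strong = res_O[res_O["corr"] >= 0.95].sort_values("corr", ascending=False)
print(strong["base_index"].tolist())  # [24]
```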

Now we can, for example, search for series of oxygen adducts of a single analyte:

##search for oxygenation series
two_adducts = np.intersect1d(search_res["O"]["base_index"], search_res["O2"]["base_index"])
three_adducts = np.intersect1d(two_adducts , search_res["O3"]["base_index"])

three_adducts
Out[33]: array([55, 99], dtype=int64)

This tells us that features 55 and 99 both putatively form [M+O1-3+H]+ adduct ions with correlation coefficients > 0.9 in our dataset. Let's visualize this finding!
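The integer indices returned by the intersection can be mapped back to feature m/z values via the feature_mz series we set aside in step 2 (shown here with illustrative stand-in values):

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for the feature_mz series saved in step 2.
feature_mz = pd.Series({55: 244.13321, 66: 260.12812, 99: 292.11795})
three_adducts = np.array([55, 99])

print(feature_mz.loc[three_adducts].tolist())  # [244.13321, 292.11795]
```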

4. Visualization of correlation results

Now that we have putatively identified some related ions of a single analyte, we want to check their temporal response during the baking experiment. To do so, we use the plot_adducts() function to conveniently draw XICs. The demo dataset even comes with annotated metadata for our features, so we can decorate the plot and check our previous results!

##load annotation metadata
demo_path = ""                                                     #enter path to demo dataset
demo_meta = os.path.join(demo_path, "example_metadata.feather")
annotation_metadata = feather.read_dataframe(demo_meta)

##plot the XIC
dbdi.plot_adducts(IDs = [55,66,83,99], df = specs_imputed, metadata = annotation_metadata, transform = True)

Fig.2 - XIC plots for features 55, 66, 83 and 99 which have highly correlated intensity profile through the baking experiment.

We see that the XIC traces show similar intensity profiles throughout the experiment. The plot also reports the correlation coefficients of the identified adducts. From the metadata we can see that the detected mass signals were previously annotated as C15H17O2-5N, which tells us that we most probably found an oxygen-adduct series.

If MS2 data were recorded during the experiment, we can now go on and compare fragment spectra to confirm the identifications. You might find ms2deepscore a useful library for doing so in an automated way.

5. Exporting tabular MS data to matchms.Spectrum objects

If you want to export your (imputed) tabular data to matchms.Spectrum objects, you can do so by calling the export_to_spectra() function. We just need to re-add a column containing the m/z values of the features. This gives you access to the matchms suite and enables you to save your mass spectrometric data to open file formats. Hint: you can manually add metadata after constructing the list of spectra.

##export tabular MS data back to a list of spectra.
specs_imputed["mean"] = feature_mz

speclist = dbdi.export_to_spectra(df = specs_imputed, mzcol = 88)

##write processed data to .mgf file
save_as_mgf(speclist, "DBDIpy_processed_spectra.mgf")

We hope you enjoyed this quick introduction to DBDIpy and will find its functions helpful and inspiring when working through data from direct infusion mass spectrometry. Of course, the functions are applicable to all sorts of ionisation mechanisms, and you can modify the set of adducts to search in accordance with your source.

If you have open questions about the functions, their parameters or the algorithms, we invite you to read through the built-in help files. If that does not clarify the issue, please do not hesitate to get in touch with us!

Contact

leopold.weidner@tum.de
