
SpectraMap (SpMap): Hyperspectral package for spectroscopists in Python

Hyperspectral imaging has important current applications in medicine, agriculture, pharmaceuticals, space, food science and many emerging fields. Analyzing hyperspectral images requires advanced software, and upcoming developments in fast hyperspectral imaging, automation and deep learning demand innovative software for analyzing hyperspectral data. Figure 1 shows hyperspectral imaging with a standard spectrometer instrument. More information on novel medical imaging can be found in advances in imaging.

Figure 1 Raman Imaging system

Features

The package includes standard tools for reading, preprocessing, processing and visualizing hyperspectral data. The design focuses on hyperspectral images from Raman datasets, but the package extends to other spectroscopies as long as the data follows the expected data structure. Some features are shown in the figures below, and a short usage sketch follows the list.

  • Preprocessing: tools such as smoothing, spike removal, normalization and advanced baseline corrections are included. Figure 2 illustrates the mean and standard deviation of a tissue Raman signature.

Figure 2 Visualization of tissue Raman signature

  • Processing: tools such as unmixing, PCA, PLS, VCA, and hierarchical and k-means clustering are included. Figure 3 displays the application of clustering for locating microplastics in complex matrices.

Figure 3 Segmentation by clustering: (a) clustering, (b) image, (c) concentration map and (d) mean clusters

  • Visualization: the next example shows the PCA scores of several biomolecules.

Figure 4 PCA scores
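
As a quick orientation, the following minimal sketch chains several of these tools into one pipeline. It only uses functions that appear in the examples further down this page, and the file name my_map stands for a hypothetical compressed csv dataset in the hyper_object structure:

from spectramap import spmap as sp # reading spectramap library
sample = sp.hyper_object('sample') # creating the hyper_object
sample.read_csv_xz('my_map') # hypothetical compressed csv dataset
sample.keep(400, 1800) # keeping the fingerprint region
sample.rubber() # rubber-band baseline correction
sample.gol(15, 3, 0) # Savitzky-Golay smoothing
sample.vector() # vector normalization
sample.kmeans(3) # k-means clustering into 3 clusters
sample.show_stack(0.2, 0, 'auto') # stacked visualization of the labelled spectra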

Upcoming developments:

  • Graphical User Interface
  • Supervised tools
  • Deep learning - CNN
  • Optimizing speed and organizing main code
  • More examples

Installation

The default working environment is Spyder. Install the full Anaconda distribution (see the link: Anaconda). The library comes with four different hyperspectral examples and analyses. A manual (Manual) presents the relevant functions and examples.

Install the library (admin rights may be required):

pip install spectramap
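
A quick way to check the installation (in Spyder or any Python console) is to import the package and create an empty hyper_object:

from spectramap import spmap as sp # import the spectramap package
test = sp.hyper_object('test') # creating a hyper_object without errors confirms the installation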

Examples

Reading and processing a spc file

In the examples folder, there is a ps.spc file for this example. The next lines show some basic tools. The function read_single_spc reads the file from its path.

from spectramap import spmap as sp #reading spmap
pigm = sp.hyper_object('pigment') #creating the hyperobject
pigm.read_single_spc('pigment') #reading the spc file
pigm.keep(400, 1800) #Keeping fingerprint region
pigm_original = pigm.copy() #Copying hyperobject
pigm_original.set_label('original') #renaming hyperobject to original
pigm.set_label('processed') #renaming hyperobject to processed

pigm.rubber() #basic baseline correction rubber band
pigm.gol(15, 3, 0) #savitzky-golay filter
both = sp.hyper_object('result') #creating an auxiliary hyper_object
both.concat([pigm_original, pigm]) #concatenating the original and processed data
both.show(False) #show both spectra 

both.show_stack(0.2, 0, 'auto') #advanced stack visualization 

Figure 6 Stack visualization of the original and processed spectra

Reading and processing a comma-separated values (CSV) file with depth profiling

In the examples folder, there is a layers.csv.xz file for this example. The next lines show some basic tools. The function read_csv_xz requires the path of the file, and the csv file must keep the structure described in the manual (hyperspectral object). The example shows how to analyze the data of spectroscopic depth profiles.

from spectramap import spmap as sp # reading spectramap library
stack = sp.hyper_object('plastics') # creating the hyper_object
stack.read_csv_xz('layers') # reading compressed csv of plastics profile
stack.keep(500, 1800) # keeping fingerprint region
stack.rubber() # baseline correction by rubber band
stack.vector() # vector normalization
endmember = stack.vca(6) # VCA unmixing with 6 endmembers
endmember.show_stack(0.2, 0, 'auto') # advanced stack plot of endmembers 

abundance = stack.abundance(endmember, 'NNLS') # estimation of concentrations by NNLS
abundance.set_resolution(0.01) # setting the step size resolution
abundance.show_profile('auto') # plotting spectral profile 

Processing hyperspectral images by VCA and Clustering

Coming soon. For now, check the manual.
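
Until that example is published, the sketch below outlines how such a workflow might look. It is only an illustration that reuses functions shown in the other examples of this README; the file name map is a hypothetical compressed csv of a hyperspectral image:

from spectramap import spmap as sp # reading spectramap library
img = sp.hyper_object('image') # creating the hyper_object
img.read_csv_xz('map') # hypothetical compressed csv of a hyperspectral image
img.keep(400, 1800) # keeping the fingerprint region
img.rubber() # rubber-band baseline correction
img.vector() # vector normalization
endmember = img.vca(4) # VCA unmixing with 4 endmembers (choose per dataset)
endmember.show_stack(0.2, 0, 'auto') # stacked plot of the endmember spectra
abundance = img.abundance(endmember, 'NNLS') # abundance estimation by NNLS
img.kmeans(4) # k-means clustering into 4 clusters
img.show_stack(0.2, 0, 'auto') # stacked plot of the cluster mean spectra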

Processing plastics hyperspectral data by PCA and PLS-LDA

In the examples folder, there is a layers.csv.xz file for this example. The next processing steps compute unsupervised principal component analysis (PCA) and supervised partial least squares + linear discriminant analysis (PLS-LDA). The scatter plots show the separation of the plastics: red, light_blue and blue are the most distinct ones.

from spectramap import spmap as sp # reading spectramap library
sample = sp.hyper_object("sample") # creating hyper_object
sample.read_csv_xz("layers") # reading compressed csv of plastics profile
sample.remove(1800, 2700) # removing the silent region
sample.keep(400, 3300) # keeping the fingerprint and high-wavenumber regions
sample.gaussian(2) # applying a gaussian filter
sample.rubber() # rubber baseline correction
sample.kmeans(2) # kmeans 2 clusters
sample.rename_label([1, 2], ["first", "second"]) # rename labels
sub_label = sample.get_label()  # saving sub_labels
sub_label.name = "sub_label" # renaming the title of sub_label
sample.show_stack(0,0, "auto") # showing a stack

sample.kmeans(6) # kmeans clustering example for main_label
main_label = sample.get_label() # saving the main_label
main_label.name = "main_label" # renaming the title of the label
sample.show_stack(0,0, "auto") # showing the 6 components

scores_pca, loadings_pca = sample.pca(3, False) # 3 components pca
scores_pca.show_scatter("auto", main_label, sub_label, 15) # showing scatter with sublabel

scores_pls, loadings_pls = sample.pls_lda(3, False, 0.7) # 3 components pls-lda  and 70% training data
scores_pls.show_scatter("auto", main_label, sub_label, 15) # showing scatter with sub_label

The next figures show the precision, recall (sensitivity), f1-score (the harmonic mean of precision and recall) and support for the 6 components, together with the accuracy and average accuracy.
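
For reference, the same kind of report can be reproduced from true and predicted labels with scikit-learn; the snippet below is purely illustrative (the labels are made up) and is not part of the spectramap API:

from sklearn.metrics import classification_report # classification metrics

y_true = ['PE', 'PP', 'PS', 'PE', 'PP', 'PS'] # hypothetical true plastic labels
y_pred = ['PE', 'PP', 'PE', 'PE', 'PP', 'PS'] # hypothetical predicted labels
print(classification_report(y_true, y_pred)) # precision, recall, f1-score and support per class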

Raman wavenumber calibration with paracetamol

Reproducibility and replicability are fundamental for Raman spectroscopy. One common approach to wavenumber-axis calibration is discussed in this section. The requirements are a paracetamol sample (powder), a calibration file with well-characterized peak positions, and a polynomial regression.

from spectramap import spmap as sp # reading spectramap library
import pandas as pd
import numpy as np

### Paracetamol 
path = 'para.csv' # path of the paracetamol data
table = pd.read_table(path, sep = ',', header = None) # read data
table['label'] = "Para" # create label
table[['x', 'y']] = np.zeros((20,2)) # create placeholder x, y positions
### Processing
mp = sp.hyper_object("Para") # creation of hyper object
mp.set_data(table.iloc[:,:len(table.columns)-3]) # reading the intensity 
mp.set_position(table[['x', 'y']]) # reading positions
mp.set_label(pd.Series(table['label'])) # reading labeling
copy = mp.copy() # copy data
peaks = copy.calibration_peaks(mp, 0.05) # finding peaks of para (next plot)

copy.calibration_regression(peaks) # determining regression for the calibration

mp.set_wavenumber(copy.get_wavenumber()) # set the new wavenumber to the original mp
mp.show(True) # show calibrated data
mp.add_peaks(0.1, 'r') # add peaks (not inline mode)
mp.save_data("", "calibration") # save calibrated data

Processing hyperspectral images from biological tissue

Coming soon. For now, check the manual.

Raman Intensity Calibration

The next lines show how to calibrate the intensity axis in Raman spectroscopy. A standard spectrum of a halogen lamp and the experimental measurement of the same lamp with the Raman instrument are required.

from spectramap import spmap as sp # reading spectramap package
path = "" # directory containing the lamp and sample files (set to your own path)
reference_trial = sp.hyper_object("reference") # creating reference hyper object
reference_trial.read_single_spc(path + "reference") # reading the reference spectrum of the standard lamp
reference_trial.show(True) # showing the spectrum in the next plot

Now the experimental spectrum.

measured_trial = sp.hyper_object("measured") # creating hyper object
measured_trial.read_single_spc(path + "lamp") # reading data
measured_trial.keep(400, 1900) # keeping the fingerprint region
measured_trial.show(True) # showing the plot as the next figures shows

Reading the Raman sample.

sample = sp.hyper_object("sample") # declaring the hyper object
sample.read_single_spc(path + "sample") # reading tissue data
sample.keep(400, 1900) # keeping the fingerprint region
sample.show(True) # showing plot in the next figure

Calibration of the Raman sample.

sample.intensity_calibration(reference_trial, measured_trial) # intensity calibration function
sample.show(True) # showing the calibrated data in the next figure

Working Team

License

MIT

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

