Hyperspectral data analysis and machine learning
hypers
Provides an object model for hyperspectral data.
 Simple tools for exploratory analysis of hyperspectral data
 Interactive hyperspectral viewer built into the object
 Allows unsupervised machine learning directly on the object (using scikit-learn)
 More features coming soon...
About
This package provides an object model for hyperspectral data (similar to what pandas provides for tabular data). Many commonly used tools are built into the object, including a lightweight interactive GUI for visualizing the data. Importantly, the object also interfaces with scikit-learn, allowing the clustering and decomposition classes (e.g. PCA, ICA, KMeans) to be used directly on the object.

Dataset object (hypers.Dataset)
This class forms the core of hypers. It provides useful information about the hyperspectral data and makes machine learning on the data simple.

Interactive hyperspectral viewer
A lightweight PyQt GUI that provides an interactive interface for viewing the hyperspectral data.
Please note that this package is currently in pre-release. The first general release will be v0.1.0.
Hyperspectral data
Whilst this package is designed to work with any type of hyperspectral data, in the form of either a 3-dimensional array (two spatial dimensions plus one spectral dimension) or a 4-dimensional array (three spatial dimensions plus one spectral dimension), some of the features are particularly useful for vibrational-scattering-related hyperspectral data (e.g. Raman microspectroscopy), such as the spectral component of the hyperspectral viewer.
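As a concrete illustration of these two layouts in plain NumPy (independent of the package itself; the specific shapes below are made up for illustration, inferred from the 4-D example later in this README):

```python
import numpy as np

# 3-dimensional case: two spatial dimensions (x, y) plus one
# spectral dimension (s), e.g. a single image plane.
data_3d = np.random.rand(64, 64, 512)

# 4-dimensional case: three spatial dimensions (x, y, z) plus one
# spectral dimension (s), e.g. a depth-resolved scan.
data_4d = np.random.rand(64, 64, 8, 512)

# In both cases the spectral axis is last, so indexing the spatial
# coordinates recovers a single spectrum.
spectrum = data_3d[10, 20]
print(spectrum.shape)  # (512,)
```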
Installation
To install using pip:
pip install hypers
The following packages are required:
 numpy
 matplotlib
 scipy
 scikit-learn
 PyQt5
 pyqtgraph
Features
Features implemented in hypers include:
 Clustering (e.g. KMeans, Spectral clustering, Hierarchical clustering)
 Decomposition (e.g. PCA, ICA, NMF)
 Hyperspectral viewer
Examples
Hyperspectral dimensionality reduction and clustering
Below is a quick example of using some of the features of the package on a randomized hyperspectral array. For an example using the Indian Pines dataset, see the Jupyter notebook in the examples/ directory.
import numpy as np
import hypers as hp
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Generate a random 4d dataset and create a Dataset instance.
# The test dataset here has spatial dimensions (x=200, y=200, z=10)
# and spectral dimension (s=1024).
test_data = np.random.rand(200, 200, 10, 1024)
X = hp.Dataset(test_data)
X.scale()

# Use principal component analysis to reduce to the first 5 components.
# The variables ims, spcs are arrays of the first 5 principal components
# for the images and spectra respectively.
ims, spcs = X.decompose(
    mdl=PCA(n_components=5),
    plot=False,
    return_arrs=True
)

# Cluster using KMeans (with and without applying PCA first).
# The cluster method returns the labelled image array and the spectrum
# for each cluster.
lbls_nodecompose, spcs_nodecompose = X.cluster(
    mdl=KMeans(n_clusters=3),
    decomposed=False,
    plot=False,
    return_arrs=True
)

# Cluster on only the first 5 principal components.
lbls_decomposed, spcs_decomposed = X.cluster(
    mdl=KMeans(n_clusters=3),
    decomposed=True,
    pca_comps=5,
    plot=False,
    return_arrs=True
)
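For intuition, the decomposition step above amounts to flattening the spatial axes into one long "pixels" axis and factorising the resulting 2-D matrix. A rough sketch of that idea in plain NumPy, using SVD-based PCA rather than the package's API (shapes are illustrative, not tied to the example above):

```python
import numpy as np

# Small illustrative dataset: (x=20, y=20, z=2) spatial, s=64 spectral.
data = np.random.rand(20, 20, 2, 64)

# Flatten all spatial dimensions into a single "pixels" axis.
n_pixels = 20 * 20 * 2
flat = data.reshape(n_pixels, 64)

# Mean-centre each spectral channel, then take the SVD; the leading
# right-singular vectors are the principal component spectra, and the
# projections onto them are the per-pixel component "images".
centred = flat - flat.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

n_components = 5
spectra = Vt[:n_components]               # (5, 64) component spectra
images = centred @ Vt[:n_components].T    # (800, 5) scores per pixel
images = images.reshape(20, 20, 2, n_components)

print(spectra.shape, images.shape)
```

This is only a conceptual sketch of PCA on hyperspectral data; the package handles the reshaping and hands the flattened matrix to whichever scikit-learn model you pass in.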
Documentation
The docs are hosted here.
License
hypers is licensed under the OSI-approved BSD 3-Clause License.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Filename (size)                             File type   Python version
hypers-0.0.11-py3-none-any.whl (19.0 kB)    Wheel       py3
hypers-0.0.11.tar.gz (13.5 kB)              Source      None