
Hyperspectral data analysis and machine learning

Project description

hypers


hypers provides a data structure in Python for hyperspectral data. The data structure includes:

  • Tools for processing and exploratory analysis of hyperspectral data
  • An interactive hyperspectral viewer (built with PyQt) that can be accessed as a method on the object
  • Support for unsupervised machine learning directly on the object

The data structure is built on top of the numpy ndarray; the package simply adds functionality that allows for quick analysis of hyperspectral data. Importantly, this means the object can still be used as a normal numpy array.
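The "still a normal numpy array" behaviour follows the standard numpy subclassing pattern. Below is a minimal sketch of that mechanism, not hypers' actual implementation; the `HyperArray` name and `mean_spectrum` method are hypothetical, for illustration only:

```python
import numpy as np

class HyperArray(np.ndarray):
    """Minimal ndarray subclass: extra methods, but still a normal array."""

    def __new__(cls, input_array):
        # View-cast the input so no data is copied
        return np.asarray(input_array).view(cls)

    def mean_spectrum(self):
        # Flatten the spatial axes and average, keeping the last (spectral) axis
        return np.asarray(self).reshape(-1, self.shape[-1]).mean(axis=0)

# Behaves like a plain numpy array: slicing, ufuncs, np.* functions all work
cube = HyperArray(np.random.rand(4, 4, 16))
print(isinstance(cube, np.ndarray))   # True
print(cube.mean_spectrum().shape)     # (16,)
print((cube * 2).shape)               # (4, 4, 16)
```

Because the subclass only adds methods, any numpy operation that accepts an ndarray accepts the wrapped object unchanged.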

Please note that this package is currently in pre-release. It can still be used, but there are likely to be significant changes to the API. The first public release will be v0.1.0.

Contents

  1. Installation
  2. Features
  3. Examples
  4. Documentation
  5. License
  6. References

Installation

To install using pip:

pip install hypers

The following packages will also be installed:

  • numpy
  • matplotlib
  • scipy
  • scikit-learn
  • PyQt5
  • pyqtgraph

Features

Features implemented in hypers include:

  • Clustering
  • Decomposition (e.g. PCA, ICA, NMF)
  • Hyperspectral viewer
  • Vertex component analysis
  • Gaussian mixture models

A full list of features can be found here.
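Features like the decomposition and clustering above typically come down to a reshape-fit-reshape pattern on scikit-learn (one of the installed dependencies): flatten the spatial axes so each row is one pixel's spectrum, fit the estimator, then fold the results back into image shape. A minimal sketch of that pattern, independent of the hypers API (all variable names here are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical 3-d hyperspectral cube: 20x20 pixels, 64 spectral bands
x, y, s = 20, 20, 64
cube = np.random.rand(x, y, s)

# Flatten the spatial axes so each row is one pixel's spectrum
spectra = cube.reshape(-1, s)              # shape (400, 64)

# Decomposition: project spectra onto the first 5 principal components
pca = PCA(n_components=5)
scores = pca.fit_transform(spectra)        # shape (400, 5)
score_images = scores.reshape(x, y, 5)     # one image per component

# Clustering: K-means on the reduced data, labels folded back to an image
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
label_image = labels.reshape(x, y)

# Mean spectrum of each cluster, computed from the original (unreduced) spectra
cluster_spectra = np.stack(
    [spectra[labels == k].mean(axis=0) for k in range(3)]
)
```

Running K-means on the PCA scores rather than the raw spectra corresponds to the `decomposed=True` option in the example below.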

Examples

Hyperspectral dimensionality reduction and clustering

Below is a quick example of using some of the features of the package on a randomized hyperspectral array. For an example using the IndianPines dataset, see the Jupyter notebook in the examples directory.

import numpy as np
import hypers as hp

# Generating a random 4-d dataset and creating a Dataset instance
# The test dataset here has spatial dimensions (x=200, y=200, z=10) and spectral dimension (s=1024)
test_data = np.random.rand(200, 200, 10, 1024)
X = hp.array(test_data)

# Using Principal Component Analysis to reduce to the first 5 components
# The variables ims, spcs are arrays of the first 5 principal components for the images and spectra respectively
ims, spcs = X.decompose.pca.calculate(n_components=5)

# Clustering using K-means (with and without applying PCA first)
# The cluster method will return the labeled image array and the spectrum for each cluster
lbls_nodecompose, spcs_nodecompose = X.cluster.kmeans.calculate(
    n_clusters=3,
    decomposed=False
)

# Clustering on only the first 5 principal components
lbls_decomposed, spcs_decomposed = X.cluster.kmeans.calculate(
    n_clusters=3,
    decomposed=True,
    pca_comps=5
)

Interactive viewer

The interactive viewer can be particularly helpful for exploring a completely new dataset for the first time, to get a feel for the type of data you are working with. An example from a coherent anti-Stokes Raman scattering (CARS) dataset is shown below:

[Screenshot: interactive viewer on a CARS dataset]

Documentation

The docs are hosted here.

License

hypers is licensed under the OSI-approved BSD 3-Clause License.

References

  1. VCA algorithm
    J. M. P. Nascimento and J. M. B. Dias, "Vertex component analysis: a fast algorithm to unmix hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, 2005.
    Adapted from repo.
