
Hyperspectral data analysis and machine learning




Provides an object model for hyperspectral data.

  • Simple tools for exploratory analysis of hyperspectral data
  • Interactive hyperspectral viewer built into the object
  • Allows for unsupervised machine learning directly on the object (using scikit-learn)
  • More features coming soon...


  1. About
  2. Installation
  3. Features
  4. Examples
  5. Documentation
  6. License


This package provides an object model for hyperspectral data (similar, for example, to what pandas provides for tabular data). Many of the commonly used tools are built into the object, including a lightweight interactive GUI for visualizing the data. Importantly, the object also interfaces with scikit-learn, allowing the clustering and decomposition classes (e.g. PCA, ICA, K-means) to be used directly on the object.

  • Dataset object (hypers.Dataset)

    This class forms the core of hypers. It provides useful information about the hyperspectral data and makes machine learning on the data simple.

  • Interactive hyperspectral viewer

    A lightweight PyQt GUI that provides an interactive interface for viewing the hyperspectral data.

Please note that this package is currently in pre-release. The first general release will be v0.1.0.

Hyperspectral data

Whilst this package is designed to work with any type of hyperspectral data, i.e. a 3-dimensional array of shape (x, y, spectra) or a 4-dimensional array of shape (x, y, z, spectra), some of the features are particularly useful for vibrational-scattering related hyperspectral data (e.g. Raman micro-spectroscopy), such as the spectral component of the hyperspectral viewer.
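To make the two layouts concrete, here is a minimal sketch using plain NumPy (the array sizes are arbitrary illustrations, not values prescribed by hypers):

```python
import numpy as np

# 3-d hyperspectral array: two spatial axes (x, y) plus a spectral axis
data_3d = np.random.rand(50, 50, 1024)

# 4-d hyperspectral array: three spatial axes (x, y, z) plus a spectral axis
data_4d = np.random.rand(50, 50, 10, 1024)

# In both layouts the last axis holds the spectrum recorded at each spatial point
spectrum_at_origin = data_3d[0, 0, :]
print(spectrum_at_origin.shape)  # (1024,)
```

Either array can be passed to hp.Dataset; only the number of leading spatial axes differs.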


To install using pip:

pip install hypers

The following packages are required:

  • numpy
  • matplotlib
  • scipy
  • scikit-learn
  • PyQt5
  • pyqtgraph


Features implemented in hypers include:


Hyperspectral dimensionality reduction and clustering

Below is a quick example of using some of the features of the package on a randomized hyperspectral array. For an example using the IndianPines dataset, see the Jupyter notebook in the examples/ directory.

import numpy as np
import hypers as hp
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Generating a random 4-d dataset and creating a Dataset instance
# The test dataset here has spatial dimensions (x=200, y=200, z=10) and spectral dimension (s=1024)
test_data = np.random.rand(200, 200, 10, 1024)
X = hp.Dataset(test_data)

# Using Principal Components Analysis to reduce to first 5 components
# The variables ims, spcs are arrays of the first 5 principal components for the images, spectra respectively
ims, spcs = X.decompose(
    mdl=PCA(n_components=5)
)

# Clustering using K-means (with and without applying PCA first)
# The cluster method will return the labeled image array and the spectrum for each cluster
lbls_nodecompose, spcs_nodecompose = X.cluster(
    mdl=KMeans(n_clusters=3),
    decomposed=False
)

# Clustering on only the first 5 principal components
lbls_decomposed, spcs_decomposed = X.cluster(
    mdl=KMeans(n_clusters=3),
    decomposed=True,
    pca_comps=5
)
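For readers who want to see what such a decomposition amounts to without the hypers wrapper, here is a minimal sketch in plain NumPy and scikit-learn. The reshape-then-fit pattern is the standard way to apply scikit-learn to image cubes; the variable names and sizes here are our own illustration, not part of the hypers API:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Random 3-d hyperspectral cube: 40 x 40 spatial points, 128 spectral bands
cube = np.random.rand(40, 40, 128)
n_pixels = cube.shape[0] * cube.shape[1]

# Flatten the spatial axes so each row is one pixel's spectrum
flat = cube.reshape(n_pixels, cube.shape[-1])

# Reduce to 5 principal components, then fold the scores back into image form
pca = PCA(n_components=5)
scores = pca.fit_transform(flat)              # shape (1600, 5)
component_images = scores.reshape(40, 40, 5)  # one image per component

# K-means on the spectra; the labels fold back into a cluster map
labels = KMeans(n_clusters=3, n_init=10).fit_predict(flat)
cluster_map = labels.reshape(40, 40)

print(component_images.shape, cluster_map.shape)  # (40, 40, 5) (40, 40)
```

The Dataset object handles this flattening and re-folding internally, which is what makes the one-line decompose/cluster calls above possible.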


The docs are hosted here.


hypers is licensed under the OSI approved BSD 3-Clause License.
