CLS H5Analysis

This is a library to analyse, plot, and export HDF5 data. The package provides a framework for loading data into Jupyter and interacting with it.

Installation

Install the package from PyPI with the pip package manager. This is the recommended way to obtain a copy for your local machine and will install all required dependencies.

    $ pip install h5analysis

You will also need Jupyter Notebook together with Python 3 on your local machine.
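
If Jupyter is not installed yet, one way to obtain it is also via pip (this installs the classic Notebook interface; JupyterLab works just as well):

    $ pip install notebook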

If certain widgets aren't rendered properly, make sure to enable the appropriate Jupyter extensions

    $ jupyter nbextension enable --py widgetsnbextension

Running

Launch your local Jupyter installation with

    $ jupyter notebook

Examples

Load the required modules

Before you start, you will need to import the required h5analysis modules and enable bokeh plotting.

## Set up necessary inputs
from h5analysis.LoadData import *
from h5analysis.config import h5Config
from bokeh.io import show, output_notebook
output_notebook(hide_banner=True)

All data loaders require a proper configuration variable, which is specific to the beamline and data format. An example configuration for the CLS REIXS beamline may look like this:

config = h5Config()

# Scan group naming pattern (SCAN_001, SCAN_002, ...) and the scan number placeholder
config.key("SCAN_{scan:03d}",'scan')
# HDF5 folder whose entries are made available as SCA streams
config.sca_folder('Data')

# Named SCA streams: display name followed by the HDF5 path(s) used
config.sca('Mono Energy','Data/beam')
config.sca('Mesh Current','Data/i0')
config.sca('Sample Current','Data/tey')
config.sca('TEY','Data/tey','Data/i0')
config.sca("MCP Energy", 'Data/mcpMCA_scale')
config.sca("SDD Energy", 'Data/sddMCA_scale')
config.sca("XEOL Energy", 'Data/xeolMCA_scale')

# MCA (multi-channel) detector streams: name, data path, scale path
config.mca('SDD','Data/sddMCA','Data/sddMCA_scale',None)
config.mca('MCP','Data/mcpMCA','Data/mcpMCA_scale',None)
config.mca('XEOL','Data/xeolMCA','Data/xeolMCA_scale',None)

# STACK (image stack) streams: name, data path, scale path
config.stack('mcpSTACK','Data/mcp_a_img','Data/mcpMCA_scale',None,None)

1d plots

sca = Load1d()
sca.load(config,'FileName.h5','x_stream','y_stream',1,2,3,4)  # Loads multiple scans individually
sca.add(config,'FileName.h5','x_stream','y_stream',1,2,3,4)  # Adds multiple scans
sca.subtract(config,'FileName.h5','x_stream','y_stream',1,2,3,4,norm=False) # Subtracts scans from the first scan
sca.xlim(lower_lim,upper_lim) # Sets the horizontal axis plot region
sca.ylim(lower_lim,upper_lim) # Sets the vertical axis plot region
sca.plot_legend("pos string as per bokeh") # Determines a specific legend position
sca.vline(position) # Draws a vertical line
sca.hline(position) # Draws a horizontal line
sca.label(pos_x,pos_y,'Text') # Adds a label to the plot
sca.plot() # Plots the defined object
sca.exporter() # Exports the data by calling an exporter widget
  1. Create "Loader" object

  2. Enter the file name of the scan to analyse ('FileName.h5')

  3. Options for x_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config

  4. Options for y_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config
  • All MCA specified in the config with applied ROI
  • All STACK specified in the config with two applied ROIs

  5. List all scans to analyse (comma-separated)

  6. Set optional flags (see the sketch after this list). Options include:

  • norm (Normalizes to [0,1])
  • xcoffset (Defines a constant shift in the x-stream)
  • xoffset (Takes a list of tuples and defines a polynomial fit of the x-stream)
  • ycoffset (Defines a constant shift in the y-stream)
  • yoffset (Takes a list of tuples and defines a polynomial fit of the y-stream), e.g. yoffset = [(100,102),(110,112),(120,121)]
  • grid_x (Takes a list with three arguments to apply 1d interpolation gridding), e.g. grid_x = [Start Energy, Stop Energy, Delta]
  • savgol (Takes a list with two or three arguments to apply data smoothing and derivatives), e.g. savgol = [Window length, Polynomial order, Derivative], as specified for the scipy Savitzky-Golay filter
  • binsize (int, performs data binning to improve the signal-to-noise ratio)
  • legend_items (dict={scan_number:"name"}, overwrites generic legend names; works for the load method)
  • legend_item (str, overwrites generic legend name in the add/subtract method)
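
For illustration, a minimal sketch combining several of these options is shown below. It uses stream names from the example REIXS configuration above; the file name, scan numbers, legend labels, and plot limits are placeholders.

## Hypothetical example: overlay normalized TEY spectra of several scans
sca = Load1d()
sca.load(config,'FileName.h5','Mono Energy','TEY',1,2,3,norm=True,binsize=2,
         legend_items={1:'Scan 1',2:'Scan 2',3:'Scan 3'})
sca.plot_legend("top_right")  # any bokeh legend position string
sca.xlim(380,420)             # placeholder energy window
sca.plot()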

2d Images

Note: Can only load one scan at a time!

General loader for MCA detector data

load2d = Load2d()
load2d.load(config,'Filename.h5','x_stream','detector',1)
load2d.plot()
load2d.exporter()
  1. Create "Loader" object

  2. Enter the file name of the scan to analyse ('FileName.h5')

  3. Options for x_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config

  4. Options for detector quantities include:

  • All MCA specified in the config
  • All STACK specified in the config with applied ROI

  5. Select the scan to analyse

  6. Set optional flags (see the sketch after this list). Options include:

  • norm (Normalizes to [0,1])
  • xcoffset (Defines a constant shift in the x-stream)
  • xoffset (Takes a list of tuples and defines a polynomial fit of the x-stream)
  • ycoffset (Defines a constant shift in the y-stream)
  • yoffset (Takes a list of tuples and defines a polynomial fit of the y-stream), e.g. yoffset = [(100,102),(110,112),(120,121)]
  • grid_x (Takes a list with three arguments to apply 1d interpolation gridding), e.g. grid_x = [Start Energy, Stop Energy, Delta]
  • norm_by (Normalizes by the specified stream)
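
As a brief sketch, the call below uses the SDD detector and the Mono Energy stream from the example configuration above; the file name and scan number are placeholders.

## Hypothetical example: SDD detector map as a function of Mono Energy
load2d = Load2d()
load2d.load(config,'FileName.h5','Mono Energy','SDD',1)
load2d.plot()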

2d histogram

mesh = LoadMesh()
mesh.load(config,'Filename.h5','x_stream','y_stream','z_stream',24)
mesh.plot()
mesh.exporter()
  1. Create "Loader" object

  2. Enter the file name of the scan to analyse ('FileName.h5')

  3. Options for x_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config
  • All MCA specified in the config with ROI specified
  • All STACK specified in the config with two ROIs specified

  4. Options for y_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config
  • All MCA specified in the config with ROI specified
  • All STACK specified in the config with two ROIs specified

  5. Options for z_stream quantities include:

  • All quantities contained in the sca folder(s) specified in the config
  • All SCA specified in the config
  • All MCA specified in the config with ROI specified
  • All STACK specified in the config with two ROIs specified

  6. Specify the scan to analyse

  7. Set optional flags (see the sketch after this list). Options include:

  • norm (Normalizes to [0,1])
  • xcoffset (Defines a constant shift in the x-stream)
  • xoffset (Takes a list of tuples and defines a polynomial fit of the x-stream)
  • ycoffset (Defines a constant shift in the y-stream)
  • yoffset (Takes a list of tuples and defines a polynomial fit of the y-stream), e.g. yoffset = [(100,102),(110,112),(120,121)]
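
A hedged sketch follows; 'x_motor' and 'y_motor' stand in for whatever SCA streams were scanned, 'TEY' comes from the example configuration above, and the file name and scan number are placeholders.

## Hypothetical example: map the TEY signal over a two-motor mesh scan
mesh = LoadMesh()
mesh.load(config,'FileName.h5','x_motor','y_motor','TEY',24)
mesh.plot()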

3d Images

Note: Can only load one scan at a time!

load3d = Load3d()
load3d.load(config,'Filename.h5','ind_stream','stack',1)
load3d.plot()
load3d.export()
  1. Create "Loader" object

  2. Enter the file name of the scan to analyse ('FileName.h5')

  3. Enter the SCA name for the independent stream, corresponding to the length of the first axis.

  4. Options for stack quantities include:

  • All STACK specified in the config

  5. Select the scan to analyse

  6. Set optional flags (see the sketch after this list). Options include:

  • norm (Normalizes to [0,1])
  • xcoffset (Defines a constant shift in the x-stream)
  • xoffset (Takes a list of tuples and defines a polynomial fit of the x-stream)
  • ycoffset (Defines a constant shift in the y-stream)
  • yoffset (Takes a list of tuples and defines a polynomial fit of the y-stream), e.g. yoffset = [(100,102),(110,112),(120,121)]
  • grid_x (Takes a list with three arguments to apply 1d interpolation gridding), e.g. grid_x = [Start Energy, Stop Energy, Delta]
  • norm_by (Normalizes by the specified stream)
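
As a short sketch, the example below loads the mcpSTACK stream from the example configuration above against Mono Energy; the file name and scan number are placeholders.

## Hypothetical example: MCP image stack as a function of Mono Energy
load3d = Load3d()
load3d.load(config,'FileName.h5','Mono Energy','mcpSTACK',1)
load3d.plot()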

Meta Data

bl = LoadBeamline()
bl.load(config,'Filename.h5','path to variable')
bl.plot()
  1. Create "Loader" object

  2. Enter the file name of the scan to analyse ('FileName.h5')

  3. Options for path to variable quantities include:

  • All directory paths within the specified h5 file (see the example below)
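
For instance, the following sketch tracks a monochromator setting across all scans in the file; the HDF5 path is taken from the spreadsheet example further below and is specific to the REIXS file layout.

## Hypothetical example: plot a beamline/endstation variable for every scan
bl = LoadBeamline()
bl.load(config,'FileName.h5','Beamline/Monochromator/grating')
bl.plot()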

Spreadsheet

df = getSpreadsheet(config,'Filename.h5', average = False,columns=None)
  1. Call the getSpreadsheet function

  2. Enter the file name of the scan to analyse ('FileName.h5')

3a. Specify average boolean

  • If False, return all data (even if 1d array)
  • If True, return average of 1d array where applicable

3b. Options for columns quantities include:

  • Custom dictionary with column headers and quantities, see example below:
columns = dict()

columns['Command'] = 'command'
columns['Sample Stage (ssh)'] = 'Endstation/Motors/ssh'
columns['Sample Stage (ssv)'] = 'Endstation/Motors/ssv'
columns['Sample Stage (ssd)'] = 'Endstation/Motors/ssd'
columns['Spectrometer (XES dist)'] = 'Endstation/Motors/spd'
columns['Spectrometer (XES angl)'] = 'Endstation/Motors/spa'
columns['Flux 4-Jaw (mm)'] = 'Beamline/Apertures/4-Jaw_2/horz_gap'
columns['Mono Grating'] = '/Beamline/Monochromator/grating'
columns['Mono Mirror'] = '/Beamline/Monochromator/mirror'
columns['Polarization'] = 'Beamline/Source/EPU/Polarization'
#columns['Comment'] = 'command'
columns['Status'] = 'status'
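
Putting it together, the sketch below generates the overview with the custom columns and writes it to disk, assuming the returned object behaves like a pandas DataFrame (the variable name df suggests this, but it is not confirmed here).

## Hypothetical example: build the spreadsheet with custom columns and export it
df = getSpreadsheet(config,'FileName.h5', average=True, columns=columns)
df.to_csv('FileName_overview.csv')  # assumes a pandas DataFrame is returned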


