Pangeo zarr benchmarking package

Benchmarking & Scaling Studies of the Pangeo Platform

Creating an Environment

To run the benchmarks, it's recommended to create a dedicated conda environment by running:

conda env create -f ./binder/environment.yml

This will create a conda environment named pangeo-bench with all of the required packages.
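The actual package list is defined by ./binder/environment.yml in the repository; as a rough, hypothetical sketch (the dependencies shown here are assumptions, not the real file), a conda environment file for a benchmark suite like this typically looks like:

```yaml
# Illustrative sketch only — the ./binder/environment.yml shipped in the
# repository is authoritative; this package list is an assumption.
name: pangeo-bench
channels:
  - conda-forge
dependencies:
  - python=3.7
  - dask
  - distributed
  - xarray
  - zarr
  - dask-jobqueue
  - jupyterlab
```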

You can activate the environment with:

conda activate pangeo-bench

and then run the post build script:

./binder/postBuild

Benchmark Configuration

The benchmark-configs directory contains YAML files that are used to run benchmarks on different machines. So far, the following HPC systems' configs are provided:

$ tree ./benchmark-configs/
benchmark-configs/
├── cheyenne.yaml
├── hal.yaml
└── wrangler.yaml

To run the benchmarks on another system, you will need to create a new YAML file with the appropriate settings for that machine. See the existing config files for reference.
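As a hedged illustration of what such a file might contain (the field names and values below are assumptions modeled on typical HPC job-queue settings — copy an existing file such as cheyenne.yaml and adapt it rather than starting from this sketch):

```yaml
# Hypothetical config sketch — the real schema is defined by the
# existing files in benchmark-configs/; every key here is an assumption.
machine: mycluster        # used to name the results subdirectory
queue: regular            # scheduler queue to submit worker jobs to
walltime: "01:00:00"
cores: 36
memory: "109GB"
```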

Running the Benchmarks

From the command line

To run the benchmarks, a command-line utility, pangeobench, is provided in this repository. To use it, simply specify the location of the benchmark configuration file. For example:

./pangeobench benchmark-configs/cheyenne.yaml

$ ./pangeobench --help
Usage: pangeobench [OPTIONS] CONFIG_FILE

Options:
  --help  Show this message and exit.

From a Jupyter notebook

To run the benchmarks from a Jupyter notebook, install the pangeo-bench kernel in your Jupyter environment, then open the run.ipynb notebook. In the notebook, specify the configuration file as described above.

To install the pangeo-bench kernel in your Jupyter environment, open a terminal on your HPC system and run the following commands:

source activate pangeo-bench
ipython kernel install --user --name pangeo-bench

Before starting Jupyter, you can verify that the kernel is installed with the following command:

jupyter kernelspec list

Benchmark Results

Benchmark results are persisted in the results directory by default. The exact location of the benchmark results depends on the machine name (specified in the config file) and the date on which the benchmarks were run. For instance, if the benchmarks were run on the Cheyenne supercomputer on 2019-09-07, the results would be saved in the results/cheyenne/2019-09-07/ directory. The file name follows this template: compute_study_YYYY-MM-DD_HH-MM-SS.csv
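The naming template above can be expressed as a small helper. This is a sketch of the convention, not part of pangeobench itself (the function name results_path is invented for illustration):

```python
from datetime import datetime
from pathlib import Path

def results_path(machine: str, when: datetime, root: str = "results") -> Path:
    """Build the expected results path following the
    results/<machine>/<YYYY-MM-DD>/compute_study_<YYYY-MM-DD_HH-MM-SS>.csv
    template described above. Hypothetical helper, not part of pangeobench."""
    day = when.strftime("%Y-%m-%d")
    stamp = when.strftime("%Y-%m-%d_%H-%M-%S")
    return Path(root) / machine / day / f"compute_study_{stamp}.csv"

print(results_path("cheyenne", datetime(2019, 9, 7, 14, 30, 5)).as_posix())
# results/cheyenne/2019-09-07/compute_study_2019-09-07_14-30-05.csv
```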

Visualization

Visualization can be done using the Jupyter notebooks in the analysis directories.
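Outside the provided notebooks, the persisted CSV files can also be summarized directly. A minimal standard-library sketch, assuming hypothetical column names ("operation" and "runtime" are invented here — the actual schema is whatever pangeobench writes):

```python
import csv
import io
from statistics import mean

# Inline sample standing in for a compute_study_*.csv file; the
# "operation" and "runtime" columns are assumptions for illustration.
sample = io.StringIO(
    "operation,runtime\n"
    "read,2.1\n"
    "read,1.9\n"
    "write,3.4\n"
)

# Group runtimes by operation and report the mean of each group.
by_op: dict[str, list[float]] = {}
for row in csv.DictReader(sample):
    by_op.setdefault(row["operation"], []).append(float(row["runtime"]))

for op, runtimes in sorted(by_op.items()):
    print(f"{op}: mean runtime {mean(runtimes):.2f} s")
```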
