

Project generated with PyScaffold

matsci-opt-benchmarks (WIP)

A collection of benchmarking problems and datasets for testing the performance of advanced optimization algorithms in the field of materials science and chemistry for a variety of "hard" problems involving one or several of: constraints, heteroskedasticity, multiple objectives, multiple fidelities, and high-dimensionality.

Several materials-science-specific resources related to datasets, surrogate models, and benchmarks already exist:

  • Matbench focuses on materials property prediction using composition and/or crystal structure
  • Olympus focuses on small datasets generated via experimental self-driving laboratories
  • Foundry focuses on delivering ML-ready datasets in materials science and chemistry
  • Matbench-genmetrics focuses on generative modeling for crystal structure using metrics inspired by guacamol and CDVAE

In March 2021, pymatgen reorganized its code into namespace packages, which make it easier to distribute a collection of related subpackages and modules under an umbrella project. Separately, PyScaffold is a project generator for high-quality Python packages, ready to be shared on PyPI and installable via pip; it also supports namespace package configurations. My plan for this repository is to host pip-installable packages that allow for loading datasets, surrogate models, and benchmarks for recent manuscripts I've written. It is primarily intended as a convenience for me, with the secondary benefit of adding value to the community. I will look into hosting the datasets via Foundry and exposing the surrogate models via the Olympus API. I will likely log results to a MongoDB database via Atlas and later take a snapshot of the dataset for Foundry. Initially, I will probably use a basic scikit-learn model, such as RandomForestRegressor or GradientBoostingRegressor, along with cross-validated hyperparameter optimization via RandomizedSearchCV or HalvingRandomSearchCV, for the surrogate model.
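As a rough sketch of what such a baseline surrogate might look like (the toy data and the parameter grid below are illustrative assumptions, not the repository's actual settings):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)

# Toy stand-in data for a benchmark response surface
X = rng.uniform(size=(200, 3))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=200)

# Illustrative search space; a real one would be tailored to the problem
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions,
    n_iter=10,
    cv=5,
    random_state=0,
)
search.fit(X, y)
```

HalvingRandomSearchCV would be a near drop-in alternative, though note that in current scikit-learn versions it still requires the experimental import `from sklearn.experimental import enable_halving_search_cv`.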

What will really differentiate the contribution of this repository is the modeling of heteroskedastic noise, where the noise variance can be a complex function of the input parameters. This contrasts with homoskedasticity, where the noise variance is constant across the input space [Wikipedia].
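A minimal numpy illustration of the distinction, using a made-up objective whose noise scale grows with the input:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_objective(x, n_repeats=10):
    """Toy objective with heteroskedastic noise: the noise standard
    deviation (sigma) is itself a function of the input x."""
    mean = np.sin(2 * np.pi * x)   # underlying response surface
    sigma = 0.05 + 0.5 * x         # noise scale grows with x
    return mean + sigma * rng.normal(size=(n_repeats,) + np.shape(x))

# Many repeats at two inputs reveal the input-dependent spread
low = noisy_objective(0.1, n_repeats=2000).std()
high = noisy_objective(0.9, n_repeats=2000).std()
```

Under homoskedasticity, `low` and `high` would agree up to sampling error; here `high` is several times larger.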

My goal is to win a "Turing test" of sorts for the surrogate model, where the model is indistinguishable from the true, underlying objective function.

To accomplish this, I plan to:

  • Run ~10 repeats for every set of parameters and fit separate models for quantiles of the noise distribution
  • Get a large enough quasi-random sampling of the search space to accurately model intricate interactions between parameters (i.e., the response surface)
  • Train a classification model that short-circuits the regression model: return NaN for inaccessible regions of the objective function and return the regression model's values for accessible regions
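The three steps above can be sketched together. Everything below (the Sobol sampling, the toy objective, and the choice of gradient-boosted models) is an illustrative assumption, not the repository's actual code:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

# 1) Quasi-random (Sobol) sampling of a toy 2D search space
X = qmc.Sobol(d=2, seed=0).random_base2(m=9)  # 512 points in [0, 1)^2

# Toy objective: heteroskedastic noise, with an "inaccessible" corner as NaN
inaccessible = (X[:, 0] + X[:, 1]) > 1.6
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + (0.05 + 0.3 * X[:, 0]) * rng.normal(size=len(X))
y[inaccessible] = np.nan
ok = ~np.isnan(y)

# 2) Separate models for quantiles of the noise distribution
q50 = GradientBoostingRegressor(loss="quantile", alpha=0.5, random_state=0).fit(X[ok], y[ok])
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X[ok], y[ok])

# 3) Classifier that short-circuits the regressors in inaccessible regions
gate = GradientBoostingClassifier(random_state=0).fit(X, ok)

def surrogate(x):
    """Median prediction, with NaN wherever the gate predicts 'inaccessible'."""
    x = np.atleast_2d(x)
    pred = q50.predict(x)
    pred[~gate.predict(x)] = np.nan
    return pred
```

The same gating pattern extends to the upper-quantile model `q90`, giving an uncertainty band only over accessible regions.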

My plans for implementation include:

  • Packing fraction of a random 3D packing of spheres as a function of the number of spheres, 6 parameters that define three separate truncated log-normal distributions, and 3 parameters that define the weight fractions [code] [paper1] [paper2] [data DOI]
  • Discrete intensity-vs.-wavelength spectra (measured experimentally via a spectrophotometer) as a function of red, green, and blue LED powers and three sensor settings: number of integration steps, integration time per step, and signal gain [code] [paper]
  • Two error metrics (RMSE and MAE) and two hardware performance metrics (runtime and memory) of a CrabNet regression model trained on the Matbench experimental band gap dataset, as a function of 23 CrabNet hyperparameters [code] [paper]
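For the first benchmark, one way the distribution parameters might enter is sketched below. The parameterization (per-component log-means and log-standard-deviations, fixed truncation bounds, and weight fractions normalized to sum to 1) is hypothetical, not the repository's actual interface:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_radii(mus, sigmas, weights, n=1000, bounds=(0.1, 10.0)):
    """Hypothetical sketch: draw sphere radii from a 3-component mixture of
    truncated log-normal distributions (2 shape parameters per component,
    plus 3 weight fractions)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # weight fractions sum to 1
    comp = rng.choice(3, size=n, p=weights)    # pick a component per sphere
    radii = np.exp(rng.normal(np.take(mus, comp), np.take(sigmas, comp)))
    # Rejection step enforcing the truncation bounds
    bad = (radii < bounds[0]) | (radii > bounds[1])
    while bad.any():
        radii[bad] = np.exp(rng.normal(np.take(mus, comp[bad]), np.take(sigmas, comp[bad])))
        bad = (radii < bounds[0]) | (radii > bounds[1])
    return radii

r = sample_radii(mus=[0.0, 0.5, 1.0], sigmas=[0.2, 0.3, 0.4], weights=[1, 2, 1])
```

The packing-fraction objective itself would then be evaluated on the sampled radii by the actual packing code linked above.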

Installation

In order to set up the necessary environment:

  1. review and uncomment what you need in environment.yml and create an environment matsci-opt-benchmarks with the help of conda:
    conda env create -f environment.yml
    
  2. activate the new environment with:
    conda activate matsci-opt-benchmarks
    

NOTE: The conda environment will have matsci-opt-benchmarks installed in editable mode. Some changes, e.g. in setup.cfg, might require you to run pip install -e . again.

Optional and needed only once after git clone:

  1. install several pre-commit git hooks with:

    pre-commit install
    # You might also want to run `pre-commit autoupdate`
    

    and check out the configuration under .pre-commit-config.yaml. The -n, --no-verify flag of git commit can be used to deactivate pre-commit hooks temporarily.

  2. install nbstripout git hooks to remove the output cells of committed notebooks with:

    nbstripout --install --attributes notebooks/.gitattributes
    

    This is useful to avoid large diffs due to plots in your notebooks. A simple nbstripout --uninstall will revert these changes.

Then take a look into the scripts and notebooks folders.

Dependency Management & Reproducibility

  1. Always keep your abstract (unpinned) dependencies updated in environment.yml and, if you want to ship and install your package via pip later on, in setup.cfg as well.
  2. Create concrete dependencies as environment.lock.yml for the exact reproduction of your environment with:
    conda env export -n matsci-opt-benchmarks -f environment.lock.yml
    
    For multi-OS development, consider using --no-builds during the export.
  3. Update your current environment with respect to a new environment.lock.yml using:
    conda env update -f environment.lock.yml --prune
    

Project Organization

├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or to build `tox -e build`.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual Python packages, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install for
│                              development or `python setup.py bdist_wheel` to build.
├── src
│   ├── particle_packing    <- Python package for the particle-packing benchmark.
│   └── crabnet_hyperparameter <- Python package for the CrabNet hyperparameter benchmark.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.

Note

This project has been set up using PyScaffold 4.3.1 and the dsproject extension 0.7.2.


File details

Details for the file matsci-opt-benchmarks-0.2.2.tar.gz.

File metadata

  • Download URL: matsci-opt-benchmarks-0.2.2.tar.gz
  • Upload date:
  • Size: 38.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for matsci-opt-benchmarks-0.2.2.tar.gz
Algorithm Hash digest
SHA256 8adbe31e1dcf6cfa53388c4bc549028d8011802234f4da64aa9d3373c0ff9f2b
MD5 7f027dd9210e09b9cd2a324cc893a154
BLAKE2b-256 c12c93eeaea0fc19cc23bda5d53d0a3b69e438e9ce3cd215da68cbe160e4a357

See more details on using hashes here.

