
Li-ion Intercalation Electrode Materials Exploration


LIEME

Description

๐Ÿ‹ LIEME: Li-ion Intercalation Electrode Materials Exploration.

LIEME is an open-source Python package for discovering new Li-ion intercalation electrode materials.

  1. 🖥️ The computational framework maps first-principles-derived input features to experimentally measured performance of electrode materials using automated machine learning.
  2. 📥 Input features can be obtained from manual density functional theory (DFT) calculations, or fetched from Materials Project via the API.
  3. ⚠️ Electrode material datasets with experimental performance data are usually small (< 100 datapoints), so the risk of overfitting is high. To mitigate this, only min(f, int(0.1*d)) features are considered during training, where f is the total number of features and d is the number of datapoints. This keeps the feature-to-datapoint ratio below 10%, as recommended in this paper.
  4. 🚄 Models are trained on all feature subsets, resulting in combinations(f, int(0.1*d)) models. Each model explores a different subspace of the feature space.
  5. 📊 The final prediction is obtained by averaging the predictions of all high-performing models.
  6. ⚙️ Training is automated using TPOT and is heavily parallelizable (for example, using SLURM array jobs, where each job trains one model).
  7. 💾 All models are stored in an SQLite database.
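As a quick sanity check on points 3 and 4, the per-model feature budget and the total number of trained models can be computed directly. The dataset sizes below are made up purely for illustration:

```python
import math

# Hypothetical sizes for illustration; real LIEME datasets will differ.
f = 20   # total number of candidate features
d = 40   # number of datapoints with experimental performance

# Feature budget per model: at most 10% of the datapoints.
n_features = min(f, int(0.1 * d))

# One model is trained per feature subset of that size.
n_models = math.comb(f, n_features)

print(n_features)  # 4
print(n_models)    # 4845
```

Even for a modest 20-feature pool, thousands of models result, which is why the SLURM-style parallelization in point 6 matters.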

Installation

LIEME requires Python 3.11 or newer.

Install the latest stable build using the following command.

pip install lieme

If you want the latest development version, clone the repository and install it in editable mode.

git clone https://github.com/sreeharshab/lieme.git
cd lieme
pip install -e .

After installation, you can import LIEME in Python.

import lieme

LIEME depends on several scientific packages, which will be installed automatically when using pip. To avoid dependency conflicts with other packages, it is recommended to create a new conda environment and install LIEME using pip.

Usage

Obtaining Input Features by Parsing DFT Data

Input features can be generated for materials whose manual DFT data is organized in the following directory structure.

material
├── Energy_calculation
├── Electronic_calculation
├── Bader_calculation
└── Intercalation
    └── n_Li
        └── Site
            ├── geo_opt
            ├── dos
            └── bader

n in n_Li is a positive integer. Site can be replaced with any custom name. An example of the directory structure is provided in the example directory.

from lieme.featurize import get_material_features

# List of paths to material directories following the structure above
materials = ["material"]
x = get_material_features(materials=materials)

Obtaining Input Features from Materials Project

Input features can also be generated for materials in Materials Project.

from lieme.mpfetch import FetchMaterials

fetcher = FetchMaterials(api_key="MATERIALS PROJECT API KEY")
x = fetcher.get_material_features()

When fetching materials from Materials Project, standard_constraints is applied to filter the results. Additional constraints can be added through custom_constraints, and standard_constraints can be switched off by passing standard_constraints=False to get_material_features().
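The constraint mechanism can be pictured as a property filter over candidate materials. The sketch below uses invented property names and a plain-Python predicate dictionary, not LIEME's actual constraint format:

```python
# Hypothetical Materials Project-style records (property names invented
# for illustration; LIEME's real constraint keys may differ).
candidates = [
    {"id": "mp-1", "band_gap": 0.2, "energy_above_hull": 0.00},
    {"id": "mp-2", "band_gap": 3.1, "energy_above_hull": 0.00},
    {"id": "mp-3", "band_gap": 0.1, "energy_above_hull": 0.12},
]

# A constraint maps a property name to a predicate on its value.
custom_constraints = {
    "band_gap": lambda v: v < 1.0,             # electronically conductive hosts
    "energy_above_hull": lambda v: v <= 0.05,  # near-stable phases
}

def passes(record, constraints):
    """True if the record satisfies every constraint."""
    return all(check(record[key]) for key, check in constraints.items())

filtered = [r["id"] for r in candidates if passes(r, custom_constraints)]
print(filtered)  # ['mp-1']
```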

Machine Learning

Machine-learned models can be generated and used as follows.

from lieme.ml import MaterialsEchemRegressor

regressor = MaterialsEchemRegressor()
regressor.generate_train_jobs(n_features=4)
regressor.train(job_id=10)

generate_train_jobs() generates jobs.pkl, which contains feature subset tuples such as (feature_name_1, feature_name_2, feature_name_3, feature_name_4). For example, if the total number of features is 20, then combinations(20, 4) = 4845 tuples are present in jobs.pkl. train(job_id) then trains a model on the feature subset corresponding to job_id. After training, the job_id, the best model, the feature subset, and the best model's cross-validation score are saved to an SQLite database.
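The mapping from job_id to feature subset described above can be reproduced with itertools.combinations. The feature names below are invented, and 0-based indexing into the job list is an assumption; only the enumeration pattern matters:

```python
from itertools import combinations

# Hypothetical feature names for illustration.
features = [f"feat_{i}" for i in range(20)]

# Enumerate all 4-feature subsets, as generate_train_jobs(n_features=4) would.
jobs = list(combinations(features, 4))
print(len(jobs))     # 4845

# A SLURM array job with job_id=10 would then train on the subset at
# that index (assuming 0-based indexing).
job_id = 10
print(jobs[job_id])  # ('feat_0', 'feat_1', 'feat_2', 'feat_13')
```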

regressor.write_train_results_to_db()
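The per-job results can be stored with Python's built-in sqlite3 module. The table and column names below are a hypothetical sketch, not LIEME's actual schema:

```python
import pickle
import sqlite3

# Hypothetical schema sketch; LIEME's real table layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS train_results (
           job_id INTEGER PRIMARY KEY,
           model BLOB,           -- pickled best pipeline
           feature_subset TEXT,  -- e.g. comma-separated feature names
           cv_score REAL         -- cross-validation score of the best model
       )"""
)

# A stand-in "model": any picklable object works for the sketch.
best_model = {"pipeline": "tpot-best"}
conn.execute(
    "INSERT INTO train_results VALUES (?, ?, ?, ?)",
    (10, pickle.dumps(best_model), "feat_0,feat_1,feat_2,feat_13", 0.87),
)

row = conn.execute(
    "SELECT job_id, feature_subset, cv_score FROM train_results"
).fetchone()
print(row)  # (10, 'feat_0,feat_1,feat_2,feat_13', 0.87)
```

Storing one row per job_id makes it easy to later select only the high-performing models (e.g. by cv_score) for the prediction-averaging step.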

The performance of new materials can be predicted after training the models.

predictions = regressor.test()

Make sure x_train, y_train, and x_test are available as .pkl files when you run the code above. If not, pass x_train and y_train to train() and x_test to test() directly.
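If the .pkl files do not exist yet, they can be written with the standard pickle module. The nested lists below are placeholders for real feature and target tables; only the file names follow the convention mentioned above:

```python
import pickle

# Placeholder data standing in for real feature/target tables.
x_train = [[0.1, 0.2], [0.3, 0.4]]
y_train = [150.0, 210.0]  # e.g. measured capacities
x_test = [[0.5, 0.6]]

# Write each object to a .pkl file with the expected name.
for name, obj in [("x_train", x_train), ("y_train", y_train), ("x_test", x_test)]:
    with open(f"{name}.pkl", "wb") as fh:
        pickle.dump(obj, fh)

# Round-trip check
with open("x_train.pkl", "rb") as fh:
    print(pickle.load(fh))  # [[0.1, 0.2], [0.3, 0.4]]
```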

For advanced usage, refer to the detailed documentation in the respective modules.

Download files

Download the file for your platform.

Source Distribution

lieme-0.2.0.tar.gz (279.8 kB)

Uploaded Source

Built Distribution


lieme-0.2.0-py3-none-any.whl (27.1 kB)

Uploaded Python 3

