
A Python framework for invariant RANS turbulence closure.

Project description

IDeaL_RCF

An Invariant Deep Learning RANS Closure Framework that provides a unified way to interact with A curated dataset for data-driven turbulence modelling by McConkey et al. It covers data loading, preprocessing, model training and experimentation, inference, evaluation, and integration with OpenFOAM via exporting and post-processing OpenFOAM files.

The framework uses TensorFlow and Keras for the machine learning operations and scikit-learn for metrics generation. Plotting is done using Matplotlib.

The provided models leverage Galilean invariance when predicting the anisotropy tensor and an eddy viscosity, which can then be injected into a converged RANS simulation using OpenFOAM v2006 so that it converges towards the DNS velocity field.
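Galilean invariance in TBNN-style models is typically obtained through Pope's general effective-viscosity hypothesis: the anisotropy tensor is expanded in an integrity basis of tensors built from the normalized mean strain-rate and rotation-rate tensors, and the network predicts only the scalar coefficients of that basis. A sketch of the expansion, in Pope's standard notation (not reproduced verbatim from this package):

```latex
b_{ij} \;=\; \sum_{n=1}^{10} g^{(n)}\!\left(\lambda_1,\dots,\lambda_5\right)\, T^{(n)}_{ij},
\qquad
T^{(1)} = \mathbf{S}, \quad
T^{(2)} = \mathbf{S}\boldsymbol{\Omega} - \boldsymbol{\Omega}\mathbf{S}, \quad
T^{(3)} = \mathbf{S}^2 - \tfrac{1}{3}\operatorname{tr}\!\left(\mathbf{S}^2\right)\mathbf{I}, \;\dots
```

Here S and Ω are the non-dimensionalized mean strain-rate and rotation-rate tensors and λ1, …, λ5 their joint invariants. Because every basis tensor transforms correctly under a Galilean transformation, any prediction assembled this way inherits the invariance by construction.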

The physics behind this framework can be found here.

Support for SSTBNNZ (a semi-supervised zonal approach) will be made available in the future.

Installation

conda create --name ML_Turb python=3.9
conda activate ML_Turb
pip install ideal-rcf

Downloading the dataset

The original dataset can be downloaded directly from Kaggle:

kaggle datasets download -d ryleymcconkey/ml-turbulence-dataset
mkdir ml-turbulence-dataset
unzip ml-turbulence-dataset.zip -d ml-turbulence-dataset

The expanded dataset can be included with:

gdown https://drive.google.com/uc?id=1rb2-7vJQtp_nLqxjmnGJI2aRQx8u9W6B
unzip a_3_1_2_NL_S_DNS_eV.zip -d ml-turbulence-dataset/komegasst

Usage

The package is structured around three core objects: CaseSet, DataSet and FrameWork. A series of other modules provide extended functionality, such as evaluation, visualization and integration with OpenFOAM, all of which interact with a CaseSet object. Before starting, make sure that A curated dataset for data-driven turbulence modelling by McConkey et al. is present on your system. The version used in the present work was augmented using these tools and can be found here.

CaseSet

A CaseSet must be created via a SetConfig object, which contains the parameters to be loaded, such as features and labels.

from ideal_rcf.dataloader.config import SetConfig
from ideal_rcf.dataloader.caseset import CaseSet

set_config = SetConfig(...)
caseset_obj = CaseSet('PHLL_case_1p2', set_config=set_config)

View the creating_casesets.ipynb example for more.

DataSet

A DataSet receives the same type of SetConfig as a CaseSet but handles additional parameters, such as trainset, valset and testset, which are used to split the DataSet into the sets required for supervised training. The DataSet object fits and stores the scalers built from the trainset.

from ideal_rcf.dataloader.config import SetConfig
from ideal_rcf.dataloader.dataset import DataSet

set_config = SetConfig(...)
dataset_obj = DataSet(set_config=set_config)
train, val, test = dataset_obj.split_train_val_test()

View the creating_datasets.ipynb example for more.
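The fit-on-trainset behaviour matters: fitting scalers on validation or test data leaks their statistics into training. A generic illustration of the pattern the DataSet follows, written directly against scikit-learn rather than the package's internal API:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(loc=3.0, scale=2.0, size=(100, 4))

# Split: 60% train, 20% val, 20% test
train, val, test = features[:60], features[60:80], features[80:]

# Fit the scaler on the training split only ...
scaler = StandardScaler().fit(train)

# ... then reuse the same statistics for every split
train_s, val_s, test_s = (scaler.transform(x) for x in (train, val, test))

# Only the training split is exactly standardized;
# val/test are transformed with the train statistics.
```
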

FrameWork

The FrameWork receives a ModelConfig object, which determines the model to be used. Currently, three models are supported:

  1. TBNN - Tensor Basis Neural Network - proposed originally by Ling et al. [paper] [code]
  2. eVTBNN - Effective Viscosity Tensor Basis Neural Network - proposed by ... [paper] [thesis] [wiki]
  3. OeVNLTBNN - Optimal Eddy Viscosity + Non-Linear Tensor Basis Neural Network:
    1. originally proposed by Wang et al. [paper]
    2. improved by McConkey et al. [paper] to always be non-negative
    3. expanded by ... [paper] [thesis] [wiki] to be coupled with the anisotropy tensor via the strain rate during training but decoupled for inference, with an eVTBNN for the non-linear term

Each model builds on the previous one: an eVTBNN is a TBNN combined with an eVNN, while the OeVNLTBNN is an eVTBNN paired with an oEVNN.
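The optimal-eddy-viscosity decomposition behind these models splits the anisotropy into a linear part, modelled through an eddy viscosity acting on the mean strain rate, and a non-linear residual handled by the tensor-basis network. In the notation common in the literature (a sketch, not reproduced verbatim from this package):

```latex
a_{ij} \;=\; \underbrace{-2\,\nu_t\, S_{ij}}_{\text{linear (eVNN / oEVNN)}} \;+\; \underbrace{a^{\perp}_{ij}}_{\text{non-linear (TBNN)}},
\qquad
\nu_t^{\,\text{opt}} \;=\; -\,\frac{a_{ij} S_{ij}}{2\, S_{kl} S_{kl}}
```

where the optimal eddy viscosity is the least-squares choice that minimizes the magnitude of the residual a⊥, leaving the smallest possible burden on the non-linear network.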

View the creating_models.ipynb example for more.

TBNN

from ideal_rcf.models.config import ModelConfig
from ideal_rcf.models.framework import FrameWork

TBNN_config = ModelConfig(
    layers_tbnn=layers_tbnn,
    units_tbnn=units_tbnn,
    features_input_shape=features_input_shape,
    tensor_features_input_shape=tensor_features_input_shape,
)
tbnn = FrameWork(TBNN_config)
tbnn.compile_models()
### access compiled model
print(tbnn.models.tbnn.summary())
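The configuration snippets in this section assume the hyperparameter variables are already defined. Illustrative values might look like the following (these are assumptions for demonstration, not recommended settings; the shapes follow the five-invariant, ten-tensor basis of the TBNN literature):

```python
# Network depth/width for the TBNN branch (illustrative only)
layers_tbnn = 3
units_tbnn = 150

# Input shapes (assumed): 5 scalar invariants per point,
# and 10 basis tensors, each a flattened 3x3 tensor
features_input_shape = (5,)
tensor_features_input_shape = (10, 9)
```
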

eVTBNN

from ideal_rcf.models.config import ModelConfig
from ideal_rcf.models.framework import FrameWork

eVTBNN_config = ModelConfig(
    layers_tbnn=layers_tbnn,
    units_tbnn=units_tbnn,
    features_input_shape=features_input_shape,
    tensor_features_input_shape=tensor_features_input_shape,
    layers_evnn=layers_evnn,
    units_evnn=units_evnn,
    tensor_features_linear_input_shape=tensor_features_linear_input_shape,
)
evtbnn = FrameWork(eVTBNN_config)
evtbnn.compile_models()
### access compiled model
print(evtbnn.models.evtbnn.summary())

OeVNLTBNN

from ideal_rcf.models.config import ModelConfig
from ideal_rcf.models.framework import FrameWork

OeVNLTBNN_config = ModelConfig(
    layers_tbnn=layers_tbnn,
    units_tbnn=units_tbnn,
    features_input_shape=features_input_shape,
    tensor_features_input_shape=tensor_features_input_shape,
    layers_evnn=layers_evnn,
    units_evnn=units_evnn,
    tensor_features_linear_input_shape=tensor_features_linear_input_shape,
    layers_oevnn=layers_oevnn,
    units_oevnn=units_oevnn,
    tensor_features_linear_oev_input_shape=tensor_features_linear_oev_input_shape,
    learning_rate=learning_rate,
    learning_rate_oevnn=learning_rate_oevnn,
)
oevnltbnn = FrameWork(OeVNLTBNN_config)
oevnltbnn.compile_models()
### after training, you can extract the oev model from the oevnn so that S_DNS is not required to run inference
### this is done automatically inside the train module
oevnltbnn.extract_oev()
### access compiled models
print(oevnltbnn.models.oevnn.summary())
print(oevnltbnn.models.nltbnn.summary())

Mixer

All models support the Mixer architecture, which is based on the concept introduced by Chen et al. in TSMixer: An All-MLP Architecture for Time Series Forecasting [code] and adapted to work with spatial features while preserving invariance. The architecture and explanation are available in the [wiki].

from ideal_rcf.models.config import ModelConfig, MixerConfig
from ideal_rcf.models.framework import FrameWork

tbnn_mixer_config = MixerConfig(
    features_mlp_layers=5,
    features_mlp_units=150
)

TBNN_config = ModelConfig(
    layers_tbnn=layers_tbnn,
    units_tbnn=units_tbnn,
    features_input_shape=features_input_shape,
    tensor_features_input_shape=tensor_features_input_shape,
    tbnn_mixer_config=tbnn_mixer_config
)
tbnn = FrameWork(TBNN_config)
tbnn.compile_models()
### access compiled model
print(tbnn.models.tbnn.summary())
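Conceptually, a TSMixer-style block alternates two small MLPs: one that mixes information across tokens (here, spatial samples) via a transpose, and one that mixes along the feature axis, each wrapped in a residual connection. A minimal NumPy sketch of that mixing pattern (an illustration of the idea, not the package's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # Two-layer MLP with ReLU, applied along the last axis
    return np.maximum(x @ w1, 0.0) @ w2

n_points, n_feats, hidden = 8, 5, 16
x = rng.normal(size=(n_points, n_feats))

# Point-mixing: transpose so the MLP mixes across spatial samples
w1p = rng.normal(size=(n_points, hidden))
w2p = rng.normal(size=(hidden, n_points))
x = x + mlp(x.T, w1p, w2p).T   # residual connection, shape preserved

# Feature-mixing: the MLP acts along the feature axis directly
w1f = rng.normal(size=(n_feats, hidden))
w2f = rng.normal(size=(hidden, n_feats))
x = x + mlp(x, w1f, w2f)

print(x.shape)  # (8, 5) -- the block never changes the tensor shape
```
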

train

oevnltbnn.train(dataset_obj, train, val)

inference

### the predictions are saved in the test obj
oevnltbnn.inference(dataset_obj, test)

evaluate

from ideal_rcf.infrastructure.evaluator import Evaluator
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

metrics_list = [mean_squared_error, r2_score, mean_absolute_error]
eval_instance = Evaluator(metrics_list)
eval_instance.calculate_metrics(test)
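The Evaluator applies each metric in the list to the true and predicted fields stored in the CaseSet. The equivalent direct computation with scikit-learn looks like this (a generic illustration with toy data, not the package's internals):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

# Toy ground-truth and prediction vectors (illustrative only)
y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = np.array([0.1, 0.9, 2.2, 2.8])

metrics_list = [mean_squared_error, r2_score, mean_absolute_error]
results = {m.__name__: m(y_true, y_pred) for m in metrics_list}
print(results)
```
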

export to openfoam

from ideal_rcf.foam.preprocess import FoamParser

### dump the predictions into OpenFOAM-compatible files
foam = FoamParser(PHLL_case_1p2) ### a CaseSet obj holding the predictions
foam.dump_predictions(dir_path)
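For context on what gets dumped: OpenFOAM reads fields from dictionary-style files with a FoamFile header, a dimensions entry, and an internalField list of per-cell values. A hand-rolled writer sketching that layout for a scalar field (an illustration of the file format, not FoamParser's actual output):

```python
def write_foam_scalar_field(name, values, path):
    """Write a minimal OpenFOAM volScalarField file (sketch of the format)."""
    header = (
        "FoamFile\n{\n"
        "    version     2.0;\n"
        "    format      ascii;\n"
        "    class       volScalarField;\n"
        f"    object      {name};\n"
        "}\n\n"
        "dimensions      [0 2 -1 0 0 0 0];\n\n"  # m^2/s, e.g. an eddy viscosity
        f"internalField   nonuniform List<scalar>\n{len(values)}\n(\n"
    )
    body = "\n".join(f"{v:.6e}" for v in values)
    with open(path, "w") as f:
        f.write(header + body + "\n);\n")

# e.g. two cell values for a predicted eddy viscosity field
write_foam_scalar_field("nut", [1.5e-5, 2.0e-5], "nut")
```
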

Examples

More use cases are covered in the examples directory:

  1. FrameWork Training
  2. Setting Up Cross Validation
  3. Train and Inference with Cross Validation
  4. Inference on loaded DataSet and FrameWork, and exporting to OpenFOAM
  5. Post-processing resulting foam files

OpenFOAM Integration

The solvers and configurations used for injecting the predictions are available here.
