AI and ML workflows module for scientific digital twins.


See the latest version of our docs for a quick overview of this platform for advanced AI/ML workflows in digital twin applications.



  • Linux environment. Windows and macOS have not been tested.

Python virtual environment

Depending on your environment, there are different ways to select a specific python version.

Laptop or GPU node

If you are working on a laptop or on a simple on-prem setup, you could consider using pyenv. See the installation instructions. If you are using pyenv, make sure to read this.

HPC environment

On HPC systems, dependencies are commonly loaded using Environment Modules or Lmod. Contact the system administrator to learn how to load the proper Python modules.

On JSC, we activate the required modules in this way:

ml --force purge
ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA
ml Python CMake HDF5 PnetCDF libaio mpi4py

Install itwinai

Install itwinai and its dependencies using the following commands, and follow the instructions:

# Create a python virtual environment and activate it
$ python -m venv ENV_NAME
$ source ENV_NAME/bin/activate

# Install itwinai inside the environment
(ENV_NAME) $ export ML_FRAMEWORK="pytorch" # or "tensorflow"
(ENV_NAME) $ curl -fsSL | bash

The ML_FRAMEWORK environment variable controls whether you are installing itwinai for PyTorch or TensorFlow.
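As a rough sketch of how an installer script might branch on that variable (the extras names `torch` and `tf` are hypothetical here, not confirmed from the project):

```shell
# Hypothetical sketch: pick install extras based on ML_FRAMEWORK
ML_FRAMEWORK="${ML_FRAMEWORK:-pytorch}"  # default to pytorch when unset
case "$ML_FRAMEWORK" in
  pytorch)    EXTRAS="torch" ;;
  tensorflow) EXTRAS="tf" ;;
  *) echo "Unsupported ML_FRAMEWORK: $ML_FRAMEWORK" >&2; exit 1 ;;
esac
echo "Would run: pip install itwinai[$EXTRAS]"
```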

itwinai depends on Horovod, which requires CMake >= 3.13 and other build packages. Make sure they are installed in your environment before proceeding.
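To sanity-check the CMake version before installing, you could parse the output of `cmake --version`; a small sketch (the helper functions are ours, not part of itwinai):

```python
import re
import subprocess


def cmake_version(text: str) -> tuple:
    """Extract (major, minor, patch) from `cmake --version` output."""
    match = re.search(r"cmake version (\d+)\.(\d+)\.(\d+)", text)
    if match is None:
        raise ValueError("could not parse CMake version")
    return tuple(int(part) for part in match.groups())


def check_cmake(min_version=(3, 13)):
    """Run `cmake --version` and fail if it is older than min_version."""
    out = subprocess.run(
        ["cmake", "--version"], capture_output=True, text=True
    ).stdout
    version = cmake_version(out)
    if version < min_version:
        raise SystemExit(f"CMake {min_version} or newer required, found {version}")
    return version
```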

Installation for developers

If you are contributing to this repository, please continue below for more advanced instructions.

[!WARNING] Branch protection rules are applied to all branches whose names match this pattern: [dm][ea][vi]* . When creating new branches, avoid names matching that pattern; otherwise branch protection rules will block direct pushes to them.
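Treating that pattern as a shell-style glob (the interpretation GitHub branch protection uses), you can check candidate branch names locally; a quick sketch:

```python
from fnmatch import fnmatch

# Pattern from the branch protection rules: first char d/m,
# second char e/a, third char v/i, then anything (so "dev" and
# "main" match, "feature/foo" does not).
PATTERN = "[dm][ea][vi]*"

for name in ["dev", "main", "develop", "feature/foo", "fix/bar"]:
    status = "protected" if fnmatch(name, PATTERN) else "ok"
    print(f"{name}: {status}")
```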

Install itwinai environment

Regardless of how you loaded your Python version, once it is active you can create the virtual environments using our pre-made Makefile:

make torch-env # or make torch-env-cpu
make tensorflow-env # or make tensorflow-env-cpu

# Juelich supercomputer
make torch-gpu-jsc
make tf-gpu-jsc



TensorFlow

# Install TensorFlow 2.13
make tensorflow-env

# Activate env
source .venv-tf/bin/activate

A CPU-only version is available at the target tensorflow-env-cpu.

PyTorch (+ Lightning)


# Install PyTorch + lightning
make torch-env

# Activate env
source .venv-pytorch/bin/activate

A CPU-only version is available at the target torch-env-cpu.

Development environment

This is for developers only. To set it up, install the itwinai package in editable mode with the dev extra:

pip install -e .[dev]

Test with pytest

Do this only if you are a developer wanting to test your code with pytest.

First, you need to create virtual environments for both torch and tensorflow. For instance, you can use:

make torch-env-cpu
make tensorflow-env-cpu

To select the names of the torch and tf environments, you can set the following environment variables, which allow you to run the tests in environments with custom names different from the defaults .venv-pytorch and .venv-tf.

export TORCH_ENV="my_torch_env"
export TF_ENV="my_tf_env"
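A test runner would then resolve the environment names roughly like this (a sketch, not itwinai's actual test code):

```python
import os

# Fall back to the default venv names when the variables are unset
torch_env = os.environ.get("TORCH_ENV", ".venv-pytorch")
tf_env = os.environ.get("TF_ENV", ".venv-tf")

print(f"torch tests will use: {torch_env}")
print(f"tensorflow tests will use: {tf_env}")
```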

Functional tests (marked with pytest.mark.functional) are executed under the /tmp/pytest location to guarantee that they run in a clean environment.

To run functional tests use:

pytest -v tests/ -m "functional"
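The `functional` marker would typically be registered in the pytest configuration so that `-m "functional"` selects it without warnings. A sketch of such a fragment (file name and wording assumed, not copied from the repo):

```ini
# pytest.ini (assumed): register the custom marker used above
[pytest]
markers =
    functional: tests that run end-to-end under /tmp/pytest
```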

To run all tests on itwinai package:

make test

Run tests in JSC virtual environments:

make test-jsc

Micromamba installation (deprecated)

To manage Conda environments we use Micromamba, a lightweight version of Conda.

It is suggested to refer to the Manual installation guide.

Micromamba can consume a lot of disk space when building environments, because downloaded packages are cached on the local filesystem. To clear the cache, run micromamba clean -a.

By default, Micromamba keeps its data under $HOME. On some systems $HOME has limited storage, so it is wiser to install Micromamba in a location with more space by changing the $MAMBA_ROOT_PREFIX variable. A complete installation example for Linux, overriding the default $MAMBA_ROOT_PREFIX, is shown below:

cd $HOME

# Download micromamba (This command is for Linux Intel (x86_64) systems. Find the right one for your system!)
curl -Ls | tar -xvj bin/micromamba

# Install micromamba in a custom directory
./bin/micromamba shell init $MAMBA_ROOT_PREFIX

# To invoke micromamba from Makefile, you need to add explicitly to $PATH
echo 'PATH="$(dirname $MAMBA_EXE):$PATH"' >> ~/.bashrc

Reference: Micromamba installation guide.

Project details

Download files

Source distribution: itwinai-0.2.1.tar.gz (717.5 kB)

Built distribution: itwinai-0.2.1-py3-none-any.whl (55.6 kB)
