
The Well: 15TB of Physics Simulations

Welcome to the Well, a large-scale collection of machine learning datasets containing numerical simulations of a wide variety of spatiotemporal physical systems. The Well draws on the expertise of domain scientists and numerical software developers to provide 15TB of data across 16 datasets, covering diverse domains such as biological systems, fluid dynamics, acoustic scattering, and magneto-hydrodynamic simulations of extra-galactic fluids and supernova explosions. These datasets can be used individually or as part of a broader benchmark suite to accelerate research in machine learning and the computational sciences.

Tap into the Well

Once the Well package is installed and the data downloaded, you can use the datasets in your training pipeline.

from the_well.data import WellDataset
from torch.utils.data import DataLoader

trainset = WellDataset(
    well_base_path="path/to/base",
    well_dataset_name="name_of_the_dataset",
    well_split_name="train"
)
train_loader = DataLoader(trainset)

for batch in train_loader:
    ...

For more information regarding the interface, please refer to the API and the tutorials.

Installation

If you plan to use the Well datasets to train or evaluate deep learning models, we recommend using a machine with sufficient computing resources. We also recommend installing the Well in a fresh Python (>=3.10) environment. For instance, with venv:

python -m venv path/to/env
source path/to/env/bin/activate

From PyPI

The Well package can be installed directly from PyPI.

pip install the_well

From Source

It can also be installed from source. For this, clone the repository and install the package with its dependencies.

git clone https://github.com/PolymathicAI/the_well
cd the_well
pip install .

Depending on your acceleration hardware, you can specify --extra-index-url to install the relevant PyTorch version. For example, use

pip install . --extra-index-url https://download.pytorch.org/whl/cu121

to install the dependencies built for CUDA 12.1.

Benchmark Dependencies

If you want to run the benchmarks, you should install additional dependencies.

pip install the_well[benchmark]

Downloading the Data

The Well datasets range between 6.9GB and 5.1TB of data each, for a total of 15TB for the full collection. Ensure that your system has enough free disk space to accommodate the datasets you wish to download.
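Before downloading, it is worth confirming that the target filesystem has enough room, for example with the standard df utility:

```shell
# Show free space (human-readable) on the filesystem holding the current directory.
# Replace "." with your intended download path, e.g. path/to/base.
df -h .
```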

Once the_well is installed, you can use the the-well-download command to download any dataset of The Well.

the-well-download --base-path path/to/base --dataset active_matter --split train

If --dataset and --split are omitted, all datasets and splits will be downloaded. This could take a while!

Streaming from Hugging Face

Most of the Well datasets are also hosted on Hugging Face. Data can be streamed directly from the hub using the following code.

from the_well.data import WellDataset
from torch.utils.data import DataLoader

# The following line may take a couple of minutes to instantiate the datamodule
trainset = WellDataset(
    well_base_path="hf://datasets/polymathic-ai/",  # access from HF hub
    well_dataset_name="active_matter",
    well_split_name="train",
)
train_loader = DataLoader(trainset)

for batch in train_loader:
    ...

For better performance in large training runs, we advise downloading the data locally rather than streaming it over the network.

Benchmark

Train Models on the Well

The repository allows benchmarking surrogate models on the different datasets that compose the Well. Some state-of-the-art models are already implemented in models, while dataset classes handle the raw data of the Well. The benchmark relies on a training script that uses hydra to instantiate various classes (e.g. dataset, model, optimizer) from configuration files.

For instance, to run the training script with the default FNO architecture on the active matter dataset, launch the following commands:

cd the_well/benchmark
python train.py experiment=fno server=local data=active_matter

Each argument corresponds to a specific configuration file. In the command above, server=local tells the training script to use local.yaml, which simply declares the relative path to the data. The configuration can be overridden on the command line or extended with new YAML files. Please refer to the hydra documentation for details on editing configurations.
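As a hypothetical sketch only (the actual schema is defined by the repository's configuration files, so the key name below is an assumption), a server configuration such as local.yaml might contain little more than the data location:

```yaml
# Illustrative server config; the real key names may differ from the repository's schema.
data_dir: path/to/base
```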

You can use this command within an sbatch script to launch the training with Slurm.
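For example, a minimal Slurm submission script might look like the following sketch; the #SBATCH directives shown are standard Slurm options, but the resource values and environment path are placeholders to adapt to your cluster:

```shell
#!/bin/bash
#SBATCH --job-name=fno-active-matter
#SBATCH --gpus=1
#SBATCH --time=24:00:00

# Activate the environment where the_well is installed.
source path/to/env/bin/activate

cd the_well/benchmark
python train.py experiment=fno server=local data=active_matter
```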

Load Benchmarked Model Checkpoints

The models benchmarked in the original paper of the Well were designed as simple baselines. They should not be considered state-of-the-art. We hope that the community will build upon these results to develop better architectures for PDE surrogate modeling.

Most of the model checkpoints are available on Hugging Face. To load a specific checkpoint, follow the example below, which loads the FNO model trained on the active_matter dataset.

from the_well.benchmark.models import FNO

model = FNO.from_pretrained("polymathic-ai/FNO-active_matter")

Citation

This project has been led by the Polymathic AI organization, in collaboration with researchers from the Flatiron Institute, University of Colorado Boulder, University of Cambridge, New York University, Rutgers University, Cornell University, University of Tokyo, Los Alamos National Laboratory, University of California, Berkeley, Princeton University, CEA DAM, and University of Liège.

If you find this project useful for your research, please consider citing

@article{ohana2024well,
  title={The well: a large-scale collection of diverse physics simulations for machine learning},
  author={Ohana, Ruben and McCabe, Michael and Meyer, Lucas and Morel, Rudy and Agocs, Fruzsina and Beneitez, Miguel and Berger, Marsha and Burkhart, Blakesley and Dalziel, Stuart and Fielding, Drummond and others},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={44989--45037},
  year={2024}
}

Contact

For questions regarding this project, please contact Ruben Ohana and Michael McCabe at {rohana,mmccabe}@flatironinstitute.org.

Bug Reports and Feature Requests

To report a bug (in the data or the code), request a feature, or simply ask a question, you can open an issue on the repository.
