Library of modular domain generalization for deep learning

Project description

DomainLab: modular python package for training domain invariant neural networks

Distribution shifts, domain generalization and DomainLab

Neural networks trained on data from a specific distribution (domain) usually fail to generalize to novel distributions (domains). Domain generalization aims to learn domain-invariant features by utilizing data from multiple domains (data sites, cohorts, batches, vendors) so that the learned features generalize to new, unseen domains (distributions).

DomainLab is a software platform that implements state-of-the-art domain generalization algorithms. It is designed around maximal decoupling of its software components, which enables maximal code reuse.

DomainLab

DomainLab decouples the following concepts or objects:

  • task: In DomainLab, a task is a container for datasets from different domains (e.g. from distributions $D_1$ and $D_2$). A task offers a static protocol to evaluate the generalization performance of a neural network: which dataset(s) are used for training and which dataset(s) for testing.
  • neural network: a map $\phi$ from the input data to the feature space and a map $\varphi$ from the feature space to the output $\hat{y}$ (e.g. a decision variable).
  • model: a structural risk in the form of $\ell() + \mu R()$ where
    • $\ell(Y, \hat{y}=\varphi(\phi(X)))$ is the task-specific empirical loss (e.g. cross entropy for a classification task).
    • $R(\phi(X))$ is the penalty loss that encourages domain-invariant feature extraction via $\phi$.
    • $\mu$ is the multiplier for the corresponding penalty term.
  • trainer: an object that guides the data flow to the model and appends further domain-invariant losses, such as inter-domain feature alignment.
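The structural risk $\ell() + \mu R()$ above can be sketched as a toy computation; all function names here are illustrative stand-ins, not DomainLab's actual API:

```python
# Toy sketch of the structural risk ell() + mu * R().
# phi, varphi, task_loss, and penalty are hypothetical placeholders.

def phi(x):
    """Feature extractor: input -> feature vector."""
    return [xi * 0.5 for xi in x]

def varphi(feat):
    """Classifier head: feature vector -> decision variable y_hat."""
    return sum(feat)

def task_loss(y, y_hat):
    """ell(): task-specific empirical loss (squared error as a stand-in)."""
    return (y - y_hat) ** 2

def penalty(feat):
    """R(): penalty on features to encourage domain invariance."""
    return sum(f ** 2 for f in feat)

def structural_risk(x, y, mu=0.1):
    feat = phi(x)
    return task_loss(y, varphi(feat)) + mu * penalty(feat)

print(structural_risk([1.0, 2.0], 1.0))  # 0.25 + 0.1 * 1.25 = 0.375
```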

We offer detailed documentation on how these models and trainers work in our documentation page: https://marrlab.github.io/DomainLab/

DomainLab makes it possible to combine models with models and trainers with models or trainers in a decorator pattern, e.g. the line of code Trainer A(Trainer B(Model C(Model D(network E), network E, network F))), which corresponds to $\ell() + \mu_a R_a() + \mu_b R_b() + \mu_c R_c() + \mu_d R_d()$, where Model C and Model D share neural network E, but Model C has an extra neural network F. All models share the same neural network for feature extraction, but can have different auxiliary networks for $R()$.
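This nesting can be sketched as a minimal decorator chain, where each wrapper adds its own $\mu R()$ term to the loss of the component it wraps. Class and attribute names here are hypothetical illustrations, not DomainLab's API:

```python
# Each wrapper (model or trainer) contributes mu * R() on top of the
# loss of the component it decorates; names are illustrative only.

class Component:
    def __init__(self, mu, penalty, inner=None):
        self.mu = mu            # multiplier for this component's penalty
        self.penalty = penalty  # stand-in for R() evaluated on features
        self.inner = inner      # the wrapped component, if any

    def loss(self, base_loss):
        total = self.inner.loss(base_loss) if self.inner else base_loss
        return total + self.mu * self.penalty

model_d = Component(mu=0.1, penalty=2.0)                 # ell() + mu_d R_d()
model_c = Component(mu=0.2, penalty=3.0, inner=model_d)  # + mu_c R_c()
trainer = Component(mu=0.3, penalty=1.0, inner=model_c)  # + mu_b R_b()

print(trainer.loss(base_loss=1.0))  # 1.0 + 0.1*2.0 + 0.2*3.0 + 0.3*1.0 = 2.1
```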

Getting started

Installation

For the development version on GitHub, see Installation and Dependencies handling

We also offer a PyPI version at https://pypi.org/project/domainlab/, which can be installed via pip install domainlab; we recommend creating a virtual environment for it.

Task specification

We offer various ways for the user to specify a scenario to evaluate generalization performance via training on a limited number of datasets. See details in Task Specification

Example and usage

Available arguments for commandline

The following command shows which arguments/hyperparameters/multipliers can be set by the user and which model they are associated with:

python main_out.py --help

or

domainlab --help

Command line configuration file

domainlab -c ./examples/conf/vlcs_diva_mldg_dial.yaml (if you installed via pip)

or, if you cloned the DomainLab code repository,

python main_out.py -c ./examples/conf/vlcs_diva_mldg_dial.yaml

where the configuration file below can be downloaded here

te_d: caltech                       # domain name of test domain
tpath: examples/tasks/task_vlcs.py  # python file path to specify the task
bs: 2                               # batch size
model: dann_diva                    # combine model DANN with DIVA
epos: 1                             # number of epochs
trainer: mldg_dial                  # combine trainer MLDG and DIAL
gamma_y: 700000.0                   # hyperparameter of diva
gamma_d: 100000.0                   # hyperparameter of diva
npath: examples/nets/resnet.py      # neural network for class classification
npath_dom: examples/nets/resnet.py  # neural network for domain classification
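The underscore-joined values in the configuration above name component combinations: model dann_diva combines model DANN with DIVA, and trainer mldg_dial combines trainer MLDG with DIAL. A tiny illustrative sketch of splitting such names into their components (a toy, not DomainLab's actual configuration parser):

```python
# Toy illustration: split underscore-joined names into the components
# they combine, e.g. "dann_diva" -> DANN decorating DIVA.
conf_lines = [
    "te_d: caltech",
    "model: dann_diva",
    "trainer: mldg_dial",
]
conf = dict(line.split(": ", 1) for line in conf_lines)
print(conf["model"].split("_"))    # ['dann', 'diva']
print(conf["trainer"].split("_"))  # ['mldg', 'dial']
```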

See details in Command line usage

or program against the DomainLab API

See example here: Transformer as feature extractor, decorate JIGEN with DANN, training using MLDG decorated by DIAL

Benchmark different methods

DomainLab provides powerful benchmark functionality. To benchmark several algorithms (combinations of neural networks, models, trainers, and associated hyperparameters), a single-line command along with a benchmark configuration file is sufficient. See details in the benchmarks documentation and tutorial

One can simply run bash run_benchmark_slurm.sh your_benchmark_configuration.yaml to launch the experiments specified in the configuration.

For example, the following result (without any augmentation such as flips) is for the PACS dataset using ResNet. The reader should note that the choice of neural network, whether it is pre-trained or not, and the kind of preprocessing and augmentation used can all lead to very different result distributions; decoupling these factors is one of the features DomainLab provides.

Benchmark results plot generated from DomainLab, where each rectangle represents one model-trainer combination, each bar inside a rectangle represents a unique hyperparameter index associated with that combination, and each dot represents a random seed.

Citation

Source: https://arxiv.org/pdf/2403.14356.pdf

@misc{sun2024domainlab,
  title={DomainLab: A modular Python package for domain generalization in deep learning},
  author={Sun, Xudong and Feistner, Carla and Gossmann, Alexej and Schwarz, George and Umer, Rao Muhammad and Beer, Lisa and Rockenschaub, Patrick and Shrestha, Rahul Babu and Gruber, Armin and Chen, Nutan and others},
  journal={arXiv preprint arXiv:2403.14356},
  year={2024}
}
