
FusionBench: A Comprehensive Benchmark/ToolKit of Deep Model Fusion


[!TIP]
Documentation is available at tanganke.github.io/fusion_bench/.

Overview

FusionBench is a benchmark suite designed to evaluate the performance of various deep model fusion techniques. It aims to provide a comprehensive comparison of different methods on a variety of datasets and tasks.

Projects based on FusionBench and news from the community (in descending order of date):

Hongling Zheng, Li Shen, Anke Tang, Yong Luo, et al. Learn From Model Beyond Fine-Tuning: A Survey. Accepted for publication in Nature Machine Intelligence. Nov 2024. https://arxiv.org/abs/2310.08184

Foundation models (FM) have demonstrated remarkable performance across a wide range of tasks (especially in the fields of natural language processing and computer vision), primarily attributed to their ability to comprehend instructions and access extensive, high-quality data. This not only showcases their current effectiveness but also sets a promising trajectory towards the development of artificial general intelligence. Unfortunately, due to multiple constraints, the raw data of the model used for large model training are often inaccessible, so the use of end-to-end models for downstream tasks has become a new research trend, which we call Learn From Model (LFM) in this article. LFM focuses on the research, modification, and design of FM based on the model interface, so as to better understand the model structure and weights (in a black box environment), and to generalize the model to downstream tasks. The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing. Each category encompasses a repertoire of methods and strategies that aim to enhance the capabilities and performance of FM. This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM, in order to help readers better understand the current research status and ideas. To conclude, we summarize the survey by highlighting several critical areas for future exploration and addressing open issues that require further attention from the research community. The relevant papers we investigated in this article can be accessed at https://github.com/ruthless-man/Awesome-Learn-from-Model.

Li Shen, Anke Tang, Enneng Yang, et al. Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging. Oct 2024. https://github.com/EnnengYang/Efficient-WEMoE
Jinluan Yang et al. Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace. Oct 2024. http://arxiv.org/abs/2410.13910
Anke Tang et al. SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models. Aug, 2024. http://arxiv.org/abs/2408.10174

Example notebooks can be found at examples/smile_upscaling.

Installation

Install from PyPI:

pip install fusion-bench

or install the latest development version from the GitHub repository:

git clone https://github.com/tanganke/fusion_bench.git
cd fusion_bench

pip install -e . # install the package in editable mode

Introduction to Deep Model Fusion

Deep model fusion is a technique that merges, ensembles, or fuses multiple deep neural networks into a unified model. It can be used to improve the performance and robustness of a model, or to combine the strengths of different models, such as fusing multiple task-specific models into a single multi-task model. For a more detailed introduction to deep model fusion, you can refer to W. Li, 2023, 'Deep Model Fusion: A Survey'. We also provide a brief overview of deep model fusion in our documentation. In this benchmark, we evaluate the performance of different fusion methods on a variety of datasets and tasks.
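
As a concrete illustration (not FusionBench's own implementation), the simplest fusion method, weight averaging, can be sketched in a few lines of PyTorch. It assumes all models share an identical architecture:

import torch
from torch import nn

def average_weights(models: list[nn.Module]) -> nn.Module:
    """Fuse models with identical architectures by averaging their parameters."""
    state_dicts = [m.state_dict() for m in models]
    averaged = {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    fused = models[0]
    fused.load_state_dict(averaged)  # values are cast back to each parameter's dtype on copy
    return fused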

Project Structure

The project is structured as follows:

  • fusion_bench/: the main package of the benchmark.
    • method: contains the implementation of the fusion methods.

      naming convention: fusion_bench/method/{method_name}/{variant}.py contains the implementation of the specific method or its variants. For example, fusion_bench/method/regmean/clip_regmean.py contains the implementation of the RegMean algorithm for CLIP vision models.

    • modelpool: contains the implementation of the model pool, responsible for managing the models and datasets to be loaded.
    • taskpool: contains the implementation of the task pool, responsible for evaluating the performance of models returned by the algorithm.
  • config/: configuration files for the benchmark. We use Hydra to manage the configurations.
    • method: configuration files for the fusion methods.

      naming convention: config/method/{method_name}/{variant}.yaml contains the configuration for the specific method or its variants.

    • modelpool: configuration files for the model pool.
    • taskpool: configuration files for the task pool.
    • model: configuration files for the models.
    • dataset: configuration files for the datasets.
  • docs/: documentation for the benchmark. We use mkdocs to generate the documentation. Start the documentation server locally with mkdocs serve; the required packages can be installed with pip install -r mkdocs-requirements.txt (see the snippet after this list).
  • examples/: example scripts for running some of the experiments.

    naming convention: examples/{method_name}/ contains files such as bash scripts and Jupyter notebooks for the specific method.

  • tests/: unit tests for the benchmark.
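
For example, to preview the documentation locally:

pip install -r mkdocs-requirements.txt  # install the documentation dependencies
mkdocs serve  # serve the docs locally (by default at http://127.0.0.1:8000)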

A Unified Command Line Interface

The fusion_bench command-line interface is a powerful tool for researchers and practitioners in the field of model fusion. It provides a streamlined way to experiment with various fusion algorithms, model combinations, and evaluation tasks. By leveraging Hydra's configuration management, fusion_bench offers flexibility in setting up experiments and reproducibility in results. The CLI's design allows for easy extension to new fusion methods, model types, and tasks, making it a versatile platform for advancing research in model fusion techniques.

Read the CLI documentation for more information.
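
For illustration, a typical run composes a fusion method, a model pool, and a task pool via Hydra-style overrides. The configuration names below are hypothetical placeholders; see the config/ directory for the options actually available:

fusion_bench \
  method=a_fusion_method \
  modelpool=a_model_pool \
  taskpool=a_task_pool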

Implement your own model fusion algorithm

First, create a new Python file for the algorithm in the fusion_bench/method directory. Following the naming convention, the file should be named {method_name_or_class}/{variant}.py.

from fusion_bench import BaseModelFusionAlgorithm, BaseModelPool

class DerivedModelFusionAlgorithm(BaseModelFusionAlgorithm):
    """
    An example of a derived model fusion algorithm.
    """

    # _config_mapping maps attribute names to the corresponding keys in the configuration file.
    # This is optional and can be used to serialize the object back to a configuration file.
    # `self.config.hyperparam_1` will be mapped to the attribute `hyperparam_attr_1`.
    _config_mapping = BaseModelFusionAlgorithm._config_mapping | {
        "hyperparam_attr_1": "hyperparam_1",
        "hyperparam_attr_2": "hyperparam_2",
    }

    def __init__(self, hyperparam_1, hyperparam_2, **kwargs):
        self.hyperparam_attr_1 = hyperparam_1
        self.hyperparam_attr_2 = hyperparam_2
        super().__init__(**kwargs)

    def run(self, modelpool: BaseModelPool):
        # `modelpool` is an object that is responsible for managing the models and datasets to be loaded.
        # Implement the fusion algorithm here.
        raise NotImplementedError(
            "DerivedModelFusionAlgorithm.run() is not implemented."
        )

A corresponding configuration file should be created to specify the class and hyperparameters of the algorithm. Here we assume the configuration file is placed at config/method/your_algorithm_config.yaml.

[!NOTE] In fact, you can place your implementation anywhere you like, as long as the _target_ in the configuration file points to the correct class.

_target_: path_to_the_module.DerivedModelFusionAlgorithm

hyperparam_1: some_value
hyperparam_2: another_value
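
As a sketch of what happens under the hood (assuming the YAML above is saved at config/method/your_algorithm_config.yaml and that you construct the model pool yourself), the algorithm can also be instantiated directly with Hydra's instantiate utility:

from hydra.utils import instantiate
from omegaconf import OmegaConf

config = OmegaConf.load("config/method/your_algorithm_config.yaml")
algorithm = instantiate(config)  # resolves `_target_` to DerivedModelFusionAlgorithm
merged_model = algorithm.run(modelpool)  # `modelpool` is a BaseModelPool you provide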

Use the algorithm in FusionBench:

fusion_bench \
  method=your_algorithm_config \
  method.hyperparam_1=you_can_override_this \
  method.hyperparam_2=and_this \
  ... # other configurations

:rocket: Quick Start for Experienced Users

We provide a project template for quickly starting a new fusion algorithm implementation here: FusionBench Project Template.

Click on Use this template to initialize a new repository.

FusionBench Command Generator WebUI (for v0.1.x)

FusionBench Command Generator is a user-friendly web interface for generating FusionBench commands based on configuration files. It provides an interactive way to select and customize FusionBench configurations, making it easier to run experiments with different settings. Read more here.

FusionBench Command Generator Web Interface

Citation

If you find this benchmark useful, please consider citing our work:

@misc{tangFusionBenchComprehensiveBenchmark2024,
  title = {{{FusionBench}}: {{A Comprehensive Benchmark}} of {{Deep Model Fusion}}},
  shorttitle = {{{FusionBench}}},
  author = {Tang, Anke and Shen, Li and Luo, Yong and Hu, Han and Du, Bo and Tao, Dacheng},
  year = {2024},
  month = jun,
  number = {arXiv:2406.03280},
  eprint = {2406.03280},
  publisher = {arXiv},
  url = {http://arxiv.org/abs/2406.03280},
  archiveprefix = {arxiv},
  langid = {english},
  keywords = {Computer Science - Artificial Intelligence,Computer Science - Computation and Language,Computer Science - Machine Learning}
}
