# FusionBench: A Comprehensive Benchmark of Deep Model Fusion
> [!WARNING]
> This project is still in its testing phase and the API may change. Please report any issues you encounter.
> [!TIP]
> Documentation is available at https://tanganke.github.io/fusion_bench/.
## Overview
FusionBench is a benchmark suite designed to evaluate the performance of various deep model fusion techniques. It aims to provide a comprehensive comparison of different methods on a variety of datasets and tasks.
Projects based on FusionBench:

- Anke Tang et al. SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models. Aug 2024. http://arxiv.org/abs/2408.10174
  Example notebooks can be found at `examples/smile_upscaling`.
## Installation

Install from PyPI:

```shell
pip install fusion-bench
```

Or install the latest development version from the GitHub repository:

```shell
git clone https://github.com/tanganke/fusion_bench.git
cd fusion_bench
pip install -e .  # install the package in editable mode
```
## Introduction to Deep Model Fusion
Deep model fusion is a technique that merges, ensembles, or fuses multiple deep neural networks into a unified model. It can be used to improve the performance and robustness of a model, or to combine the strengths of different models, for example by fusing multiple task-specific models into a single multi-task model. For a more detailed introduction to deep model fusion, you can refer to W. Li, 2023, 'Deep Model Fusion: A Survey'. We also provide a brief overview of deep model fusion in our documentation. In this benchmark, we evaluate the performance of different fusion methods on a variety of datasets and tasks.
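As a minimal, framework-agnostic illustration of the "merging" flavor of fusion, the sketch below averages the parameters of several task-specific models. For simplicity the models are plain Python dictionaries mapping parameter names to lists of floats; real implementations operate on framework tensors (e.g. PyTorch state dicts), and the names here are illustrative rather than part of FusionBench's API.

```python
def average_models(state_dicts):
    """Merge models by element-wise averaging of their parameters.

    Each state dict maps a parameter name to a flat list of floats;
    all models are assumed to share the same architecture (same keys
    and the same parameter shapes).
    """
    if not state_dicts:
        raise ValueError("need at least one model to fuse")
    merged = {}
    for key in state_dicts[0]:
        # Gather this parameter from every model and average element-wise.
        values = [sd[key] for sd in state_dicts]
        merged[key] = [sum(column) / len(column) for column in zip(*values)]
    return merged


# Two toy "models" with a single weight vector each:
model_a = {"layer.weight": [1.0, 2.0, 3.0]}
model_b = {"layer.weight": [3.0, 4.0, 5.0]}
print(average_models([model_a, model_b]))  # {'layer.weight': [2.0, 3.0, 4.0]}
```

Simple weight averaging only makes sense when the models share a common architecture (and, in practice, a common pre-trained initialization); more sophisticated fusion methods relax these assumptions.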
## Project Structure

The project is structured as follows:

- `fusion_bench/`: the main package of the benchmark.
- `config/`: configuration files for the benchmark. We use Hydra to manage the configurations.
- `docs/`: documentation for the benchmark. We use mkdocs to generate the documentation. Start the documentation server locally with `mkdocs serve`. The required packages can be installed with `pip install -r mkdocs-requirements.txt`.
- `examples/`: example scripts for running some of the experiments.
- `tests/`: unit tests for the benchmark.
## A Unified Command Line Interface

The `fusion_bench` command-line interface is a powerful tool for researchers and practitioners in the field of model fusion. It provides a streamlined way to experiment with various fusion algorithms, model combinations, and evaluation tasks.
By leveraging Hydra's configuration management, fusion_bench offers flexibility in setting up experiments and reproducibility in results.
The CLI's design allows for easy extension to new fusion methods, model types, and tasks, making it a versatile platform for advancing research in model fusion techniques.
Read the CLI documentation for more information.
## Implement your own model fusion algorithm
```python
from fusion_bench.method import BaseModelFusionAlgorithm
from fusion_bench.modelpool import BaseModelPool


class DerivedModelFusionAlgorithm(BaseModelFusionAlgorithm):
    """
    An example of a derived model fusion algorithm.
    """

    # `_config_mapping` maps each attribute to the corresponding key in the
    # configuration file.
    _config_mapping = BaseModelFusionAlgorithm._config_mapping | {
        "hyperparam_attr_1": "hyperparam_1",
        "hyperparam_attr_2": "hyperparam_2",
    }

    def __init__(self, hyperparam_1, hyperparam_2, **kwargs):
        self.hyperparam_attr_1 = hyperparam_1
        self.hyperparam_attr_2 = hyperparam_2
        super().__init__(**kwargs)

    def run(self, modelpool: BaseModelPool):
        # `modelpool` is an object responsible for managing the models and
        # datasets to be loaded. Implement the fusion algorithm here.
        raise NotImplementedError(
            "DerivedModelFusionAlgorithm.run() is not implemented."
        )
```
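To see why a `_config_mapping` of attribute names to config keys is useful, here is a stripped-down, hypothetical sketch of how a base class could use such a mapping to serialize an algorithm's hyperparameters back into a configuration dictionary. This mirrors the pattern above but is not FusionBench's actual implementation; all names below are invented for illustration.

```python
class ConfigSerializable:
    """Toy base class: maps instance attributes to configuration keys."""

    _config_mapping: dict = {}

    def to_config(self):
        # Walk the mapping and read each attribute off the instance,
        # emitting it under its configuration-file key.
        return {key: getattr(self, attr) for attr, key in self._config_mapping.items()}


class ToyAlgorithm(ConfigSerializable):
    # Extend the parent mapping with this subclass's hyperparameters,
    # just as DerivedModelFusionAlgorithm does above.
    _config_mapping = ConfigSerializable._config_mapping | {
        "scaling_factor_attr": "scaling_factor",
    }

    def __init__(self, scaling_factor):
        self.scaling_factor_attr = scaling_factor


algo = ToyAlgorithm(scaling_factor=0.3)
print(algo.to_config())  # {'scaling_factor': 0.3}
```

The benefit of keeping the mapping explicit is that an instantiated algorithm can always be round-tripped back to a config file, which helps with experiment logging and reproducibility.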
A corresponding configuration file should be created to specify the class and hyperparameters of the algorithm. Here we assume the configuration file is placed at `config/method/your_algorithm_config.yaml`:

```yaml
_target_: path_to_the_module.DerivedModelFusionAlgorithm

hyperparam_1: some_value
hyperparam_2: another_value
```
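Hydra resolves the `_target_` key by importing the dotted path and calling it with the remaining keys as keyword arguments. A simplified re-implementation of that behavior looks roughly like the sketch below (illustrative only; in practice Hydra's own `hydra.utils.instantiate` handles this, including nested configs and interpolation). The example instantiates a standard-library class rather than a fusion algorithm so that it is self-contained.

```python
import importlib


def instantiate(config: dict):
    """Minimal Hydra-style instantiation: import `_target_` and call it
    with the remaining config entries as keyword arguments."""
    config = dict(config)  # copy so we can pop without mutating the caller's dict
    module_path, _, attr_name = config.pop("_target_").rpartition(".")
    target = getattr(importlib.import_module(module_path), attr_name)
    return target(**config)


# Example using a standard-library class in place of a fusion algorithm:
config = {"_target_": "fractions.Fraction", "numerator": 3, "denominator": 4}
frac = instantiate(config)
print(frac)  # 3/4
```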
Use the algorithm in FusionBench:

```shell
fusion_bench \
    method=your_algorithm_config \
    method.hyperparam_1=you_can_override_this \
    method.hyperparam_2=and_this \
    ... # other configurations
```
## FusionBench Command Generator WebUI (for v0.1.x)
FusionBench Command Generator is a user-friendly web interface for generating FusionBench commands based on configuration files. It provides an interactive way to select and customize FusionBench configurations, making it easier to run experiments with different settings. Read more here.
## Citation
If you find this benchmark useful, please consider citing our work:
```bibtex
@misc{tangFusionBenchComprehensiveBenchmark2024,
  title = {{{FusionBench}}: {{A Comprehensive Benchmark}} of {{Deep Model Fusion}}},
  shorttitle = {{{FusionBench}}},
  author = {Tang, Anke and Shen, Li and Luo, Yong and Hu, Han and Du, Bo and Tao, Dacheng},
  year = {2024},
  month = jun,
  number = {arXiv:2406.03280},
  eprint = {2406.03280},
  publisher = {arXiv},
  url = {http://arxiv.org/abs/2406.03280},
  archiveprefix = {arXiv},
  langid = {english},
  keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning}
}
```