
FedBench: A Federated Learning Benchmarking Framework

Website | Paper

A strategic framework for systematically benchmarking federated learning algorithms across diverse datasets and client distributions. Built for researchers and practitioners to make informed algorithm selection decisions based on rigorous comparative analysis.

Strategic Overview

FLBenchmark provides a comprehensive ecosystem for evaluating federated learning strategies in realistic scenarios. Our framework enables:

  • Strategic Algorithm Selection: Identify optimal algorithms for specific data distributions and application constraints
  • Performance Trade-off Analysis: Evaluate critical trade-offs between accuracy, communication efficiency, and privacy preservation
  • Heterogeneity Impact Assessment: Measure how different heterogeneity factors affect algorithm performance
  • Systematic Comparison Methodology: Ensure fair, reproducible comparisons between federated optimization approaches

Installation

pip install fedbench

Quick Start Tutorials

  1. FedBench: Exploring Dataset Statistics for FL

     Kaggle (or open the Jupyter Notebook)

  2. FedBench: Benchmarking 9 Algorithms on FashionMNIST

     Kaggle (or open the Jupyter Notebook)

  3. FedBench: Benchmarking 9 Algorithms on MNIST

     Kaggle (or open the Jupyter Notebook)

  4. FedBench: Benchmarking 9 Algorithms on CIFAR-10

     Kaggle (or open the Jupyter Notebook)

  5. FedBench: Benchmarking 9 Algorithms on CIFAR-100

     Kaggle (or open the Jupyter Notebook)

  6. FedBench: Benchmarking 9 Algorithms on FedISIC2019

     Kaggle (or open the Jupyter Notebook)

  7. FedBench: Benchmarking 9 Algorithms on SVHN

     Kaggle (or open the Jupyter Notebook)

  8. FedBench: Benchmarking 9 Algorithms on CINIC-10

     Kaggle (or open the Jupyter Notebook)

  9. FedBench: Benchmarking 9 Algorithms on FCUBE

     Kaggle (or open the Jupyter Notebook)

  10. FedBench: Benchmarking 9 Algorithms on FEMNIST

     Kaggle (or open the Jupyter Notebook)

  11. FedBench: Benchmarking 9 Algorithms on Adult

     Kaggle (or open the Jupyter Notebook)

Strategic Dataset Selection

Choose datasets that match your strategic evaluation needs:

| Dataset | Classes | Partitioning Methods | Partition Settings |
| --- | --- | --- | --- |
| MNIST, FMNIST, SVHN, CINIC-10, CIFAR-10 | 10 | label_quantity, dirichlet, iid_noniid, noise, iid | labels_per_client, alpha, similarity, segma |
| FedISIC2019 | 8 | label_quantity, dirichlet, iid_noniid, noise, iid | labels_per_client, alpha, similarity, segma |
| CIFAR-100 | 100 | label_quantity, dirichlet, iid_noniid, noise, iid | labels_per_client, alpha, similarity, segma |
| Adult | 2 | label_quantity, dirichlet, iid_noniid, iid | labels_per_client, alpha, similarity |
| FCUBE | 2 | synthetic | - |
| FEMNIST | 62 | real-world | - |
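
Each partitioning method in the table pairs with exactly one extra config key. As an illustration only (the helper below is hypothetical and not part of the fedbench API), that pairing can be encoded and checked before building a FederatedDataset:

```python
# Maps each fedbench partitioning method (from the table above) to the extra
# config key it requires. The mapping mirrors the table; the helper itself is
# an illustrative sketch, not a fedbench function.
REQUIRED_SETTING = {
    "label_quantity": "labels_per_client",
    "dirichlet": "alpha",
    "iid_noniid": "similarity",
    "noise": "segma",
    "iid": None,         # no extra setting
    "synthetic": None,   # FCUBE only
    "real-world": None,  # FEMNIST only
}

def check_partition_config(cfg: dict) -> None:
    """Raise if the config is missing the setting its partitioning method needs."""
    method = cfg["partitioning"]
    key = REQUIRED_SETTING[method]
    if key is not None and key not in cfg:
        raise ValueError(f"partitioning '{method}' requires '{key}'")

# A dirichlet config must carry `alpha`; this one does, so no error is raised.
check_partition_config({"name": "mnist", "partitioning": "dirichlet", "alpha": 0.5})
```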

Dataset Usage Examples

Below are examples demonstrating how to use the FLBenchmark dataset module.

from fedbench.datasets import FederatedDataset
from omegaconf import DictConfig

# Example 1: MNIST Dataset

data_config = DictConfig({"name": "mnist", "partitioning": "iid", "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=1)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
federated_dataset.print_dataset_stats()
# Output:
# Number of training instances: 60000
# Number of test instances: 10000
# Number of features: 784
# Number of classes: 10

# Example 2: Adult Dataset

data_config = DictConfig({"name": "adult", "partitioning": "synthetic", "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=1)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
federated_dataset.print_dataset_stats()
# Output:
# Number of training instances: 26048
# Number of test instances: 6513
# Number of features: 99
# Number of classes: 2

# Example 3: FEMNIST Dataset

data_config = DictConfig({"name": "femnist", "partitioning": "real-world", "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=1)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
federated_dataset.print_dataset_stats()
# Output:
# Number of training instances: 649184
# Number of test instances: 165093
# Number of features: 784
# Number of classes: 62
# Example 4: Label Quantity Partition

data_config = DictConfig({"name": "mnist", "partitioning": "label_quantity", "labels_per_client": 2, "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=10)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
# Example 5: Dirichlet Partition

data_config = DictConfig({"name": "mnist", "partitioning": "dirichlet", "alpha": 0.5, "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=10)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
# Example 6: Quantity Skew Partition

data_config = DictConfig({"name": "mnist", "partitioning": "iid_noniid", "similarity": 0.5, "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=10)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
# Example 7: Feature Distribution Partition

data_config = DictConfig({"name": "mnist", "partitioning": "noise", "segma": 0.1, "batch_size": 64})

federated_dataset = FederatedDataset(data_config, num_clients=10)
trainloaders, valloaders, testloader = federated_dataset.get_dataloaders()
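
To build intuition for what the `dirichlet` partitioner controls, here is a minimal numpy sketch of Dirichlet label partitioning. It illustrates the general technique, not fedbench's actual implementation: for each class, client shares are drawn from a Dirichlet distribution, so a smaller `alpha` gives each client a more skewed label mix.

```python
import numpy as np

def dirichlet_label_split(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with a per-class Dirichlet prior.

    For each class, draw client proportions from Dirichlet(alpha) and deal
    that class's samples out accordingly. Smaller alpha -> more label skew.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        # proportions of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Toy run: 1000 samples, 10 balanced classes, 10 clients
labels = np.arange(1000) % 10
parts = dirichlet_label_split(labels, num_clients=10, alpha=0.5)
assert sum(len(p) for p in parts) == 1000  # every sample assigned exactly once
```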

Algorithms

🧠 Model Configurations

FedBench provides pre-configured models for various datasets:

| Dataset | Model Type | Input Dim | Hidden Dims | Num Classes | MOON Variant |
| --- | --- | --- | --- | --- | --- |
| MNIST, FMNIST, FEMNIST | MNISTModel | 256 | [120, 84] | 10 (MNIST, FMNIST), 62 (FEMNIST) | MnistModelMOON |
| CIFAR-10, SVHN, CINIC-10, FedISIC2019 | CNN | 400 | [120, 84] | 10 | CNNModelMOON |
| CIFAR-100 | CNN | 400 | [120, 84] | 100 | CNNModelMOON |
| Adult | MLP | 99 | [32, 16, 8] | 2 | MLPModelMOON |
| FCUBE | MLP | 3 | [32, 16, 8] | 2 | MLPModelMOON |
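
The `_target_` key in the model configs below follows the Hydra instantiate convention: a dotted path to a class, with the remaining keys passed as constructor kwargs. Assuming fedbench resolves `_target_` the same way (an assumption; its internal resolver is not shown here), the mechanism can be sketched in a few lines of plain Python:

```python
import importlib

def instantiate(cfg: dict):
    """Minimal stand-in for Hydra-style instantiation: resolve `_target_`
    to a class by dotted path and call it with the remaining keys as kwargs.
    Illustrative only -- not fedbench's actual resolver."""
    cfg = dict(cfg)  # copy so the caller's config is untouched
    module_path, _, cls_name = cfg.pop("_target_").rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**cfg)

# Demo with a stdlib target; a config like
# {"_target_": "fedbench.models.CNN", "input_dim": 400, ...}
# would resolve the same way under this convention.
frac = instantiate({"_target_": "fractions.Fraction", "numerator": 1, "denominator": 3})
assert str(frac) == "1/3"
```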

📖 Usage Examples

FedAvg on MNIST
from omegaconf import OmegaConf, DictConfig
import torch
from fedbench.algorithms.fedavg.simulation import run_fedavg

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.MNISTModel",
    "input_dim": 256,
    "hidden_dims": [120, 84],
    "num_classes": 10,
})

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}

data_config = DictConfig({
    "name": "mnist",
    "partitioning": "iid",
    "batch_size": 64,
})

history = run_fedavg(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=10,
    num_rounds=20,
    num_epochs=15,
    learning_rate=0.01,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedProx on CIFAR-10
from omegaconf import OmegaConf, DictConfig
import torch
from fedbench.algorithms.fedprox.simulation import run_fedprox

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.CNN",
    "input_dim": 400,
    "hidden_dims": [120, 84],
    "num_classes": 10,
})

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}

data_config = DictConfig({
    "name": "cifar10",
    "partitioning": "dirichlet",
    "alpha":0.5,
    "batch_size": 32,
})

history = run_fedprox(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=20,
    num_rounds=30,
    num_epochs=10,
    learning_rate=0.005,
    mu=0.01,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedAdam on CIFAR-100
from omegaconf import OmegaConf, DictConfig
import torch
from fedbench.algorithms.fedadam.simulation import run_fedadam

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.CNN",
    "input_dim": 400,
    "hidden_dims": [120, 84],
    "num_classes": 100,
})

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}

data_config = DictConfig({
    "name": "cifar100",
    "partitioning": "iid",
    "batch_size": 32,
})

history = run_fedadam(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=100,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedAdagrad on SVHN
from omegaconf import OmegaConf, DictConfig
import torch
from fedbench.algorithms.fedadagrad.simulation import run_fedadagrad

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}
model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.CNN",
    "input_dim": 400,
    "hidden_dims": [120, 84],
    "num_classes": 10,
})
data_config = DictConfig({
    "name": "svhn",
    "partitioning": "noise",
    "segma":0.1,
    "batch_size": 32,
})

history = run_fedadagrad(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=10,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedYogi on CINIC-10
from omegaconf import OmegaConf, DictConfig
import torch
from fedbench.algorithms.fedyogi.simulation import run_fedyogi

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.CNN",
    "input_dim": 400,
    "hidden_dims": [120, 84],
    "num_classes": 10,
})

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}
data_config = DictConfig({
    "name": "cinic10",
    "partitioning": "iid",
    "batch_size": 32,
})

history = run_fedyogi(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=10,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedNova on FCUBE
from fedbench.algorithms.fednova.simulation import run_fednova
from omegaconf import OmegaConf, DictConfig
import torch

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.MLP",
    "input_dim": 3,
    "hidden_dims": [32, 16, 8],
    "num_classes": 2,
})
data_config = DictConfig({
    "name": "fcube",
    "partitioning": "synthetic",
    "batch_size": 64,
})
history = run_fednova(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=15,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
Scaffold on FedISIC2019
from fedbench.algorithms.scaffold.simulation import run_scaffold
from omegaconf import OmegaConf, DictConfig
import torch

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}
model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.CNN",
    "input_dim": 400,
    "hidden_dims": [120, 84],
    "num_classes": 10,
})
data_config = DictConfig({
    "name": "fedisic2019",
    "partitioning": "iid_noniid",
    "similarity":0.5,
    "batch_size": 32,
})


history = run_scaffold(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=15,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    model_dir="weights",
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
MOON on Adult
from fedbench.algorithms.moon.simulation import run_moon
from omegaconf import OmegaConf, DictConfig
import torch

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}

model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.MLPModelMOON",
    "input_dim": 99,
    "hidden_dims": [32, 16, 8],
    "output_dim": 256,
    "num_classes": 2,
})

data_config = DictConfig({
    "name": "adult",
    "partitioning": "iid",
    "batch_size": 64,
})

history = run_moon(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=10,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    model_dir="weights",
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
FedBN on FEMNIST
from fedbench.algorithms.fedbn.simulation import run_fedbn
from omegaconf import OmegaConf, DictConfig
import torch

backend_config = {
    "num_cpus": 1,
    "num_gpus": 0
}
model_cfg = OmegaConf.create({
    "_target_": "fedbench.models.MNISTModel",
    "input_dim": 256,
    "hidden_dims": [120, 84],
    "num_classes": 62,
})
data_config = DictConfig({
    "name": "femnist",
    "partitioning": "real-world",
    "batch_size": 32,
})


history = run_fedbn(
    data_config=data_config,
    model_cfg=model_cfg,
    backend_config=backend_config,
    num_clients=15,
    num_rounds=25,
    num_epochs=20,
    learning_rate=0.001,
    save_path="weights",
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
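
Each run_* entry point above returns a history object. fedbench's simulation setup (a backend_config with num_cpus/num_gpus) resembles Flower's simulation API, so the history plausibly mirrors Flower's History layout with losses_distributed as a list of (round, loss) pairs; that layout is an assumption, so inspect your own history object first. A stand-in sketch of pulling the final-round loss:

```python
# FakeHistory mimics the ASSUMED layout (Flower-style History with
# losses_distributed as (round, loss) pairs); fedbench's actual return
# type may differ.
class FakeHistory:
    def __init__(self):
        self.losses_distributed = [(1, 2.1), (2, 1.4), (3, 0.9)]

def final_loss(history):
    """Return the loss from the last completed round, or None if no rounds ran."""
    if not history.losses_distributed:
        return None
    last_round, loss = history.losses_distributed[-1]
    return loss

history = FakeHistory()
assert final_loss(history) == 0.9  # loss after round 3
```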

🧪 Benchmarking

You can easily benchmark different algorithms and configurations:

# Clone the repository
git clone git@github.com:NechbaMohammed/FLBenchmark.git
cd FLBenchmark
# Install dependencies
pip install -r requirements.txt
# Run all benchmarks
python benchmark_runner.py

# Aggregate results
python aggregate_experiments.py

# Generate plots
python learning_curve_plots.py
python local_epoch_comparison_plots.py
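
The aggregation step amounts to grouping per-run metrics and summarizing them per algorithm. The on-disk format produced by benchmark_runner.py is not shown here, so the records below are hypothetical; the grouping logic is the point:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (algorithm, seed, final accuracy) records; the real scripts'
# output format may differ.
runs = [
    ("fedavg",   0, 0.962), ("fedavg",   1, 0.958),
    ("scaffold", 0, 0.978), ("scaffold", 1, 0.981),
]

# Group accuracies by algorithm, then average across seeds.
by_algo = defaultdict(list)
for algo, seed, acc in runs:
    by_algo[algo].append(acc)

summary = {algo: mean(accs) for algo, accs in by_algo.items()}
best = max(summary, key=summary.get)
assert best == "scaffold"  # highest mean accuracy in this toy data
```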

📚 Citation

If you use FedBench in your research, please cite our paper:

@article{,
  title={},
  author={},
  journal={},
  year={}
}

🤝 Contributing

We welcome contributions! Please check out our contribution guidelines for details.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fedbench-0.1.3.tar.gz (35.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

fedbench-0.1.3-py3-none-any.whl (53.2 kB)

Uploaded Python 3

File details

Details for the file fedbench-0.1.3.tar.gz.

File metadata

  • Download URL: fedbench-0.1.3.tar.gz
  • Upload date:
  • Size: 35.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.12

File hashes

Hashes for fedbench-0.1.3.tar.gz
Algorithm Hash digest
SHA256 7373867c1966038bb8e759e79802e1c8f14e0d9367417305354aee5154545c0d
MD5 378f43d8f6a71c0cdaa7c888d6d1f4d4
BLAKE2b-256 b128e08edd1874c5e6e0c5b18938bc776007188f3568ffcc041593da8a7c5e3e

See more details on using hashes here.

File details

Details for the file fedbench-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: fedbench-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 53.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.12

File hashes

Hashes for fedbench-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 a5471c30601e90a8fcae38b42a5c0a80147d827294b84e9729b2348a1020d6f8
MD5 29fc098469b29a2d5ffa8627b3e89ac1
BLAKE2b-256 23551356d22b4483f4bd66002a3bd542bea40f29a54f3b3a3f27f23c44e7dab4

See more details on using hashes here.
