A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning
Installation
git clone https://github.com/lishenghui/blades
cd blades
pip install -v -e .
# "-v": verbose (more output during installation)
# "-e": install in editable mode, so any local modifications
# to the code take effect without reinstallation
cd blades/blades
python train.py file ./tuned_examples/fedsgd_cnn_fashion_mnist.yaml
Blades internally calls ray.tune, so experimental results are written to its default output directory, ~/ray_results.
Experiment Results
Cluster Deployment
To run Blades on a cluster, you only need to deploy a Ray cluster according to the official guide.
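As a minimal sketch, Ray's cluster launcher takes a YAML config and brings the cluster up with `ray up <config>.yaml`. The provider, region, and worker count below are placeholder assumptions; consult Ray's cluster launcher documentation for the fields your environment needs:

```yaml
# cluster.yaml -- minimal illustrative Ray cluster config (values are placeholders)
cluster_name: blades-cluster
max_workers: 4          # upper bound on autoscaled worker nodes
provider:
  type: aws             # assumption: AWS; Ray also supports gcp, azure, etc.
  region: us-east-1
```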
Built-in Implementations
The following strategies are currently implemented:
Data Partitioners:
Dirichlet Partitioner
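A Dirichlet partitioner induces label skew across clients: for each class, per-client shares are drawn from a Dirichlet(alpha) distribution, so a small alpha yields highly non-IID splits while a large alpha approaches a uniform split. The following is an illustrative sketch of the idea, not Blades' actual API:

```python
import random
from collections import defaultdict


def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices among clients with Dirichlet label skew.

    Illustrative sketch (not Blades' exact implementation): for each
    class, client shares are drawn from Dirichlet(alpha).
    """
    rng = random.Random(seed)

    def dirichlet(k):
        # Sample Dirichlet(alpha) by normalizing Gamma(alpha, 1) draws.
        draws = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
        total = sum(draws)
        return [d / total for d in draws]

    # Group sample indices by class label.
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    parts = [[] for _ in range(num_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        shares = dirichlet(num_clients)
        # Cut this class's samples according to the sampled shares;
        # the last client absorbs any rounding remainder.
        start = 0
        for c, share in enumerate(shares):
            end = len(idxs) if c == num_clients - 1 else start + int(share * len(idxs))
            parts[c].extend(idxs[start:end])
            start = end
    return parts
```

Every index is assigned to exactly one client; with alpha around 0.1 most clients end up dominated by a few classes, which is the regime usually used to stress-test Byzantine-robust aggregators.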
Citation
Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:
@article{li2023blades,
  title   = {Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning},
  author  = {Li, Shenghui and Ju, Li and Zhang, Tianru and Ngai, Edith and Voigt, Thiemo},
  journal = {arXiv preprint arXiv:2206.05359},
  year    = {2023}
}