FederatedCore
A research-oriented federated learning framework.
Features
Compatibility. FederatedCore works seamlessly with mainstream deep learning frameworks, e.g., PyTorch and TensorFlow.
Modular. Each algorithm module can be used independently.
Easy to use. Retrofit existing code for data parallelism in fewer than 100 lines of code.
Support
Attribute | Value
---|---
Framework | PyTorch, TensorFlow
Engine | parallelism, sequence
Dataset | label distribution, quality distribution
Topology | parameter server, gossip, all-reduce
Communication | queue, TCP
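The Dataset row above refers to how data is partitioned across nodes, e.g., by label distribution. A minimal sketch of a label-skewed (non-IID) partition, assuming samples are addressed by index; this helper is hypothetical for illustration and is not FederatedCore's `split_dataset` API:

```python
def split_by_label(labels, num_nodes):
    """Assign each sample index to a node based on its label,
    giving every node a skewed (non-IID) label distribution."""
    shards = [[] for _ in range(num_nodes)]
    for idx, label in enumerate(labels):
        shards[label % num_nodes].append(idx)
    return shards

# Labels 0 and 2 land on node 0; labels 1 and 3 land on node 1.
print(split_by_label([0, 1, 0, 2, 1, 3], 2))  # [[0, 2, 3], [1, 4, 5]]
```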
QuickStart
Install FederatedCore via pip:

```shell
# via pypi.org
python -m pip install federatedcore

# or clone the project and install from source
git clone https://github.com/songs18/FederatedCore.git
python -m pip install ./FederatedCore
```
A small example (FedAvg)
Implement FedAvg in fewer than 100 lines.
(/examples/FedAvg/federated_average.py)
```python
def run(num_nodes, has_server):
    def build_host_ids():
        if has_server:
            return [i for i in range(num_nodes + 1)]
        else:
            return [i for i in range(num_nodes)]

    def build_func_libs():
        func_libs = {
            'train_dataset'   : 'self_contained_dnn',  # load_train_dataset,
            'test_dataset'    : 'self_contained_dnn',  # load_test_dataset,
            'model'           : 'self_contained_dnn',  # get_model,
            'loss'            : 'self_contained_dnn',  # get_loss,
            'optimizer'       : 'self_contained_dnn',  # get_optimizer,
            'metric_loss'     : 'self_contained_dnn',  # get_metric_loss,
            'metric_acc'      : 'self_contained_dnn',  # get_metric_acc,
            'train_step'      : 'self_contained_dnn',  # get_train_step,
            'test_step'       : 'self_contained_dnn',  # get_test_step,
            'aggregation_func': average_parameters,
        }
        return func_libs

    def build_linkers():
        node_inboxes = queuer.node_inbox(num_nodes + 1)

        linkers = list()
        for host_id in range(num_nodes):
            linker = queuer.LocalQueue(host_id, node_inboxes)
            linkers.append(linker)

        if has_server:
            linker = queuer.LocalQueue(num_nodes, node_inboxes)
            linkers.append(linker)

        return linkers

    def build_execution_plans():
        execution_plans = ExecutionPlanTemplate.client_train * 5
        execution_plans = [[[c, {}] for c in execution_plans] for _ in range(num_nodes)]

        if has_server:
            server_execution_plan = ExecutionPlanTemplate.server_init + ExecutionPlanTemplate.server_sync_train * 5
            server_execution_plan.pop(-1)
            server_execution_plan = [[s, {'iteration': 3}] for s in server_execution_plan]
            execution_plans.append(server_execution_plan)

        return execution_plans

    host_ids = build_host_ids()
    func_libs = build_func_libs()
    linkers = build_linkers()
    execution_plans = build_execution_plans()

    parallelism.run_parallel(host_ids, func_libs, linkers, execution_plans)


def main():
    num_nodes = 2
    generate_topology(num_nodes)
    split_dataset(num_nodes)
    build_host(num_nodes)
    run(num_nodes, True)


if __name__ == '__main__':
    main()
```
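The example passes `average_parameters` as the aggregation function but does not show its body. A minimal sketch of FedAvg-style aggregation, assuming each client's parameters are a dict mapping layer names to flat lists of floats (real implementations operate on framework tensors and may weight clients by dataset size):

```python
def average_parameters(parameter_sets):
    """FedAvg-style aggregation: element-wise mean over client parameters.

    `parameter_sets` is a list of dicts, one per client, each mapping a
    layer name to a flat list of floats. Returns the averaged parameters.
    """
    num_clients = len(parameter_sets)
    averaged = {}
    for name in parameter_sets[0]:
        # Collect this layer's values from every client, then average
        # position by position.
        layer_stack = [params[name] for params in parameter_sets]
        averaged[name] = [sum(vals) / num_clients for vals in zip(*layer_stack)]
    return averaged

clients = [
    {'w': [1.0, 2.0], 'b': [0.0]},
    {'w': [3.0, 4.0], 'b': [2.0]},
]
print(average_parameters(clients))  # {'w': [2.0, 3.0], 'b': [1.0]}
```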
File details
Details for the file federatedcore-0.0.2-py3-none-any.whl.
File metadata
- Download URL: federatedcore-0.0.2-py3-none-any.whl
- Upload date:
- Size: 40.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.8.0 pkginfo/1.8.3 readme-renderer/30.0 requests/2.26.0 requests-toolbelt/0.9.1 urllib3/1.26.7 tqdm/4.62.3 importlib-metadata/4.8.1 keyring/23.2.1 rfc3986/1.5.0 colorama/0.4.4 CPython/3.6.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0087fe5ec98261ead935fb0d1ea1f80fba009e2e6a0d1d96f7bcc274e6b3cf2e
MD5 | b3e8c3696bfbdcf1b98af027eaf82f98
BLAKE2b-256 | 95e0fcb75e4568350bdd7e70639e2e1652a6e35de69d1ed46b4397ceb5921e51