Chimera: A Framework for Education and Prototyping in Distributed Machine Learning
Introduction
chimera is a Python package for distributed machine learning (DML) designed for both educational and prototyping purposes. It provides a structured environment to experiment with key DML techniques, including Data Parallelism, Model Parallelism, and Hybrid Parallelism.
As a distributed computing framework, chimera simplifies building distributed machine learning models in a local environment by streamlining the creation of a master node on the host machine and of worker nodes on separate virtual machines using Docker containers. By providing a standardized API-based communication framework, chimera enables researchers and practitioners to test, evaluate, and optimize distributed learning algorithms with minimal configuration effort.
chimera supports the following types of DML techniques:
- Data Parallelism: the data is distributed across the workers, and each worker holds a copy of the model. This case includes Distributed SGD (Stochastic Gradient Descent) for models such as linear regression and logistic regression, among others, depending on the loss function.
- Model Parallelism: the model is distributed across the workers, and each worker holds a copy of the dataset. This case includes Distributed SGD (Stochastic Gradient Descent) for generic neural network architectures.
- Hybrid Parallelism: both the data and the model are distributed across the workers. This case includes Distributed Bagging (Bootstrap Aggregating) with generic weak learners from the `scikit-learn` package.
Docker containers act as Workers. To run the created distributed system, chimera provides a standardized function named `run`, which receives a Master instance and the port the server will use on the host machine.

The client-master and master-workers communications happen via REST APIs.
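For illustration, a minimal master file could look like the sketch below. Only the `chimera.run(AggregationMaster(), 8080)` call is taken from this description; the import locations are assumptions for illustration.

```python
# master.py -- a minimal sketch; the import locations are assumptions.
import os

import chimera
from chimera import AggregationMaster  # hypothetical import path

# List-valued environment variables are passed as JSON strings.
os.environ["CHIMERA_WORKERS_NODES_NAMES"] = '["worker1", "worker2"]'

if __name__ == "__main__":
    # Start the master server on port 8080 of the host machine.
    chimera.run(AggregationMaster(), 8080)
```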
Running as a PyPI Package
- Install Poetry following the documentation: https://python-poetry.org/docs/#installing-with-the-official-installer
- Initialize a virtual environment by running the command `poetry init`.
- Install the latest version of `chimera` by running the command `poetry add chimera-distributed-ml`.
- Start the Docker Daemon, either by opening Docker Desktop or by starting the daemon via the CLI (on Linux: `sudo systemctl start docker`). The Docker Daemon exposes the Docker REST APIs, making available commands such as `docker build` and `docker run`, which `chimera` calls internally.
- Create and run distributed models with `chimera`!
Running the Source Code
- Install Poetry following the documentation: https://python-poetry.org/docs/#installing-with-the-official-installer
- Clone the `chimera` project via either HTTPS or SSH:
  - HTTPS: `git clone https://github.com/Samirnunes/chimera.git`
  - SSH: `git clone git@github.com:Samirnunes/chimera.git`
- Go to the project's root directory (where `pyproject.toml` is located) and run `poetry install`. This generates a `.venv` directory in the root with the installed dependencies, together with a `poetry.lock` file.
- Start the Docker Daemon, either by opening Docker Desktop or by starting the daemon via the CLI (on Linux: `sudo systemctl start docker`). The Docker Daemon exposes the Docker REST APIs, making available commands such as `docker build` and `docker run`, which `chimera` calls internally.
- Create and run distributed models with `chimera`!
Creating and Running a Distributed Model with chimera
- After installing `chimera`, you need to create a Master and its Workers:
  - Master: create a `.py` file in your root directory. This file must set the necessary environment variables as strings (for lists, follow the JSON string format) and run a `chimera` master server with `chimera.run`, for example: `chimera.run(AggregationMaster(), 8080)`. The available configuration environment variables are defined in the `NetworkConfig` and `WorkersConfig` classes, inside `src/chimera/containers/config.py`.
  - Workers: create a folder called `chimera_workers` and add the `.py` files that will represent your workers. Each file must initialize a `chimera` worker and call `worker.serve()` inside an `if __name__ == "__main__":` block, which starts the worker server when `chimera.run` is called in the master's file. Note that the environment variable `CHIMERA_WORKERS_NODES_NAMES` in the master's file must contain all the workers' file names, without the `.py` suffix.
- Before running the master's file, you must provide the local training dataset for each worker. To do so, create a folder called `chimera_train_data` containing one subfolder per worker, named after the worker's file (without the `.py` suffix). Each subfolder must contain an `X_train.csv` file with the features and a `y_train.csv` file with the labels. Whether `X_train.csv` and `y_train.csv` are the same for all workers is up to you; keep in mind which algorithm you want to build in the distributed environment! A sketch of the resulting layout appears after this list.
- Finally, run the master's file with `poetry run python {your_master_filename.py}`. This initializes all the workers' containers in your Docker environment and the master server on the host machine (the machine running the code).
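To make the conventions concrete, here is a minimal sketch of the expected layout and of a worker file. The worker class name `AggregationWorker` and its import path are assumptions for illustration; the concrete worker classes are not listed in this description.

```python
# Expected project layout, following the conventions above:
#
#   master.py                  # calls chimera.run(AggregationMaster(), 8080)
#   chimera_workers/
#       worker1.py             # each worker file calls worker.serve()
#       worker2.py
#   chimera_train_data/
#       worker1/
#           X_train.csv        # features for worker1
#           y_train.csv        # labels for worker1
#       worker2/
#           X_train.csv
#           y_train.csv

# chimera_workers/worker1.py -- hypothetical worker class and import path.
from chimera import AggregationWorker  # assumption: the real name may differ

if __name__ == "__main__":
    worker = AggregationWorker()
    worker.serve()  # starts the worker server inside its Docker container
```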
Environment Variables
The following environment variables allow users to configure the chimera distributed machine learning system. These variables define network settings, worker configurations, and resource allocations, providing flexibility across different environments.
Network Configuration
The following variables define the Docker network settings for chimera:
- `CHIMERA_NETWORK_NAME` (default: `"chimera-network"`): the name of the Docker network where chimera runs.
- `CHIMERA_NETWORK_PREFIX` (default: `"192.168.10"`): the IP network prefix for the Docker network. Must be a valid IPv4 network prefix (e.g., `"192.168.10"`).
- `CHIMERA_NETWORK_SUBNET_MASK` (default: `24`): the subnet mask for the Docker network, defining how many bits are reserved for the network. Must be an integer between `0` and `32`.
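For example, a master file could override the network defaults before calling `chimera.run` (a sketch; the values are illustrative):

```python
import os

# Docker network settings (illustrative values; env vars are strings).
os.environ["CHIMERA_NETWORK_NAME"] = "chimera-network"
os.environ["CHIMERA_NETWORK_PREFIX"] = "192.168.10"
os.environ["CHIMERA_NETWORK_SUBNET_MASK"] = "24"
```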
Workers Configuration
The following variables control the behavior of worker nodes in chimera:
- `CHIMERA_WORKERS_NODES_NAMES`: a list of worker node names. Must be unique across all workers. Example: `["worker1", "worker2", "worker3"]`.
- `CHIMERA_WORKERS_CPU_SHARES` (default: `[2]`): a list of CPU shares assigned to each worker. Each value must be an integer ≥ `2`. Example: `[2, 4, 4]` assigns different CPU shares to three workers.
- `CHIMERA_WORKERS_MAPPED_PORTS` (default: `[101]`): a list of host ports mapped to each worker's container. Must be unique across all workers. Example: `[5001, 5002, 5003]` assigns distinct ports to three workers.
- `CHIMERA_WORKERS_HOST` (default: `"0.0.0.0"`): the host IP address that binds worker ports. `"0.0.0.0"` allows connections from any IP address.
- `CHIMERA_WORKERS_PORT` (default: `80`): the internal container port that workers listen on. This is the port inside the worker's container, not the exposed host port.
- `CHIMERA_WORKERS_ENDPOINTS_MAX_RETRIES` (default: `0`): the maximum number of retry attempts when communicating with worker nodes.
- `CHIMERA_WORKERS_ENDPOINTS_TIMEOUT` (default: `100.0`): the timeout (in seconds) for worker API endpoints.
These environment variables give users full control over how chimera distributes models, manages worker nodes, and configures networking in a flexible and simple manner.
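Putting it together, a master file might configure three workers as in this sketch (the values are illustrative; list-valued variables are passed as JSON strings):

```python
import os

# Worker settings for three workers (illustrative values).
os.environ["CHIMERA_WORKERS_NODES_NAMES"] = '["worker1", "worker2", "worker3"]'
os.environ["CHIMERA_WORKERS_CPU_SHARES"] = "[2, 4, 4]"
os.environ["CHIMERA_WORKERS_MAPPED_PORTS"] = "[5001, 5002, 5003]"
os.environ["CHIMERA_WORKERS_ENDPOINTS_TIMEOUT"] = "100.0"
```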
Examples
Distributed Bagging (Bootstrap Aggregating)
In distributed bagging, the summarized steps are:

- The Client makes a request to the Aggregation Master, which redirects it to each Worker.
- Each Bootstrap Worker receives the request for an action:
  - fit: trains the local weak learner using the local dataset. Before fitting, the Worker bootstraps (samples with replacement) the local dataset, then uses the collected samples to fit the local model. When the process is finished, the Master sends an "ok" to the Client.
  - predict: makes inference on new data; the Master computes the mean of the Workers' local models' predictions.
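Conceptually, the aggregation amounts to averaging the Workers' predictions. The standalone sketch below illustrates the idea with scikit-learn weak learners; it is not chimera's implementation, and the REST calls are replaced by in-process calls:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Each "worker" bootstraps (samples with replacement) its local dataset
# and fits a weak learner on the resampled data.
workers = []
for _ in range(3):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap indices
    model = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])
    workers.append(model)

# The "master" aggregates by taking the mean of the workers' predictions.
X_new = rng.normal(size=(5, 3))
prediction = np.mean([w.predict(X_new) for w in workers], axis=0)
print(prediction)
```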
Distributed SGD (Stochastic Gradient Descent)
In distributed SGD, the summarized steps are:
-
Client makes a request to Parameter Server Master, which redirects it to Workers.
-
Each SGD Worker receives the request for an action:
- fit: trains the distributed model. Worker has a copy of the model on its memory. Then, for a predefined number of iterations or until convergence:
-
- Worker calculates the gradient considering only its local dataset;
-
- Worker communicates through REST API its gradient to Master, which aggregates the gradients by calculating the mean, updates the model's parameters and passes these parameters back to each Worker through REST API, so they update their local models.
-
When convergence is reached, Master stops sending the parameters to Workers and stores the final model. Finally, it communicates an "ok" to Client.
- predict: makes inference on new data using the final model available in the Master.
- fit: trains the distributed model. Worker has a copy of the model on its memory. Then, for a predefined number of iterations or until convergence:
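The gradient-averaging loop can be written down in a few lines of plain NumPy. The sketch below is a conceptual illustration of the parameter-server pattern for linear regression, not chimera's code; REST communication is replaced by function calls:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Each worker holds a local shard of the data (data parallelism).
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    shards.append((X, y))

def local_gradient(w, X, y):
    # Gradient of the mean-squared-error loss on the local shard.
    return 2 * X.T @ (X @ w - y) / len(y)

# The "master" owns the parameters and averages the workers' gradients.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]  # one per worker
    w -= lr * np.mean(grads, axis=0)  # parameter-server update

print(w)  # approaches true_w
```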
References
Papers
- "A Survey on Distributed Machine Learning": https://dl.acm.org/doi/pdf/10.1145/3377454
- "Distributed Machine Learning": https://dl.acm.org/doi/fullHtml/10.1145/3631461.3632516