
Chimera: A Framework for Education and Prototyping in Distributed Machine Learning

Chimera Logo

Introduction

chimera is a Python package for distributed machine learning (DML) designed for both educational and prototyping purposes. It provides a structured environment to experiment with key DML techniques, including Data Parallelism, Model Parallelism, and Hybrid Parallelism.

As a distributed computing framework, chimera aims to simplify building distributed machine learning models in a local environment by streamlining the creation of a master node on the host machine and of worker nodes in separate Docker containers. By providing a standardized API-based communication framework, chimera enables researchers and practitioners to test, evaluate, and optimize distributed learning algorithms with minimal configuration effort.

chimera supports the following types of DML techniques:

  • Data Parallelism: the data is distributed across the workers, and each worker holds a copy of the model. This case includes Distributed SGD (Stochastic Gradient Descent) for models such as linear regression, logistic regression, and others, depending on the loss function.

  • Model Parallelism: the model is distributed across the workers, and each worker holds a copy of the dataset. This case includes Distributed SGD (Stochastic Gradient Descent) for generic neural network architectures.

  • Hybrid Parallelism: both the data and the model are distributed across the workers. This case includes Distributed Bagging (Bootstrap Aggregating) with generic weak learners from the scikit-learn package.

Docker containers act as the Workers. To run the distributed system, chimera provides a standardized function named run, which receives a Master instance and the port on which the server runs on the host machine.

Client-master and master-worker communication is done via REST APIs.
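As a minimal sketch, a master entry-point file can be as small as the snippet below. The import path of AggregationMaster is an assumption made for illustration; check the package source for the exact module layout.

```python
# master.py - minimal sketch of a chimera entry point.
# NOTE: the import path of AggregationMaster is an assumption; this README
# only shows the call chimera.run(AggregationMaster(), 8080).
import chimera
from chimera import AggregationMaster

if __name__ == "__main__":
    # Starts the master server on port 8080 of the host machine and
    # spins up the worker containers defined in the chimera_workers/ folder.
    chimera.run(AggregationMaster(), 8080)
```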

Running as a Package

[IN PROGRESS]

Running the Source Code

  1. Install Poetry following the documentation: https://python-poetry.org/docs/#installing-with-the-official-installer

  2. Clone the chimera project via either HTTPS or SSH:

    • HTTPS: git clone https://github.com/Samirnunes/chimera.git
    • SSH: git clone git@github.com:Samirnunes/chimera.git
  3. Go to the project's root directory (where pyproject.toml is located) and run poetry install. This creates a .venv directory in the project root with the installed dependencies, along with a poetry.lock file.

  4. Start the Docker daemon. You can do this either by opening Docker Desktop or by starting the daemon via the CLI (on Linux: sudo systemctl start docker). The Docker daemon exposes the Docker REST API, which makes commands such as docker build and docker run, called internally by chimera, available.

  5. Run the examples to see chimera working!

Creating and Running a Distributed Model with chimera

Master Example

  1. After installing chimera, you need to create a Master and its Workers:

    • Master: create a .py file in your root directory. This file must specify the necessary environment variables as strings (for list-valued variables, use the JSON string format for lists) and run a chimera master server with chimera.run, for example: chimera.run(AggregationMaster(), 8080). The available configuration environment variables are defined in the classes NetworkConfig and WorkersConfig, inside src/chimera/containers/config.py. A layout sketch is shown after these steps.

    Master Example

    • Workers: create a folder called chimera_workers and add one .py file per worker. Each file must initialize a chimera worker and call worker.serve() inside an if __name__ == "__main__": block; the worker server is started when chimera.run is called in the master's file. Note that the environment variable CHIMERA_WORKERS_NODES_NAMES in the master's file must contain all the workers' file names, without the .py suffix (see the sketch after these steps).

    Worker Example

  2. Before running the master's file, you must provide the local training dataset for each worker. This is done by creating a folder called chimera_train_data containing one subfolder per worker, named after the worker's file (without the .py suffix). Each subfolder must have an X_train.csv file containing the features and a y_train.csv file containing the labels. Whether X_train.csv and y_train.csv are the same for all workers is up to you; keep in mind which algorithm you want to build in the distributed environment! The layout sketch below shows the expected structure.

  3. Finally, you can run the master's file using: poetry run python {your_master_filename.py}. This should start all the workers' containers in your Docker environment and the master server on the host machine (the machine running the code).
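Putting these steps together, a project might be laid out as in the sketch below. The folder names chimera_workers and chimera_train_data and the files X_train.csv/y_train.csv come from the steps above; everything else is illustrative.

```
your_project/
├── master.py                 # sets the environment variables and calls chimera.run(...)
├── chimera_workers/
│   ├── worker1.py            # initializes a worker and calls worker.serve()
│   └── worker2.py
└── chimera_train_data/
    ├── worker1/
    │   ├── X_train.csv
    │   └── y_train.csv
    └── worker2/
        ├── X_train.csv
        └── y_train.csv
```

A worker file could then look like the sketch below. BootstrapWorker, its import path, and its constructor are hypothetical names used only to illustrate the structure; this README only specifies that a chimera worker is created and worker.serve() is called.

```python
# chimera_workers/worker1.py - sketch of a worker file.
# NOTE: BootstrapWorker and its import path are hypothetical; this README only
# specifies that a chimera worker is initialized and worker.serve() is called
# inside an if __name__ == "__main__": block.
from sklearn.tree import DecisionTreeRegressor

from chimera import BootstrapWorker  # hypothetical import path

if __name__ == "__main__":
    worker = BootstrapWorker(DecisionTreeRegressor())  # hypothetical constructor
    worker.serve()  # starts the worker server when chimera.run launches this container
```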

Environment Variables

The following environment variables allow users to configure the chimera distributed machine learning system. They define network settings, worker configurations, and resource allocations, providing flexibility across different environments.

Network Configuration

The following variables define the Docker network settings for chimera:

  • CHIMERA_NETWORK_NAME (default: "chimera-network")

    • The name of the Docker network where chimera runs.
  • CHIMERA_NETWORK_PREFIX (default: "192.168.10")

    • The IP network prefix for the Docker network.
    • Must be a valid IPv4 network prefix (e.g., "192.168.10").
  • CHIMERA_NETWORK_SUBNET_MASK (default: 24)

    • The subnet mask for the Docker network, defining how many bits are reserved for the network.
    • Must be an integer between 0 and 32.

Workers Configuration

The following variables control the behavior of worker nodes in chimera:

  • CHIMERA_WORKERS_NODES_NAMES

    • A list of worker node names.
    • Must be unique across all workers.
    • Example: ["worker1", "worker2", "worker3"].
  • CHIMERA_WORKERS_CPU_SHARES (default: [2])

    • A list of CPU shares assigned to each worker.
    • Each value must be an integer ≥ 2.
    • Example: [2, 4, 4] assigns different CPU shares to three workers.
  • CHIMERA_WORKERS_MAPPED_PORTS (default: [101])

    • A list of host ports mapped to each worker’s container.
    • Must be unique across all workers.
    • Example: [5001, 5002, 5003] assigns distinct ports to three workers.
  • CHIMERA_WORKERS_HOST (default: "0.0.0.0")

    • The host IP address that binds worker ports.
    • "0.0.0.0" allows connections from any IP address.
  • CHIMERA_WORKERS_PORT (default: 80)

    • The internal container port that workers listen on.
    • This is the port inside the worker's container, not the exposed host port.
  • CHIMERA_WORKERS_ENDPOINTS_MAX_RETRIES (default: 0)

    • The maximum number of retry attempts when communicating with worker nodes.
  • CHIMERA_WORKERS_ENDPOINTS_TIMEOUT (default: 100.0)

    • The timeout (in seconds) for worker API endpoints.

These environment variables give users full control over how chimera distributes models, manages worker nodes, and configures networking in a flexible and simple manner.
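For example, a master file might define these variables before calling chimera.run. Setting them with os.environ, as below, is one way to do it; the values are purely illustrative, and list-valued variables must be JSON-formatted strings.

```python
# master.py - illustrative chimera configuration via environment variables.
import os

os.environ["CHIMERA_NETWORK_NAME"] = "chimera-network"
os.environ["CHIMERA_NETWORK_PREFIX"] = "192.168.10"
os.environ["CHIMERA_NETWORK_SUBNET_MASK"] = "24"
os.environ["CHIMERA_WORKERS_NODES_NAMES"] = '["worker1", "worker2", "worker3"]'  # JSON string
os.environ["CHIMERA_WORKERS_CPU_SHARES"] = "[2, 4, 4]"                           # JSON string
os.environ["CHIMERA_WORKERS_MAPPED_PORTS"] = "[5001, 5002, 5003]"                # JSON string
```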

Examples

Distributed Bagging (Bootstrap Aggregating)

In distributed bagging, the summarized steps are:

  1. The Client makes a request to the Aggregation Master, which forwards it to each Worker.

  2. Each Bootstrap Worker receives the request for an action:

    • fit: trains the local weak learner using the local dataset. Before fitting, the Worker bootstraps (samples with replacement) its local dataset and then uses the collected samples to fit the local model. When the process is finished, the Master returns an "ok" to the Client.

    • predict: performs inference on new data; the Master computes the mean of the predictions from each Worker's local model.

Distributed Bagging
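The sketch below mirrors this flow in plain Python, outside the distributed setting: each "worker" bootstraps its local data and fits a scikit-learn weak learner, and the "master" averages the workers' predictions. It illustrates the algorithm only; in chimera the master/worker exchange happens over REST.

```python
# Local, single-process sketch of the distributed bagging logic described above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def worker_fit(X_local, y_local, seed):
    """What each Bootstrap Worker does on 'fit': resample with replacement, then train."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_local), size=len(X_local))  # bootstrap sample
    model = DecisionTreeRegressor(random_state=seed)
    model.fit(X_local[idx], y_local[idx])
    return model

def master_predict(models, X_new):
    """What the Aggregation Master does on 'predict': average the workers' predictions."""
    return np.mean([m.predict(X_new) for m in models], axis=0)

# Toy dataset split across two "workers".
X = np.random.rand(100, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(100)
models = [worker_fit(X[:50], y[:50], seed=0), worker_fit(X[50:], y[50:], seed=1)]
print(master_predict(models, X[:5]))
```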

Distributed SGD (Stochastic Gradient Descent)

In distributed SGD, the summarized steps are:

  1. The Client makes a request to the Parameter Server Master, which forwards it to the Workers.

  2. Each SGD Worker receives the request for an action:

    • fit: trains the distributed model. Each Worker keeps a copy of the model in its memory. Then, for a predefined number of iterations or until convergence:
        1. each Worker calculates the gradient considering only its local dataset;
        2. each Worker sends its gradient to the Master through the REST API; the Master aggregates the gradients by calculating their mean, updates the model's parameters, and sends the updated parameters back to each Worker through the REST API so that they can update their local models.

    When convergence is reached, the Master stops sending parameters to the Workers and stores the final model. Finally, it returns an "ok" to the Client.

    • predict: performs inference on new data using the final model stored in the Master.

Distributed SGD
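The sketch below simulates the parameter-server loop in a single process for linear regression with mean-squared-error loss: each "worker" computes a gradient on its local shard, and the "master" averages the gradients and updates the shared parameters. REST communication and the convergence check are omitted.

```python
# Local, single-process sketch of the parameter-server SGD loop described above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)

# Each worker holds one shard of the data.
shards = [(X[:100], y[:100]), (X[100:], y[100:])]
w = np.zeros(3)   # model parameters, kept in sync between master and workers
lr = 0.1          # learning rate

def worker_gradient(X_local, y_local, w):
    """Gradient of the MSE loss computed by a worker on its local dataset."""
    return 2 * X_local.T @ (X_local @ w - y_local) / len(y_local)

for _ in range(500):                                           # fixed number of iterations
    grads = [worker_gradient(Xl, yl, w) for Xl, yl in shards]  # workers -> master
    w -= lr * np.mean(grads, axis=0)                           # master aggregates and updates
print(w)  # should be close to [1.0, -2.0, 0.5]
```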

