Train multiple programs on multiple servers without pain

Project description

Training Noodles

A simple and powerful tool that helps a single person train multiple programs on multiple servers.

Features

  • Automatically deploys experiments to available servers
  • No need to change any existing code
  • Considers CPU usage, GPU usage, memory usage, disk usage, and more
  • Uses only SSH protocol
  • Relies on minimal dependencies
  • Allows fast prototyping

Use Case

Suppose we want to run 4 experiments on 3 servers. For each experiment, we need to

  1. Upload the code to a server with low CPU usage
  2. Run the code on the server
  3. Download experimental results when they're ready

(Image: deployment round 1)

In the first deployment round (See image above), Noodles will use the user-defined commands to check CPU usage on the servers.

The CPU usage is high on Server 1 because some other programs are running there, so Noodles uses scp to upload Code 1 and Code 2 and runs them on Server 2 and Server 3 respectively.

As for how the code gets uploaded: it's just a list of commands written by us, and Noodles simply follows them.
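For example, such a command list might look like the spec fragment below. The key names here are hypothetical and only illustrate the idea of user-written commands; see the documentation for the actual spec format.

```yaml
# Hypothetical spec fragment: key names are illustrative only,
# not Noodles' actual spec format. The user writes plain shell
# commands; Noodles just runs them over SSH.
run:
  commands:
    - scp -r ./code_1 <user>@<server>:~/experiments/
    - ssh <user>@<server> 'cd ~/experiments/code_1 && nohup python train.py > train.log 2>&1 &'
```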

(Image: deployment round 2)

In the second deployment round (See image above), we tell Noodles to check experimental results on all servers.

Noodles finds that Server 3 has just finished running Code 2, so it downloads the experimental results and processes the data on the local machine, as we told it to do.

(Image: deployment round 3)

In the third deployment round (See image above), Code 3 and Code 4 still need to be deployed. Noodles checks the CPU usage on all servers again. Since Server 1 has just become free, Noodles can deploy Code 3 and Code 4 to Server 1 and Server 3 respectively.

Deployment rounds continue until all experiments have been successfully deployed. In this case, Noodles will try to download and process the experimental results of Code 1, Code 3 and Code 4 in later rounds.

How Noodles Works

The general procedure is as follows:

  1. Initialize the list of experiments in E
  2. For each deployment round:
    1. Initialize the list of servers in S
    2. For each experiment in E:
      1. Noodles runs user-defined requirements on each server in S
      2. Noodles compares the metrics (results from the above step) to the user-defined expression
      3. If the expression is satisfied:
        1. Noodles runs the user-defined commands on the satisfied server
        2. Remove the current experiment from E
        3. Remove the satisfied server from S
        4. If S is empty, break
    3. If E is empty, break
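The procedure above can be sketched in Python. This is a minimal sketch under assumed interfaces: `get_servers`, `check_metric`, `satisfies`, and `run_commands` are hypothetical stand-ins for the user-defined spec, not Noodles' actual API.

```python
def deploy_all(experiments, get_servers, check_metric, satisfies,
               run_commands, max_rounds=100):
    """Repeat deployment rounds until every experiment is deployed.

    Hypothetical sketch of the loop described above; `max_rounds` is a
    safeguard this sketch adds so it terminates even if no server ever
    satisfies an experiment's expression.
    """
    remaining = list(experiments)          # E
    for _ in range(max_rounds):            # each iteration is one deployment round
        if not remaining:                  # E is empty -> done
            break
        servers = get_servers()            # S, re-checked every round
        for experiment in list(remaining):
            for server in list(servers):
                metric = check_metric(experiment, server)
                if satisfies(experiment, metric):
                    run_commands(experiment, server)  # deploy
                    remaining.remove(experiment)      # remove from E
                    servers.remove(server)            # remove from S
                    break
            if not servers:                # S is empty -> next round
                break
    return remaining                       # empty when all deployed
```

Note that the sketch re-initializes the server list every round, mirroring the stateless design: nothing is remembered between rounds except which experiments are still pending.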

The implementation of Noodles complies with the following rules:

  1. Simple (users can understand the code and spec without reading the documentation)
  2. Easy to debug (Noodles can take different actions when different errors occur)
  3. Stateless (the only state Noodles cares about is whether a deployment succeeded; the states of the experiments must be handled by the user)

Documentation

See the full documentation here.

Prerequisites

  1. Linux-based terminals (For Windows, I recommend using git-sdk)
  2. Python 3.5 or higher

Installation

Run the following command:

pip install training-noodles

Usage

noodles <command_type> <path_to_spec>

It's just that simple.

Examples

Here are some examples showing how Noodles is used:

noodles run my_training.yml
noodles status my_training.yml
noodles monitor my_training.yml
noodles stop my_training.yml
noodles download my_training.yml
noodles upload my_training.yml
...

You can also choose only some experiments:

noodles run "my_training.yml:Experiment 1,Experiment 2"

See the example Two Locals to get started. See Train TensorFlow Examples for a more complex example.

Default Spec

Noodles falls back to the default spec for any properties the user spec doesn't specify. See training_noodles/specs/defaults.yml for the default spec.
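For instance, a user spec only needs to list the properties it wants to change; everything else is taken from the default spec. The property names below are hypothetical, for illustration only; see defaults.yml for the real ones.

```yaml
# Hypothetical user spec (property names for illustration only):
# any property not listed here falls back to the value in
# training_noodles/specs/defaults.yml.
name: my_training
round_interval: 10   # override one default; all others keep their defaults
```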

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

training-noodles-1.2.2.tar.gz (33.6 kB)

Uploaded Source

Built Distribution

training_noodles-1.2.2-py3-none-any.whl (35.8 kB)

Uploaded Python 3

File details

Details for the file training-noodles-1.2.2.tar.gz.

File metadata

  • Download URL: training-noodles-1.2.2.tar.gz
  • Upload date:
  • Size: 33.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/49.2.0 requests-toolbelt/0.9.1 tqdm/4.42.0 CPython/3.6.8

File hashes

Hashes for training-noodles-1.2.2.tar.gz
Algorithm Hash digest
SHA256 07f39fd7a759bda545a699fe925956ee15235707e73ca00386399c7dddcfdfd6
MD5 62b3b9bc468e979925721a43d604f026
BLAKE2b-256 b13832bee181246f38af6ca2b5216481f77a34f3a5938b6f57735db92a4cfe42

See more details on using hashes here.

File details

Details for the file training_noodles-1.2.2-py3-none-any.whl.

File metadata

  • Download URL: training_noodles-1.2.2-py3-none-any.whl
  • Upload date:
  • Size: 35.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/49.2.0 requests-toolbelt/0.9.1 tqdm/4.42.0 CPython/3.6.8

File hashes

Hashes for training_noodles-1.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 893abb3b598b0d47a0b31727b7d8e86ee7edee7010709dea97a343e7152bc44d
MD5 7e5f0b34c0143e578398658e879bf73e
BLAKE2b-256 f2d4c5773034688035ab5f6fdde3f187e42c2cceb6ab09725b55b3e5517cb370
