
Scale Distribution Framework

Project description

Scaled

This project aims to provide a simple, efficient, and reliable distributed computing framework: a centralized scheduler and a stable protocol for clients and workers to talk to the scheduler.

Introduction

The goal is to keep this project as simple as possible:

  • It is built on top of zmq
  • It comes with a ready-to-use Python version of the Client, Scheduler, and Worker
  • I will provide a Go or Rust version of the Scheduler; the Scheduler should be completely language agnostic, meaning every implementation follows the same protocol
  • The Scheduler might support a function-based computing tree in the future

Installation

pip install scaled

If you want to use uvloop, install it first: pip install uvloop. By default, Python's built-in asyncio event loop is used.
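
The event loop is selected when starting the scheduler/worker combo shown in the next section. A minimal sketch; the "builtin" value naming Python's default asyncio loop is an assumption, not confirmed by this page:

from scaled.cluster.combo import SchedulerClusterCombo
from scaled.utility.zmq_config import ZMQConfig, ZMQType

config = ZMQConfig(type=ZMQType.tcp, host="127.0.0.1", port=2345)

# assumption: event_loop accepts "builtin" (the asyncio default) as well as "uvloop"
cluster = SchedulerClusterCombo(address=config, n_workers=4, event_loop="builtin")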

How to use it

Start the local scheduler and cluster at the same time, in code:

import random

from scaled.client.client import Client
from scaled.cluster.combo import SchedulerClusterCombo
from scaled.utility.zmq_config import ZMQConfig, ZMQType
from scaled.utility.logging.scoped_logger import ScopedLogger
from scaled.utility.logging.utility import setup_logger


def calculate(sec: int):
    # trivial task function: returns its input unchanged, so the gathered
    # results can be checked against the submitted values
    return sec * 1


def main():
    setup_logger()

    # scheduler, workers, and client all communicate over this address
    config = ZMQConfig(type=ZMQType.tcp, host="127.0.0.1", port=2345)

    # start a scheduler plus 10 workers in one process;
    # event_loop="uvloop" requires pip install uvloop
    cluster = SchedulerClusterCombo(address=config, n_workers=10, event_loop="uvloop")
    client = Client(config=config)

    tasks = [random.randint(0, 100) for _ in range(100000)]

    with ScopedLogger(f"submit {len(tasks)} tasks"):
        futures = [client.submit(calculate, i) for i in tasks]

    with ScopedLogger(f"gather {len(futures)} results"):
        # result() blocks until the task has finished on a worker
        results = [future.result() for future in futures]

    # futures are kept in submission order, so results match the inputs
    assert results == tasks

    cluster.shutdown()
    client.disconnect()


if __name__ == "__main__":
    main()
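
Because submit returns future objects, results can also be consumed as tasks finish rather than in submission order. Continuing from the example above, a minimal sketch, assuming the futures returned by Client.submit are compatible with concurrent.futures.Future (the example above only relies on their result() method):

from concurrent.futures import as_completed

futures = [client.submit(calculate, i) for i in tasks]
for future in as_completed(futures):
    # yields each future as soon as its task completes on a worker
    print(future.result())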

Start the scheduler and cluster independently

Use scaled_scheduler to start the scheduler, for example:

scaled_scheduler --allocator-type queued tcp://0.0.0.0:8516

Use scaled_worker to start workers:

scaled_worker -n 10 tcp://127.0.0.1:8516

Then you can simply write the client code as:

from scaled.utility.zmq_config import ZMQConfig, ZMQType
from scaled.client.client import Client


def foobar(foo: int):
    return foo


# connect to the scheduler started above (port 8516)
config = ZMQConfig(type=ZMQType.tcp, host="127.0.0.1", port=8516)
client = Client(config=config)
future = client.submit(foobar, 1)

print(future.result())
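
If a task raises on a worker, a standard future re-raises the exception when result() is called; a minimal sketch under the assumption that Scaled's futures follow that contract (might_fail is a made-up helper, not part of the library):

def might_fail(x: int):
    if x < 0:
        raise ValueError("negative input")
    return x


future = client.submit(might_fail, -1)
try:
    future.result()
except ValueError as e:
    # assumption: the worker-side exception is re-raised here by result()
    print(f"task failed: {e}")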



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scaled-0.6.tar.gz (23.3 kB)


Built Distribution

scaled-0.6-py3-none-any.whl (35.3 kB)


File details

Details for the file scaled-0.6.tar.gz.

File metadata

  • Download URL: scaled-0.6.tar.gz
  • Upload date:
  • Size: 23.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.1

File hashes

Hashes for scaled-0.6.tar.gz

  • SHA256: d29be3f3b329b4006771791a3c2879db627f0f920647cce07488c2d60e884320
  • MD5: 06767a2d715f56f87e3a4d2b7642521a
  • BLAKE2b-256: f194804d214c1adcfa6a6df5fdc7b0e18d07345a420bc9163e4c6d5cfb08613a


File details

Details for the file scaled-0.6-py3-none-any.whl.

File metadata

  • Download URL: scaled-0.6-py3-none-any.whl
  • Upload date:
  • Size: 35.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.1

File hashes

Hashes for scaled-0.6-py3-none-any.whl

  • SHA256: 7d00f28e6feb6bf46221f11c3e674d73045c390b16ad611c855de28c4cbf61d2
  • MD5: df0bb44f8a72c259d44dc7e1fd26a94f
  • BLAKE2b-256: 6f1b9c24e744936888565ef703d348855c25ea0d99c145258c522bf8bd217862

