Multinode's Python client

What is Multinode?

Multinode allows you to rapidly deploy scalable applications with asynchronous tasks.

Consider using multinode if your application runs tasks that:

  • are triggered on demand by the user of the application;
  • take on the order of minutes or hours to complete;
  • require expensive hardware that should be provisioned only when required.

For example, multinode can be used within:

  • a document/image/video processing app
  • a data analytics app
  • a scientific computing app

The main benefits of multinode are:

  • Minimal boilerplate: Cloud API calls, cloud permissions, task lifecycle management and task data storage are abstracted away.
  • Responsive scaling: Compute resources are spun up as soon as a task is created, and torn down as soon as the task is complete.

Quick start

Deploy the multinode control plane into your AWS account. (Instructions and Terraform code provided in the aws-infra folder.)

Install the multinode Python package and authenticate with the Multinode control plane.

pip install multinode
multinode login

Define the task as a Python function.

# File: tasks/main.py

from multinode import Multinode

mn = Multinode()

@mn.function(cpu=4.0, memory="16 GiB")
def run_expensive_task(x):
    out = ...  # ... details of the task ...
    return out
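
As a concrete illustration only, the body below computes a sum of squares; this matches the example result (333283335000 for x=10000) shown later on this page, but any computation can go here.

@mn.function(cpu=4.0, memory="16 GiB")
def run_expensive_task(x):
    # Illustrative body: sum of squares of 0..x-1
    out = sum(i**2 for i in range(x))
    return out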

Register the function with the multinode control plane.

multinode deploy tasks/ --project-name=my_project

Implement the rest of the application, invoking the function when needed.

# File: application/main.py
# NB can be a different codebase from tasks/

from multinode import get_deployed_function

run_expensive_task = get_deployed_function(
    project_name="my_project",
    function_name="run_expensive_task"
)

# ... other application code ...

# Start a task invocation.
# The computation runs on *remote* hardware, which is *provisioned on demand*.
invocation_id = run_expensive_task.start(x=10000)

# ... other application code ...

# Get the status of the task invocation, and the result (if available)
invocation = run_expensive_task.get(invocation_id)
print(invocation.status)  # e.g. PENDING, RUNNING, SUCCEEDED
print(invocation.result)  # 333283335000 (if available), or None (if still running)
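
Putting these calls together, here is a minimal polling sketch. It assumes only the .start() / .get() API shown above and simply waits until a result appears; it does not handle failed invocations.

import time

invocation_id = run_expensive_task.start(x=10000)

invocation = run_expensive_task.get(invocation_id)
while invocation.result is None:  # still PENDING or RUNNING
    time.sleep(10)  # poll every 10 seconds
    invocation = run_expensive_task.get(invocation_id)

print(invocation.result)  # 333283335000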

Further functionality

In addition to the above basic functionality, Multinode allows you to:

  • Expose progress updates from an in-flight task.
  • Cancel a task programmatically.
  • Implement retries in case of code errors or hardware failures.
  • Configure timeouts and concurrency limits.
  • Inspect task logs.
  • Add custom Python dependencies and environment variables.
  • Manage the lifecycle of the deployed application.

For further details, see the reference guide or the worked example.

Approaches to scaling: When to use Multinode?

Multinode's approach: Direct resource provisioning. Multinode makes direct API calls to the cloud provider to provision a new worker for each new task.

Alternative approach: Autoscaling a warm worker pool. Popular alternative frameworks for asynchronous tasks include Celery and Kafka consumers. Applications written in these frameworks usually run on a warm pool of workers. Each worker stays alive between task executions. The number of workers is autoscaled according to some metric (e.g. the number of pending tasks).
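
For illustration, a typical warm-pool setup in Celery looks like the sketch below. The broker and backend URLs are placeholders; the point is that workers are started separately (for example with "celery -A tasks worker") and stay alive between task executions.

# Illustrative Celery sketch of the warm-pool approach (not part of Multinode).
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",    # placeholder broker URL
    backend="redis://localhost:6379/1",   # placeholder result backend URL
)

@app.task
def run_expensive_task(x):
    return sum(i**2 for i in range(x))

# In the application, tasks are dispatched to the warm worker pool:
#   async_result = run_expensive_task.delay(10000)
#   print(async_result.get(timeout=600))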

Advantage of Multinode's approach:

  • Scales up immediately when new tasks are created; scales down immediately when a task finishes.
  • No risk of interrupting a task execution when scaling down.

Advantages of the alternative warm-pool-based approach:

  • More suitable for processing a higher volume of shorter-lived tasks.
  • Can maintain spare capacity to mitigate against cold starts.

Architecture

Currently, Multinode runs on AWS.

  • Asynchronous tasks are run as ECS tasks, using Fargate as the serverless compute engine.
  • The control plane is deployed as an ECS service, again using Fargate.
  • Task definitions and task outputs are stored in an Aurora serverless v2 database.
  • Task logs are stored in CloudWatch Logs.

With minimal API changes, the framework can be extended to other AWS compute engines (e.g. EC2 with GPUs), to other cloud providers, and to Kubernetes.

We may implement these extensions if there is demand. We also welcome contributions from the open source community in this regard.

Currently, you need to deploy Multinode in your own AWS account. (Terraform is provided in the aws-infra folder.) We may offer Multinode as a managed service in the future.

Programming language support

Python is the only supported language at the moment.

If you need to invoke a deployed Python function from an application written in another language such as Javascript, then you will need to use the REST API. (Or you can contribute a Javascript client!)

Let us know if you want to define your functions in other languages.
