
Launcher for hypha services


hypha-launcher

Run triton server on HPC


Work In Progress

Features

  • CLI/API for:
    • downloading models from S3 and pulling the Triton server Docker image
    • launching an S3 server
    • launching a Triton server
    • ...
  • Support different container engines
    • Docker
    • Apptainer
  • Support different compute environments
    • Local
    • Slurm
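The engine selection described above can be sketched as a small shell helper. This is an illustrative sketch only, not hypha-launcher's actual detection logic: it simply returns the first requested engine found on PATH.

```shell
# Hypothetical sketch: pick the first container engine available on PATH.
# Not hypha-launcher's actual code, just an illustration of the idea.
pick_engine() {
  for engine in "$@"; do
    if command -v "$engine" >/dev/null 2>&1; then
      echo "$engine"
      return 0
    fi
  done
  return 1
}

# Prefer Docker, fall back to Apptainer.
pick_engine docker apptainer || echo "no supported container engine found" >&2
```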

Installation

pip install hypha-launcher

CLI Usage

hypha-launcher --help

Launch the BioEngine Worker on HPC

BioEngine consists of a set of services for serving AI models from bioimage.io. We provide a model test-run feature accessible from https://bioimage.io, as well as a dedicated BioEngine web client: https://bioimage-io.github.io/bioengine-web-client/. While our public instance is openly accessible for testing and evaluation, you can also run your own instance of the BioEngine worker to serve the models, e.g. on your own HPC computing resources.

The BioEngine worker downloads all models from S3 and launches a Triton server.

To launch on an HPC cluster, set the job command template for your cluster via the HYPHA_HPC_JOB_TEMPLATE environment variable.

For example, to launch the BioEngine worker on a Slurm cluster:

# Please replace the job command with your own settings
export HYPHA_HPC_JOB_TEMPLATE="srun -A Your-Slurm-Account -t 03:00:00 --gpus-per-node A100:1 {cmd}"
python -m hypha_launcher launch_bioengine_worker --hypha-server-url https://ai.imjoy.io --triton-service-id my-triton

In the example above, the job command template uses the Slurm scheduler with the specified account and time limit. The {cmd} placeholder will be replaced with the actual command that launches the job.
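The placeholder expansion can be illustrated with plain shell string replacement. This is only a sketch of the mechanism; the account name and the substituted command are made up:

```shell
# Illustration of {cmd} expansion in a job template (hypothetical values).
template='srun -A my-account -t 03:00:00 --gpus-per-node A100:1 {cmd}'
cmd='echo hello-from-job'

# Replace the {cmd} placeholder with the actual command (bash substitution).
job="${template/'{cmd}'/$cmd}"
echo "$job"
# prints: srun -A my-account -t 03:00:00 --gpus-per-node A100:1 echo hello-from-job
```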

Optionally, you can also set the path used to store the models and the Triton server configuration via the HYPHA_LAUNCHER_STORE_DIR environment variable. By default, the store path is .hypha-launcher.

export HYPHA_LAUNCHER_STORE_DIR=".hypha-launcher"

Download models from S3

python -m hypha_launcher - download_models_from_s3 bioengine-model-runner.* --n_parallel=5

Pull the Triton server Docker image

python -m hypha_launcher - pull_image

TODO

  • Download models from S3
  • Pull the Docker image of the Triton server
  • Run the Triton server
  • Register the service on Hypha
  • Communicate with the Triton server
  • Test on HPC
  • Support running on a local machine without a GPU
  • Support launching containers inside a container (to support running inside podman-desktop)
  • Job management (auto stop and restart)
  • Load balancing
  • Documentation

Development

Install the package in editable mode, along with the development dependencies:

pip install -e .
pip install -r requirements-dev.txt
