dstack is an open-source framework for orchestrating GPU workloads and developing generative AI models across multiple clouds.

Project description

dstack

Orchestrate GPU workloads across clouds

Docs · Examples · Blog · Slack


dstack is an open-source framework for orchestrating GPU workloads across multiple cloud GPU providers. It provides a simple, cloud-agnostic interface for developing and deploying generative AI models.

Installation

To use dstack, install it with pip and start the server.

pip install "dstack[all]" -U
dstack start

Configure clouds

Upon startup, the server sets up the default project called main. Prior to using dstack, make sure to configure clouds.

Once the server is up, you can orchestrate GPU workloads using either the CLI or Python API.

Using CLI

Define a configuration

The CLI lets you define what you want to run in a YAML file and launch it with the dstack run command.

Configurations can be of three types: dev-environment, task, and service.

Dev environments

A dev environment is a virtual machine with a pre-configured IDE.

type: dev-environment

python: "3.11" # (Optional) If not specified, your local version is used

setup: # (Optional) Executed once at the first startup
  - pip install -r requirements.txt

ide: vscode
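
For example, assuming the configuration above is saved as .dstack.yml in the repo root, you could launch the dev environment with:

dstack run . -f .dstack.yml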

Tasks

A task can be either a batch job, such as training or fine-tuning a model, or a web application.

type: task

python: "3.11" # (Optional) If not specified, your local version is used

ports:
  - 7860

commands:
  - pip install -r requirements.txt
  - python app.py

While the task runs in the cloud, the CLI forwards traffic on its ports to localhost for convenient access.
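
For instance, while the task above is running, the app listening on port 7860 could be reached locally (a hypothetical smoke test):

curl http://127.0.0.1:7860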

Services

A service is an application that is accessible through a public endpoint.

type: service

port: 7860

commands:
  - pip install -r requirements.txt
  - python app.py

Once the service is up, dstack makes it accessible from the Internet through the gateway.
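
For example, once the service above is up, it could be queried through the gateway's endpoint (the domain below is a placeholder for your gateway's domain):

curl https://<run-name>.<your-gateway-domain>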

Run a configuration

To run a configuration, use the dstack run command followed by the working directory and the path to the configuration file.

dstack run . -f text-generation-inference/serve.dstack.yml --gpu 80GB -y

 RUN            BACKEND  INSTANCE              SPOT  PRICE  STATUS     SUBMITTED
 tasty-zebra-1  lambda   200GB, 1xA100 (80GB)  no    $1.1   Submitted  now

Provisioning...

Serving on https://tasty-zebra-1.mydomain.com

Using API

As an alternative to the CLI, you can run tasks and services programmatically via the Python API.

import sys

import dstack

# Define a task that serves the model with Hugging Face's Text Generation
# Inference image, mapping container port 80 to local port 8080.
task = dstack.Task(
    image="ghcr.io/huggingface/text-generation-inference:latest",
    env={"MODEL_ID": "TheBloke/Llama-2-13B-chat-GPTQ"},
    commands=[
        "text-generation-launcher --trust-remote-code --quantize gptq",
    ],
    ports=["8080:80"],
)
# Request an instance with at least 20GB of GPU memory.
resources = dstack.Resources(gpu=dstack.GPU(memory="20GB"))

if __name__ == "__main__":
    print("Initializing the client...")
    client = dstack.Client.from_config(repo_dir="~/dstack-examples")

    print("Submitting the run...")
    run = client.runs.submit(configuration=task, resources=resources)

    print(f"Run {run.name}: " + run.status())

    print("Attaching to the run...")
    run.attach()

    # After the endpoint is up, http://127.0.0.1:8080/health will return 200 (OK).

    try:
        for log in run.logs():
            sys.stdout.buffer.write(log)
            sys.stdout.buffer.flush()

    except KeyboardInterrupt:
        print("Aborting the run...")
        run.stop(abort=True)
    finally:
        run.detach()
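
As a minimal sketch of the health check mentioned in the comment above (assuming the requests package is installed; the endpoint follows the task's 8080:80 ports mapping), you could wait for the service to become healthy before sending traffic:

import time

import requests

# Poll the forwarded endpoint until the service reports healthy (200 OK).
while True:
    try:
        if requests.get("http://127.0.0.1:8080/health").status_code == 200:
            break
    except requests.exceptions.ConnectionError:
        pass
    time.sleep(5)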

More information

For additional information and examples, see the Docs, Examples, and Blog linked above.

Licence

Mozilla Public License 2.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dstack-0.11.2.tar.gz (184.0 kB)

Built Distribution

dstack-0.11.2-py3-none-any.whl (13.9 MB)

File details

Details for the file dstack-0.11.2.tar.gz.

File metadata

  • Download URL: dstack-0.11.2.tar.gz
  • Size: 184.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for dstack-0.11.2.tar.gz

  • SHA256: 80393e35af651cb55df5bc9599d6e5ba35994d03c371295245bb28617887abf3
  • MD5: fd68e1dfad9403b4d869a6519a44271b
  • BLAKE2b-256: 4a2a26fed67040cac5efb72a5797f0ac6dd3030a16c78cf3e8b31a247e30e004

File details

Details for the file dstack-0.11.2-py3-none-any.whl.

File metadata

  • Download URL: dstack-0.11.2-py3-none-any.whl
  • Size: 13.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for dstack-0.11.2-py3-none-any.whl

  • SHA256: 2b9d6b957dfd373e803f59f949990991d5af13eaf393be4b858dd736f7c4e58b
  • MD5: 8942b5203a231dbb397de1fcbc188efc
  • BLAKE2b-256: a25c96c612a535bb4a31d9e50b7fafb10773b16d6395ccf272244514fa547c5b
