A Command Line Interface for training models with https://dstack.ai

Project description

Typical ML workflows include multiple steps, e.g. data pre-processing, training, fine-tuning, and validation. With dstack, you define these workflows in a simple YAML format and run them via the CLI, either over a pool of your own servers or on on-demand servers in your cloud.
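For illustration, a workflow file might look like the sketch below. The schema, workflow name, commands, and artifact paths here are hypothetical; consult https://docs.dstack.ai for the exact format your dstack version supports.

```yaml
# .dstack/workflows.yaml — hypothetical sketch, not a verbatim schema.
workflows:
  - name: train                       # workflow name used with the CLI
    commands:                         # steps the runner would execute
      - pip install -r requirements.txt
      - python train.py
    artifacts:
      - checkpoints                   # directory uploaded when the job finishes
```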

How dstack works

  1. You define .dstack/workflows.yaml and .dstack/variables.yaml files inside your project (which must be a Git repository).
  2. You install the dstack CLI via pip.
  3. You either install the dstack-runner daemon on your servers, or use dstack aws config to authorize dstack to create on-demand runners in your own cloud.
  4. You use the dstack CLI to run workflows and manage runs, jobs, logs, artifacts, and runners.

  1. When a workflow is submitted via the CLI (e.g. via dstack run), the request is sent to the dstack server. The server creates jobs for the submitted run and assigns them to available runners (either servers where you've installed dstack-runner, or on-demand spot instances in your cloud that you have allowed it to create).
  2. Runners execute their assigned jobs, report logs in real time, and upload artifacts once a job finishes.
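The steps above might translate into a CLI session like the following. Only pip install dstack and dstack run are named in this description; the remaining subcommands and the run name are illustrative placeholders, not verbatim CLI syntax.

```shell
# Install the dstack CLI (step 2 above).
pip install dstack

# Submit a workflow defined in .dstack/workflows.yaml (step 4 above).
# "train" is a hypothetical workflow name.
dstack run train

# Hypothetical follow-up commands for managing the run:
# stream the run's logs and inspect its uploaded artifacts.
dstack logs <run-name>
dstack artifacts <run-name>
```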

For more information, please visit https://dstack.ai or https://docs.dstack.ai.


Download files

Source Distributions

No source distribution files are available for this release.

Built Distribution

dstack-0.0.3rc2-py3-none-any.whl (32.7 kB)

Uploaded: Python 3
