

SkyPilot


Manage all your AI compute

SkyPilot is a system to run, manage, and scale AI workloads on any AI infrastructure.

SkyPilot gives AI teams a simple interface to run jobs on any infra. Infra teams get a unified control plane to manage any AI compute — with advanced scheduling, scaling, and orchestration.

[Figure: SkyPilot abstractions]

:fire: News :fire:

  • [Mar 2026] Scaling Karpathy's Autoresearch: Autoresearch runs 1 experiment at a time. We gave it 16 GPUs and let it run in parallel: blog, HackerNews
  • [Mar 2026] SkyPilot Agent Skills: GPU access and job management for AI agents: docs
  • [Jan 2026] Shopify case study: Shopify runs all AI training workloads on SkyPilot: case study
  • [Dec 2025] SkyPilot v0.11 released: Multi-Cloud Pools, Fast Managed Jobs, Enterprise-Readiness at Large Scale, Programmability. Release notes
  • [Dec 2025] Train an agent to use Google Search as a tool with RL on your Kubernetes or clouds: blog, example
  • [Oct 2025] Run RL training for LLMs with SkyRL on your Kubernetes or clouds: example

Overview

SkyPilot is easy to use for AI users:

  • Quickly spin up compute on your own infra
  • Environment and job as code — simple and portable
  • Easy job management: queue, run, and auto-recover many jobs

SkyPilot makes Kubernetes easy for AI & Infra teams:

  • Slurm-like ease of use, cloud-native robustness
  • Local dev experience on K8s: SSH into pods, sync code, or connect IDE
  • Turbocharge your clusters: gang scheduling, multi-cluster, and scaling

SkyPilot unifies multiple clusters, clouds, and hardware:

  • One interface to use reserved GPUs, Kubernetes clusters, Slurm clusters, or 20+ clouds
  • Flexible provisioning of GPUs, TPUs, CPUs, with smart failover
  • Team deployment and resource sharing
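As a sketch of that flexibility, a task can list several acceptable GPU types and let SkyPilot pick whichever infra has capacity. The syntax below follows the SkyPilot YAML resources spec; exact field behavior may vary across versions:

```yaml
resources:
  # Any of these GPU sets satisfies the task; SkyPilot searches for them
  # across Kubernetes, Slurm, and enabled clouds, cheapest option first.
  accelerators: {A100:8, H100:8}
```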

SkyPilot maximizes GPU fleet utilization:

  • Autostop: automatic cleanup of idle resources
  • Binpacking: pack workloads densely onto shared clusters
  • Intelligent scheduler: automatically schedule on the most available infra
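For example, autostop can be set on a running cluster or at launch time. This is a sketch using documented sky CLI flags; `mycluster` and `my_task.yaml` are placeholder names, and these commands require configured infra credentials:

```shell
# Stop 'mycluster' automatically after 10 idle minutes.
sky autostop mycluster -i 10

# Or set it at launch: autostop after 5 idle minutes;
# --down tears the cluster down instead of stopping it.
sky launch -c mycluster -i 5 --down my_task.yaml
```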

SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.

Install with uv (also supported: pip, nightly, from source)

# Choose your clouds:
uv pip install "skypilot[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform,verda]"

To use SkyPilot directly with your agent (Claude Code, Codex, etc.), install the SkyPilot Skill. Tell your agent:

Fetch and follow https://github.com/skypilot-org/skypilot/blob/HEAD/agent/INSTALL.md to install the skypilot skill


Current supported infra: Kubernetes, Slurm, AWS, GCP, Azure, OCI, CoreWeave, Nebius, Lambda Cloud, RunPod, Fluidstack, Cudo, Digital Ocean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Seeweb, Prime Intellect, Shadeform, Verda Cloud, VastData, Crusoe.


Getting started

Install SkyPilot in 1 minute. Then, launch your first cluster in 2 minutes in Quickstart.

SkyPilot is BYOC (bring your own cloud): everything is launched within your own cloud accounts, VPCs, and clusters.

Benefits of SkyPilot on Kubernetes

SkyPilot makes Kubernetes AI-native.

It turbocharges your existing Kubernetes clusters by accelerating AI/ML velocity:

  • AI-friendly interface to launch jobs and deployments
  • Greatly simplified interactive development on K8s (SSH into pods, sync code, connect your IDE)

...and optimizing GPU scheduling, utilization, and scaling:

  • Advanced scheduling: Gang scheduling, multi-node jobs, and queueing
  • Multi-cluster support: Bring all your clusters under one control plane
  • Multi-cloud support: One consistent interface to manage many providers

See SkyPilot vs Vanilla Kubernetes and this blog post for more details.

SkyPilot in 1 minute

A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.

Once written in this unified interface (YAML or Python API), the task can be launched on any available infra (Kubernetes, Slurm, cloud, etc.). This avoids vendor lock-in and makes it easy to move jobs to a different provider.
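For illustration, the same kind of task can be expressed through the Python API. This is a minimal sketch based on the documented `sky.Task`, `sky.Resources`, and `sky.launch` interfaces; `mycluster` is a placeholder name, launching requires configured credentials, and exact signatures may differ across SkyPilot versions:

```python
import sky

# Define the task: workdir to sync, setup commands, and the run command.
task = sky.Task(
    workdir='~/torch_examples',
    setup='cd mnist && pip install -r requirements.txt',
    run='cd mnist && python main.py --epochs 1',
)

# Request 8x A100 GPUs on whichever configured infra has capacity.
task.set_resources(sky.Resources(accelerators='A100:8'))

# Launch (requires cloud/Kubernetes credentials; name is a placeholder).
sky.launch(task, cluster_name='mycluster')
```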

Paste the following into a file my_task.yaml:

resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  cd mnist
  pip install -r requirements.txt

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1

Prepare the workdir by cloning:

git clone https://github.com/pytorch/examples.git ~/torch_examples

Launch with sky launch (note: access to GPU instances is needed for this example):

sky launch my_task.yaml

SkyPilot then performs the heavy lifting for you:

  1. Finds the cheapest available infra across your clusters or clouds
  2. Provisions the GPUs (pods or VMs), with auto-failover if a provider returns capacity errors
  3. Syncs your local workdir to the provisioned cluster
  4. Installs dependencies by running the task's setup commands
  5. Runs the task's run commands and streams the logs
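After launch, the cluster stays up for further use. A few follow-up commands, sketched from the sky CLI (assuming the cluster was launched with `-c mycluster`; check `sky --help` for your version):

```shell
sky status                       # list clusters and their states
ssh mycluster                    # SSH in; workdir is synced to ~/sky_workdir/
sky exec mycluster my_task.yaml  # queue another run on the same cluster
sky logs mycluster 1             # stream the logs of job 1
sky down mycluster               # tear down the cluster when done
```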

See Quickstart to get started with SkyPilot.

Runnable examples

See SkyPilot examples that cover: development, training, serving, LLM models, AI apps, and common frameworks.

Latest featured examples:

| Task | Examples |
|---|---|
| Training | Verl, Finetune Llama 4, TorchTitan, PyTorch, DeepSpeed, NeMo, Ray, Unsloth, Jax/TPU, OpenRLHF |
| Serving | vLLM, SGLang, Ollama |
| Models | DeepSeek-R1, Llama 4, Llama 3, CodeLlama, Qwen, Kimi-K2, Kimi-K2-Thinking, Mixtral |
| AI apps | RAG, vector databases (ChromaDB, CLIP) |
| Common frameworks | Airflow, Jupyter, marimo |

Source files can be found in llm/ and examples/.

Learn more

To learn more, see SkyPilot Overview, SkyPilot docs, and SkyPilot blog.

SkyPilot adopters: Testimonials and Case Studies

Partners and integrations: Community Spotlights


Questions and feedback

We are excited to hear your feedback. For general discussions, join us on the SkyPilot Slack.

Contributing

We welcome all contributions to the project! See CONTRIBUTING for how to get involved.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

skypilot-0.12.2rc1.tar.gz (3.3 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

skypilot-0.12.2rc1-py3-none-any.whl (3.6 MB)

Uploaded Python 3

File details

Details for the file skypilot-0.12.2rc1.tar.gz.

File metadata

  • Download URL: skypilot-0.12.2rc1.tar.gz
  • Upload date:
  • Size: 3.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for skypilot-0.12.2rc1.tar.gz
Algorithm Hash digest
SHA256 f92db3874494b9bcc5e14eee3d467de6a1005f56a80a9896bab7ac47c3a8a596
MD5 6556980f512140435df9cc7a56e08cde
BLAKE2b-256 3eca90f0b1c6447b6aaa4a856a4746de6264ba80174fdb4ead7d9ccc055d5227

See more details on using hashes here.

File details

Details for the file skypilot-0.12.2rc1-py3-none-any.whl.

File metadata

  • Download URL: skypilot-0.12.2rc1-py3-none-any.whl
  • Upload date:
  • Size: 3.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for skypilot-0.12.2rc1-py3-none-any.whl
Algorithm Hash digest
SHA256 2cce673e3d7d5f8430aa67723b13285a0f795612a6b9f2bf20d49ba5e02a35cc
MD5 1bd2589952cac57cc987801a76e1acfd
BLAKE2b-256 10b38d8f88e7571d3501c564587ef13a64fd38b101a5b854325ac03db3385388

See more details on using hashes here.
