
SkyPilot: Run AI on Any Infra — Unified, Faster, Cheaper.

Project description

SkyPilot


Simplify & scale any AI infrastructure


🔥 News 🔥

  • [Nov 2025] Serve Kimi K2 Thinking with reasoning capabilities on your Kubernetes or clouds: example
  • [Oct 2025] Run RL training for LLMs with SkyRL on your Kubernetes or clouds: example
  • [Oct 2025] Train and serve Andrej Karpathy's nanochat - the best ChatGPT that $100 can buy: example
  • [Oct 2025] Run large-scale LLM training with TorchTitan on any AI infra: example
  • [Sep 2025] Scaling AI infrastructure at Abridge - 10x faster development with SkyPilot: blog
  • [Sep 2025] Network and Storage Benchmarks for LLM training on the cloud: blog
  • [Aug 2025] Serve and finetune OpenAI GPT-OSS models (gpt-oss-120b, gpt-oss-20b) with one command on any infra: serve + LoRA and full finetuning
  • [Jul 2025] Run distributed RL training for LLMs with Verl (PPO, GRPO) on any cloud: example
  • [Jul 2025] Finetune Llama4 on any distributed cluster/cloud: example
  • [Jul 2025] Two-part blog series, The Evolution of AI Job Orchestration: (1) Running AI jobs on GPU Neoclouds, (2) The AI-Native Control Plane & Orchestration that Finally Works for ML
  • [Apr 2025] Spin up Qwen3 on your cluster/cloud: example

LLM Finetuning Cookbooks: Finetuning Llama 2 / Llama 3.1 in your own cloud environment, privately: Llama 2 example and blog; Llama 3.1 example and blog


SkyPilot is a system to run, manage, and scale AI workloads on any AI infrastructure.

SkyPilot gives AI teams a simple interface to run jobs on any infra. Infra teams get a unified control plane to manage any AI compute — with advanced scheduling, scaling, and orchestration.


Overview

SkyPilot is easy to use for AI teams:

  • Quickly spin up compute on your own infra
  • Environment and job as code — simple and portable
  • Easy job management: queue, run, and auto-recover many jobs
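
For example, many runs can be queued as managed jobs from the CLI. The snippet below is a minimal sketch: task.yaml stands in for any task file (such as the one shown later in this README), and flags may vary slightly across versions, so check sky jobs launch --help.

# Queue a task as a managed job; SkyPilot schedules it and auto-recovers it
# from spot preemptions or instance failures.
sky jobs launch task.yaml -n my-train-run

# Inspect the job queue and follow the job's logs (by name or job ID).
sky jobs queue
sky jobs logs -n my-train-run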

SkyPilot makes Kubernetes easy for AI & Infra teams:

  • Slurm-like ease of use, cloud-native robustness
  • Local dev experience on K8s: SSH into pods, sync code, or connect IDE
  • Turbocharge your clusters: gang scheduling, multi-cluster, and scaling
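
As a concrete illustration of the dev workflow on Kubernetes, the sketch below assumes your kubeconfig already points at a GPU-enabled cluster; "dev" is just a placeholder cluster name.

# Verify SkyPilot can reach the Kubernetes cluster in your kubeconfig.
sky check kubernetes

# Provision an interactive dev pod with one GPU.
sky launch -c dev --gpus A100:1

# SSH into the pod (SkyPilot sets up an SSH alias for the cluster name);
# the same host can be used by your IDE's remote-SSH integration.
ssh dev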

SkyPilot unifies multiple clusters, clouds, and hardware.

SkyPilot cuts your cloud costs & maximizes GPU availability:

  • Autostop: automatic cleanup of idle resources
  • Spot instance support: 3-6x cost savings, with preemption auto-recovery
  • Intelligent scheduling: automatically run on the cheapest & most available infra
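
For instance, spot usage and autostop can be requested directly at launch time. This is a minimal sketch: task.yaml is a placeholder for any task file, and mycluster for an existing cluster's name.

# Use spot instances, auto-stop after 10 idle minutes, and tear the cluster
# down (instead of just stopping it) once it goes idle.
sky launch --use-spot -i 10 --down task.yaml

# Managed jobs can also use spot instances, with auto-recovery on preemption.
sky jobs launch --use-spot task.yaml

# Add autostop to an already-running cluster.
sky autostop -i 30 mycluster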

SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.

Install with pip:

# Choose your clouds:
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb]"

To get the latest features and fixes, use the nightly build or install from source:

# Choose your clouds:
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb]"
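
After installing, you can check which infra SkyPilot can actually use with your current credentials; both commands below are part of the standard CLI.

# Show which clouds / Kubernetes contexts are enabled for this installation.
sky check

# List GPU types SkyPilot knows about and where they are offered.
sky show-gpus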


Currently supported infra: Kubernetes, AWS, GCP, Azure, OCI, CoreWeave, Nebius, Lambda Cloud, RunPod, Fluidstack, Cudo, DigitalOcean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Seeweb, Prime Intellect.


Getting started

You can find our documentation here.

SkyPilot in 1 minute

A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.

Once written in this unified interface (YAML or Python API), the task can be launched on any available infra (Kubernetes, cloud, etc.). This avoids vendor lock-in, and allows easily moving jobs to a different provider.

Paste the following into a file my_task.yaml:

resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  cd mnist
  pip install -r requirements.txt

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1

Prepare the workdir by cloning:

git clone https://github.com/pytorch/examples.git ~/torch_examples

Launch with sky launch (note: access to GPU instances is needed for this example):

sky launch my_task.yaml

SkyPilot then performs the heavy lifting for you, including:

  1. Finding the cheapest and most available infra across your clusters or clouds
  2. Provisioning the GPUs (pods or VMs), with auto-failover if the infra returns capacity errors
  3. Syncing your local workdir to the provisioned cluster
  4. Auto-installing dependencies by running the task's setup commands
  5. Running the task's run commands and streaming the logs
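
Once the job finishes, the provisioned cluster stays up and can be reused or cleaned up. Below is a minimal sketch of typical follow-up commands (replace <cluster-name> with the auto-generated name shown by sky status):

# See the cluster that sky launch created.
sky status

# Re-run the task on the existing cluster, skipping provisioning and setup.
sky exec <cluster-name> my_task.yaml

# Tail the logs of the most recent job on that cluster.
sky logs <cluster-name>

# Stop the cluster, or tear it down entirely, when you are done.
sky stop <cluster-name>
sky down <cluster-name>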

See Quickstart to get started with SkyPilot.

Runnable examples

See SkyPilot examples that cover: development, training, serving, LLM models, AI apps, and common frameworks.

Latest featured examples:

Task              | Examples
Training          | Verl, Finetune Llama 4, TorchTitan, PyTorch, DeepSpeed, NeMo, Ray, Unsloth, Jax/TPU
Serving           | vLLM, SGLang, Ollama
Models            | DeepSeek-R1, Llama 4, Llama 3, CodeLlama, Qwen, Kimi-K2, Kimi-K2-Thinking, Mixtral
AI apps           | RAG, vector databases (ChromaDB, CLIP)
Common frameworks | Airflow, Jupyter, marimo

Source files can be found in llm/ and examples/.

More information

To learn more, see SkyPilot Overview, SkyPilot docs, and SkyPilot blog.

SkyPilot adopters: Testimonials and Case Studies

Partners and integrations: Community Spotlights

Follow updates:

Read the research:

SkyPilot was initially started at the Sky Computing Lab at UC Berkeley and has since gained many industry contributors. To read about the project's origin and vision, see Concept: Sky Computing.

Questions and feedback

We are excited to hear your feedback.

For general discussions, join us on the SkyPilot Slack.

Contributing

We welcome all contributions to the project! See CONTRIBUTING for how to get involved.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

skypilot-0.11.1.tar.gz (2.7 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

skypilot-0.11.1-py3-none-any.whl (2.9 MB)

Uploaded Python 3

File details

Details for the file skypilot-0.11.1.tar.gz.

File metadata

  • Download URL: skypilot-0.11.1.tar.gz
  • Upload date:
  • Size: 2.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for skypilot-0.11.1.tar.gz

Algorithm   | Hash digest
SHA256      | c37913613077d10cdf0b8b7428a338256aa875a7070d98dfc86dbf91242ba9ba
MD5         | a79df5dd9efabf5a06c00a29bcee40bf
BLAKE2b-256 | cbe8efca2aab07beada5034cae4a93274a502e966bcb0df9135c919f2fb235eb

See more details on using hashes here.
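
For example, the source distribution can be checked against the SHA256 value above before installing. This is a minimal sketch; sha256sum is the GNU coreutils tool (on macOS, shasum -a 256 --check works the same way).

# Fetch only the sdist (no wheels, no dependencies) into the current directory.
pip download "skypilot==0.11.1" --no-deps --no-binary :all: -d .

# Verify the archive against the published SHA256 digest.
echo "c37913613077d10cdf0b8b7428a338256aa875a7070d98dfc86dbf91242ba9ba  skypilot-0.11.1.tar.gz" | sha256sum --check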

File details

Details for the file skypilot-0.11.1-py3-none-any.whl.

File metadata

  • Download URL: skypilot-0.11.1-py3-none-any.whl
  • Upload date:
  • Size: 2.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for skypilot-0.11.1-py3-none-any.whl

Algorithm   | Hash digest
SHA256      | 5feb20d8d5ed4c12e04a74f2f0b12f8ef5148a2f8192ef40bcf9faefcedf4136
MD5         | 82f578e99882deb87e0701b4079419eb
BLAKE2b-256 | 9d6e107f5d0f0cbcdfd0f0eebaa3387f83e586beb0d8a96e075722bb89d85b6e

See more details on using hashes here.
