
A CLI tool for quickly launching Kubernetes jobs on EIDF


kblaunch


A CLI tool for launching Kubernetes jobs with environment variable and secret management.

Installation

Using uv (recommended)

  1. Install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh

Alternatively, you can install uv using pip:

pip install uv
  2. Use uvx to run the CLI (the uvx command invokes a tool without installing it into the local .venv):
uvx kblaunch --help

When using kblaunch, always prefix the command with uvx.

Usage

Setup

Run the setup command to configure the tool (email and Slack webhook):

uvx kblaunch setup

This will go through the following steps:

  1. Set the user (optional): used to identify the user on the cluster. Defaults to $USER.
  2. Set the email (required): used to identify the user and required by the cluster.
  3. Set up Slack notifications (optional): sends a test message to the webhook and stores the webhook in the config. When your job starts, you will receive a message at the webhook. Note that a Slack webhook is also required for automatic VS Code tunnelling.
  4. Set up a PVC (optional): creates a Persistent Volume Claim (PVC) for the user to use in their jobs.
  5. Set the default PVC to use (optional): the default PVC is passed to the job and is always mounted at /pvc. Note that only one pod can use the PVC at a time.
  6. Set up git credentials (optional): if you have set up a git/RSA key on the head node, kblaunch can export it as a secret, load it automatically, and configure git credentials in your launched pods. This requires having set up the git/RSA key beforehand.

The outcome of kblaunch setup is a JSON file stored at `.cache/.kblaunch/config.json`. It should look something like this:

{
  "email": "XXX@ed.ac.uk",
  "user": "sXXX-infk8s",
  "slack_webhook": "https://hooks.slack.com/services/XXX/XXX/XXX",
  "default_pvc": "sXXX-infk8s-pvc",
  "git_secret": "sXXX-infk8s-git-ssh"
}

When you later use kblaunch to launch a job, it will use the values stored in that config.json.
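As an illustration of that precedence (a minimal sketch, not kblaunch's actual implementation; the `load_config` and `resolve` helpers below are hypothetical), a CLI flag overrides the config value, which overrides the built-in default:

```python
import json
from pathlib import Path


def load_config(path: Path) -> dict:
    """Read a kblaunch-style config file, returning {} if it does not exist."""
    if path.exists():
        return json.loads(path.read_text())
    return {}


def resolve(flag_value, config: dict, key: str, default=None):
    """CLI flag overrides config value, which overrides the default."""
    if flag_value is not None:
        return flag_value
    return config.get(key, default)


# Example: --email not passed on the CLI, so the config value wins.
cfg = {"email": "XXX@ed.ac.uk", "default_pvc": "sXXX-infk8s-pvc"}
assert resolve(None, cfg, "email") == "XXX@ed.ac.uk"
# Example: an explicit flag takes priority over the config.
assert resolve("other@ed.ac.uk", cfg, "email") == "other@ed.ac.uk"
```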

Basic Usage

Launch a simple job:

uvx kblaunch launch \
    --job-name myjob \
    --command "python script.py"

With Environment Variables

  1. From local environment:

    export PATH=...
    export OPENAI_API_KEY=...
    # pass the environment variables to the job
    uvx kblaunch launch \
        --job-name myjob \
        --command "python script.py" \
        --local-env-vars PATH,OPENAI_API_KEY
    
  2. From Kubernetes secrets:

    uvx kblaunch launch \
        --job-name myjob \
        --command "python script.py" \
        --secrets-env-vars mysecret1,mysecret2
    
  3. From .env file (default behavior):

    uvx kblaunch launch \
        --job-name myjob \
        --command "python script.py" \
        --load-dotenv
    

    If a .env exists in the current directory, it will be loaded and passed as environment variables to the job.
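A .env file is simply KEY=VALUE lines. As a rough sketch of the kind of parsing involved (illustrative only; kblaunch's actual loader may differ, e.g. it may rely on the python-dotenv package):

```python
def parse_dotenv(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        # Strip optional surrounding quotes from the value.
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


sample = '# API credentials\nOPENAI_API_KEY="sk-abc123"\nWANDB_PROJECT=myproject\n'
print(parse_dotenv(sample))
# {'OPENAI_API_KEY': 'sk-abc123', 'WANDB_PROJECT': 'myproject'}
```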

GPU Jobs

Specify GPU requirements:

uvx kblaunch launch \
    --job-name gpu-job \
    --command "python train.py" \
    --gpu-limit 2 \
    --gpu-product "NVIDIA-A100-SXM4-80GB"
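
Under the hood, GPU requests of this kind typically translate into standard Kubernetes scheduling fields along these lines (a sketch of the usual fields, not necessarily the exact YAML kblaunch generates; pass --dry-run to see the real output):

```yaml
resources:
  limits:
    nvidia.com/gpu: 2
nodeSelector:
  nvidia.com/gpu.product: NVIDIA-A100-SXM4-80GB
```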

Interactive Mode

Launch an interactive job:

uvx kblaunch launch \
    --job-name interactive \
    --interactive

Launch Options

Launch command options:

  • --email: User email (overrides config)
  • --job-name: Name of the Kubernetes job [required]
  • --docker-image: Docker image (default: "nvcr.io/nvidia/cuda:12.0.0-devel-ubuntu22.04")
  • --namespace: Kubernetes namespace (default: $KUBE_NAMESPACE)
  • --queue-name: Kueue queue name (default: $KUBE_QUEUE_NAME)
  • --interactive: Run in interactive mode (default: False)
  • --command: Command to run in the container [required if not interactive]
  • --cpu-request: CPU request (default: "1")
  • --ram-request: RAM request (default: "8Gi")
  • --gpu-limit: GPU limit (default: 1)
  • --gpu-product: GPU product type (default: "NVIDIA-A100-SXM4-40GB")
    • Available options:
      • NVIDIA-A100-SXM4-80GB
      • NVIDIA-A100-SXM4-40GB
      • NVIDIA-A100-SXM4-40GB-MIG-3g.20gb
      • NVIDIA-A100-SXM4-40GB-MIG-1g.5gb
      • NVIDIA-H100-80GB-HBM3
  • --secrets-env-vars: List of secret environment variables (default: [])
  • --local-env-vars: List of local environment variables (default: [])
  • --load-dotenv: Load environment variables from .env file (default: True)
  • --nfs-server: NFS server address (default: set to environment variable $INFK8S_NFS_SERVER_IP)
  • --pvc-name: Persistent Volume Claim name (default: default_pvc if present in config.json)
  • --dry-run: Print job YAML without creating it (default: False)
  • --priority: Priority class name (default: "default")
    • Available options: default, batch, short
  • --vscode: Install VS Code CLI in container (default: False)
  • --tunnel: Start VS Code SSH tunnel on startup (requires $SLACK_WEBHOOK and --vscode flag)
  • --startup-script: Path to startup script to run in container

Monitor command options:

  • --namespace: Kubernetes namespace (default: $KUBE_NAMESPACE)

Monitoring Commands

The kblaunch monitor command provides several subcommands to monitor cluster resources:

Displays aggregate GPU statistics for the cluster:

uvx kblaunch monitor gpus

Displays queued jobs (jobs which are waiting for GPUs):

uvx kblaunch monitor queue

Displays per-user statistics:

uvx kblaunch monitor users

Displays per-job statistics:

uvx kblaunch monitor jobs

Note that the users and jobs commands run nvidia-smi on pods to obtain GPU usage, so they are not recommended for frequent use.

Features

  • Kubernetes job management
  • Environment variable handling from multiple sources
  • Kubernetes secrets integration
  • GPU job support
  • Interactive mode
  • Automatic job cleanup
  • Slack notifications (when configured)
  • Persistent Volume Claim (PVC) management
  • VS Code integration (with Code tunnelling support)
  • Monitoring commands
