
CLI for running GPU workloads, managing remote workspaces, and evaluating/optimizing kernels

Project description

Wafer CLI

Run GPU workloads, optimize kernels, and query GPU documentation.

Getting Started

# Install
cd apps/wafer-cli && uv sync

# Use staging (workspaces and other features require staging)
wafer config set api.environment staging

# Login
wafer login

# Run a command on a remote GPU
wafer remote-run -- nvidia-smi

Commands

wafer login / wafer logout / wafer whoami

Authenticate with GitHub OAuth.

wafer login          # Opens browser for GitHub OAuth
wafer whoami         # Show current user
wafer logout         # Remove credentials

wafer remote-run

Run any command on a remote GPU.

wafer remote-run -- nvidia-smi
wafer remote-run --upload-dir ./my_code -- python3 train.py

wafer workspaces

Create and manage persistent GPU environments.

Available GPUs:

  • MI300X - AMD Instinct MI300X (192GB HBM3, ROCm)
  • B200 - NVIDIA Blackwell B200 (180GB HBM3e, CUDA) - default

wafer workspaces list
wafer workspaces create my-workspace --gpu B200 --wait   # NVIDIA B200
wafer workspaces create amd-dev --gpu MI300X             # AMD MI300X
wafer workspaces ssh <workspace-id>
wafer workspaces delete <workspace-id>

wafer agent

AI assistant for GPU kernel development. Helps with CUDA/Triton optimization, documentation queries, and performance analysis.

wafer agent "What is TMEM in CuTeDSL?"
wafer agent -s "optimize this kernel" < kernel.py

wafer evaluate

Evaluate kernel correctness and performance against a reference implementation.

Functional format (default):

# Generate template files
wafer evaluate make-template ./my-kernel

# Run evaluation
wafer evaluate --impl kernel.py --reference ref.py --test-cases tests.json --benchmark

The implementation must define custom_kernel(inputs); the reference must define ref_kernel(inputs) and generate_input(**params).
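As a rough sketch of the expected file layout (a torch-free toy for illustration; real kernels operate on GPU tensors, and the input shape, parameter names, and doubling operation here are invented, not taken from the CLI):

```python
# ref.py -- reference side: defines generate_input and ref_kernel.

def generate_input(n=8, **params):
    """Build a deterministic input for one test case (toy: a list of ints)."""
    return {"x": list(range(n))}

def ref_kernel(inputs):
    """Reference implementation: double every element."""
    return [v * 2 for v in inputs["x"]]

# kernel.py -- implementation side: must expose custom_kernel(inputs)
# and produce the same outputs as ref_kernel for every generated input.
def custom_kernel(inputs):
    return [v + v for v in inputs["x"]]
```

Presumably the harness builds inputs via generate_input for each entry in the test-cases file and compares custom_kernel's output against ref_kernel's.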

KernelBench format (ModelNew class):

# Extract a KernelBench problem as template
wafer evaluate kernelbench make-template level1/1

# Run evaluation
wafer evaluate kernelbench --impl my_kernel.py --reference problem.py --benchmark

The implementation must define a class ModelNew(nn.Module); the reference must define class Model, get_inputs(), and get_init_inputs().
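A minimal sketch of the two files (the ReLU problem and tensor shape here are invented for illustration; real KernelBench problems are more substantial):

```python
import torch
import torch.nn as nn

# problem.py -- reference side of a KernelBench problem.
class Model(nn.Module):
    def forward(self, x):
        return torch.relu(x)

def get_init_inputs():
    return []                    # positional args for Model.__init__

def get_inputs():
    return [torch.randn(16)]     # positional args for Model.forward

# my_kernel.py -- implementation side: ModelNew must match Model's outputs.
class ModelNew(nn.Module):
    def forward(self, x):
        return torch.clamp(x, min=0.0)   # same math as relu, different op
```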

wafer wevin -t ask-docs

Query GPU documentation using the docs template.

wafer wevin -t ask-docs --json -s "What causes bank conflicts in shared memory?"

wafer corpus

Download documentation to local filesystem for agents to search.

wafer corpus list
wafer corpus download cuda-programming-guide

Customization

wafer remote-run options

wafer remote-run --image pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel -- python3 script.py
wafer remote-run --require-hwc -- ncu --set full python3 bench.py   # Hardware counters for NCU

wafer evaluate options

wafer evaluate --impl k.py --reference r.py --test-cases t.json \
    --target vultr-b200 \
    --benchmark \
    --profile

# --target selects a specific GPU target, --benchmark measures performance,
# and --profile enables torch.profiler + NCU.

wafer push for multi-command workflows

WORKSPACE=$(wafer push ./project)
wafer remote-run --workspace-id $WORKSPACE -- python3 test1.py
wafer remote-run --workspace-id $WORKSPACE -- python3 test2.py

Profile analysis

wafer nvidia ncu analyze profile.ncu-rep
wafer nvidia nsys analyze profile.nsys-rep

Advanced

Local targets

Bypass the API and SSH directly to your own GPUs:

wafer targets list
wafer targets add ./my-gpu.toml
wafer targets default my-gpu
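A target file describes how to reach your machine over SSH. The field names below are an illustrative guess, not the authoritative schema; consult the CLI's own documentation for the real format.

```toml
# my-gpu.toml -- hypothetical sketch; actual field names may differ
name = "my-gpu"
host = "gpu.example.com"
user = "ubuntu"
```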

Defensive evaluation

Detect evaluation hacking (stream injection, lazy evaluation, etc.):

wafer evaluate --impl k.py --reference r.py --test-cases t.json --benchmark --defensive
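To see why such checks matter, here is a torch-free sketch (all names invented) of the "lazy evaluation" pattern: a kernel that defers its real work past the timed region, so a naive benchmark reports a near-zero runtime.

```python
import time

def honest_kernel(xs):
    """Does the work inside the call, where the benchmark can see it."""
    return [v * 2 for v in xs]

class LazyResult:
    """Defers the real computation until someone inspects the result."""
    def __init__(self, xs):
        self.xs = xs
    def materialize(self):
        return [v * 2 for v in self.xs]

def lazy_kernel(xs):
    return LazyResult(xs)   # returns immediately; no work done yet

def naive_bench(fn, xs):
    """Times only the call itself -- exactly what the lazy hack exploits."""
    t0 = time.perf_counter()
    fn(xs)
    return time.perf_counter() - t0

xs = list(range(1_000_000))
# naive_bench(lazy_kernel, xs) is dramatically smaller than
# naive_bench(honest_kernel, xs), even though no work was done.
```

A defensive evaluator would force materialization (and check for stream injection and similar tricks) before trusting the timing.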

Other tools

wafer perfetto <trace.json> --query "SELECT * FROM slice"   # Perfetto SQL queries
wafer capture ./script.py                                    # Capture execution snapshot
wafer compiler-analyze kernel.ptx                            # Analyze PTX/SASS

ROCm profiling (AMD GPUs)

wafer rocprof-sdk ...
wafer rocprof-systems ...
wafer rocprof-compute ...

Shell Completion

Enable tab completion for commands, options, and target names:

# Install completion (zsh/bash/fish)
wafer --install-completion

# Then restart your terminal, or source your shell config:
source ~/.zshrc  # or ~/.bashrc

Now you can tab-complete:

  • Commands: wafer eva<TAB> → wafer evaluate
  • Options: wafer evaluate --<TAB>
  • Target names: wafer evaluate --target v<TAB> → wafer evaluate --target vultr-b200
  • File paths: wafer evaluate --impl ./<TAB>

AI Assistant Skills

Install the Wafer CLI skill to make wafer commands discoverable by your AI coding assistant:

# Install for all supported tools (Claude Code, Codex CLI, Cursor)
wafer skill install

# Install for a specific tool
wafer skill install -t cursor    # Cursor
wafer skill install -t claude    # Claude Code
wafer skill install -t codex     # Codex CLI

# Check installation status
wafer skill status

# Uninstall
wafer skill uninstall

Installing from GitHub (Cursor)

You can also install the skill directly from GitHub in Cursor:

  1. Open Cursor Settings (Cmd+Shift+J / Ctrl+Shift+J)
  2. Navigate to Rules → Add Rule → Remote Rule (GitHub)
  3. Enter: https://github.com/wafer-ai/skills
  4. Cursor will automatically discover skills in .cursor/skills/

The skill provides comprehensive guidance for GPU kernel development, including documentation lookup, trace analysis, kernel evaluation, and optimization workflows.


Requirements

  • Python 3.10+
  • GitHub account (for authentication)

Download files


Source Distribution

wafer_cli-0.2.50.tar.gz (259.2 kB)

Built Distribution


wafer_cli-0.2.50-py3-none-any.whl (241.0 kB)

File details

Details for the file wafer_cli-0.2.50.tar.gz:

  • Size: 259.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

Hashes:

  • SHA256: 9927a4d2c103c6b0ca78d7e66f1a6ede3f5edf9f7ddc5713b674b0da47f6ec02
  • MD5: 74e4a3940d9858aa1a9ec5c0812b9148
  • BLAKE2b-256: a87eb98edf48ad4c2d1dc4f57b6c30a6eec15cc9bc0bf51802224db332fbf700

Details for the file wafer_cli-0.2.50-py3-none-any.whl:

  • Size: 241.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

Hashes:

  • SHA256: b89cb56b97cbdf4fdf84b9b3702a532f9ee1138769942173532e0ecd00bc1b7b
  • MD5: d458a16d6f4d412a53746f5d6252453f
  • BLAKE2b-256: b3a924722ab479b8be8f8786983f663d551cf728e6561e9bd2a9f2128e11a407
