# Wafer CLI
Run GPU workloads, optimize kernels, and query GPU documentation.
## Getting Started

```bash
# Install
cd apps/wafer-cli && uv sync

# Use staging (workspaces and other features require staging)
wafer config set api.environment staging

# Login
wafer login

# Run a command on a remote GPU
wafer remote-run -- nvidia-smi
```
## Commands

### `wafer login` / `wafer logout` / `wafer whoami`

Authenticate with GitHub OAuth.

```bash
wafer login   # Opens browser for GitHub OAuth
wafer whoami  # Show current user
wafer logout  # Remove credentials
```
### `wafer remote-run`

Run any command on a remote GPU.

```bash
wafer remote-run -- nvidia-smi
wafer remote-run --upload-dir ./my_code -- python3 train.py
```
### `wafer workspaces`

Create and manage persistent GPU environments.

Available GPUs:

- `MI300X` - AMD Instinct MI300X (192 GB HBM3, ROCm)
- `B200` - NVIDIA Blackwell B200 (180 GB HBM3e, CUDA) - default

```bash
wafer target workspace list
wafer target workspace create my-workspace --gpu B200 --wait  # NVIDIA B200
wafer target workspace create amd-dev --gpu MI300X            # AMD MI300X
wafer target workspace ssh <workspace-id>
wafer target workspace delete <workspace-id>
```
### `wafer agent`

AI assistant for GPU kernel development. Helps with CUDA/Triton optimization, documentation queries, and performance analysis.

```bash
wafer agent "What is TMEM in CuTeDSL?"
wafer agent -s "optimize this kernel" < kernel.py
```
### `wafer tool eval`

Evaluate kernel correctness and performance against a reference implementation.

Functional format (default):

```bash
# Generate template files
wafer tool eval make-template ./my-kernel

# Run evaluation
wafer tool eval gpumode --impl kernel.py --reference ref.py --test-cases tests.json --benchmark
```

The implementation must define `custom_kernel(inputs)`; the reference must define `ref_kernel(inputs)` and `generate_input(**params)`.
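As a sketch of that contract, here is a toy elementwise-add pair (plain-Python stand-ins; real files would run GPU code, and the exact `tests.json` schema that feeds `**params` is not shown here):

```python
# ref.py side: input generator and reference implementation.
# Parameter names (n, seed) are illustrative; they would come
# from the test-cases file.
import random

def generate_input(n=4, seed=0):
    """Build a reproducible pair of input vectors."""
    rng = random.Random(seed)
    return ([rng.random() for _ in range(n)],
            [rng.random() for _ in range(n)])

def ref_kernel(inputs):
    """Reference implementation the harness compares against."""
    a, b = inputs
    return [x + y for x, y in zip(a, b)]

# kernel.py side: your implementation, matched against ref_kernel.
def custom_kernel(inputs):
    a, b = inputs
    return [x + y for x, y in zip(a, b)]

inputs = generate_input(n=3, seed=42)
assert custom_kernel(inputs) == ref_kernel(inputs)
```

The harness calls `generate_input` with each test case's parameters, then checks `custom_kernel`'s output against `ref_kernel`'s on the same inputs.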
KernelBench format (`ModelNew` class):

```bash
# Extract a KernelBench problem as a template
wafer tool eval kernelbench make-template level1/1

# Run evaluation
wafer tool eval kernelbench --impl my_kernel.py --reference problem.py --benchmark
```

The implementation must define `class ModelNew(nn.Module)`; the reference must define `class Model`, `get_inputs()`, and `get_init_inputs()`.
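A minimal structural sketch of that layout, using a plain-Python stand-in for `nn.Module` so it runs without torch (in real problem files both classes subclass `torch.nn.Module` and `forward` operates on tensors):

```python
class Module:
    """Plain-Python stand-in for torch.nn.Module (illustration only)."""
    def __call__(self, *args):
        return self.forward(*args)

# --- reference file (problem.py) ---
class Model(Module):
    def __init__(self, scale):
        self.scale = scale
    def forward(self, x):
        return [self.scale * v for v in x]

def get_inputs():
    """Arguments passed to forward()."""
    return [[1.0, 2.0, 3.0]]

def get_init_inputs():
    """Arguments passed to both model constructors."""
    return [2.0]

# --- implementation file (my_kernel.py) ---
class ModelNew(Module):
    """Optimized drop-in replacement; must match Model's outputs."""
    def __init__(self, scale):
        self.scale = scale
    def forward(self, x):
        return [self.scale * v for v in x]

ref = Model(*get_init_inputs())
new = ModelNew(*get_init_inputs())
assert new(*get_inputs()) == ref(*get_inputs())
```

The harness constructs both models with `get_init_inputs()`, feeds them `get_inputs()`, and compares outputs.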
### `wafer agent -t ask-docs`

Query GPU documentation using the docs template. Uses the `ask_docs` tool to search Wafer's documentation corpus via the API.

```bash
wafer agent -t ask-docs -s "What causes bank conflicts in shared memory?"
```
## Customization

### `wafer tool eval` options

```bash
# --target:    run on a specific GPU target
# --benchmark: measure performance
# --profile:   enable torch.profiler + NCU
wafer tool eval gpumode --impl k.py --reference r.py --test-cases t.json \
  --target vultr-b200 \
  --benchmark \
  --profile
```
### Profile analysis

```bash
wafer tool ncu analyze profile.ncu-rep
wafer tool nsys analyze profile.nsys-rep
```
## Advanced

### Local targets

Bypass the API and SSH directly to your own GPUs:

```bash
wafer target config list
wafer target config add ./my-gpu.toml
wafer target config default my-gpu
```
### Defensive evaluation

Detect evaluation hacking (stream injection, lazy evaluation, etc.):

```bash
wafer tool eval gpumode --impl k.py --reference r.py --test-cases t.json --benchmark --defensive
```
### Other tools

```bash
wafer tool perfetto <trace.json> --query "SELECT * FROM slice"  # Perfetto SQL queries
wafer tool capture ./script.py                                  # Capture execution snapshot
wafer compiler-analyze kernel.ptx                               # Analyze PTX/SASS
```
### ROCm profiling (AMD GPUs)

```bash
wafer tool rocprof-sdk ...
wafer tool rocprof-systems ...
wafer tool rocprof-compute ...
```
## Shell Completion

Enable tab completion for commands, options, and target names:

```bash
# Install completion (zsh/bash/fish)
wafer --install-completion

# Then restart your terminal, or source your shell config:
source ~/.zshrc  # or ~/.bashrc
```
Now you can tab-complete:

- Commands: `wafer tool ev<TAB>` → `wafer tool eval`
- Options: `wafer tool eval --<TAB>`
- Target names: `wafer tool eval --target v<TAB>` → `wafer tool eval --target vultr-b200`
- File paths: `wafer tool eval gpumode --impl ./<TAB>`
## AI Assistant Skills

Install the Wafer CLI skill to make wafer commands discoverable by your AI coding assistant:

```bash
# Install for all supported tools (Claude Code, Codex CLI, Cursor)
wafer skill install

# Install for a specific tool
wafer skill install -t cursor  # Cursor
wafer skill install -t claude  # Claude Code
wafer skill install -t codex   # Codex CLI

# Check installation status
wafer skill status

# Uninstall
wafer skill uninstall
```
### Installing from GitHub (Cursor)

You can also install the skill directly from GitHub in Cursor:

1. Open Cursor Settings (Cmd+Shift+J / Ctrl+Shift+J)
2. Navigate to Rules → Add Rule → Remote Rule (GitHub)
3. Enter `https://github.com/wafer-ai/skills`

Cursor will automatically discover skills in `.cursor/skills/`.
The skill provides comprehensive guidance for GPU kernel development, including documentation lookup, trace analysis, kernel evaluation, and optimization workflows.
## Requirements

- Python 3.10+
- GitHub account (for authentication)