
Minimal ML experiment platform wrapping Azure ML, Vertex AI, and Vercel Sandbox

Project description

NUCL

Minimal ML experiment platform. 2,000 lines of code.

NUCL wraps Azure ML, Vertex AI, and Vercel Sandbox behind a unified CLI and web dashboard. No servers to manage, no databases to maintain, no collectors to deploy. Every feature delegates to a managed service.

nucl run --name "vision/resnet-50-v2" --script train.py --gpu-type t4
nucl ps vision/
nucl log vision/resnet-50-v2 -f
nucl pull vision/resnet-50-v2

Architecture

graph LR
    subgraph Clients
        CLI["CLI (Python)"]
        Web["Web Dashboard"]
    end

    subgraph Vercel
        API["Next.js API Routes"]
        Auth["Clerk (auth + API keys)"]
    end

    subgraph Sandbox["Vercel Sandbox (Python 3.13)"]
        AzureSDK["azure-ai-ml SDK"]
        VertexSDK["Vertex AI SDK"]
    end

    subgraph Platforms
        AzureML["Azure ML"]
        VertexAI["Vertex AI"]
        SandboxRun["Sandbox (CPU)"]
    end

    CLI -- HTTP --> API
    Web -- HTTP --> API
    API --> Auth
    API -- "job submission" --> Sandbox
    API -. "read ops (list, logs, cancel)" .-> Platforms
    AzureSDK --> AzureML
    VertexSDK --> VertexAI
    Sandbox --> SandboxRun

Job submission spins up a short-lived Vercel Sandbox with Python 3.13 and uses the official cloud SDKs to upload code and create training jobs. Read operations (list, logs, cancel) use direct REST API calls.

Three platforms:

| Platform  | GPU      | Use case                             |
|-----------|----------|--------------------------------------|
| Azure ML  | Yes      | Production training on Azure         |
| Vertex AI | Yes      | Production training on GCP           |
| Sandbox   | No (CPU) | Quick tests, no cloud account needed |

Getting Started

Install the CLI

uv tool install nucl
nucl --help

For users: join a team and run jobs

Your team admin will have already configured cloud credentials. You just need to log in and start running jobs.

# 1. Log in (opens browser)
nucl auth login

# 2. See your teams and pick one
nucl team list
nucl team set <org-id>

# 3. Check what's configured
nucl team show

# 4. Run a job
nucl run --name "my-project/first-test" --script train.py --gpu-type t4

# 5. Monitor it
nucl ps
nucl log <job-id> -f
nucl pull <job-id> ./outputs

For admins: set up a team

You need the az and/or gcloud CLI installed and authenticated.

# 1. Log in
nucl auth login

# 2. Pick your team
nucl team list
nucl team set <org-id>

# 3. Run the interactive setup wizard
nucl team setup

The wizard will:

  • List your Azure subscriptions and ML workspaces (or GCP projects)
  • Create a service principal (Azure) or service account (Vertex) for NUCL
  • If you lack Owner permissions for role assignment, it prints the exact command for an admin to run
  • Save encrypted credentials to NUCL (they never touch anyone's local machine)

You can also configure credentials manually:

nucl team config azure
nucl team config vertex

Running a sample job

Create a train.py:

import time

print("Starting training...")
for epoch in range(5):
    loss = 1.0 / (epoch + 1)
    print(f"Epoch {epoch}: loss={loss:.4f}")
    time.sleep(1)
print("Done!")

Run it on Sandbox (no GPU, no cloud account needed):

nucl run --name "test/hello-world" --script train.py
nucl ps
nucl log <job-id> -f

Run it on Azure ML with a T4 GPU:

nucl run --name "test/gpu-test" --script train.py --gpu-type t4
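The -f flag on nucl log follows output until the job ends. Conceptually that is a polling tail; a minimal sketch of the idea (the lambdas stand in for the real log-fetch and status calls, which this is not):

```python
import time
from typing import Callable


def follow_logs(fetch: Callable[[int], list[str]],
                job_done: Callable[[], bool],
                interval: float = 0.0):
    """Yield new log lines by polling fetch(offset) until job_done()."""
    offset = 0
    while True:
        new_lines = fetch(offset)       # only lines we have not seen yet
        yield from new_lines
        offset += len(new_lines)
        if job_done():
            return
        time.sleep(interval)            # back off between polls


# Demo against an in-memory "job" instead of the real API:
log_buffer = ["Epoch 0: loss=1.0000", "Epoch 1: loss=0.5000", "Done!"]
lines = list(follow_logs(lambda off: log_buffer[off:], lambda: True))
print(lines)
```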

Experiment naming

Use / to organize experiments into folders:

lung-cancer/detection/yolov9-baseline
lung-cancer/detection/yolov9-augmented
breast-cancer/screening/resnet-50

Filter by prefix: nucl ps lung-cancer/detection/
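Prefix filtering is plain string matching on the /-separated name. As an illustration of the matching rule (not NUCL's implementation):

```python
experiments = [
    "lung-cancer/detection/yolov9-baseline",
    "lung-cancer/detection/yolov9-augmented",
    "breast-cancer/screening/resnet-50",
]


def filter_by_prefix(names: list[str], prefix: str) -> list[str]:
    # Keep every experiment whose name starts with the given folder prefix,
    # mirroring `nucl ps lung-cancer/detection/`.
    return [n for n in names if n.startswith(prefix)]


matches = filter_by_prefix(experiments, "lung-cancer/detection/")
print(matches)  # the two lung-cancer/detection experiments
```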

In-job logging

NUCL does not ship a custom SDK. Use MLflow directly:

import mlflow

mlflow.log_param("learning_rate", 0.001)
mlflow.log_metric("accuracy", 0.95)
mlflow.log_artifact("model.pth")

Both Azure ML and Vertex AI natively support MLflow.

CLI Reference

nucl auth login|logout|status       Auth
nucl team list|show|set|setup       Teams
nucl team config azure|vertex       Manual credential entry
nucl run --name --script [--gpu-type]  Submit job
nucl ps [prefix]                    List jobs
nucl log <id> [-f]                  Stream logs
nucl stop <id>                      Cancel job
nucl pull <id> [target]             Download outputs
nucl model ls|pull                  Models
nucl hpo run <config.yaml>          HPO sweeps
nucl mcp serve                      MCP server for AI agents

MCP Server for AI Agents

NUCL ships an MCP server so AI agents (Claude, Cursor, etc.) can submit jobs, check status, and pull results.

Quick setup

bunx add-mcp "nucl mcp serve" --name nucl

This detects your installed agents (Claude Code, Cursor, etc.) and registers NUCL as an MCP server. Make sure you're logged in (nucl auth login) and have a team set (nucl team set <org-id>) first.

Available tools

The MCP server exposes all CLI operations: nucl_auth_status, nucl_team_list, nucl_team_show, nucl_team_set, nucl_run, nucl_ps, nucl_log, nucl_stop, nucl_pull, nucl_model_ls, nucl_model_pull, nucl_hpo_run, and team config tools.

Tech Stack

| Layer              | Technology                          |
|--------------------|-------------------------------------|
| CLI                | Python 3.11+, Click, httpx          |
| Web                | Next.js 16, React 19, TypeScript 6  |
| UI                 | shadcn, Tailwind CSS 4, TanStack Table |
| Data fetching      | TanStack Query 5                    |
| Auth               | Clerk 7 (Organizations, API keys)   |
| Job submission     | Vercel Sandbox (Python 3.13)        |
| Encryption         | AES-256-GCM                         |
| Package management | uv (Python), Bun (JS)               |

Deploying the Web Dashboard

cd web
bun install
bun dev

Environment variables (set in Vercel):

NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_...
CLERK_SECRET_KEY=sk_...
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up
ENCRYPTION_KEY=<openssl rand -hex 32>
VERCEL_TOKEN=...
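ENCRYPTION_KEY is 32 random bytes hex-encoded (64 characters), matching the AES-256 key size. If openssl is unavailable, the Python stdlib produces an equivalent value:

```python
import secrets

# Equivalent of `openssl rand -hex 32`: 32 random bytes, hex-encoded.
# AES-256 uses a 32-byte key, so the hex string is 64 characters long.
key = secrets.token_hex(32)
print(key)
```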

License

Internal use only.



Download files


Source Distribution

nucl-0.13.0.tar.gz (20.3 kB)


Built Distribution


nucl-0.13.0-py3-none-any.whl (14.3 kB)


File details

Details for the file nucl-0.13.0.tar.gz.

File metadata

  • Download URL: nucl-0.13.0.tar.gz
  • Upload date:
  • Size: 20.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for nucl-0.13.0.tar.gz
| Algorithm   | Hash digest                                                      |
|-------------|------------------------------------------------------------------|
| SHA256      | 2c7ef45bd41f76c70d29499494ec7dde26a929cbaadc4c0643b7d05b160a698a |
| MD5         | 4889b1a1c4e3d7d26ad43bd802576aee                                 |
| BLAKE2b-256 | 0453c8802548bbfdf3dd2b5cf98b86269ba52cdd9c488044d3491dcbb802f256 |


Provenance

The following attestation bundles were made for nucl-0.13.0.tar.gz:

Publisher: publish-cli.yml on lunit-io/nucl

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file nucl-0.13.0-py3-none-any.whl.

File metadata

  • Download URL: nucl-0.13.0-py3-none-any.whl
  • Upload date:
  • Size: 14.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for nucl-0.13.0-py3-none-any.whl
| Algorithm   | Hash digest                                                      |
|-------------|------------------------------------------------------------------|
| SHA256      | 007dd498e8a11fe4b473d3ab1f8941044db7b682fd9527d59e824be7aac53e37 |
| MD5         | 80dc4df7817e24a8272b62be4683f006                                 |
| BLAKE2b-256 | 15760e9d2c441ae31e5a0fea363d9466fd87711018756d498ceecb26176e3222 |


Provenance

The following attestation bundles were made for nucl-0.13.0-py3-none-any.whl:

Publisher: publish-cli.yml on lunit-io/nucl

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
