
Prime Intellect CLI + SDK

Project description

Prime Intellect CLI & SDKs

Command line interface and SDKs for Prime Lab, Hosted Training, GPU resources, sandboxes, and environments.

Overview

Prime is the official CLI and Python SDK for Prime Intellect, providing seamless access to Prime Lab workflows, Hosted Training, GPU compute infrastructure, remote code execution environments (sandboxes), and AI inference capabilities.

What can you do with Prime?

  • Deploy GPU pods with H100, A100, and other high-performance GPUs
  • Set up Lab workspaces for verifiers environments, evals, GEPA, and training
  • Discover and launch Hosted Training runs against verifiers environments
  • Create and manage isolated sandbox environments for running code
  • Access hundreds of pre-configured development environments
  • SSH directly into your compute instances
  • Manage team resources and permissions
  • Run OpenAI-compatible inference requests

Installation

Using uv (recommended)

First, install uv if you haven't already:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then install prime:

uv tool install prime

Using pip

pip install prime

Quick Start

Authentication

# Interactive login (recommended)
prime login

# Or set API key directly
prime config set-api-key

# Or use environment variable
export PRIME_API_KEY="your-api-key-here"

Get your API key from the Prime Intellect Dashboard.

Basic Usage

# Browse environments on the hub
prime env list

# Set up a Lab workspace
prime lab setup

# See available Hosted Training models, capacity, and pricing
prime train models

# Generate and launch a Hosted Training config
prime train init
prime train rl.toml

# List available GPUs
prime availability list

# Create a GPU pod
prime pods create --gpu A100 --count 1

# SSH into a pod
prime pods ssh <pod-id>

# Create a sandbox
prime sandbox create python:3.11

Features

Lab and Hosted Training

Prime Lab connects verifiers environments to evaluations, GEPA prompt optimization, and Hosted Training. Start with prime lab setup to create a local workspace with starter configs, then use prime train models to choose a Hosted Training model with current capacity and pricing.

# Set up a Lab workspace
prime lab setup

# List trainable models, capacity, and token pricing
prime train models

# Generate a Hosted Training config
prime train init

# Launch the run from the generated config
prime train rl.toml

# Inspect and manage Hosted Training runs
prime train list
prime train logs <run-id> -f
prime train metrics <run-id>
prime train checkpoints <run-id>

Environments Hub

Access hundreds of RL environments on our community hub, with deep integration into our sandboxes, training, and evaluation stack.

# Browse available environments
prime env list

# View environment details
prime env info <environment-name>

# Inspect environment source without downloading the archive
prime env inspect <environment-name>

# Install an environment locally
prime env install <environment-name>

# Create and push your own environment
prime env init my-environment
prime env push my-environment

Environments provide pre-configured setups for machine learning, data science, and development workflows, tested and verified by the Prime Intellect community.

GPU Pod Management

Deploy and manage GPU compute instances:

# Browse available configurations
prime availability list --gpu-type H100_80GB

# Create a pod with specific configuration
prime pods create --id <config-id> --name my-training-pod

# Monitor pod status
prime pods status <pod-id>

# SSH access
prime pods ssh <pod-id>

# Terminate when done
prime pods terminate <pod-id>

Sandboxes

Isolated environments for running code remotely:

# Create a sandbox
prime sandbox create python:3.11

# Create a VM sandbox with GPUs
prime sandbox create user-1/vm-image:latest --vm --gpu-count 1 --gpu-type H100_80GB

# Create a CPU-only VM sandbox
prime sandbox create user-1/vm-image:latest --vm

# List sandboxes
prime sandbox list

# Execute commands
prime sandbox run <sandbox-id> -- python script.py

# Upload/download files
prime sandbox upload <sandbox-id> local_file.py /remote/path/
prime sandbox download <sandbox-id> /remote/file.txt ./local/

# Clean up
prime sandbox delete <sandbox-id>

Team Management

Manage resources across personal and team contexts:

# List your teams
prime teams list

# Switch context directly
prime switch
prime switch personal
prime switch <team-slug>
prime switch <team-id>  # fallback for teams without a slug

# All subsequent commands use the selected context
prime pods list

Configuration

API Key

Multiple ways to configure your API key:

# Option 1: Interactive (hides input)
prime config set-api-key

# Option 2: Direct
prime config set-api-key YOUR_API_KEY

# Option 3: Environment variable
export PRIME_API_KEY="your-api-key"

Configuration priority: a key stored via CLI config takes precedence over the environment variable.
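As an illustration of that precedence (this helper is not part of the SDK; the name `resolve_api_key` is hypothetical), the resolution order can be sketched as:

```python
import os

def resolve_api_key(cli_config_key=None):
    """Resolve the API key using the documented priority:
    a key stored via `prime config set-api-key` wins over PRIME_API_KEY."""
    if cli_config_key:
        return cli_config_key
    return os.environ.get("PRIME_API_KEY")

# The CLI-configured key wins even when the environment variable is set.
os.environ["PRIME_API_KEY"] = "env-key"
print(resolve_api_key("config-key"))  # → config-key
print(resolve_api_key())              # → env-key
```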

SSH Key

Configure SSH key for pod access:

prime config set-ssh-key-path ~/.ssh/id_rsa.pub

View Configuration

prime config view

Python SDK

Prime also provides a Python SDK for programmatic access:

from prime_sandboxes import APIClient, SandboxClient, CreateSandboxRequest

# Initialize client
client = APIClient(api_key="your-api-key")
sandbox_client = SandboxClient(client)

# Create a sandbox
sandbox = sandbox_client.create(CreateSandboxRequest(
    name="my-sandbox",
    docker_image="python:3.11-slim",
    cpu_cores=2,
    memory_gb=4,
))

# Wait for creation
sandbox_client.wait_for_creation(sandbox.id)

# Execute commands
result = sandbox_client.execute_command(sandbox.id, "python --version")
print(result.stdout)

# Clean up
sandbox_client.delete(sandbox.id)

Async SDK

import asyncio
from prime_sandboxes import AsyncSandboxClient, CreateSandboxRequest

async def main():
    async with AsyncSandboxClient(api_key="your-api-key") as client:
        sandbox = await client.create(CreateSandboxRequest(
            name="async-sandbox",
            docker_image="python:3.11-slim",
        ))

        await client.wait_for_creation(sandbox.id)
        result = await client.execute_command(sandbox.id, "echo 'Hello'")
        print(result.stdout)

        await client.delete(sandbox.id)

asyncio.run(main())
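Because the async client's methods are coroutines, several sandboxes can be driven in parallel with asyncio.gather. The sketch below is illustrative, built only on the methods shown above; the helper name run_in_sandboxes is hypothetical, the SDK import is deferred into the function, and the final call only executes when PRIME_API_KEY is set.

```python
import asyncio
import os

async def run_in_sandboxes(commands, docker_image="python:3.11-slim"):
    """Run each command in its own sandbox concurrently; return a list of stdout.

    Hypothetical helper composed from the AsyncSandboxClient methods shown above.
    """
    from prime_sandboxes import AsyncSandboxClient, CreateSandboxRequest

    async with AsyncSandboxClient(api_key=os.environ["PRIME_API_KEY"]) as client:
        async def run_one(i, command):
            sandbox = await client.create(CreateSandboxRequest(
                name=f"batch-sandbox-{i}",
                docker_image=docker_image,
            ))
            try:
                await client.wait_for_creation(sandbox.id)
                result = await client.execute_command(sandbox.id, command)
                return result.stdout
            finally:
                # Always clean up, even if a command fails.
                await client.delete(sandbox.id)

        return await asyncio.gather(*(run_one(i, c) for i, c in enumerate(commands)))

if os.environ.get("PRIME_API_KEY"):
    outputs = asyncio.run(run_in_sandboxes(["python --version", "echo hello"]))
    print(outputs)
```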

Use Cases

Machine Learning Training

# Deploy a pod with 8x H100 GPUs
prime pods create --gpu H100 --count 8 --name ml-training

# SSH and start training
prime pods ssh <pod-id>

Support & Resources

Related Packages

  • prime-sandboxes - Lightweight SDK for sandboxes only (if you don't need the full CLI)

License

MIT License - see LICENSE file for details.

Project details



Download files

Download the file for your platform.

Source Distribution

prime-0.6.4.tar.gz (620.5 kB)

Built Distribution

prime-0.6.4-py3-none-any.whl (413.9 kB)

File details

Details for the file prime-0.6.4.tar.gz.

File metadata

  • Download URL: prime-0.6.4.tar.gz
  • Size: 620.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for prime-0.6.4.tar.gz
Algorithm Hash digest
SHA256 b2586ce8624a4a3fe494e4b0290b819c2b2f03aceba221146b31a10626d2d559
MD5 35270b349c16dd96ac5bc39e3d0ef1cd
BLAKE2b-256 6d657ba9a9f05ba5e062cb857d79280c1d1df0e81d68607ddaa15120507f96b1


File details

Details for the file prime-0.6.4-py3-none-any.whl.

File metadata

  • Download URL: prime-0.6.4-py3-none-any.whl
  • Size: 413.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for prime-0.6.4-py3-none-any.whl
Algorithm Hash digest
SHA256 6a3777a4a25021da91b079bb70d99f79c56b5bb4f9ad5880a0a4493f578d4f28
MD5 dd76e3234cb2f06fc38311e04b223a0d
BLAKE2b-256 cbda4277b13bc24e849de0bed537e7d7e6097422af52bc44b4dc6b196c459937

