
Official Python SDK for Magpie Cloud - Launch ephemeral VMs and run batch jobs


Magpie Python SDK

The Magpie SDK is the official Python client for the MicroVM Platform orchestrator. It wraps the REST API with friendly helpers so you can ship ephemeral batch jobs, long-lived development VMs, or stateful workflows without wiring up HTTP calls yourself.


Installation

pip install magpie-cloud

The package targets Python 3.9+. Install it inside a virtual environment when possible.


Getting Started

from magpie import Magpie

client = Magpie(
    api_key="YOUR_API_KEY",
    base_url="http://localhost:8080",  # Override if you host the orchestrator elsewhere
)

All SDK methods return typed pydantic models or plain Python dictionaries. The jobs resource is the central entry point for launching workloads and inspecting their lifecycle.


Job Modes at a Glance

| Flag | Default | What it controls | Typical use cases |
| --- | --- | --- | --- |
| `persist` | `False` | Keeps the VM alive after the script exits | Interactive/dev shells, long-running services |
| `stateful` | `False` | Mounts a reusable `/workspace` disk between runs | CI caches, incremental builds |
| `ip_lease` | `False` | Allocates a routable IPv6 address for SSH/HTTP access | Remote editors, port forwarding, health checks |

Choosing the right combination

  • Ephemeral jobs (persist=False, stateful=False) — fast one-shot tasks such as CI steps, data transforms, smoke tests.
  • Stateful batches (stateful=True) — pipelines that reuse build artifacts or cached data stored in /workspace between runs.
  • Persistent VMs (persist=True) — development sandboxes or services that must continue running (often together with ip_lease=True for remote access).

You can mix stateful with persist if you want a long-lived VM and a durable disk.
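One way to keep the combinations straight is a tiny helper that maps workload requirements onto the three flags. This is purely illustrative (the helper is not part of the SDK), but the flag names match the table above:

```python
def pick_flags(long_lived: bool, reuse_disk: bool, needs_ssh: bool) -> dict:
    """Map workload requirements to Magpie job flags (see the table above)."""
    return {
        "persist": long_lived,   # keep the VM alive after the script exits
        "stateful": reuse_disk,  # mount a reusable /workspace disk between runs
        "ip_lease": needs_ssh,   # allocate a routable IPv6 address
    }

# Ephemeral CI step: everything stays at the default False
print(pick_flags(False, False, False))

# Dev sandbox reachable over SSH: persist + ip_lease, no durable disk
print(pick_flags(True, False, True))
```

The resulting dictionary can be splatted into `client.jobs.create(**flags, ...)` if you find that style clearer than spelling the keywords out.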


Example: Ephemeral Job (default flags)

from magpie import Magpie

client = Magpie(api_key="...", base_url="http://localhost:8080")

response = client.jobs.create(
    name="hello-world",
    script="echo 'Hello from Magpie!'",
    vcpus=2,
    memory_mb=512,
)

request_id = response["request_id"]

# Poll until the job finishes
status = client.jobs.wait_for_completion(request_id, timeout=180)
print(status.status, status.exit_code)

# Fetch logs
logs = client.jobs.get_logs(request_id)
for entry in logs:
    print(entry.timestamp, entry.message)

# Summarise the result
result = client.jobs.get_result(request_id)
print(result.success, result.logs)

Use this mode for short jobs that can start fresh every time.


Example: Persistent VM with IPv6 + SSH

Persistent VMs keep running after the initial script exits. Combine persist=True and ip_lease=True to receive an IPv6 address and execute SSH commands via the orchestrator. The backend manages the password (TestTempPassword!) so your users only need the job ID.

from magpie import Magpie

client = Magpie(api_key="...", base_url="http://localhost:8080")

handle = client.jobs.create_persistent_vm(
    name="dev-shell",
    script="echo ready",
    poll_timeout=180,
    poll_interval=3,
)

print("Job ID:", handle.request_id)
print("Assigned IPv6:", handle.ip_address)

# Run ssh commands through the orchestrator
result = client.jobs.ssh(handle.request_id, "uname -a")
print("exit:", result.exit_code)
print("stdout:\n", result.stdout)

# Execute another command (commands can run concurrently from your client)
client.jobs.ssh(handle.request_id, "mkdir -p /workspace/app && ls -la /workspace")

When to use this pattern

  • Remote development environments
  • Always-on demos and staging servers
  • Long-running background jobs where you need to periodically run maintenance commands

client.jobs.ssh returns only after the remote command finishes (or the timeout is reached). Issue parallel SSH commands from separate coroutines/threads to run them concurrently.
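Since each `jobs.ssh` call blocks until the remote command exits, a thread pool is one way to fan commands out in parallel. The sketch below substitutes a stand-in function for `client.jobs.ssh` so it runs standalone; in real code you would call the SDK method with the same arguments:

```python
from concurrent.futures import ThreadPoolExecutor

def ssh(request_id: str, command: str) -> str:
    # Stand-in for client.jobs.ssh(request_id, command).
    # The real call blocks until the remote command finishes.
    return f"{request_id}: ran {command!r}"

commands = ["uname -a", "df -h /workspace", "systemctl status app"]

# map() preserves input order, so results line up with `commands`
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda cmd: ssh("job-123", cmd), commands))

for line in results:
    print(line)
```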


Example: Stateful Workspace (persisting /workspace)

Stateful jobs attach a named block device mounted at /workspace. The disk survives between runs as long as you reuse the same workspace_id.

client = Magpie(api_key="...", base_url="http://localhost:8080")

# First run creates the workspace
initial = client.jobs.run_and_wait(
    name="init-cache",
    script="echo 'counter=0' > /workspace/state.txt",
    vcpus=2,
    memory_mb=512,
    stateful=True,
    workspace_size_gb=5,
)

workspace_id = initial.request_id

# Subsequent run reuses the disk
second = client.jobs.run_and_wait(
    name="increment",
    script="""
    source /workspace/state.txt
    counter=$((counter + 1))
    echo "counter=${counter}" > /workspace/state.txt
    echo "Counter is now ${counter}"
    """,
    stateful=True,
    workspace_id=workspace_id,
)
print(second.logs)

Use stateful workspaces for build caches, dependency layers, or any workflow that benefits from keeping files between job runs without holding the VM open.


Inspecting Jobs

  • client.jobs.get_status(request_id) — lightweight status poller.
  • client.jobs.get_vm_info(request_id) — returns VM metadata (IDs, IPv6/IPv4) for persistent jobs.
  • client.jobs.get_logs(request_id) — grabs the accumulated log buffer.
  • client.jobs.stream_logs(request_id) — generator that streams logs in real time (useful from a CLI or TUI).
  • client.jobs.cancel(request_id) — best-effort cancellation for running jobs.
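Because `stream_logs` returns a generator, you consume it with a plain `for` loop. The sketch below fakes the generator so it runs standalone; the entry fields are assumptions based on the `get_logs` example above:

```python
def stream_logs(request_id):
    # Stand-in for client.jobs.stream_logs(request_id): yields log entries
    # one at a time as they arrive, instead of returning the whole buffer.
    for i in range(3):
        yield {"timestamp": f"2024-01-01T00:00:0{i}Z", "message": f"line {i}"}

for entry in stream_logs("job-123"):
    print(entry["timestamp"], entry["message"])
```

Iterating lazily like this keeps memory flat even for chatty jobs, which is what makes it suitable for a CLI or TUI tail view.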

Job Templates

Templates let you save a script definition, then launch runs with custom environments or parameters.

template = client.templates.create(
    name="csv-to-parquet",
    description="Convert uploaded CSV to Parquet",
    script="""python3 <<'PY'
import pandas as pd
df = pd.read_csv('/workspace/input.csv')
df.to_parquet('/workspace/output.parquet')
PY
""",
    vcpus=4,
    memory_mb=2048,
)

run = client.templates.run(
    template.id,
    environment={"SOURCE_URL": "https://example.com/data.csv"},
)
print(run.id, run.status)

Templates are great for self-service portals or repeating scheduled workloads.


Advanced Tips

  • Environment variables — pass a dict via the environment field to inject secrets or parameters into the VM.
  • File uploads — upload artifacts using the /api/v1/jobs/files endpoints (see SDK helpers or Postman collection) and consume them inside the job.
  • Combining modes — you can run a persistent VM and make it stateful by setting both persist=True and stateful=True with a workspace_id.
  • Timeouts — jobs.wait_for_completion takes timeout and poll_interval; jobs.ssh accepts timeout per command.
  • Concurrency — the orchestrator processes API requests in parallel. Fire multiple jobs.ssh calls or even launch several jobs at once from your client code.
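The timeout/poll_interval pattern behind `wait_for_completion` is ordinary deadline-based polling. A standalone sketch of that loop (the names and structure here are illustrative, not the SDK's internals):

```python
import time

def wait_for(check, timeout: float = 180.0, poll_interval: float = 3.0):
    """Poll check() until it returns a non-None status or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check()
        if status is not None:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

# Example: a check that succeeds on the third poll
calls = iter([None, None, "completed"])
print(wait_for(lambda: next(calls), timeout=10, poll_interval=0.01))
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to wall-clock adjustments, which matters for long-running polls.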

Where Magpie Fits In

  • CI / automation — ephemeral jobs with persist=False for isolated build or test steps.
  • Data pipelines — stateful jobs to reuse bulky datasets without re-downloading every run.
  • Interactive dev boxes — persistent, IP-leased VMs that you reach over SSH or Web IDEs.
  • Fleet management / maintenance — trigger commands on persistent VMs using jobs.ssh for backups, package upgrades, or ad-hoc diagnostics.

Further Reading

  • STATEFUL_JOBS.md — deeper walkthrough of stateful=True workloads.
  • API docs and the Postman collection in the repository for endpoints not yet wrapped by the SDK.
  • Examples under sdk/python/ (test_ssh.py, test_persistent_vm.py, modify_nextjs_app.py) for real-world scripts.

Have questions or ideas? Open an issue or pull request and we’ll keep improving the SDK together.
