
Python SDK for CMDOP agent interaction


cmdop

Python SDK for CMDOP browser automation and server control.

Architecture

Your Code ──── Cloud Relay ──── Agent (on server)
                    │
        Outbound only, works through any NAT/firewall

Install

pip install cmdop

Connection

from cmdop import CMDOPClient, AsyncCMDOPClient

# Local (direct IPC to running agent)
client = CMDOPClient.local()

# Remote (via cloud relay)
client = CMDOPClient.remote(api_key="cmd_xxx")

# Async
async with AsyncCMDOPClient.local() as client:
    await client.files.read("/etc/hostname")

Browser

from cmdop.services.browser.models import WaitUntil

with client.browser.create_session(headless=False) as s:
    s.navigate("https://shop.com", wait_until=WaitUntil.NETWORKIDLE)

    # Core methods
    s.click("button.buy", move_cursor=True)
    s.type("input[name=q]", "search term")
    s.wait_for(".results")
    s.execute_script("return document.title")
    s.screenshot()
    s.get_state()        # URL + title
    s.get_page_info()    # Full page info
    s.get_cookies()
    s.set_cookies([...])

# Fast mode: block images and media for faster page loads
with client.browser.create_session(
    block_images=True,
    block_media=True,
) as s:
    s.navigate("https://shop.com/catalog")
    items = s.dom.extract(".product-title")

create_session parameters:

Parameter     Default      Description
start_url     None         Initial URL to navigate to
provider      "camoufox"   Browser provider ("camoufox" or "rod")
profile_id    None         Profile ID for session persistence (cookies, localStorage)
headless      False        Run browser without UI
width         1280         Viewport width
height        800          Viewport height
block_images  False        Disable loading images (set at launch, cannot change at runtime)
block_media   False        Disable loading audio/video (set at launch, cannot change at runtime)

WaitUntil options:

Value             Description
LOAD              Wait for the load event (default)
DOMCONTENTLOADED  Wait for DOMContentLoaded
NETWORKIDLE       Wait until the network is idle (best for SPAs)
COMMIT            Return immediately (fastest)

Capabilities

s.scroll - Scrolling

s.scroll.js("down", 500)           # JS scroll (works on complex sites)
s.scroll.native("down", 500)       # Browser API scroll
s.scroll.to_bottom()               # Scroll to page bottom
s.scroll.to_element(".item")       # Scroll element into view
s.scroll.info()                    # Get scroll position/dimensions

# Smart infinite scroll with extraction
items = s.scroll.infinite(
    extract_fn=lambda: extract_new_items(),
    limit=100,
    max_scrolls=50,
    scroll_amount=800,
)
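The `extract_fn` above calls an undefined `extract_new_items()`; the SDK leaves this helper to you. A minimal sketch of one way to write it (the `seen` set and `keep_new` name are illustrative, not part of the SDK) dedupes values between scroll passes so each pass only contributes fresh items:

```python
# Illustrative extract_fn helper: track values already returned by
# earlier scroll passes and yield only the new ones.
seen: set[str] = set()

def keep_new(values: list[str]) -> list[str]:
    """Return only values not seen in any earlier pass."""
    fresh = []
    for v in values:
        if v not in seen:
            seen.add(v)
            fresh.append(v)
    return fresh

print(keep_new(["a", "b", "a"]))  # ['a', 'b']
print(keep_new(["b", "c"]))       # ['c']
```

Inside a session you might wire it up as `extract_fn=lambda: keep_new(s.dom.extract(".item", "href"))`.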

s.input - Input

s.input.click_js(".btn")           # JS click (reliable)
s.input.click_all("See more")      # Click all matching elements
s.input.key("Escape")              # Press key
s.input.key("Enter", ".input")     # Press key on element
s.input.hover(".tooltip")          # Native hover
s.input.hover_js(".tooltip")       # JS hover
s.input.mouse_move(500, 300)       # Move cursor to coordinates

s.timing - Delays

s.timing.wait(500)                 # Wait ms
s.timing.seconds(2)                # Wait seconds
s.timing.random(0.5, 1.5)          # Random delay
s.timing.timeout(fn, 10, cleanup)  # Run with timeout

s.dom - DOM operations

s.dom.html(".container")           # Get HTML
s.dom.text(".title")               # Get text
s.dom.extract(".items", "href")    # Get attr list
s.dom.select("#country", "US")     # Dropdown select
s.dom.close_modal()                # Close dialogs/popups

s.fetch - HTTP from browser (bypass CORS, inherit cookies)

s.fetch.json("/api/items")               # Fetch JSON
s.fetch.all(["/api/a", "/api/b"])        # Parallel fetch
s.fetch.execute("return fetch(...)")     # Custom JS

s.network - Traffic capture

s.network.enable(max_exchanges=1000)
s.navigate(url)

# Get exchanges
exchanges = s.network.get_all()
api = s.network.last("/api/data")
data = api.json_body()

# Filter
posts = s.network.filter(
    url_pattern="/api/posts",
    methods=["GET", "POST"],
    status_codes=[200],
    resource_types=["xhr", "fetch"],
)

# Convenience
s.network.api_calls("/api/")       # XHR/Fetch matching pattern
s.network.last_json("/api/data")   # JSON body directly
s.network.wait_for("/api/", 5000)  # Wait for request
s.network.export_har()             # Export to HAR
s.network.stats()                  # Capture statistics
s.network.clear()                  # Clear captured
s.network.disable()

s.visual - Browser overlay (requires CMDOP extension)

s.visual.toast("Loading...")           # Show toast
s.visual.clear_toasts()                # Clear all toasts
s.visual.countdown(30, "Click!")       # Countdown timer
s.visual.highlight(".element")         # Highlight element
s.visual.hide_highlight()              # Hide highlight
s.visual.click(100, 200)               # Show click effect
s.visual.move(0, 0, 100, 200)          # Show cursor trail
s.visual.set_state("busy")             # idle/active/busy

NetworkAnalyzer

Discover API endpoints by capturing traffic while the user interacts with the page.

from cmdop import CMDOPClient
from cmdop.helpers import NetworkAnalyzer

client = CMDOPClient.local()
with client.browser.create_session(headless=False) as b:
    analyzer = NetworkAnalyzer(b)

    snapshot = analyzer.capture(
        "https://example.com/cars",
        wait_seconds=30,
        countdown_message="Click pagination!",
        min_size=100,       # Ignore tracking pixels
        max_size=500_000,   # Ignore heavy assets
        same_origin=True,   # Only same domain
    )

    # Get best data API
    if snapshot.api_requests:
        best = snapshot.best_api()
        print(best.url)
        print(best.item_count)
        print(best.data_key)        # "data", "items", etc.
        print(best.item_fields)     # Field names
        print(best.to_curl())       # curl command
        print(best.to_httpx())      # Python httpx code

    # All captured
    for req in snapshot.api_requests:
        print(f"{req.method} {req.url}: {req.item_count} items")

NetworkSnapshot:

  • api_requests - Requests with data arrays
  • json_requests - Other JSON responses
  • cookies - Session cookies
  • total_requests, total_bytes

RequestSnapshot:

  • url, method, headers, body, cookies
  • status, content_type, size
  • data_key, item_count, item_fields, sample_response
  • to_curl(), to_httpx()
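As a rough illustration of picking the most data-rich capture from such snapshots, here plain dicts stand in for RequestSnapshot objects, and "largest item_count wins" is one plausible heuristic; the SDK's actual `best_api()` may use different criteria:

```python
# Toy stand-ins for RequestSnapshot objects.
snapshots = [
    {"url": "/api/session", "item_count": 0},
    {"url": "/api/cars?page=1", "item_count": 24},
    {"url": "/api/cars/all", "item_count": 120},
]

# Assumed heuristic: the request whose response held the most items.
best = max(snapshots, key=lambda r: r["item_count"])
print(best["url"])  # /api/cars/all
```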

Agent

Run AI tasks with typed output:

from pydantic import BaseModel

class Health(BaseModel):
    status: str
    cpu: float
    issues: list[str]

result = client.agent.run("Check server health", output_schema=Health)
health: Health = result.output  # Typed!

Terminal

session = client.terminal.create()
client.terminal.send_input(session.session_id, "ls -la\n")
output = client.terminal.get_history(session.session_id)
client.terminal.resize(session.session_id, 120, 40)
client.terminal.send_signal(session.session_id, "SIGINT")
client.terminal.close(session.session_id)

Files

# Local IPC - session_id not required
client.files.list("/var/log")
client.files.read("/etc/nginx/nginx.conf")
client.files.write("/tmp/config.json", b'{"key": "value"}')

# Remote - session_id required
session = client.terminal.get_active_session()
client.files.set_session_id(session.session_id)  # Set once
client.files.list("/var/log")
client.files.read("/etc/nginx/nginx.conf")

# Or pass session_id directly to each call
client.files.list("/var/log", session_id=session.session_id)

Files methods:

client.files.list("/path", include_hidden=True)  # List directory
client.files.read("/path/file.txt")              # Read file
client.files.write("/path/file.txt", b"data")    # Write file
client.files.delete("/path", recursive=True)     # Delete file/dir
client.files.copy("/src", "/dst")                # Copy
client.files.move("/old", "/new")                # Move/rename
client.files.mkdir("/new/dir")                   # Create directory
client.files.info("/path")                       # Get file info

SDKBaseModel

Auto-cleaning Pydantic model:

from cmdop import SDKBaseModel

class Product(SDKBaseModel):
    __base_url__ = "https://shop.com"
    name: str = ""    # "  iPhone 15  \n" → "iPhone 15"
    price: int = 0    # "$1,299.00" → 1299
    rating: float = 0 # "4.5 stars" → 4.5
    url: str = ""     # "/p/123" → "https://shop.com/p/123"

products = Product.from_list(raw["items"])  # Auto dedupe + filter
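The cleaning behavior shown in the comments above can be approximated in plain Python; this is a sketch of what each conversion does, not the SDK's implementation:

```python
import re
from urllib.parse import urljoin

def clean_text(raw: str) -> str:
    # "  iPhone 15  \n" -> "iPhone 15"
    return raw.strip()

def clean_price(raw: str) -> int:
    # "$1,299.00" -> 1299
    return int(float(raw.replace("$", "").replace(",", "")))

def clean_rating(raw: str) -> float:
    # "4.5 stars" -> 4.5 (first number in the string)
    m = re.search(r"\d+(?:\.\d+)?", raw)
    return float(m.group()) if m else 0.0

def clean_url(raw: str, base: str) -> str:
    # "/p/123" -> "https://shop.com/p/123"
    return urljoin(base, raw)

print(clean_price("$1,299.00"))                 # 1299
print(clean_url("/p/123", "https://shop.com"))  # https://shop.com/p/123
```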

Download

Download files from URLs via the remote server with chunked transfer.

Handles cloud relay limits (~30MB per session) automatically:

  • Small files (≤10MB): direct chunked transfer
  • Large files (>10MB): split on the remote, download parts with reconnection

from pathlib import Path

# Async (recommended) - handles large files
async with AsyncCMDOPClient.remote(api_key="cmd_xxx") as client:
    client.download.configure(api_key="cmd_xxx")  # Required for files >10MB

    result = await client.download.url(
        url="https://example.com/large-file.zip",
        local_path=Path("./large-file.zip"),
    )

    if result.success:
        print(result)  # DownloadResult(ok, 139.2MB, 245.3s, 0.6MB/s)
        print(result.metrics.summary())
        # Size: 139.2 MB (145,981,234 bytes)
        # Total: 245.3s @ 0.6 MB/s
        #   └─ Curl: 12.1s
        #   └─ Transfer: 233.2s @ 0.6 MB/s
        # Parts: 28
        # Chunks: 140
    else:
        print(f"Failed: {result.error}")

# Sync - for small files only
result = client.download.url(
    url="https://example.com/small.csv",
    local_path=Path("./small.csv"),
)

# With progress callback
def on_progress(transferred: int, total: int) -> None:
    pct = (transferred / total) * 100
    print(f"\r{pct:.1f}%", end="", flush=True)

result = await client.download.url(url, local_path, on_progress=on_progress)

# Configure
client.download.configure(
    chunk_size=2 * 1024 * 1024,  # 2MB chunks (default: 1MB)
    download_timeout=600,        # 10 min (default: 5 min)
    api_key="cmd_xxx",           # Required for files >10MB
)

DownloadResult fields:

Field       Type             Description
success     bool             Whether the download succeeded
local_path  Path             Path to the downloaded file
size        int              Downloaded size in bytes
error       str              Error message if the download failed
metrics     DownloadMetrics  Timing and transfer stats

DownloadMetrics fields:

Field                Type   Description
total_time           float  Total time in seconds
curl_time            float  Remote curl time
transfer_time        float  File transfer time
remote_size          int    Size on the remote
transferred_size     int    Bytes transferred
chunks_count         int    Number of chunks
parts_count          int    Split parts (large files)
retries_count        int    Retry attempts
transfer_speed_mbps  float  Transfer speed, MB/s
total_speed_mbps     float  Overall speed, MB/s
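The two speed fields presumably follow from the size and timing fields (size over transfer time vs. size over total time); assuming that, the sample numbers from the output above check out:

```python
# Sample values from the DownloadResult output above.
transferred_size = 145_981_234   # bytes ("139.2 MB")
transfer_time = 233.2            # seconds (transfer phase only)
total_time = 245.3               # seconds (including remote curl)

# Assumed formulas: MB here means MiB (bytes / 1024**2), matching
# the "139.2 MB" shown for 145,981,234 bytes.
mb = transferred_size / (1024 * 1024)
print(f"{mb / transfer_time:.1f} MB/s")  # 0.6 MB/s (transfer_speed_mbps)
print(f"{mb / total_time:.1f} MB/s")     # 0.6 MB/s (total_speed_mbps)
```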

Logging

from cmdop import get_logger

log = get_logger(__name__)
log.info("Starting")  # Rich console + file output

Requirements

  • Python 3.10+
  • CMDOP agent running locally or API key for remote
