
Lightweight PTM API client for integration with external Python services


ptm-client

Lightweight Python client for the Prompt Test Manager (PTM) API. Single runtime dependency: requests.

Install

pip install ptm-client

Quick start

from ptm_client import PTMClient

client = PTMClient(base_url="https://ptm.example.com", token="your-api-token")

# List prompts, filtered by tag
prompts = client.list_prompts(tag="my_team")

# Fetch a prompt and its test cases
detail = client.get_prompt("my_team.summarizer")
tests = client.get_prompt_tests("my_team.summarizer")

# Run an eval against the library prompt
run = client.run_eval(
    prompt_ids=["my_team.summarizer"],
    provider_ids=["openai_gpt41_mini"],
)

# Or run a one-off manual eval
run = client.run_manual_eval({
    "prompt_text": "Summarize: {{text}}",
    "tests": [{"description": "smoke", "vars": {"text": "Hello, world."}}],
    "provider_profiles": ["openai_gpt41_mini"],
})

# Block until complete, then fetch a report
result = client.wait_for_run(run["run_key"], timeout=120)
html = client.run_report(run["run_key"])
json_report = client.run_report(run["run_key"], format="json")

Constructor

PTMClient(base_url, token, timeout=30)

token is a PTM personal access token or service account token. timeout is the HTTP request timeout in seconds.

Public methods

Prompts

  • list_prompts(tag=None, team=None, service=None, source=None, search=None, group=None)
  • get_prompt(prompt_id)
  • get_prompt_tests(prompt_id)
  • list_prompt_versions(prompt_id) (v0.3.0)
  • get_prompt_version(prompt_id, version_number) (v0.3.0)
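
The two v0.3.0 version methods compose naturally. A minimal sketch, assuming list_prompt_versions returns a list of dicts that each carry a "version_number" key (the response shape is not documented on this page):

```python
def latest_version(client, prompt_id):
    """Return the newest version record for a prompt (v0.3.0+).

    Assumes each entry from list_prompt_versions is a dict with a
    "version_number" key; inspect the real response before relying on this.
    """
    versions = client.list_prompt_versions(prompt_id)
    if not versions:
        raise ValueError(f"no versions found for {prompt_id}")
    return max(versions, key=lambda v: v["version_number"])
```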

Providers

  • list_providers()

Evaluations

  • run_eval(prompt_ids, provider_ids, **kwargs)
  • run_manual_eval(payload)
  • run_prompt_eval(prompt_id, provider_ids, *, inject_vars=None, extra_tests=None, visibility_scope="org_visible", label=None)

Runs

  • list_runs(limit=50, terminal_only=False, mine_only=False) (v0.3.0)
  • get_run(run_key)
  • wait_for_run(run_key, timeout=300, poll_interval=5)
  • run_report(run_key, format="html") - html / json / markdown / csv
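
A small sketch for persisting reports, assuming run_report returns the report body directly (a string for html/markdown/csv; a json report may come back as a parsed dict, so it is serialized first):

```python
import json


def save_report(client, run_key, path, fmt="html"):
    """Fetch a finished run's report and write it to disk.

    The return type of run_report is an assumption here: string bodies are
    written as-is, anything else is JSON-serialized.
    """
    report = client.run_report(run_key, format=fmt)
    body = report if isinstance(report, str) else json.dumps(report, indent=2)
    with open(path, "w", encoding="utf-8") as f:
        f.write(body)
    return path
```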

Optimization (v0.3.0)

  • submit_optimization(prompt_id, provider_profiles=None, judge_profile=None, max_cycles=10, target_score=90.0, min_improvement=2.0, max_cost_usd=20.0, comparison_strategy=None, visibility_scope=None)
  • optimize_prompt(...) - deprecated alias for submit_optimization; emits DeprecationWarning; will be removed in v1.0.0
  • get_optimization_status(prompt_id)
  • get_optimization_history(prompt_id)
  • get_optimization_detail(optimization_id) (v0.3.0)
  • cancel_optimization(optimization_id)
  • wait_for_optimization(prompt_id, *, timeout=600, poll_interval=10)
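
The optimization methods above compose into a simple round trip. A sketch under the assumption that wait_for_optimization returns the final status payload:

```python
def optimize_and_wait(client, prompt_id, *, max_cycles=5, timeout=600):
    """Submit an optimization for a prompt and block until it finishes.

    Uses only the methods documented above; the shape of the returned
    status dict is not documented here, so callers should inspect it.
    """
    client.submit_optimization(prompt_id, max_cycles=max_cycles)
    return client.wait_for_optimization(prompt_id, timeout=timeout, poll_interval=10)
```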

Test-case shapes

PTM evaluates with three optional scoring layers. Use any combination.

Promptfoo assertions (deterministic)

These go in the assert array inside each test case:

{
    "description": "mention the topic with enough length",
    "vars": {"transcript": "..."},
    "assert": [
        {"type": "icontains", "value": "API migration"},
        {"type": "javascript", "value": "output.length >= 100"},
    ],
}

DeepEval metrics (semantic, judge-LLM)

These go in additional_metrics at the payload root:

{
    "additional_metrics": [
        {"name": "relevance", "criteria": "Output addresses the input topic.", "threshold": 0.7},
    ],
    "judge_profile": "openai_gpt41_mini",
}

KPI configs (custom weighted expressions)

These go in additional_kpis at the payload root:

{
    "additional_kpis": [
        {"name": "cost_ok", "description": "Under $0.05", "expression": "1 if cost < 0.05 else 0", "weight": 1.0},
    ],
}
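
The three layers can be combined in a single run_manual_eval payload. This sketch just merges the fragments above into one dict:

```python
# One payload using all three scoring layers; provider and judge profile
# names are placeholders from the examples above.
payload = {
    "prompt_text": "Summarize: {{text}}",
    "provider_profiles": ["openai_gpt41_mini"],
    "tests": [
        {
            "description": "topic coverage",
            "vars": {"text": "Notes on the API migration."},
            # promptfoo deterministic assertions
            "assert": [
                {"type": "icontains", "value": "API migration"},
            ],
        },
    ],
    # DeepEval semantic metrics, scored by a judge LLM
    "additional_metrics": [
        {"name": "relevance", "criteria": "Output addresses the input topic.", "threshold": 0.7},
    ],
    "judge_profile": "openai_gpt41_mini",
    # custom weighted KPI expressions
    "additional_kpis": [
        {"name": "cost_ok", "description": "Under $0.05", "expression": "1 if cost < 0.05 else 0", "weight": 1.0},
    ],
}
```

Pass this dict straight to client.run_manual_eval(payload).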

Inline examples

run_manual_eval - full control

run = client.run_manual_eval({
    "label": "my_custom_eval",
    "prompt_text": "Summarize: {{text}}",
    "tests": [{"description": "short text", "vars": {"text": "The quick brown fox."}}],
    "provider_profiles": ["openai_gpt41_mini"],
    "cost_threshold": 1.0,
    "latency_threshold_ms": 30000,
})

run_prompt_eval - fetch from PTM + inject live data

run = client.run_prompt_eval(
    prompt_id="my_team.summarizer",
    provider_ids=["openai_gpt41_mini"],
    inject_vars={"transcript": real_transcript, "meeting_title": "Weekly 1:1"},
)
result = client.wait_for_run(run["run_key"], timeout=120)

Error handling

from ptm_client import PTMClient, PTMError, PTMTimeoutError

try:
    result = client.wait_for_run(run_key, timeout=60)
except PTMTimeoutError:
    print("Run did not complete in time")
except PTMError as e:
    print(f"PTM API error ({e.status_code}): {e}")

PTMError wraps all HTTP errors, ConnectionError, and requests.Timeout. Check e.status_code (0 for connection/timeout failures).
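
Because status_code 0 marks transient connection/timeout failures, a retry wrapper is a natural companion. This is a generic sketch (with_retries is not part of ptm-client); in real code you would pass retry_on=(PTMError,):

```python
import time


def with_retries(fn, *, retries=3, delay=2.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exception types with a fixed delay.

    Re-raises the last exception once the retry budget is exhausted.
    """
    last_exc = None
    for attempt in range(retries):
        try:
            return fn()
        except retry_on as exc:
            last_exc = exc
            if attempt < retries - 1:
                time.sleep(delay)
    raise last_exc
```

Example: with_retries(lambda: client.get_run(run_key), retry_on=(PTMError,)).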

Compatibility

  • Python 3.12+
  • Compatible with current PTM backends (methods marked v0.3.0 require a recent backend release; all other methods work against older backends)
