Lightweight PTM API client for integration with external Python services


ptm-client

Lightweight Python client for the Prompt Test Manager (PTM) API. Zero dependencies beyond requests.

Install

From PyPI

pip install ptm-client

From source (development)

pip install -e packages/ptm-client
# or with dev/test dependencies:
pip install -e "packages/ptm-client[dev]"

Docker mount (no install needed)

# docker-compose.override.yml
services:
  app:
    volumes:
      - /path/to/prompt-test-manager/packages/ptm-client/src:/opt/ptm-client-src:ro
    environment:
      PYTHONPATH: /opt/ptm-client-src:/app

Quick Start

from ptm_client import PTMClient

client = PTMClient(base_url="http://localhost:8010", token="your-api-token")

# List prompts
prompts = client.list_prompts(tag="my_team")

# Get prompt detail
detail = client.get_prompt("my_team.summarizer")

# Get prompt test cases
tests = client.get_prompt_tests("my_team.summarizer")

# Run a repository evaluation
run = client.run_eval(
    prompt_ids=["my_team.summarizer"],
    provider_ids=["openai_gpt41_mini"],
)

# Run a manual evaluation
run = client.run_manual_eval({
    "prompt_text": "...",
    "tests": [{"description": "test", "vars": {"name": "World"}}],
    "provider_profiles": ["openai_gpt41_mini"],
    "visibility_scope": "org_visible",
})

# Wait for completion
result = client.wait_for_run(run["run_key"], timeout=120)

# Get HTML report
html = client.run_report(run["run_key"])

# Get JSON report
json_report = client.run_report(run["run_key"], format="json")

API Reference

PTMClient(base_url, token, timeout=30)

Create a client. token is a PTM personal access token (ptm_u_...) or service account token (ptm_sa_...). timeout is the HTTP request timeout in seconds.
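For service code it is common to pull the token from the environment rather than hard-coding it. A minimal sketch, assuming a PTM_TOKEN environment variable (that variable name is a convention chosen here, not part of the library):

```python
import os

def make_client_kwargs(base_url, token=None, timeout=30):
    """Gather PTMClient constructor arguments. Falls back to the PTM_TOKEN
    environment variable (an assumed convention) when no token is passed."""
    token = token or os.environ.get("PTM_TOKEN")
    if not token:
        raise ValueError("no PTM token provided (pass token= or set PTM_TOKEN)")
    # Personal access tokens start with ptm_u_, service account tokens with ptm_sa_
    if not token.startswith(("ptm_u_", "ptm_sa_")):
        raise ValueError("expected a ptm_u_... or ptm_sa_... token")
    return {"base_url": base_url, "token": token, "timeout": timeout}

# client = PTMClient(**make_client_kwargs("http://localhost:8010"))
```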

Prompts

  • list_prompts(tag=None) — list all prompts, optionally filtered by tag
  • get_prompt(prompt_id) — get full prompt detail (prompt_text, tags, metadata)
  • get_prompt_tests(prompt_id) — get test cases, deepeval metrics, KPIs

Providers

  • list_providers() — list available LLM provider profiles

Evaluations

  • run_eval(prompt_ids, provider_ids, **kwargs) — submit repository evaluation
  • run_manual_eval(payload) — submit manual evaluation with custom prompt + tests
  • run_prompt_eval(prompt_id, provider_ids, *, inject_vars=None, extra_tests=None, visibility_scope="org_visible", label=None) — fetch a prompt from PTM, merge runtime vars/tests, and submit (recommended for service integrations)

Runs

  • get_run(run_key) — get run status (includes score, passed_tests, total_tests)
  • wait_for_run(run_key, timeout=300, poll_interval=5) — block until terminal state
  • run_report(run_key, format="html") — get report (html, json, markdown, csv)
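wait_for_run does the polling for you; if you need custom behavior (logging each poll, async scheduling), the loop it implies can be sketched like this. The status field name and terminal-state values below are assumptions for illustration, not confirmed PTM values:

```python
import time

def wait_for_terminal(get_run, timeout=300, poll_interval=5,
                      terminal=("completed", "failed", "cancelled")):
    """Polling sketch in the spirit of wait_for_run: call get_run() until
    the run reports a terminal status or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        run = get_run()
        if run.get("status") in terminal:
            return run
        if time.monotonic() >= deadline:
            raise TimeoutError("run did not reach a terminal state in time")
        time.sleep(poll_interval)
```

Usage would look like `wait_for_terminal(lambda: client.get_run(run_key), timeout=120)`.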

Test Cases and Scoring

PTM evaluates with up to three scoring layers. Use any combination.

Promptfoo assertions — deterministic pass/fail checks

These go in the assert array inside each test case:

{
    "description": "test case with assertions",
    "vars": {"transcript": "..."},
    "assert": [
        {"type": "javascript", "value": "/meeting purpose/i.test(output)", "description": "has_purpose"},
        {"type": "icontains", "value": "API migration", "description": "mentions_topic"},
        {"type": "javascript", "value": "output.length >= 100", "description": "min_length"},
    ],
}
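A test case like the one above drops straight into a run_manual_eval payload. A small builder for assertion entries (field names taken from the example above) keeps the structure consistent:

```python
def assertion(check_type, value, description=None):
    """Build one promptfoo assertion entry with the fields shown above."""
    entry = {"type": check_type, "value": value}
    if description:
        entry["description"] = description
    return entry

test_case = {
    "description": "summary quality checks",
    "vars": {"transcript": "We discussed the API migration timeline."},
    "assert": [
        assertion("icontains", "API migration", "mentions_topic"),
        assertion("javascript", "output.length >= 100", "min_length"),
    ],
}
```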

DeepEval metrics — semantic quality scoring via judge LLM

These go in additional_metrics at the payload root:

{
    "additional_metrics": [
        {"name": "relevance", "criteria": "Output addresses the input topic with specific details.", "threshold": 0.7},
        {"name": "structure", "criteria": "Output has clear sections and logical flow.", "threshold": 0.7},
    ],
    "judge_profile": "openai_gpt41_mini",
}
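A tiny builder for metric entries (fields from the example above) makes it easy to assemble this fragment programmatically; the 0-to-1 threshold range is an assumption inferred from the 0.7 defaults:

```python
def metric(name, criteria, threshold=0.7):
    """Build a DeepEval metric entry with the fields shown above.
    The 0-1 threshold range is an assumed constraint, not confirmed."""
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("threshold must be between 0 and 1")
    return {"name": name, "criteria": criteria, "threshold": threshold}

payload_fragment = {
    "additional_metrics": [
        metric("relevance", "Output addresses the input topic with specific details."),
        metric("structure", "Output has clear sections and logical flow."),
    ],
    "judge_profile": "openai_gpt41_mini",
}
```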

KPI configs — custom weighted expressions

These go in additional_kpis at the payload root:

{
    "additional_kpis": [
        {"name": "cost_ok", "description": "Under $0.05", "expression": "1 if cost < 0.05 else 0", "weight": 1.0},
        {"name": "fast", "description": "Under 10s", "expression": "1 if latency_ms < 10000 else 0", "weight": 1.0},
    ],
}
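The expressions read as Python conditionals over run metrics such as cost and latency_ms. PTM evaluates them server-side; the sketch below only illustrates the apparent semantics (which metric names are in scope, and how the server evaluates, are assumptions here):

```python
def eval_kpi(expression, **metrics):
    """Illustrative only: evaluate a KPI expression with run metrics
    (e.g. cost, latency_ms) in scope, mirroring how the examples read."""
    return eval(expression, {"__builtins__": {}}, dict(metrics))

cheap = eval_kpi("1 if cost < 0.05 else 0", cost=0.03)               # → 1
fast = eval_kpi("1 if latency_ms < 10000 else 0", latency_ms=12000)  # → 0
```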

Common patterns

# Promptfoo only (no judge LLM needed)
client.run_manual_eval({"tests": [{"vars": {...}, "assert": [...]}], ...})

# DeepEval only (semantic scoring, no deterministic checks)
client.run_manual_eval({"tests": [{"vars": {...}}], "additional_metrics": [...], ...})

# All three layers
client.run_manual_eval({"tests": [{"vars": {...}, "assert": [...]}], "additional_metrics": [...], "additional_kpis": [...], ...})

# No scoring (just run prompt, capture output)
client.run_manual_eval({"tests": [{"vars": {...}}], ...})

See docs/ptm-client-integration.md for the full test case reference with all assertion types, metric fields, and KPI variables.

Inline Test Cases

run_manual_eval — full control

run = client.run_manual_eval({
    "label": "my_custom_eval",
    "prompt_text": '[{"role": "system", "content": "Summarize."}, {"role": "user", "content": "{{text}}"}]',
    "tests": [
        {"description": "short text", "vars": {"text": "The quick brown fox."}},
    ],
    "provider_profiles": ["openai_gpt41_mini"],
    "visibility_scope": "org_visible",
    "cost_threshold": 1.0,
    "latency_threshold_ms": 30000,
})

run_prompt_eval — fetch prompt from PTM + inject live data

Recommended for service integrations:

run = client.run_prompt_eval(
    prompt_id="my_team.summarizer",
    provider_ids=["openai_gpt41_mini"],
    inject_vars={"transcript": real_transcript, "meeting_title": "Weekly 1:1"},
)
result = client.wait_for_run(run["run_key"], timeout=120)

With extra test cases:

run = client.run_prompt_eval(
    prompt_id="my_team.summarizer",
    provider_ids=["openai_gpt41_mini"],
    extra_tests=[
        {"description": "edge case", "vars": {"transcript": edge_case_text}},
    ],
    visibility_scope="private_only",
    label="meeting_recap_edge_cases",
)

Error Handling

from ptm_client import PTMClient, PTMError, PTMTimeoutError

try:
    result = client.wait_for_run(run_key, timeout=60)
except PTMTimeoutError:
    print("Run did not complete in time")
except PTMError as e:
    print(f"PTM API error ({e.status_code}): {e}")
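For flaky networks or long-running evals, you may want to retry on timeout before giving up. A generic retry sketch; in real code you would pass retryable=(PTMTimeoutError,) from ptm_client, and the linear backoff here is just one reasonable choice:

```python
import time

def call_with_retries(fn, retryable, attempts=3, backoff=1.0):
    """Retry fn() on the given exception types, re-raising after the
    last attempt. Uses simple linear backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)

# result = call_with_retries(lambda: client.wait_for_run(run_key, timeout=60),
#                            retryable=(PTMTimeoutError,))
```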

More

  • Integration guide — install methods, test case types, scoring layers, Django/FastAPI examples, chained evals
  • Examples — runnable Python scripts for every use case

Dependencies

requests only. No FastAPI, SQLAlchemy, Streamlit, or other PTM server deps.

Development

pip install -e "packages/ptm-client[dev]"
cd packages/ptm-client
pytest tests/ -v
ruff check src/ tests/
ruff format src/ tests/
