
agent-eval-rpc — Python client

Python client for @tangle-network/agent-eval — a content/code judging framework written in TypeScript. This package is a thin transport adapter: every judgement runs in the Node runtime, marshalled over HTTP or stdio RPC. Two languages, one implementation. No drift.

What you get

A function-call interface to score any string against a rubric:

from agent_eval_rpc import Client

client = Client()  # auto-detects HTTP server, falls back to subprocess
result = client.judge(
    content="We just launched zero-copy IO between agents and their workdir",
    rubric_name="anti-slop",
)

print(result.composite)         # 0.0..1.0 — single number to gate on
print(result.dimensions)        # {"buyer_quality": 0.7, "voice": 0.8, "signal": 0.9}
print(result.failure_modes)     # [] or ["ai-cadence", "marketing-tone", ...]
print(result.wins)              # ["specific-component", "earned-detail", ...]
print(result.rationale)         # "The post names a real architectural detail..."

That's the entire surface for content judging. A self-contained runnable example with pytest invariants lives at examples/judge_anti_slop.py.
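A CI gate on that result can be a few lines. This is a sketch, not part of the package: the 0.6 threshold and the hard-fail treatment of a failure mode are illustrative choices you would tune yourself.

```python
# Hypothetical gate over a JudgeResult-shaped payload.
# HARD_FAIL and the threshold are illustrative, not package defaults.
HARD_FAIL = {"marketing-tone"}

def passes_gate(composite: float, failure_modes: list[str], threshold: float = 0.6) -> bool:
    """Pass when the composite clears the bar and no hard-fail mode fired."""
    if any(mode in HARD_FAIL for mode in failure_modes):
        return False  # a hard-fail mode overrides any score
    return composite >= threshold

print(passes_gate(0.82, []))                  # clears the bar
print(passes_gate(0.82, ["marketing-tone"]))  # vetoed despite the score
```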

Install

cd clients/python
pip install -e .

To use it, you also need one of:

  • npm install -g @tangle-network/agent-eval — gives you the agent-eval binary, used by the subprocess transport (works offline, slower per call due to Node startup ~500ms).
  • Run a server: agent-eval serve --port 5005 — gives you HTTP transport (~10ms per call once up).

The Python client picks whichever is available. Force one with Client(transport="http") or Client(transport="subprocess").
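The auto-detection boils down to a preference order, which can be sketched as a pure decision function (illustrative only; the real client's probing details may differ):

```python
def choose_transport(http_reachable: bool, cli_on_path: bool) -> str:
    """Prefer the fast HTTP transport, fall back to the subprocess CLI."""
    if http_reachable:
        return "http"
    if cli_on_path:
        return "subprocess"
    raise RuntimeError(
        "no transport available: start `agent-eval serve` or "
        "`npm install -g @tangle-network/agent-eval`"
    )
```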

Why the architecture works this way

The TypeScript package is the source of truth for evaluation logic. We don't reimplement rubrics, scoring, or judges in Python — we marshal JSON to the canonical runtime over a versioned wire protocol (defined as Zod schemas, exported as OpenAPI, mirrored in this package as pydantic models).
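The mirroring can be pictured with a stdlib-only sketch: the wire payload is JSON, and the client's only job is to parse it into a typed model (the real package uses pydantic; this stripped-down dataclass just illustrates the shape):

```python
import json
from dataclasses import dataclass

@dataclass
class JudgeResultSketch:
    """Stdlib stand-in for the pydantic JudgeResult model."""
    composite: float
    dimensions: dict
    failure_modes: list
    wins: list
    rationale: str

# What the Node runtime might send back over the wire.
wire_payload = json.dumps({
    "composite": 0.9,
    "dimensions": {"signal": 0.9},
    "failure_modes": [],
    "wins": ["earned-detail"],
    "rationale": "Names a real architectural detail.",
})

result = JudgeResultSketch(**json.loads(wire_payload))
```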

Adding a new method to the API means: define a Zod schema in src/wire/schemas.ts, write the handler in src/wire/handlers.ts, and the Python client picks it up on the next regeneration. There is no separate Python implementation to maintain.

This is the same pattern as the Anthropic SDK, Stripe SDK, and gRPC: one canonical implementation, language-specific transport clients.

API

Client

Client(
    base_url: str | None = None,        # AGENT_EVAL_URL or http://127.0.0.1:5005
    cli_path: str | None = None,        # AGENT_EVAL_CLI or 'agent-eval'
    transport: Literal["auto", "http", "subprocess"] = "auto",
    timeout_s: float = 120.0,
)

client.judge(...)

Score a piece of content against a rubric.

def judge(
    *,
    content: str,                                  # the text being judged
    rubric_name: str | None = None,                # OR
    rubric: Rubric | dict | None = None,           # an inline rubric definition
    context: dict | None = None,                   # free-form metadata for the judge
    model: str | None = None,                      # override the judge LLM
) -> JudgeResult

Either rubric_name (use a built-in like "anti-slop") or rubric (an inline definition with your own dimensions/prompt). Not both.
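That exclusive-or constraint is straightforward to express client-side; here is a sketch of the check (not the package's actual validation code):

```python
def validate_rubric_args(rubric_name=None, rubric=None):
    """Exactly one of rubric_name / rubric must be provided."""
    if (rubric_name is None) == (rubric is None):
        raise ValueError("pass exactly one of rubric_name or rubric")

validate_rubric_args(rubric_name="anti-slop")       # ok
validate_rubric_args(rubric={"name": "my-rubric"})  # ok
```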

Returns JudgeResult:

  • composite: float — weighted score in 0..1. The single number to gate on.
  • dimensions: dict[str, float] — per-axis scores (e.g. {"buyer_quality": 0.7}).
  • failure_modes: list[str] — ids of negative patterns detected.
  • wins: list[str] — ids of positive patterns detected.
  • rationale: str — plain-English explanation.
  • rubric_version: str — stable hash of the rubric used. Compare scores only when this matches.
  • model: str — LLM that produced the judgement.
  • duration_ms: int — wall-clock latency.
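One way such a stable rubric_version can be derived is by hashing the rubric's canonical JSON, so the version depends only on content, never on key order. This is a sketch of the idea; the framework's actual hashing scheme is not documented here.

```python
import hashlib
import json

def rubric_version(rubric: dict) -> str:
    """Stable content hash: canonical JSON (sorted keys), then SHA-256."""
    canonical = json.dumps(rubric, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

a = rubric_version({"name": "anti-slop", "dimensions": [{"id": "voice", "weight": 1.0}]})
b = rubric_version({"dimensions": [{"id": "voice", "weight": 1.0}], "name": "anti-slop"})
assert a == b  # key order does not change the version
```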

client.list_rubrics()

Return every rubric the server has registered, with their dimensions and stable rubric_version.

rubrics = client.list_rubrics()
for r in rubrics.rubrics:
    print(r.name, r.description, r.rubric_version)

client.version()

Return the server and wire-protocol versions. Your pip-installed package version should match version; check wire_version for compatibility.

v = client.version()
assert v.version.startswith("0.20")
assert v.wire_version == "1.0.0"

Defining a custom rubric

The built-in anti-slop rubric is tuned for technical-buyer audiences. For different scoring criteria, pass a Rubric inline:

from agent_eval_rpc import Client, Rubric, RubricDimension, FailureMode

rubric = Rubric(
    name="my-rubric",
    description="Does this commit message explain WHY, not just what?",
    systemPrompt="You score commit messages. Score 0..1 on whether the WHY is clear...",
    dimensions=[
        RubricDimension(id="explains_why", description="Does the message say *why*?", weight=1.0),
    ],
    failureModes=[
        FailureMode(id="what-not-why", description="States the change but not the reason"),
    ],
)

result = client.judge(content="bumped the version", rubric=rubric)
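Dimension weights feed the composite. A plausible aggregation is a weight-normalized average, sketched below; the framework's exact formula lives on the TypeScript side, so treat this as illustrative:

```python
def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, normalized by total weight."""
    total = sum(weights.values())
    return sum(scores[d] * w for d, w in weights.items()) / total

# Two dimensions, one weighted half as much as the other.
c = composite({"explains_why": 0.8, "concise": 0.4},
              {"explains_why": 1.0, "concise": 0.5})
```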

Errors

  • ValidationError: the server (or pydantic) rejected the request as malformed. Fix your inputs.
  • RubricNotFoundError: unknown rubric_name. Call list_rubrics() to see what's registered.
  • TransportError: HTTP unreachable or the subprocess failed. Retry or check the server.
  • AgentEvalError: base class; catches everything above.

All errors carry .code and .details (the structured payload from the server).
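The hierarchy means one except clause catches everything while still exposing the structured payload. A sketch of equivalent classes, assuming only the attribute names documented above:

```python
class AgentEvalError(Exception):
    """Base class; carries the server's structured error payload."""
    def __init__(self, message, code=None, details=None):
        super().__init__(message)
        self.code = code
        self.details = details

class RubricNotFoundError(AgentEvalError):
    pass

try:
    raise RubricNotFoundError(
        "unknown rubric", code="rubric_not_found", details={"rubric_name": "nope"}
    )
except AgentEvalError as err:  # base class catches the subclass
    print(err.code, err.details)
```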

Versioning

This package is version-locked to the npm package: agent-eval-rpc==0.21.0 pairs with @tangle-network/agent-eval@0.21.0. CI verifies that the npm package, the Python package, the runtime __version__, and the release tag all agree before publish. If one registry publish fails after the other succeeds, retry the failed publish from the same tag or supersede with the next patch release.

wire_version is separate. It bumps only on breaking schema changes. Package versions can differ across releases as long as wire_version is the same.
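Under semver-style wire versions like the "1.0.0" shown above, a client-side compatibility check reduces to comparing majors (a sketch under that assumption):

```python
def wire_compatible(client_wire: str, server_wire: str) -> bool:
    """Breaking schema changes bump the major, so majors must match."""
    return client_wire.split(".")[0] == server_wire.split(".")[0]

assert wire_compatible("1.0.0", "1.2.3")      # additive changes are fine
assert not wire_compatible("1.0.0", "2.0.0")  # breaking change: refuse to talk
```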

Development

# install in editable mode
pip install -e ".[dev]"

# unit tests (no Node required)
pytest tests/test_models.py

# integration tests against the bundled CLI
cd ../.. && pnpm build         # build the agent-eval CLI in repo root
cd clients/python && pytest    # runs subprocess tests against dist/cli.js

Adding a new method

When the TS side adds a new endpoint (say evaluateScenario):

  1. Update src/wire/schemas.ts with EvaluateScenarioRequestSchema and EvaluateScenarioResponseSchema.
  2. Add a handler in src/wire/handlers.ts, route in src/wire/server.ts, and case in src/wire/rpc.ts.
  3. In this client, add the matching pydantic model in models.py and method on Client. The pattern is mechanical — copy the shape from judge.
  4. Test in both languages. Bump versions together.

A future iteration moves step 3 to datamodel-code-generator -i openapi.json so it's mechanical-and-automatic instead of mechanical-by-hand. Until the surface grows past ~10 endpoints, hand-written models are more readable.
