
CortexOps

Reliability infrastructure for AI agents.
Evaluate · Observe · Operate — for LangGraph, CrewAI, and AutoGen.

PyPI version Python 3.10+ CI License: MIT


The problem

You deployed an agent. You have no idea if it regressed overnight.

No standard eval format. No failure traces. No CI gate before the next prompt change ships.
CortexOps fixes that.


Install

pip install cortexops

# With the HTTP client (for pushing traces to the hosted API);
# quote the extras so shells like zsh don't expand the brackets:
pip install "cortexops[http]"

# With LLM judge support:
pip install "cortexops[llm]"

Quickstart

from cortexops import CortexTracer, EvalSuite

# Wrap your LangGraph app — zero refactor required
tracer = CortexTracer(project="payments-agent")
graph  = tracer.wrap(your_langgraph_app)

# Run evaluations against a golden dataset
results = EvalSuite.run(
    dataset="golden_v1.yaml",
    agent=graph,
)
print(results.summary())
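
The wrap-and-trace pattern above can be sketched as a thin callable proxy that records each invocation's input, output, and latency. This is a minimal illustration of the idea, not the CortexOps internals; `SimpleTracer` and its span fields are hypothetical names.

```python
import time

class SimpleTracer:
    """Minimal sketch of a tracing wrapper (hypothetical, not the CortexOps API)."""

    def __init__(self, project):
        self.project = project
        self.spans = []  # one dict per traced call

    def wrap(self, agent_fn):
        def traced(prompt):
            start = time.perf_counter()
            output = agent_fn(prompt)
            # Record a span so regressions in output or latency are observable.
            self.spans.append({
                "project": self.project,
                "input": prompt,
                "output": output,
                "latency_ms": (time.perf_counter() - start) * 1000,
            })
            return output
        return traced

tracer = SimpleTracer(project="payments-agent")
agent = tracer.wrap(lambda prompt: f"echo: {prompt}")
agent("ping")
print(len(tracer.spans))          # one span recorded
print(tracer.spans[0]["output"])  # "echo: ping"
```

The key design point is that the wrapped callable keeps the original signature, which is why no refactor of the underlying agent is needed.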

Golden dataset (YAML)

version: 1
project: payments-agent

cases:
  - id: refund_lookup_01
    input: "What is the status of refund REF-8821?"
    expected_tool_calls: [lookup_refund]
    expected_output_contains: ["approved", "REF-8821"]
    max_latency_ms: 3000

  - id: open_ended_explanation_01
    input: "Why was my refund rejected?"
    judge: llm
    judge_criteria: >
      The response must explain the rejection reason clearly,
      be empathetic, and offer a concrete next step. No jargon.

CI gate

cortexops eval run \
  --dataset golden_v1.yaml \
  --fail-on "task_completion < 0.90"

Exits non-zero when the threshold is not met, which blocks the PR.
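
A `--fail-on` expression like `"task_completion < 0.90"` can be evaluated with a small comparator parser: fail the build when the comparison holds. This is a hedged sketch of how such a gate could work; `should_fail` and its regex are illustrative, not the CortexOps implementation.

```python
import operator
import re

OPS = {"<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge, "==": operator.eq}

def should_fail(expr: str, metrics: dict) -> bool:
    """True when the --fail-on condition holds (i.e. the gate should block)."""
    m = re.fullmatch(r"\s*(\w+)\s*(<=|>=|==|<|>)\s*([\d.]+)\s*", expr)
    if not m:
        raise ValueError(f"bad --fail-on expression: {expr!r}")
    name, op, threshold = m.group(1), m.group(2), float(m.group(3))
    return OPS[op](metrics[name], threshold)

print(should_fail("task_completion < 0.90", {"task_completion": 0.87}))  # True -> exit non-zero
print(should_fail("task_completion < 0.90", {"task_completion": 0.95}))  # False -> pass
```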


Built-in metrics

| Metric | What it checks |
| --- | --- |
| `task_completion` | Non-empty, non-error output with expected content |
| `tool_accuracy` | Expected tool calls were actually made |
| `latency` | Response within `max_latency_ms` budget |
| `hallucination` | Fabrication signals in output |
| `llm_judge` | GPT-4o scores against natural-language criteria |
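
As a concrete example of what a deterministic metric in this family measures, tool accuracy can be computed as the fraction of expected tool calls that actually occurred. The function below is a hypothetical sketch of that heuristic, not the CortexOps scoring code.

```python
def tool_accuracy(expected: list[str], actual: list[str]) -> float:
    """Fraction of expected tool calls that were actually made (sketch)."""
    if not expected:
        return 1.0  # nothing was required, so the case trivially passes
    made = set(actual)
    return sum(1 for tool in expected if tool in made) / len(expected)

print(tool_accuracy(["lookup_refund"], ["lookup_refund", "send_email"]))  # 1.0
print(tool_accuracy(["lookup_refund", "notify_user"], ["lookup_refund"]))  # 0.5
```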
