aferiq-eval

Quality observability for RAG and agents — vertical for the Brazilian market. PT-BR judge prompts hand-tuned for BR patterns (Lei nº, CNPJ/CPF, Receita Federal, INSS, BACEN, ANPD), claim-by-claim hallucination diagnosis, trajectory + tool-use evaluation for LangGraph / AgentExecutor / OpenAI Assistants.

The internal Python namespace is rageval (a legacy of the library's earlier name, RAG Eval BR). It will be renamed to aferiq_eval in v0.2.0, with a 90-day import alias. For now: pip install aferiq-eval, then from rageval import ….

Install

pip install aferiq-eval

# Optional integrations:
pip install 'aferiq-eval[langchain]'      # LangChain (RAG + AgentExecutor)
pip install 'aferiq-eval[langgraph]'      # LangGraph
pip install 'aferiq-eval[openai]'         # OpenAI SDK + Assistants
pip install 'aferiq-eval[anthropic]'      # Anthropic SDK

RAG eval (single-turn)

from rageval import evaluate

result = evaluate(
    queries=["Qual o prazo do CDC pra arrependimento?"],
    contexts=[["Art. 49 do CDC: o consumidor pode desistir em 7 dias."]],
    answers=["O CDC garante 7 dias contados do recebimento."],
    metrics=["faithfulness", "hallucination"],
    language="pt",  # default
)

print(result)            # rich-rendered table
result.to_dict()         # JSON-friendly
result.aggregate         # {"faithfulness": 0.95, "hallucination": 0.92}
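
To make the claim-by-claim idea concrete, here is a deliberately naive sketch that scores faithfulness with lexical overlap instead of an LLM judge. This is not the library's implementation — only an illustration of the shape of the metric: split the answer into claims, check each claim against the retrieved context, and report the supported fraction.

```python
import re

def split_claims(answer: str) -> list[str]:
    # One claim per sentence; the library's claim splitter is LLM-based and richer.
    return [s.strip() for s in answer.split(".") if s.strip()]

def claim_supported(claim: str, contexts: list[str], threshold: float = 0.5) -> bool:
    # A claim counts as supported if enough of its words appear in some context chunk.
    claim_words = set(re.findall(r"\w+", claim.lower()))
    for ctx in contexts:
        ctx_words = set(re.findall(r"\w+", ctx.lower()))
        if claim_words and len(claim_words & ctx_words) / len(claim_words) >= threshold:
            return True
    return False

def faithfulness(answer: str, contexts: list[str]) -> float:
    claims = split_claims(answer)
    if not claims:
        return 1.0
    return sum(claim_supported(c, contexts) for c in claims) / len(claims)

score = faithfulness(
    "O CDC garante 7 dias contados do recebimento.",
    ["Art. 49 do CDC: o consumidor pode desistir em 7 dias."],
)
```

An LLM judge replaces the overlap heuristic with an entailment check per claim, which is what lets the real metric categorise failures (lei_inventada, cnpj_fabricado, etc.) rather than just count them.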

Decorator capture

from rageval import trace, register_trace_handler
from rageval.client import CloudClient

# Ship every captured trace to aferiq cloud:
register_trace_handler(CloudClient.from_env().send)  # uses RAGEVAL_API_KEY

@trace
def my_rag(query: str, context: list[str]) -> str:
    chunks = retriever.search(query)
    return llm.generate(query, chunks)

# every call is automatically captured + shipped
my_rag("Qual o aviso prévio mínimo na CLT?", ["..."])
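
The capture pattern itself is simple to picture. The sketch below is a self-contained stand-in, not the library's code: a @trace wrapper records inputs, output, and latency, then broadcasts the record to every registered handler (the names mirror the README for readability).

```python
import functools
import time

_handlers = []

def register_trace_handler(fn):
    # Handlers receive one record dict per traced call.
    _handlers.append(fn)

def trace(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        output = func(*args, **kwargs)
        record = {
            "function": func.__name__,
            "args": args,
            "kwargs": kwargs,
            "output": output,
            "latency_s": time.perf_counter() - start,
        }
        for handler in _handlers:
            handler(record)
        return output
    return wrapper

captured = []
register_trace_handler(captured.append)

@trace
def my_rag(query: str, context: list[str]) -> str:
    return f"resposta para: {query}"

my_rag("Qual o aviso prévio mínimo na CLT?", ["..."])
```

In the real library, CloudClient.from_env().send is just another handler in this list, which is why local capture and cloud shipping compose without changing your function.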

Agent eval (multi-step)

For LangChain AgentExecutor, LangGraph state machines, or OpenAI Assistants — no decorator needed. Run your application under aferiq-trace-run, attach the agent callback, and full trajectories (LLM calls + tool calls + tokens + latency) get captured automatically.

aferiq-trace-run python my_agent.py

from rageval.integrations.agent_callback import AferiqAgentCallbackHandler

handler = AferiqAgentCallbackHandler(goal="Qual o prazo do CDC?")
agent_executor.invoke(
    {"input": "Qual o prazo do CDC?"},
    config={"callbacks": [handler]},
)
# handler.last_trace is an AgentTrace; also broadcast to registered handlers
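
For intuition, a trajectory record is roughly a goal plus an ordered list of steps, each an LLM or tool call with tokens and latency. The dataclasses below are an illustrative sketch — the field names are assumptions, not the library's AgentTrace schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    kind: str              # "llm" or "tool"
    name: str              # model name or tool name
    tokens: int = 0
    latency_s: float = 0.0

@dataclass
class AgentTrace:
    goal: str
    steps: list[AgentStep] = field(default_factory=list)

    @property
    def total_tokens(self) -> int:
        return sum(s.tokens for s in self.steps)

    @property
    def tool_calls(self) -> list["AgentStep"]:
        return [s for s in self.steps if s.kind == "tool"]

trace = AgentTrace(goal="Qual o prazo do CDC?")
trace.steps.append(AgentStep("llm", "gpt-4o-mini", tokens=312, latency_s=0.9))
trace.steps.append(AgentStep("tool", "search_cdc", latency_s=0.2))
trace.steps.append(AgentStep("llm", "gpt-4o-mini", tokens=188, latency_s=0.7))
```

Metrics like trajectory_quality and cost_efficiency are functions over exactly this kind of structure: did the steps advance the goal, and at what token cost.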

The dashboard at https://aferiq.com.br auto-detects trace shape: agent runs render as a step-by-step tree; RAG traces render claim-by-claim with PT-BR categorisation (lei_inventada, cnpj_fabricado, etc.).

CLI

export OPENAI_API_KEY=sk-...

# RAG eval
aferiq-eval evaluate --dataset rag_traces.jsonl --metrics faithfulness,hallucination

# Agent eval
aferiq-eval evaluate-agent --dataset agent_traces.jsonl \
    --metrics trajectory_quality,goal_completion,cost_efficiency
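
Datasets for the CLI are JSONL, one record per line. The exact schema is an assumption here (query/contexts/answer mirror the evaluate() keyword arguments above); check the docs for the canonical field names.

```python
import json

rows = [
    {
        "query": "Qual o prazo do CDC pra arrependimento?",
        "contexts": ["Art. 49 do CDC: o consumidor pode desistir em 7 dias."],
        "answer": "O CDC garante 7 dias contados do recebimento.",
    },
]

# ensure_ascii=False keeps PT-BR accents readable in the file.
with open("rag_traces.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```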

Examples

Runnable demos under examples/:

  • langchain_agent_executor.py — calculator + search agent
  • langgraph_agent.py — state-machine agent
  • openai_assistants.py — Assistants API streaming

Supported judge models

Anything litellm supports — OpenAI, Anthropic, Google, Maritaca Sabiá, Ollama. Default: gpt-4o-mini (cheapest with acceptable quality on PT-BR judging).

Caching

By default, identical (prompt, model, temperature) tuples are cached on disk under ~/.cache/rageval. Repeated evals don't re-call the LLM.

from rageval.cache import DiskCache, MemoryCache, NullCache

DiskCache()                          # default location
DiskCache("/custom/path")            # custom location
MemoryCache()                        # in-process, tests
NullCache()                          # disable caching
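
The keying scheme is the interesting part: hash the (prompt, model, temperature) tuple, use the digest as a filename, and return the stored value on a hit instead of re-calling the LLM. The sketch below illustrates that idea only — it is not DiskCache's actual internals.

```python
import hashlib
import json
import tempfile
from pathlib import Path

class TinyDiskCache:
    def __init__(self, root: str):
        self.root = Path(root).expanduser()
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, prompt: str, model: str, temperature: float) -> Path:
        # Deterministic key: same tuple always maps to the same file.
        key = json.dumps([prompt, model, temperature])
        return self.root / (hashlib.sha256(key.encode()).hexdigest() + ".json")

    def get(self, prompt, model, temperature):
        p = self._path(prompt, model, temperature)
        return json.loads(p.read_text()) if p.exists() else None

    def set(self, prompt, model, temperature, value):
        self._path(prompt, model, temperature).write_text(json.dumps(value))

cache = TinyDiskCache(tempfile.mkdtemp())
cache.set("Julgue esta resposta...", "gpt-4o-mini", 0.0, {"score": 0.95})
hit = cache.get("Julgue esta resposta...", "gpt-4o-mini", 0.0)
```

Because temperature is part of the key, a deterministic judge run (temperature 0) caches cleanly, while changing any knob invalidates the entry automatically.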

Dev

git clone https://github.com/leo94pena/rag_eval
cd rag_eval/packages/lib
pip install -e ".[dev]"

pytest --cov=rageval         # tests
ruff check src tests         # lint
mypy --strict src            # type-check

License

Proprietary — see LICENSE.
