
Official Python SDK for Wauldo — Verified AI answers from your documents

Project description

Wauldo Python SDK

Verified AI answers from your documents — or no answer at all.

Most RAG APIs guess. Wauldo verifies.

0% hallucination  |  83% accuracy  |  61 eval tasks  |  14 LLMs tested


Demo | Docs | Free API Key | Benchmarks


Quickstart (30 seconds)

pip install wauldo

Guard — catch hallucinations in 3 lines

from wauldo import HttpClient

client = HttpClient(base_url="https://api.wauldo.com", api_key="YOUR_API_KEY")

result = client.guard(
    text="Returns are accepted within 60 days.",
    source_context="Our policy allows returns within 14 days.",
)
print(result.verdict)       # "rejected"
print(result.claims[0].reason)  # "numerical_mismatch"

Verified RAG — upload, ask, verify

client.rag_upload(content="Our refund policy allows returns within 60 days...", filename="policy.txt")

result = client.rag_query("What is the refund policy?")
print(result.answer)        # Verified answer with sources
print(result.sources)       # [RagSource(document_id='...', score=0.95)]

Try the demo | Get a free API key


Why Wauldo (and not standard RAG)

Typical RAG pipeline

retrieve → generate → hope it's correct

Wauldo pipeline

retrieve → extract facts → generate → verify → return or refuse

If the answer can't be verified, it returns "insufficient evidence" instead of guessing.

See the difference

Document: "Refunds are processed within 60 days"

Typical RAG:  "Refunds are processed within 30 days"     ← wrong
Wauldo:       "Refunds are processed within 60 days"     ← verified
              or "insufficient evidence" if unclear       ← safe
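The grounding idea behind the "verify" step can be illustrated with a toy check (plain Python, not the actual Wauldo verifier): any number in the claim that has no support in the source triggers a rejection, mirroring the `numerical_mismatch` reason shown in the Guard example.

```python
import re

def extract_numbers(text: str) -> set[str]:
    """Pull numeric tokens (e.g. '60', '14') out of a sentence."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def toy_guard(claim: str, source: str) -> str:
    """Reject a claim whose numbers are not all grounded in the source."""
    unsupported = extract_numbers(claim) - extract_numbers(source)
    if unsupported:
        return "rejected"  # at least one number has no support in the source
    return "verified"

print(toy_guard("Returns are accepted within 60 days.",
                "Our policy allows returns within 14 days."))  # rejected
print(toy_guard("Returns are accepted within 14 days.",
                "Our policy allows returns within 14 days."))  # verified
```

The real pipeline also verifies dates, limits, and semantic claims; this sketch only covers the numeric case.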

Examples

Upload a PDF and ask questions

result = client.upload_file("contract.pdf", title="Q3 Contract")
print(f"Extracted {result.chunks_count} chunks")

result = client.rag_query("What are the payment terms?")
print(f"Answer: {result.answer}")
print(f"Confidence: {result.get_confidence():.0%}")

Chat (OpenAI-compatible)

reply = client.chat_simple("auto", "Explain Python decorators")
print(reply)

Streaming

from wauldo import ChatRequest, HttpChatMessage

request = ChatRequest(model="auto", messages=[HttpChatMessage.user("Hello!")])
for chunk in client.chat_stream(request):
    print(chunk, end="", flush=True)

Features

  • Pre-generation fact extraction — numbers, dates, limits injected as constraints before the LLM call
  • Post-generation grounding check — every answer verified against sources
  • Guard API — verify any claim against any source (3 modes: lexical, hybrid, semantic)
  • Native PDF/DOCX upload — server-side extraction with quality scoring
  • Smart model routing — auto-selects cheapest model that meets quality
  • OpenAI-compatible — swap your base_url, keep your existing code
  • Sync — simple, synchronous API
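The "smart model routing" bullet can be sketched as cheapest-model-that-clears-a-quality-bar selection. All names, prices, and scores below are made up for illustration; the real router and its criteria live server-side.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k: float   # hypothetical price per 1k tokens
    quality: float       # hypothetical eval score, 0..1

CATALOG = [
    Model("small", 0.10, 0.78),
    Model("medium", 0.50, 0.86),
    Model("large", 2.00, 0.92),
]

def route(catalog: list[Model], min_quality: float) -> Model:
    """Cheapest model whose quality clears the bar; fall back to the best."""
    eligible = [m for m in catalog if m.quality >= min_quality]
    if eligible:
        return min(eligible, key=lambda m: m.cost_per_1k)
    return max(catalog, key=lambda m: m.quality)

print(route(CATALOG, 0.85).name)  # medium
```

Passing `model="auto"` in the SDK delegates this decision to the server.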

Built For

  • Production RAG systems that need reliable answers
  • Teams where "confidently wrong" is unacceptable
  • Legal, finance, healthcare, support automation
  • Anyone replacing "hope-based" RAG

Benchmarks

| Metric | Result |
| --- | --- |
| Hallucination rate | 0% |
| Accuracy | 83% (the other 17% are correct refusals) |
| Eval tasks | 61 |
| LLMs tested | 14 models, 3 runs each |
| Avg latency | ~1.2s |

Error Handling

from wauldo import ChatRequest, WauldoError, ServerError, AgentTimeoutError

try:
    response = client.chat(ChatRequest.quick("auto", "Hello"))
except ServerError as e:
    print(f"Server error: {e}")
except AgentTimeoutError:
    print("Request timed out")
except WauldoError as e:
    print(f"SDK error: {e}")
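On top of these exceptions, a common client-side pattern is retry with exponential backoff on transient failures such as timeouts. This is a generic sketch with a stub in place of a real client call; `FakeTimeout` stands in for `AgentTimeoutError`.

```python
import time

def with_retries(fn, retry_on, attempts=3, base_delay=0.5):
    """Call fn(), retrying on the given exception types with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(base_delay * 2 ** attempt)

# Stub standing in for a flaky call such as client.chat(...).
class FakeTimeout(Exception):
    pass

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeTimeout("simulated timeout")
    return "ok"

print(with_retries(flaky, retry_on=(FakeTimeout,), base_delay=0.01))  # ok
```

With the real SDK you would pass `retry_on=(AgentTimeoutError, ServerError)` and wrap the `client.chat(...)` call.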

RapidAPI

client = HttpClient(
    base_url="https://api.wauldo.com",
    headers={
        "X-RapidAPI-Key": "YOUR_RAPIDAPI_KEY",
        "X-RapidAPI-Host": "smart-rag-api.p.rapidapi.com",
    },
)

Free tier (300 req/month): RapidAPI


Contributing

PRs welcome. Check the good first issues.


License

MIT — see LICENSE


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

wauldo-0.8.2.tar.gz (41.9 kB)

Built Distribution

wauldo-0.8.2-py3-none-any.whl (39.8 kB)

File details

Details for the file wauldo-0.8.2.tar.gz.

File metadata

  • Size: 41.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a5d2f9eab8e06c322c7d5086365301bb69abdc6fa875f931a47c4c44d8c29d93 |
| MD5 | 2f90340db33804c70f7dabead707d114 |
| BLAKE2b-256 | 609f95e4f55015324492573858d84152f94ccd6bde8f9cd4a25cc9b03ae7ad1b |

File details

Details for the file wauldo-0.8.2-py3-none-any.whl.

File metadata

  • Size: 39.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 2b45a202add1f62dc34232ecc00118275bfb8110c93de368c8e6bc0c39ae65f4 |
| MD5 | ff841947039c6edeea2afee880cc3a16 |
| BLAKE2b-256 | ef5bfadd4b0a14364df42c5bbf7a4da998f7147b642b833bc95c7a8bdd71ac5c |
