# Wauldo Python SDK

Official Python SDK for Wauldo — verified AI answers from your documents, or no answer at all.
Most RAG APIs guess. Wauldo verifies.
0% hallucination | 83% accuracy | 61 eval tasks | 14 LLMs tested
Demo • Docs • Free API Key • Benchmarks
## Quickstart (30 seconds)

```bash
pip install wauldo
```
```python
from wauldo import HttpClient

client = HttpClient(base_url="https://api.wauldo.com", api_key="YOUR_API_KEY")

# Upload a document
client.rag_upload(content="Our refund policy allows returns within 60 days...", filename="policy.txt")

# Ask a question — answer is verified against the source
result = client.rag_query("What is the refund policy?")
print(result.answer)
print(result.sources)
```
Output:

```text
Answer: Returns are accepted within 60 days of purchase.
Sources: policy.txt — "Our refund policy allows returns within 60 days"
Grounded: true | Confidence: 0.92
```
Try the demo | Get a free API key
## Why Wauldo (and not standard RAG)

**Typical RAG pipeline**

```text
retrieve → generate → hope it's correct
```

**Wauldo pipeline**

```text
retrieve → extract facts → generate → verify → return or refuse
```

If the answer can't be verified, it returns "insufficient evidence" instead of guessing.
**See the difference**

```text
Document:     "Refunds are processed within 60 days"

Typical RAG:  "Refunds are processed within 30 days"  ← wrong
Wauldo:       "Refunds are processed within 60 days"  ← verified
              or "insufficient evidence" if unclear   ← safe
```
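The verify-or-refuse behavior above can be sketched with a toy numeric grounding check. This is plain illustrative Python, not Wauldo's actual implementation — `verify_or_refuse` is a hypothetical name:

```python
import re

def verify_or_refuse(answer: str, source: str) -> str:
    """Toy grounding check: every number claimed in the answer
    must also appear in the source text; otherwise refuse."""
    claimed = set(re.findall(r"\d+", answer))
    grounded = set(re.findall(r"\d+", source))
    if claimed and not claimed <= grounded:
        return "insufficient evidence"
    return answer

source = "Refunds are processed within 60 days"
print(verify_or_refuse("Refunds are processed within 60 days", source))
# Refunds are processed within 60 days
print(verify_or_refuse("Refunds are processed within 30 days", source))
# insufficient evidence
```

The real pipeline verifies far more than numbers, but the contract is the same: an unverifiable claim produces a refusal, never a guess.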
## Examples

### Upload a PDF and ask questions
```python
# Upload — text extraction + quality scoring happens server-side
result = client.upload_file("contract.pdf", title="Q3 Contract")
print(f"Extracted {result.chunks_count} chunks, quality: {result.quality_label}")

# Query
result = client.rag_query("What are the payment terms?")
print(f"Answer: {result.answer}")
print(f"Confidence: {result.get_confidence():.0%}")
print(f"Grounded: {result.audit.grounded}")
```
### Guard — fact-check any LLM output

```python
result = client.guard(
    text="Returns are accepted within 60 days.",
    source_context="Our policy allows returns within 14 days.",
    mode="lexical",
)
print(result.verdict)           # "rejected"
print(result.action)            # "block"
print(result.claims[0].reason)  # "numerical_mismatch"
```
### Chat (OpenAI-compatible)

```python
reply = client.chat_simple("auto", "Explain Python decorators")
print(reply)
```
### Streaming

```python
from wauldo import ChatRequest, HttpChatMessage

request = ChatRequest(model="auto", messages=[HttpChatMessage.user("Hello!")])
for chunk in client.chat_stream(request):
    print(chunk, end="", flush=True)
```
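If you also need the full reply as one string (for logging or storage), accumulate the chunks as they arrive. A small sketch, assuming each chunk is a plain text delta as in the loop above — `collect_stream` is an illustrative helper, not part of the SDK:

```python
def collect_stream(chunks) -> str:
    """Print streamed text deltas as they arrive and
    return the assembled reply as a single string."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# Works with any iterable of text deltas,
# e.g. collect_stream(client.chat_stream(request))
full_reply = collect_stream(iter(["Hel", "lo", "!"]))
```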
## Features

- Pre-generation fact extraction — numbers, dates, limits injected as constraints before the LLM call
- Post-generation grounding check — every answer verified against sources
- Guard API — verify any claim against any source (3 modes: lexical, hybrid, semantic)
- Native PDF/DOCX upload — server-side extraction with quality scoring
- Smart model routing — auto-selects the cheapest model that meets the quality bar
- OpenAI-compatible — swap your `base_url`, keep your existing code
- Sync — simple, synchronous API
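The first bullet — pulling hard facts out of the source before generation — can be illustrated with a minimal regex sketch. This is illustrative only (Wauldo's extraction runs server-side and is not shown here); `extract_constraints` is a hypothetical name:

```python
import re

def extract_constraints(text: str) -> list[str]:
    """Pull numeric facts (durations, percentages, amounts) from
    source text so they can be injected as hard constraints
    into the generation prompt."""
    pattern = r"\d+(?:\.\d+)?\s*(?:days?|months?|years?|%|USD|\$)?"
    return [m.strip() for m in re.findall(pattern, text)]

facts = extract_constraints("Returns within 60 days; restocking fee 15%.")
print(facts)  # ['60 days', '15%']
```

Constraining generation on facts like these is what lets the later grounding check catch a model that drifts to "30 days" when the source says "60 days".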
## Built For
- Production RAG systems that need reliable answers
- Teams where "confidently wrong" is unacceptable
- Legal, finance, healthcare, support automation
- Anyone replacing "hope-based" RAG
## Benchmarks
| Metric | Result |
|---|---|
| Hallucination rate | 0% |
| Accuracy | 83% (17% = correct refusals) |
| Eval tasks | 61 |
| LLMs tested | 14 models, 3 runs each |
| Avg latency | ~1.2s |
## Error Handling

```python
from wauldo import ChatRequest, WauldoError, ServerError, AgentTimeoutError

try:
    response = client.chat(ChatRequest.quick("auto", "Hello"))
except ServerError as e:
    print(f"Server error: {e}")
except AgentTimeoutError:
    print("Request timed out")
except WauldoError as e:
    print(f"SDK error: {e}")
```
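Transient server errors are often worth retrying. A small generic backoff helper, sketched in plain Python — `with_retries` is not part of the SDK, and `ServerError` in the usage comment stands in for whichever exception types you choose to retry on:

```python
import time

def with_retries(fn, *, retries=3, base_delay=0.5, retry_on=(Exception,)):
    """Call fn(), retrying on the given exception types with
    exponential backoff; re-raise after the final attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Usage (hypothetical):
#   with_retries(lambda: client.chat_simple("auto", "Hello"),
#                retry_on=(ServerError,))
```

Avoid retrying on `AgentTimeoutError` blindly — a request that timed out once may time out again, and duplicated uploads are not free.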
## RapidAPI

```python
client = HttpClient(
    base_url="https://api.wauldo.com",
    headers={
        "X-RapidAPI-Key": "YOUR_RAPIDAPI_KEY",
        "X-RapidAPI-Host": "smart-rag-api.p.rapidapi.com",
    },
)
```
Free tier (300 req/month): RapidAPI
## Contributing
PRs welcome. Check the good first issues.
## Contributors
- @qorexdev — async client
## License
MIT — see LICENSE