# langchain-verathos

LangChain integration for Verathos — verified LLM inference on Bittensor.
Every inference response from Verathos is backed by cryptographic proofs (ZK sumcheck + Merkle commitments). Your LangChain chains can verify that the declared model was executed faithfully — no output substitution, no bait-and-switch.
## Installation

```bash
pip install langchain-verathos
```
## Quick Start

```python
from langchain_verathos import ChatVerathos

# Set your API key (or export VERATHOS_API_KEY=...)
llm = ChatVerathos(api_key="vrt_sk_...")

# "auto" picks the best available model (this is the default)
llm = ChatVerathos(model="auto")

msg = llm.invoke("Explain zero-knowledge proofs in one paragraph.")
print(msg.content)

# Proof verification metadata is on every response
print(msg.response_metadata["proof_verified"])  # True
print(msg.response_metadata["timing"])          # {"inference_ms": ..., "prove_ms": ...}
print(msg.response_metadata["proof_details"])   # {"challenged_layers": [...], ...}
```
## Why not just ChatOpenAI(base_url=...)?

The Verathos API is OpenAI-compatible, so you can use plain ChatOpenAI:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.verathos.ai/v1",
    api_key="vrt_sk_...",
    model="auto",
)
msg = llm.invoke("Hello!")
print(msg.content)  # works fine
```
This works for basic chat, but you lose the proof metadata. The OpenAI SDK parses responses into typed Pydantic models, and while it preserves extra fields internally, ChatOpenAI doesn't extract them into response_metadata.
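To see why the extra fields disappear, here is a minimal pydantic sketch. These are illustrative stand-in models, not the actual OpenAI SDK classes: by default pydantic silently drops undeclared fields, while `extra="allow"` keeps them off to the side.

```python
from pydantic import BaseModel, ConfigDict

class Strict(BaseModel):
    # Default behavior: undeclared fields are silently ignored
    content: str

class Permissive(BaseModel):
    # extra="allow" keeps unknown fields, accessible via model_extra
    model_config = ConfigDict(extra="allow")
    content: str

payload = {"content": "4", "proof_verified": True}

strict = Strict.model_validate(payload)
permissive = Permissive.model_validate(payload)

print("proof_verified" in strict.model_dump())  # False: dropped
print(permissive.model_extra)                   # {'proof_verified': True}
```

A client has to go looking for those preserved fields and surface them, which is what ChatVerathos does with the proof metadata.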
ChatVerathos adds:

| Feature | ChatOpenAI | ChatVerathos |
|---|---|---|
| Chat completions | Yes | Yes |
| Streaming | Yes | Yes |
| Tool calling | Yes | Yes |
| Structured output | Yes | Yes |
| Proof metadata in response_metadata | No | Yes |
| Proof metadata in additional_kwargs | No | Yes |
| VERATHOS_API_KEY env var | No | Yes |
| model="auto" default | No | Yes |
| list_models() discovery | No | Yes |
| LangSmith provider tag verathos | No | Yes |
## Features

### Automatic model selection

Verathos supports model="auto", which automatically selects the best available model from the network. This is the default for ChatVerathos.

```python
llm = ChatVerathos()  # model="auto" by default
```
### Proof verification metadata

Every response includes cryptographic proof metadata:

```python
msg = llm.invoke("What is 2+2?")

# Top-level proof status
msg.response_metadata["proof_verified"]  # True or False

# Detailed timing breakdown
msg.response_metadata["timing"]
# {
#   "inference_ms": 1234.5,
#   "prove_ms": 567.8,
#   "verify_ms": 89.0,
#   ...
# }

# Detailed proof information (when include_proof=True, the default)
msg.response_metadata["proof_details"]
# {
#   "challenged_layers": [3, 17, 28, 41],
#   "total_layers": 48,
#   "beacon_valid": True,
#   "detection_prob_single": 0.0816,
#   "detection_prob_10": 0.5765,
#   "detection_prob_100": 1.0,
#   "proof_size_kb": 12.4,
#   "sampling_active": True,
#   "moe_info": {"is_moe": True, ...},
#   ...
# }
```
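The detection probabilities compound across repeated requests. A back-of-the-envelope sketch, assuming each challenge is independent (the network's reported figures may use a refined model, so they won't match exactly):

```python
# If one spot-check catches a cheating miner with probability p,
# then after n independent requests the chance of at least one
# catch is 1 - (1 - p)**n.
def cumulative_detection(p_single: float, n: int) -> float:
    return 1.0 - (1.0 - p_single) ** n

p = 0.0816  # detection_prob_single from the example above
print(round(cumulative_detection(p, 10), 3))   # → 0.573
print(round(cumulative_detection(p, 100), 3))  # → 1.0
```

Even a modest per-request probability makes sustained cheating statistically untenable.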
The same metadata is also available in msg.additional_kwargs for convenient programmatic access:

```python
if msg.additional_kwargs.get("proof_verified"):
    print("Response is cryptographically verified!")
```
### Streaming with proof metadata

In streaming mode, proof metadata arrives on the final chunk:

```python
full = None
for chunk in llm.stream("Write a haiku about verification."):
    print(chunk.text, end="", flush=True)
    full = chunk if full is None else full + chunk
print()

# Proof metadata is available after accumulation
print(full.response_metadata.get("proof_verified"))
```
### Model discovery

```python
# List all available models
models = ChatVerathos.list_models()
for m in models:
    print(f"{m['id']:40s} {m.get('owned_by', '')}")

# Just the model IDs
ids = ChatVerathos.list_model_ids()
# ['Qwen/Qwen3-30B-A3B', 'meta-llama/Llama-3.3-70B-Instruct', ...]
```
### Async support

All async methods work out of the box:

```python
msg = await llm.ainvoke("Hello!")

async for chunk in llm.astream("Hello!"):
    print(chunk.text, end="")
```
### Tool calling and structured output

Since ChatVerathos extends ChatOpenAI, all tool calling and structured output features work:

```python
from pydantic import BaseModel

class Answer(BaseModel):
    reasoning: str
    answer: int

structured_llm = llm.with_structured_output(Answer)
result = structured_llm.invoke("What is 15 * 23?")
print(result.answer)  # 345
```
### Disabling proof details

To save bandwidth, you can disable the detailed proof metadata (you'll still get proof_verified and timing):

```python
llm = ChatVerathos(include_proof=False)
```
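If you want chains to fail fast on unverified responses, here is a minimal guard sketch over the metadata fields shown above (the helper name is ours, not part of the package):

```python
# Hypothetical helper: raise if a response's proof did not verify.
# Works on any AIMessage-like object exposing response_metadata.
def require_verified(msg):
    meta = getattr(msg, "response_metadata", {}) or {}
    if not meta.get("proof_verified"):
        raise RuntimeError("Verathos proof failed verification; discarding output")
    return msg

# Example with a stand-in object; in real use, pass llm.invoke(...)'s result.
class _FakeMessage:
    response_metadata = {"proof_verified": True}

print(require_verified(_FakeMessage()).response_metadata["proof_verified"])  # True
```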
## Configuration

### Environment variables

| Variable | Description |
|---|---|
| VERATHOS_API_KEY | API key for authentication |
### Constructor parameters

| Parameter | Default | Description |
|---|---|---|
| model | "auto" | Model name or "auto" for automatic selection |
| api_key | VERATHOS_API_KEY env | API key |
| base_url | https://api.verathos.ai/v1 | API base URL |
| include_proof | True | Include detailed proof metadata |
| temperature | None | Sampling temperature |
| max_tokens | None | Maximum tokens to generate |
| streaming | False | Enable streaming by default |
All standard ChatOpenAI parameters (timeout, max_retries, etc.) are also supported.
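For example, a configuration combining Verathos-specific and inherited options (the parameter values here are illustrative):

```python
llm = ChatVerathos(
    model="auto",
    include_proof=True,
    temperature=0.2,
    max_tokens=512,
    timeout=30,     # inherited from ChatOpenAI: seconds before abandoning a request
    max_retries=2,  # inherited from ChatOpenAI: retry transient failures
)
```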
## Getting an API key
- Visit verathos.ai to create an account
- Deposit TAO or USDC to get credits
- Generate an API key from the dashboard
Alternatively, Verathos supports x402 pay-per-request with USDC on Base — no API key or deposit needed.
## Architecture

Verathos runs on Bittensor, a decentralized AI network. Miners serve open-weight models with cryptographic proofs. Validators verify proofs and set reputation scores. The Verathos proxy routes your requests to the highest-scoring miners.

```
Your App -> ChatVerathos -> api.verathos.ai -> Validator Proxy -> Miner (GPU)
                                                      |
                                              Proof Verification
                                            (ZK sumcheck + Merkle)
```
Every response proves:
- Model integrity: The exact declared model weights were used (Merkle commitment against on-chain root)
- Computation integrity: Matrix multiplications were executed correctly (ZK sumcheck protocol)
- Output binding: The returned text matches the proven computation (SHA-256 commitment)
- Input binding: The prompt you sent is the prompt that was executed (embedding proof)
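The output-binding check is conceptually simple. A sketch assuming the commitment is a plain SHA-256 over the UTF-8 response text (the actual wire format may differ, e.g. include a nonce or domain separator):

```python
import hashlib

def output_commitment(text: str) -> str:
    # Hash the response text; a matching digest binds the returned
    # output to the computation that was proven.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

response_text = "2 + 2 = 4"
commitment = output_commitment(response_text)

# Verifier side: recompute locally and compare against the proven commitment.
# Any substitution of the output text changes the digest.
assert output_commitment(response_text) == commitment
assert output_commitment("2 + 2 = 5") != commitment
```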