# litellm-verathos

LiteLLM custom provider for Verathos -- verified LLM inference on the Bittensor network.
Every response from Verathos is backed by cryptographic proofs (ZK sumcheck + Merkle commitments) that guarantee the output was produced by the declared model. No output substitution is possible.
## Installation

```bash
pip install litellm-verathos
```
## Quick Start

```python
import litellm
from litellm_verathos import VerathosProvider

# Register the provider (once, at startup)
VerathosProvider.register()

# Use "verathos/auto" for automatic best-model selection
response = litellm.completion(
    model="verathos/auto",
    messages=[{"role": "user", "content": "Explain zero-knowledge proofs"}],
    api_key="vrt_sk_...",  # your Verathos API key
)

print(response.choices[0].message.content)
```
## Model Names

Use the `verathos/` prefix followed by any model identifier:

| Model string | What happens |
|---|---|
| `verathos/auto` | Verathos picks the best available model for you |
| `verathos/Qwen/Qwen3-30B-A3B` | Routes to a specific model |
| `verathos/meta-llama/Llama-3.3-70B-Instruct` | Routes to a specific model |

To discover available models, query the Verathos API directly:

```bash
curl https://api.verathos.ai/v1/models -H "Authorization: Bearer $VERATHOS_API_KEY"
```
## Authentication

Pass the API key in either of these ways (checked in order):

1. `api_key=` parameter on each call
2. `VERATHOS_API_KEY` environment variable

```bash
export VERATHOS_API_KEY="vrt_sk_..."
```

```python
# Then no api_key= needed
response = litellm.completion(
    model="verathos/auto",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
### Getting an API Key

1. Visit verathos.ai and sign up
2. Fund your account with TAO or USDC deposits
3. Generate an API key from the dashboard
## Streaming

Streaming works out of the box:

```python
response = litellm.completion(
    model="verathos/auto",
    messages=[{"role": "user", "content": "Write a haiku about cryptography"}],
    api_key="vrt_sk_...",
    stream=True,
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```
## Async

```python
import asyncio

import litellm
from litellm_verathos import VerathosProvider

VerathosProvider.register()

async def main():
    response = await litellm.acompletion(
        model="verathos/auto",
        messages=[{"role": "user", "content": "Hello!"}],
        api_key="vrt_sk_...",
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
## All Standard Parameters

Verathos is OpenAI-compatible, so all standard chat completion parameters work:

```python
response = litellm.completion(
    model="verathos/auto",
    messages=[{"role": "user", "content": "Solve x^2 - 4 = 0"}],
    api_key="vrt_sk_...",
    temperature=0.0,
    max_tokens=512,
    top_p=0.95,
    stop=["\n\n"],
    seed=42,
)
```
## Custom API Base

For self-hosted Verathos validators or local development:

```python
response = litellm.completion(
    model="verathos/auto",
    messages=[{"role": "user", "content": "Hello!"}],
    api_key="vrt_sk_...",
    api_base="http://localhost:8080/v1",
)
```

Or via environment variable:

```bash
export VERATHOS_API_BASE="http://localhost:8080/v1"
```
## LiteLLM Proxy (config.yaml)

You can also use the Verathos provider with the LiteLLM proxy server. Add to your `config.yaml`:

```yaml
model_list:
  - model_name: verathos-auto
    litellm_params:
      model: openai/auto
      api_base: https://api.verathos.ai/v1
      api_key: os.environ/VERATHOS_API_KEY
  - model_name: verathos-qwen
    litellm_params:
      model: openai/Qwen/Qwen3-30B-A3B
      api_base: https://api.verathos.ai/v1
      api_key: os.environ/VERATHOS_API_KEY
```

Note: The LiteLLM proxy's `config.yaml` uses the `openai/` prefix since the proxy handles OpenAI-compatible endpoints natively. The `litellm-verathos` Python package is for programmatic usage where you want the cleaner `verathos/` prefix.
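Once the proxy is running, each `model_name` above is callable through any OpenAI-compatible client. A minimal standard-library sketch, assuming a local proxy at `http://localhost:4000` (LiteLLM's default port); `build_proxy_request` is a hypothetical helper, not part of LiteLLM:

```python
import json
import urllib.request

def build_proxy_request(model: str, prompt: str, base: str = "http://localhost:4000"):
    # "model" must match a model_name from config.yaml, e.g. "verathos-auto".
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# req = build_proxy_request("verathos-auto", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```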
## x402: Pay-Per-Request with USDC (No API Key Needed)

Verathos supports x402 -- a protocol for HTTP-native micropayments. With x402, you pay per request using USDC on Base L2, with no account or API key required.

x402 works at the HTTP level: the server returns HTTP 402 with payment instructions, and the client signs a USDC payment and resends the request. Since this operates below the LiteLLM abstraction layer, use the x402 client SDK directly for pay-per-request:

```python
# x402 example (direct, not through LiteLLM)
import httpx
from openai import OpenAI
from x402.client import create_x402_client

x402_client = create_x402_client(
    httpx.Client(),
    wallet,  # your Base wallet with USDC
)

client = OpenAI(
    base_url="https://api.verathos.ai/v1",
    api_key="x402",  # any placeholder
    http_client=x402_client,
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
## Why Verathos?

Traditional LLM APIs are a black box -- you have no way to verify that the provider actually ran your prompt through the model they claim. Verathos changes this:

- **Verified inference**: Every response includes cryptographic proofs that the output was computed by the declared model
- **No output substitution**: SHA256 output commitments + Fiat-Shamir binding prevent response tampering
- **Decentralized**: Runs on Bittensor's incentive network -- miners compete to serve models, validators verify proofs
- **OpenAI-compatible**: Drop-in replacement for any OpenAI-compatible client
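To illustrate the output-commitment idea in the bullets above -- a sketch of the concept only, since the real Verathos proof format is richer than a bare hash -- a SHA256 commitment binds the returned text to a digest fixed by the prover, so substituting a different output afterward is detectable:

```python
import hashlib

def verify_output_commitment(output_text: str, committed_digest: str) -> bool:
    # Recompute SHA256 over the returned text and compare it with the
    # digest the prover committed to. Any post-hoc edit changes the hash.
    return hashlib.sha256(output_text.encode("utf-8")).hexdigest() == committed_digest

text = "zk proofs let you verify a computation without re-running it"
digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
assert verify_output_commitment(text, digest)            # untampered output passes
assert not verify_output_commitment(text + "!", digest)  # any substitution fails
```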
## License

MIT