Universal LLM interfaces for multi-provider chat and utilities


vv-llm

Universal LLM interface layer for Python. One API, 16 backends, sync & async.

pip install vv-llm

Supported Backends

OpenAI | Anthropic | DeepSeek | Gemini | Qwen | Groq | Mistral | Moonshot | MiniMax | Yi | ZhiPuAI | Baichuan | StepFun | xAI | Ernie | Local

Also supports Azure OpenAI, Vertex AI, and AWS Bedrock deployments.

Quick Start

Configure

from vv_llm.settings import settings

settings.load({
    "VERSION": "2",
    "endpoints": [
        {
            "id": "openai-default",
            "api_base": "https://api.openai.com/v1",
            "api_key": "sk-...",
        }
    ],
    "backends": {
        "openai": {
            "models": {
                "gpt-4o": {
                    "id": "gpt-4o",
                    "endpoints": ["openai-default"],
                }
            }
        }
    }
})
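
Multiple endpoints can be registered and referenced from a model entry so that requests are spread across deployments with failover (see Multi-endpoint under Features). A minimal sketch following the same schema as above; the second endpoint's id, URL, and key are placeholders:

from vv_llm.settings import settings

settings.load({
    "VERSION": "2",
    "endpoints": [
        {"id": "openai-primary", "api_base": "https://api.openai.com/v1", "api_key": "sk-..."},
        # hypothetical fallback deployment
        {"id": "openai-backup", "api_base": "https://example-proxy.invalid/v1", "api_key": "sk-..."},
    ],
    "backends": {
        "openai": {
            "models": {
                "gpt-4o": {
                    "id": "gpt-4o",
                    # requests are distributed across these endpoints with failover
                    "endpoints": ["openai-primary", "openai-backup"],
                }
            }
        }
    }
})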

Sync

from vv_llm.chat_clients import create_chat_client, BackendType

client = create_chat_client(BackendType.OpenAI, model="gpt-4o")
resp = client.create_completion([
    {"role": "user", "content": "Explain RAG in one sentence"}
])
print(resp.content)

Streaming

for chunk in client.create_stream([
    {"role": "user", "content": "Write a haiku"}
]):
    if chunk.content:
        print(chunk.content, end="")

Async

import asyncio
from vv_llm.chat_clients import create_async_chat_client, BackendType

async def main():
    client = create_async_chat_client(BackendType.OpenAI, model="gpt-4o")
    resp = await client.create_completion([
        {"role": "user", "content": "hello"}
    ])
    print(resp.content)

asyncio.run(main())

Features

  • Unified interface — same create_completion / create_stream API across all providers
  • Type-safe factory — create_chat_client(BackendType.X) returns the correct client type
  • Multi-endpoint — configure multiple endpoints per backend with random selection and failover
  • Tool calling — normalized tool/function calling across providers (see the sketch after this list)
  • Multimodal — text + image inputs where supported
  • Thinking/reasoning — access chain-of-thought from Claude, DeepSeek Reasoner, etc.
  • Token counting — per-model tokenizers (tiktoken, deepseek-tokenizer, qwen-tokenizer)
  • Rate limiting — RPM/TPM controls with memory, Redis, or DiskCache backends
  • Context length control — automatic message truncation to fit model limits
  • Prompt caching — Anthropic prompt caching support
  • Retry with backoff — configurable retry logic for transient failures
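
The tool-calling sketch referenced above: the library normalizes tool/function calling, but this README does not show the exact keyword argument or response attribute, so the tools= parameter and resp.tool_calls below are assumptions modeled on the OpenAI-style schema.

from vv_llm.chat_clients import create_chat_client, BackendType

client = create_chat_client(BackendType.OpenAI, model="gpt-4o")

# Hypothetical tool declaration in OpenAI function-calling format
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

resp = client.create_completion(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[weather_tool],  # assumed keyword argument
)

# assumed attribute holding normalized tool calls, if the model requested any
for call in getattr(resp, "tool_calls", []) or []:
    print(call)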

Utilities

from vv_llm.chat_clients import format_messages, get_token_counts, get_message_token_counts

Function                    Description
format_messages             Normalize multimodal/tool messages across formats
get_token_counts            Count tokens for a text string
get_message_token_counts    Count tokens for a message list
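
A usage sketch for the token-counting helpers; they are imported exactly as above, but passing the model as a model= keyword is an assumption about their signatures.

from vv_llm.chat_clients import get_token_counts, get_message_token_counts

# Count tokens in a raw string (assumed model= keyword selects the tokenizer)
text_tokens = get_token_counts("Explain RAG in one sentence", model="gpt-4o")

# Count tokens for a full message list, as sent to create_completion
msg_tokens = get_message_token_counts(
    [{"role": "user", "content": "Explain RAG in one sentence"}],
    model="gpt-4o",
)

print(text_tokens, msg_tokens)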

Optional Dependencies

pip install 'vv-llm[redis]'      # Redis rate limiting
pip install 'vv-llm[diskcache]'  # DiskCache rate limiting
pip install 'vv-llm[server]'     # FastAPI token server
pip install 'vv-llm[vertex]'     # Google Vertex AI
pip install 'vv-llm[bedrock]'    # AWS Bedrock

Project Structure

src/vv_llm/
  chat_clients/    # Per-backend clients + factory
  settings/        # Configuration management
  types/           # Type definitions & enums
  utilities/       # Rate limiting, retry, media processing, token counting
  server/          # Optional token counting server

tests/unit/        # Unit tests
tests/live/        # Live integration tests (requires real API keys)

Development

pdm install -d          # Install dev dependencies
pdm run lint            # Ruff linter
pdm run format-check    # Ruff format check
pdm run type-check      # Ty type checker
pdm run test            # Unit tests
pdm run test-live       # Live tests (needs real endpoints)

License

MIT

