
Project description

SiteLLM = LiteLLM + Chinese Ecosystem + Security

A one-stop open-source AI governance hub with four core capabilities: a security foundation (Security), intelligent operations (Intelligence), end-to-end traceability (Trace), and ecosystem collaboration (Ecosystem), letting enterprises easily onboard and manage all of their AI assets.

Architecture

(architecture diagram)

UI

(Web UI screenshot)

Use SiteLLM for

LLMs - Call 100+ LLMs (Python SDK + AI Gateway)

All Supported Endpoints - /chat/completions, /responses, /embeddings, /images, /audio, /batches, /rerank, /a2a, /messages and more.

AI Gateway (Proxy Server)

Getting Started - E2E Tutorial - Set up virtual keys and make your first request

pip install sitellm
sitellm --model gpt-4o   # starts the gateway on http://0.0.0.0:4000

import openai

# Point the standard OpenAI client at the local gateway
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
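
Other endpoints from the list above go through the same gateway; for example /embeddings (a sketch that assumes an embedding model such as text-embedding-3-small is configured on the gateway):

import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# Same gateway, different endpoint; the model name here is an assumption
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input=["Hello!"],
)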

Docs: LLM Providers

Agents - Invoke A2A Agents (Python SDK + AI Gateway)

Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI

Python SDK - A2A Protocol

from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams
from uuid import uuid4

# Point the client at a running A2A agent
client = A2AClient(base_url="http://localhost:10001")

# Build the message payload in the A2A format
request = SendMessageRequest(
    id=str(uuid4()),
    params=MessageSendParams(
        message={
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello!"}],
            "messageId": uuid4().hex,
        }
    )
)
response = await client.send_message(request)
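
Since send_message is a coroutine, the snippet needs an event loop when run as a script; a minimal runnable wrapper:

import asyncio
from uuid import uuid4

from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams

async def main():
    # Same call as above, wrapped so that `await` is legal
    client = A2AClient(base_url="http://localhost:10001")
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        ),
    )
    print(await client.send_message(request))

asyncio.run(main())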

AI Gateway (Proxy Server)

Step 1. Add your Agent to the AI Gateway
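
Agents are registered on the gateway side (Admin UI or proxy config). A hypothetical config.yaml entry is sketched below; the key names are assumptions, so check the A2A Agent Gateway docs for the exact schema:

# Hypothetical config.yaml sketch; key names are assumptions
a2a_agents:
  - agent_name: my-agent                    # exposed at /a2a/my-agent
    agent_card_url: http://localhost:10001  # where the agent is running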

Step 2. Call Agent via A2A SDK

from a2a.client import A2ACardResolver, A2AClient
from a2a.types import MessageSendParams, SendMessageRequest
from uuid import uuid4
import httpx

base_url = "http://localhost:4000/a2a/my-agent"  # LiteLLM proxy + agent name
headers = {"Authorization": "Bearer sk-1234"}    # LiteLLM Virtual Key

async with httpx.AsyncClient(headers=headers) as httpx_client:
    resolver = A2ACardResolver(httpx_client=httpx_client, base_url=base_url)
    agent_card = await resolver.get_agent_card()
    client = A2AClient(httpx_client=httpx_client, agent_card=agent_card)

    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        )
    )
    response = await client.send_message(request)

Docs: A2A Agent Gateway

MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)

Python SDK - MCP Bridge

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from litellm import experimental_mcp_client
import litellm

server_params = StdioServerParameters(command="python", args=["mcp_server.py"])

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Load MCP tools in OpenAI format
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")

        # Use with any LiteLLM model
        response = await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "What's 3 + 5?"}],
            tools=tools
        )
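
If the model answers with a tool call, the experimental MCP client can execute it on the same session (a short continuation; it assumes the code still runs inside the ClientSession block above):

# Inside the same ClientSession block as above:
# execute the first tool call the model returned against the MCP server
openai_tool = response.choices[0].message.tool_calls[0]
call_result = await experimental_mcp_client.call_openai_tool(
    session=session,
    openai_tool=openai_tool,
)
print(call_result)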

AI Gateway - MCP Gateway

Step 1. Add your MCP Server to the AI Gateway
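
With a LiteLLM-style proxy config this is a YAML entry; a minimal sketch (the GitHub MCP URL is illustrative):

# config.yaml: register an MCP server with the gateway
mcp_servers:
  github_mcp:
    url: "https://api.githubcopilot.com/mcp"  # illustrative endpoint
    transport: "http"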

Step 2. Call MCP tools via /chat/completions

curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [{
      "type": "mcp",
      "server_url": "litellm_proxy/mcp/github",
      "server_label": "github_mcp",
      "require_approval": "never"
    }]
  }'

Use with Cursor IDE

{
  "mcpServers": {
    "LiteLLM": {
      "url": "http://localhost:4000/mcp/",
      "headers": {
        "x-litellm-api-key": "Bearer sk-1234"
      }
    }
  }
}

Docs: MCP Gateway


How to use LiteLLM

You can use LiteLLM through either the Proxy Server or the Python SDK. Both give you a unified interface to access 100+ LLMs. Choose the option that best fits your needs:

LiteLLM AI Gateway
  • Use case: a central service (LLM gateway) to access multiple LLMs
  • Who uses it: Gen AI enablement / ML platform teams
  • Key features: centralized API gateway with authentication and authorization; multi-tenant cost tracking and spend management per project/user; per-project customization (logging, guardrails, caching); virtual keys for secure access control; admin dashboard UI for monitoring and management

LiteLLM Python SDK
  • Use case: use LiteLLM directly in your Python code
  • Who uses it: developers building LLM projects
  • Key features: direct Python library integration in your codebase; Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI); application-level load balancing and cost tracking; exception handling with OpenAI-compatible errors; observability callbacks (Lunary, MLflow, Langfuse, etc.)
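
For example, the SDK's Router pools multiple deployments behind one model name and handles retries/fallbacks between them (a minimal sketch; the keys and the Azure endpoint are placeholders):

from litellm import Router

# Two deployments served under one model name; the Router retries and
# falls back between them (keys and endpoint below are placeholders)
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-..."},
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "...",
                "api_base": "https://my-endpoint.openai.azure.com",
            },
        },
    ],
    num_retries=2,
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)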

LiteLLM Performance: 8ms P95 latency at 1k RPS (see the benchmarks in the docs)

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

Stable Release: Use Docker images with the -stable tag. These undergo 12-hour load tests before being published. See the release cycle docs for more information.
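
For example, assuming the upstream LiteLLM image naming (adjust the registry and image name for SiteLLM builds):

# Pull a load-tested stable image
docker pull ghcr.io/berriai/litellm:main-stable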

Need support for more providers? If a provider or LLM platform is missing, raise a feature request.

Supported Providers (Supported Models | Docs)

Each provider supports a subset of the following endpoints: /chat/completions, /messages, /responses, /embeddings, /image/generations, /audio/transcriptions, /audio/speech, /moderations, /batches, /rerank. See the docs for per-provider details.
Abliteration (abliteration)
AI/ML API (aiml)
AI21 (ai21)
AI21 Chat (ai21_chat)
Aleph Alpha
Amazon Nova
Anthropic (anthropic)
Anthropic Text (anthropic_text)
Anyscale
AssemblyAI (assemblyai)
Auto Router (auto_router)
AWS - Bedrock (bedrock)
AWS - Sagemaker (sagemaker)
Azure (azure)
Azure AI (azure_ai)
Azure Text (azure_text)
Baseten (baseten)
Bytez (bytez)
Cerebras (cerebras)
Clarifai (clarifai)
Cloudflare AI Workers (cloudflare)
Codestral (codestral)
Cohere (cohere)
Cohere Chat (cohere_chat)
CometAPI (cometapi)
CompactifAI (compactifai)
Custom (custom)
Custom OpenAI (custom_openai)
Dashscope (dashscope)
Databricks (databricks)
DataRobot (datarobot)
Deepgram (deepgram)
DeepInfra (deepinfra)
Deepseek (deepseek)
ElevenLabs (elevenlabs)
Empower (empower)
Fal AI (fal_ai)
Featherless AI (featherless_ai)
Fireworks AI (fireworks_ai)
FriendliAI (friendliai)
Galadriel (galadriel)
GitHub Copilot (github_copilot)
GitHub Models (github)
Google - PaLM
Google - Vertex AI (vertex_ai)
Google AI Studio - Gemini (gemini)
GradientAI (gradient_ai)
Groq AI (groq)
Heroku (heroku)
Hosted VLLM (hosted_vllm)
Huggingface (huggingface)
Hyperbolic (hyperbolic)
IBM - Watsonx.ai (watsonx)
Infinity (infinity)
Jina AI (jina_ai)
Lambda AI (lambda_ai)
Lemonade (lemonade)
LiteLLM Proxy (litellm_proxy)
Llamafile (llamafile)
LM Studio (lm_studio)
Maritalk (maritalk)
Meta - Llama API (meta_llama)
Mistral AI API (mistral)
Moonshot (moonshot)
Morph (morph)
Nebius AI Studio (nebius)
NLP Cloud (nlp_cloud)
Novita AI (novita)
Nscale (nscale)
Nvidia NIM (nvidia_nim)
OCI (oci)
Ollama (ollama)
Ollama Chat (ollama_chat)
Oobabooga (oobabooga)
OpenAI (openai)
OpenAI-like (openai_like)
OpenRouter (openrouter)
OVHCloud AI Endpoints (ovhcloud)
Perplexity AI (perplexity)
Petals (petals)
Predibase (predibase)
Recraft (recraft)
Replicate (replicate)
Sagemaker Chat (sagemaker_chat)
Sambanova (sambanova)
Snowflake (snowflake)
Text Completion Codestral (text-completion-codestral)
Text Completion OpenAI (text-completion-openai)
Together AI (together_ai)
Topaz (topaz)
Triton (triton)
V0 (v0)
Vercel AI Gateway (vercel_ai_gateway)
VLLM (vllm)
Volcengine (volcengine)
Voyage AI (voyage)
WandB Inference (wandb)
Watsonx Text (watsonx_text)
xAI (xai)
Xinference (xinference)

Read the Docs


Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution


sitellm-0.0.9-py3-none-any.whl (16.1 MB, Python 3)

File details

Details for the file sitellm-0.0.9-py3-none-any.whl.

File metadata

  • Download URL: sitellm-0.0.9-py3-none-any.whl
  • Upload date:
  • Size: 16.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for sitellm-0.0.9-py3-none-any.whl:

  • SHA256: 7f7306f55e5cfb0d181176298494a169ba5e7f29a1e741e1231190f1469a7f4a
  • MD5: 7a1cb77c3f2b3918daba369c9c4e7fad
  • BLAKE2b-256: b4aa144a2b4dc18298149ded5907e15e236a22f557ac4419db3de02ffbd95f0d
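
To verify a downloaded wheel against the published SHA256, for example:

import hashlib
from pathlib import Path

# Compare the local wheel's SHA256 against the published digest
expected = "7f7306f55e5cfb0d181176298494a169ba5e7f29a1e741e1231190f1469a7f4a"
digest = hashlib.sha256(Path("sitellm-0.0.9-py3-none-any.whl").read_bytes()).hexdigest()
assert digest == expected, "hash mismatch"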

