
LLM Sniffer

LLM Sniffer is an OpenAI-compatible reverse proxy with a built-in request/response inspector.

Features

  • Reverse Proxy: OpenAI-compatible API proxy
  • Request/Response Inspector: Monitor and inspect all LLM requests and responses
  • SSE Support: Server-Sent Events for streaming responses
  • Modern UI: Clean web interface for inspecting traffic
  • Multi-Upstream Support: Configure multiple LLM backends
  • Dynamic Configuration: Switch between upstreams via command line or config
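
Streamed responses are relayed as Server-Sent Events. The sketch below, which assumes the standard OpenAI-style `data: ...` framing terminated by a `data: [DONE]` sentinel, shows how a client might reassemble the streamed text the proxy forwards:

```python
import json

def parse_sse_chunks(raw: str):
    """Parse OpenAI-style SSE text into JSON events.

    Assumes the standard `data: ...` framing used by OpenAI-compatible
    streaming endpoints, ending with a `data: [DONE]` sentinel.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        events.append(json.loads(payload))
    return events

# A stream fragment as the proxy would relay it:
raw = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "lo"}}]}\n\n'
    'data: [DONE]\n\n'
)
text = "".join(e["choices"][0]["delta"]["content"] for e in parse_sse_chunks(raw))
# text == "Hello"
```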

Installation

pip install llm-sniffer

Quick Start

Start the proxy server with default settings:

llm-sniffer

Point your LLM client's API base URL at:

http://127.0.0.1:7654/v1

Then open http://127.0.0.1:7655 in your browser to inspect requests.
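
For example, a stdlib-only sketch of a chat-completions call routed through the proxy (the model name and API key are placeholders; use values your upstream accepts):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:7654/v1"  # the proxy's OpenAI-compatible endpoint

# Build a standard chat-completions request; the proxy forwards it to the
# active upstream and records it for the inspector UI on port 7655.
body = {
    "model": "gpt-4o-mini",  # placeholder; use a model your upstream serves
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer sk-..."},  # placeholder key
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment with the proxy running
```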

Command Line Options

Proxy Server Options

llm-sniffer [OPTIONS]

Options:
  --upstream-url URL     Direct upstream URL (highest priority)
  --upstream-name NAME   Use upstream from configuration file
  --upstream NAME/URL    Upstream name or URL (deprecated)
  --proxy-port PORT      Proxy service port (default: 7654)
  --ui-port PORT         UI service port (default: 7655)
  --max-records N        Maximum number of records to keep (default: 200)
  --think on|off         Enable/disable thinking mode (default: on)
  --host ADDRESS         Bind address (default: 127.0.0.1)
  --params JSON          Parameters to inject into each request body
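
As a rough sketch of what `--params` presumably does (the exact merge semantics, including whether injected keys override client-supplied ones, are an assumption), each request body is patched with the given JSON object before being forwarded:

```python
import json

def inject_params(body_bytes: bytes, params_json: str) -> bytes:
    """Merge a --params JSON object into a request body.

    Assumed behavior: injected keys override any keys the client sent.
    """
    body = json.loads(body_bytes)
    body.update(json.loads(params_json))
    return json.dumps(body).encode()

original = b'{"model": "kimi-k2", "messages": []}'
patched = inject_params(original, '{"temperature": 0.7}')
# json.loads(patched) now carries temperature alongside the original fields
```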

Configuration Management

# Initialize default config file
llm-sniffer config init

# List all configured upstreams
llm-sniffer config list

# Set active upstream
llm-sniffer config set kimi

# Add new upstream
llm-sniffer config add myserver --url http://localhost:8080 --description "My Server"

# Remove upstream
llm-sniffer config remove myserver

# Print config file path
llm-sniffer config path

Configuration File

Default config path: ~/.llm_sniffer/config.yaml

upstreams:
  local:
    url: http://127.0.0.1:8000
    api_key: ""
    description: Local LLM server (vLLM, Ollama, etc.)
  openai:
    url: https://api.openai.com
    api_key: ""
    description: OpenAI API
  qwen:
    url: https://dashscope.aliyuncs.com/compatible-mode
    api_key: ""
    description: Qwen (Alibaba Cloud)
  kimi:
    url: https://api.moonshot.cn
    api_key: ""
    description: Kimi (Moonshot AI)

active_upstream: local
proxy_port: 7654
ui_port: 7655
max_records: 200
think: on
host: 127.0.0.1

Upstream Selection Priority

  1. --upstream-url (command line) - Highest priority
  2. --upstream-name (command line)
  3. --upstream (command line) - Deprecated
  4. active_upstream in config file - Lowest priority
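
The selection order above can be sketched as a simple first-match resolver (parameter names mirror the CLI flags; this is an illustration, not the tool's actual implementation):

```python
def resolve_upstream(upstream_url=None, upstream_name=None,
                     upstream=None, config_active=None):
    """Return the first upstream set, in documented priority order."""
    for choice in (upstream_url, upstream_name, upstream, config_active):
        if choice:
            return choice
    raise SystemExit("no upstream configured")

# --upstream-url wins even when the config file names an active upstream:
assert resolve_upstream(upstream_url="http://localhost:8080",
                        config_active="local") == "http://localhost:8080"
# With no flags given, the config file's active_upstream is used:
assert resolve_upstream(config_active="local") == "local"
```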

Examples

# Use specific upstream URL
llm-sniffer --upstream-url http://localhost:8080

# Use upstream from config
llm-sniffer --upstream-name kimi

# Custom ports
llm-sniffer --proxy-port 8080 --ui-port 8081

# Start with custom upstream and inject parameters
llm-sniffer --upstream-name openai --params '{"temperature": 0.7}'

License

MIT License
