
LLM Sniffer

LLM Sniffer is an OpenAI-compatible reverse proxy with a built-in request/response inspector.

Features

  • Reverse Proxy: OpenAI-compatible API proxy
  • Request/Response Inspector: Monitor and inspect all LLM requests and responses
  • SSE Support: Server-Sent Events for streaming responses
  • Modern UI: Clean web interface for inspecting traffic
  • Multi-Upstream Support: Configure multiple LLM backends
  • Dynamic Configuration: Switch between upstreams via command line or config

Installation

pip install llm-sniffer

Quick Start

Start the proxy server with default settings:

llm-sniffer

Configure your LLM client to use:

http://127.0.0.1:7654/v1

Then open http://127.0.0.1:7655 in your browser to inspect requests.
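Because the proxy speaks the OpenAI API, any OpenAI-compatible client only needs its base URL changed. As a minimal sketch using just the standard library (the model name is a placeholder for whatever your upstream serves; the actual HTTP send is commented out since it requires a running proxy):

```python
import json

# The proxy's OpenAI-compatible endpoint (default --proxy-port is 7654).
base_url = "http://127.0.0.1:7654/v1"
endpoint = f"{base_url}/chat/completions"

# A minimal chat completion body; "my-model" is a placeholder for the
# model name your configured upstream actually serves.
body = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "Hello"}],
}
payload = json.dumps(body).encode()

# To actually send it (requires a running proxy):
#   import urllib.request
#   req = urllib.request.Request(endpoint, data=payload,
#                                headers={"Content-Type": "application/json"})
#   resp = urllib.request.urlopen(req)
```

Every request sent through this endpoint then appears in the inspector UI at http://127.0.0.1:7655.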

Command Line Options

Proxy Server Options

llm-sniffer [OPTIONS]

Options:
  --upstream-url URL     Direct upstream URL (highest priority)
  --upstream-name NAME   Use upstream from configuration file
  --upstream NAME/URL    Upstream name or URL (deprecated)
  --proxy-port PORT      Proxy service port (default: 7654)
  --ui-port PORT         UI service port (default: 7655)
  --max-records N        Maximum number of records to keep (default: 200)
  --think on|off         Enable/disable thinking mode (default: on)
  --host ADDRESS         Bind address (default: 127.0.0.1)
  --params JSON          Parameters to inject into each request body
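The `--params` option injects a JSON object into each request body before it is forwarded upstream. One plausible way to picture this (a sketch, not the project's actual code; whether injected keys overwrite values already present in the request is an assumption here) is a shallow merge:

```python
import json

def inject_params(body_bytes: bytes, params_json: str) -> bytes:
    """Merge injected parameters into an incoming JSON request body.

    In this sketch, keys from --params overwrite keys already present
    in the request, e.g. --params '{"temperature": 0.7}'.
    """
    body = json.loads(body_bytes)
    body.update(json.loads(params_json))
    return json.dumps(body).encode()

merged = inject_params(
    b'{"model": "m", "temperature": 1.0}',
    '{"temperature": 0.7}',
)
```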

Configuration Management

# Initialize default config file
llm-sniffer config init

# List all configured upstreams
llm-sniffer config list

# Set active upstream
llm-sniffer config set kimi

# Add new upstream
llm-sniffer config add myserver --url http://localhost:8080 --description "My Server"

# Remove upstream
llm-sniffer config remove myserver

# Print config file path
llm-sniffer config path

Configuration File

Default config path: ~/.llm_sniffer/config.yaml

upstreams:
  local:
    url: http://127.0.0.1:8000
    api_key: ""
    description: Local LLM server (vLLM, Ollama, etc.)
  openai:
    url: https://api.openai.com
    api_key: ""
    description: OpenAI API
  qwen:
    url: https://dashscope.aliyuncs.com/compatible-mode
    api_key: ""
    description: Qwen (Alibaba Cloud)
  kimi:
    url: https://api.moonshot.cn
    api_key: ""
    description: Kimi (Moonshot AI)

active_upstream: local
proxy_port: 7654
ui_port: 7655
max_records: 200
think: on
host: 127.0.0.1

Upstream Selection Priority

  1. --upstream-url (command line) - Highest priority
  2. --upstream-name (command line)
  3. --upstream (command line) - Deprecated
  4. active_upstream in config file - Lowest priority
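This priority order can be sketched as a small resolver. The helper below is hypothetical (the actual implementation may differ), but it follows the documented rules, including the deprecated `--upstream` flag accepting either a configured name or a raw URL:

```python
def resolve_upstream(upstream_url=None, upstream_name=None,
                     upstream=None, config=None):
    """Pick the upstream URL following the documented priority order."""
    config = config or {}
    upstreams = config.get("upstreams", {})
    if upstream_url:                        # 1. --upstream-url wins outright
        return upstream_url
    if upstream_name:                       # 2. --upstream-name looks up the config
        return upstreams[upstream_name]["url"]
    if upstream:                            # 3. deprecated --upstream: name or raw URL
        if upstream in upstreams:
            return upstreams[upstream]["url"]
        return upstream
    active = config.get("active_upstream")  # 4. fall back to the config default
    return upstreams[active]["url"]

config = {
    "upstreams": {
        "local": {"url": "http://127.0.0.1:8000"},
        "kimi": {"url": "https://api.moonshot.cn"},
    },
    "active_upstream": "local",
}
```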

Examples

# Use specific upstream URL
llm-sniffer --upstream-url http://localhost:8080

# Use upstream from config
llm-sniffer --upstream-name kimi

# Custom ports
llm-sniffer --proxy-port 8080 --ui-port 8081

# Start with custom upstream and inject parameters
llm-sniffer --upstream-name openai --params '{"temperature": 0.7}'

License

MIT License
