
Your Personal AI Cloud -- intelligent proxy, router, and cache for LLMs

Project description

LLMHosts.com


Your hardware. Real AI infrastructure. From anywhere.

LLMHosts turns your local GPU into production AI infrastructure with intelligent routing, verified caching, and global access. The CLI proxy makes your Ollama/vLLM OpenAI-compatible. The SaaS platform at llmhosts.com provides cost tracking, plan management, and team features.

Two ways to use it:

  • Self-hosted CLI — pip install llmhosts and run on your own hardware (free, open source)
  • SaaS Platform — Sign up at llmhosts.com for cloud cost tracking, API key management, and team features

SaaS Platform (llmhosts.com)

Track your AI spending, manage API keys, and get real-time savings projections.

Live at: https://llmhosts.com

Features:

  • 📊 Cost tracking across OpenAI, Anthropic, Google AI, AWS Bedrock, Azure
  • 🔑 API key management with plan-based limits
  • 📈 12-month spending projections with confidence scoring
  • 💰 Real-time savings estimates (35% with intelligent caching + routing)
  • 🎯 Gamified achievements for cost milestones
  • 💳 Stripe-powered billing (Pro $29/mo, Team $99/mo, Enterprise $299/mo)
  • 👥 Team management (coming soon)

Quick Start:

  1. Sign up at llmhosts.com
  2. Add your first cost entry
  3. Generate an API key for the CLI proxy
  4. Connect your self-hosted LLMHosts proxy to track usage

Self-Hosted CLI

Run the intelligent proxy on your own hardware.

pip install llmhosts
llmhosts serve

Point any OpenAI-compatible tool at http://localhost:4000/v1. Your tools now use your local GPU. Cost: $0.
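
For example, the official openai Python client works unchanged once base_url points at the proxy; the model name below is a placeholder for whatever your Ollama/vLLM backend actually serves.

# Minimal sketch: any OpenAI SDK works once base_url points at the proxy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",
    api_key="anything",  # local mode accepts any key
)
response = client.chat.completions.create(
    model="llama3.1",  # placeholder: use a model your local backend serves
    messages=[{"role": "user", "content": "Hello from my own GPU"}],
)
print(response.choices[0].message.content)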


Why LLMHosts?

  • Cloud bills add up — Route Cursor, Claude Code, and Aider to your local GPU instead. Same tools, zero API spend.
  • Your hardware, your control — All inference runs on your machine. No data leaves your network unless you choose.
  • Works anywhere — llmhosts tunnel auto-detects Tailscale or Cloudflare and creates a secure tunnel. Your home GPU becomes your portable AI.

Features

  • Proxy — OpenAI + Anthropic compatible API on port 4000. Drop-in for any client (example below).
  • Router — Three-tier: rules first, then kNN similarity, then ModernBERT classifier. Routes each request to the right model.
  • Cache — Three-tier vCache: exact hash, entity namespace, verified semantic. Cuts repeat calls to zero.
  • Tunnel — llmhosts tunnel auto-detects Tailscale or Cloudflare. Your GPU on your laptop, anywhere.
  • Dashboard — TUI (terminal) + web UI at /dashboard. Live request flow, cache stats, model health.
  • BYOK — Bring your own cloud keys. Fall back to OpenAI/Anthropic when local models can't handle a request.
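
Since the Proxy item above says the API is also Anthropic compatible, a sketch like the following should work with the official anthropic client; the exact base URL and the model name are assumptions, not documented values.

# Sketch of the Anthropic-compatible side, assuming the proxy serves the
# Messages API on the same host/port; the model name is a placeholder.
from anthropic import Anthropic

client = Anthropic(base_url="http://localhost:4000", api_key="anything")
message = client.messages.create(
    model="qwen2.5",  # placeholder local model
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this repo in one line."}],
)
print(message.content[0].text)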

Quick Start

Install

Three tiers, pick what you need:

pip install llmhosts              # Core (~50MB) — proxy, router, dashboard
pip install "llmhosts[smart]"     # Smart (~150MB) — + ML router, semantic cache
pip install "llmhosts[full]"      # Full (~2GB) — + PyTorch, full intelligence

Docker:

docker run -p 4000:4000 llmhost/llmhost
# GPU: docker run --gpus all -p 4000:4000 llmhost/llmhost

Start the Proxy

llmhosts serve

Starts the proxy on http://localhost:4000, auto-discovers Ollama, loads BYOK keys, and launches the TUI dashboard. Web dashboard at http://localhost:4000/dashboard.
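
To confirm the proxy is up, list the models it exposes through the standard OpenAI-style endpoint (assuming the proxy implements /v1/models, as OpenAI-compatible servers generally do):

# Quick check that the proxy is serving and has discovered backends.
# Assumes the standard OpenAI-compatible /v1/models listing endpoint.
import requests

resp = requests.get(
    "http://localhost:4000/v1/models",
    headers={"Authorization": "Bearer anything"},  # any key works locally
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])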

Access from Anywhere

The differentiator: make your home GPU reachable from your laptop, phone, or office.

llmhosts tunnel

Auto-detects Tailscale or Cloudflare and creates a secure tunnel. Prints a URL — use it from any device. No VPN config, no port forwarding.

llmhosts tunnel --provider tailscale --funnel   # Public HTTPS via Tailscale Funnel
llmhosts tunnel status                          # Check tunnel status
llmhosts tunnel stop                             # Stop active tunnel
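
From a remote device the client code is identical; only the base URL changes to whatever llmhosts tunnel printed. The URL below is a placeholder.

# Same client, tunnel URL instead of localhost.
# "https://your-tunnel-host.example" stands in for the URL llmhosts tunnel prints.
from openai import OpenAI

client = OpenAI(base_url="https://your-tunnel-host.example/v1", api_key="anything")
reply = client.chat.completions.create(
    model="llama3.1",  # placeholder
    messages=[{"role": "user", "content": "Ping my home GPU"}],
)
print(reply.choices[0].message.content)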

Works With Everything

Every tool that speaks OpenAI format works. Just set the base URL:

export OPENAI_API_BASE=http://localhost:4000/v1
# Some tools use: export OPENAI_BASE_URL=http://localhost:4000/v1
export OPENAI_API_KEY=anything   # LLMHosts accepts any key for local mode
  • Cursor — Settings > Models > Custom endpoint: http://localhost:4000/v1
  • Claude Code — Set OPENAI_API_BASE or configure the base URL in settings
  • Aider — aider --api-base http://localhost:4000/v1
  • Continue.dev — Add an OpenAI-compatible provider with base URL http://localhost:4000/v1
  • Open WebUI — Set the OpenAI API URL to http://localhost:4000/v1
  • Any OpenAI client — base_url="http://localhost:4000/v1" in the client config
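
With OPENAI_BASE_URL and OPENAI_API_KEY exported as above, recent versions of the openai Python SDK pick both up from the environment, so client code needs no explicit endpoint at all; streaming is shown here purely as an illustration, with a placeholder model name.

# No explicit configuration: the openai SDK reads OPENAI_API_KEY and
# OPENAI_BASE_URL from the environment. Streaming works as usual.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="llama3.1",  # placeholder: use a model your backend serves
    messages=[{"role": "user", "content": "Write a haiku about local GPUs."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()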

Architecture

Request  →  Proxy (4000)  →  Router  →  vCache  →  Backend
                │              │          │
                │              ├─ Tier 1: Rules
                │              ├─ Tier 2: kNN (FAISS + all-MiniLM)
                │              └─ Tier 3: ModernBERT → Qwen-0.5B
                │
                ├─ Cache: exact hash → namespace → semantic (vCache)
                │
                └─ Backend: Ollama | Cloud API (BYOK)
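
To make the cascade concrete, here is an illustrative sketch of the request flow in the diagram. This is not the actual LLMHosts implementation; every helper, threshold, and model name below is a hypothetical stand-in with toy logic.

# Toy sketch of the cache-then-route flow above; not LLMHosts' real code.
import hashlib

_cache: dict[str, str] = {}  # stands in for tier 1 (exact hash) of the vCache

def cache_key(prompt: str) -> str:
    return hashlib.sha256(prompt.encode()).hexdigest()

def route(prompt: str) -> str:
    """Pick a backend model: cheap rules first, heavier tiers as fallback."""
    if len(prompt) < 200:        # Tier 1: a trivial hand-written rule
        return "local:llama3.1"  # placeholder model names
    # Tier 2 (kNN over embeddings) and Tier 3 (classifier) would sit here;
    # this sketch just falls back to a bigger model.
    return "cloud:gpt-4o-mini"

def handle(prompt: str, call_backend) -> str:
    key = cache_key(prompt)
    if key in _cache:            # cache hit: no backend call at all
        return _cache[key]
    model = route(prompt)        # router picks the backend
    response = call_backend(model, prompt)
    _cache[key] = response       # store for future exact-hash hits
    return response

# Usage with a fake backend:
print(handle("hello", lambda model, p: f"[{model}] echo: {p}"))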

Commands

  • llmhosts serve — Start proxy + dashboard
  • llmhosts tunnel — Start secure tunnel (Tailscale/Cloudflare auto-detect)
  • llmhosts tunnel status — Show tunnel status
  • llmhosts tunnel stop — Stop active tunnel
  • llmhosts doctor — Verify setup and dependencies
  • llmhosts setup — Interactive first-run wizard
  • llmhosts keys add <provider> <key> — Add a BYOK API key
  • llmhosts keys list — List configured providers
  • llmhosts keys validate — Validate stored keys
  • llmhosts cache stats — Cache hit rates and size
  • llmhosts cache clear — Clear the cache
  • llmhosts suggest-models — Recommend models for your hardware

Dashboard

  • TUI — Built-in terminal UI when you run llmhosts serve. Live request flow, backends, cache activity.
  • Web — Browser dashboard at http://localhost:4000/dashboard. Request history, cache stats, model health.

Configuration

  • TOML — ~/.config/llmhost/config.toml or --config path/to/config.toml
  • Env — LLMHOST_* prefixed variables
  • CLI — --host, --port, --no-tui, --log-level

Development

docker compose run --rm dev    # development container
pip install -e ".[dev]"        # editable install with dev extras
llmhosts --version             # verify the CLI is installed
pytest tests/ -v               # run the test suite

Contributing

PRs welcome. Open an issue first for large changes. Run pytest tests/ and ruff check . before submitting.


License

MIT




Download files

Download the file for your platform.

Source Distribution

llmhosts-0.1.0.tar.gz (11.3 MB)

Built Distribution


llmhosts-0.1.0-py3-none-any.whl (437.0 kB)

File details

Details for the file llmhosts-0.1.0.tar.gz.

File metadata

  • Download URL: llmhosts-0.1.0.tar.gz
  • Upload date:
  • Size: 11.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for llmhosts-0.1.0.tar.gz
Algorithm Hash digest
SHA256 188716ef329c76dd64e99383c78a45a1b150b9ff7fd0af0fd74e6fcf9de2a6e8
MD5 a003bd821abda62573795f4ffe483cfb
BLAKE2b-256 753d46d3e6485d947d7bd958fdb239b7f63a286e94d925c84c96813e85e01f1d

See more details on using hashes here.

File details

Details for the file llmhosts-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llmhosts-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 437.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for llmhosts-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 2521968b7f3a68ca5cbcf5ceef55993199bd4d118c816b516724f74b86280e32
MD5 18a75c4886287de863d897949174b316
BLAKE2b-256 965cc8b92aa2a384cc2c63ffb40d15227f6c217880aa7c39b1d98ef512a116f0

See more details on using hashes here.
