
Intelligent AI proxy with multi-provider routing, semantic caching, and delta context buffers


Creating a project in proxym

img.png

Project list view

img_1.png

Proxym — Intelligent Multi-Provider LLM Gateway

A local proxy that combines 10 providers, 15 models, NVIDIA Jetson Orin, and a delta context buffer behind a single OpenAI-compatible API. Budget of $20–60/month instead of $150+.

┌─────────────────────────────────────────────────┐
│  IDE (Roo Code / Cline / Continue.dev / Aider)  │
│           ↓ localhost:4000                      │
├─────────────────────────────────────────────────┤
│              Proxym (FastAPI)                   │
│  ┌─────────┐ ┌────────────┐ ┌───────────────┐   │
│  │Analyzer │→│  Router    │→│  LiteLLM      │   │
│  │(tier+   │ │(cost+      │ │(10 providers  │   │
│  │ caps)   │ │ fallbacks) │ │ 15 models)    │   │
│  └─────────┘ └────────────┘ └───────────────┘   │
│       ↑            ↑              ↑             │
│  Delta Buffer  Redis Cache  Budget Ledger       │
├─────────────────────────────────────────────────┤
│  Ollama (Jetson Orin / GPU / CPU)               │
└─────────────────────────────────────────────────┘

Features

  • Content-based routing — analyzes your prompt to pick the cheapest model that can handle the task (Opus 4.6 for architecture, Haiku 4.5 for typos)
  • 10 providers, 15 models — Anthropic, OpenAI, Google, DeepSeek, Groq, OpenRouter, Mistral, Together, Fireworks, Cerebras + local Ollama
  • Delta context buffer — watches code2llm output and sends only file diffs, not full context (saves 60–80% tokens)
  • Budget enforcement — daily/monthly USD limits with per-request caps
  • Fallback chains — if Anthropic is rate-limited, auto-fallback to OpenAI → DeepSeek → local
  • OpenAI-compatible API — drop-in replacement for any tool expecting OpenAI format
  • Voice Chat Interface — natural language management via DSL + LLM fallback, optional STT/TTS
  • MCP Self-Server — exposes proxym management as LLM tools at /mcp/self/tools/*
  • Docker + Podman + Quadlet — development, production, and systemd-native deployments
  • SQLite persistence — all dashboard data (tickets, accounts, projects, environments, users) persists to a single SQLite file; configurable via PROXYM_DATA_DIR
  • Tool dispatch & LLM fallback — assign AI tools (aider, claude-code, etc.) to tickets; when binaries are unavailable, auto-fallback to LLM execution via OpenRouter
  • Dashboard UI — 16-tab React dashboard at /dashboard-ui for projects, tickets, tools, environments, users, and observability

Quick Start

Option A: Local Python

git clone https://github.com/wronai/proxym && cd proxym
bash scripts/setup.sh
# Edit .env with your API keys
proxym serve

Option B: Docker Compose

cp .env.example .env
# Edit .env with your API keys
docker compose up -d

All dashboard data persists to the proxym-data Docker volume (/app/data/proxym.sqlite).

Option C: Jetson Orin

docker build -f Dockerfile.jetson -t proxym:jetson .
docker run --runtime nvidia --gpus all \
  -p 4000:4000 -p 11434:11434 \
  --env-file .env \
  proxym:jetson

Test it

curl http://localhost:4000/health

curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-proxy-local-dev" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "balanced",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
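
The same request from Python, as a minimal sketch assuming the openai client package is installed (pip install openai) and that you use the dev key shown above (substitute your own master key from .env):

from openai import OpenAI

# Point the standard OpenAI client at the local proxy.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-proxy-local-dev")

response = client.chat.completions.create(
    model="balanced",  # alias resolved by the proxy's router
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)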

Model Routing Strategy

The proxy analyzes each prompt and picks the optimal model:

Task Type | Tier | Model Selected | Cost/1M tokens
"Fix this typo" | trivial | Cerebras Llama 70B / DeepSeek V3 | $0.27–$0.60
"What does this function do?" | operational | Haiku 4.5 / Gemini Flash | $0.15–$1.00
"Implement a REST endpoint" | standard | Gemini 3 Flash Preview / GPT-4.1 | $0.50–$3.00
"Refactor auth across 20 files" | complex | Gemini 3 Flash Preview / Gemini Pro | $0.50–$10.00
"Debug this race condition step by step" | deep | Opus 4.6 / DeepSeek R1 | $0.55–$5.00

Model Aliases

Use these as the model parameter for explicit routing:

Alias | Routes To | When to Use
cheap | Haiku 4.5 | Debug, validation, simple Q&A
balanced | Gemini 3 Flash Preview | Default coding, implementation
premium | Opus 4.6 | Complex refactoring, architecture
free | Gemini 2.5 Flash | Planning, analysis (free tier)
local | Qwen 3B (Ollama) | Offline, privacy, autocomplete

Configuring Model Aliases

Aliases are configurable via environment variables (see .env.example):

PROXYM_ALIAS_CHEAP=anthropic:claude-3-5-haiku-20241022
PROXYM_ALIAS_BALANCED=google:gemini-3-flash-preview
PROXYM_ALIAS_PREMIUM=anthropic:claude-3-opus-20240229
PROXYM_ALIAS_FREE=google:gemini-1.5-flash

Format: provider:model-name or just provider (uses provider default). The corresponding API key must also be set (e.g., ANTHROPIC_API_KEY for anthropic provider).
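
As an illustration of that format, a hypothetical helper for splitting an alias value might look like the sketch below (the actual parsing lives in the proxy's config layer):

def parse_alias(value: str) -> tuple[str, str | None]:
    """Split 'provider:model-name' into (provider, model); model is None for a bare provider."""
    provider, _, model = value.partition(":")
    return provider, model or None

print(parse_alias("anthropic:claude-3-5-haiku-20241022"))  # ('anthropic', 'claude-3-5-haiku-20241022')
print(parse_alias("anthropic"))                            # ('anthropic', None)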

Automatic Routing

Without a model alias, the proxy analyzes your message:

# Automatically routes to cheap model
curl -X POST localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $KEY" \
  -d '{"messages": [{"role": "user", "content": "What is a for loop?"}]}'

# Automatically routes to premium model
curl -X POST localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $KEY" \
  -d '{"messages": [{"role": "user", "content": "Refactor the entire auth module to microservices"}]}'

# Force a tier with header
curl -X POST localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $KEY" \
  -H "X-Task-Tier: deep" \
  -d '{"messages": [{"role": "user", "content": "Why does this deadlock?"}]}'

Delta Context Buffer

The proxy maintains a buffer of your project files (from code2llm output) and sends only diffs to the LLM, dramatically reducing token usage.

Setup

# Terminal 1: Generate code2llm output
pip install code2llm
code2llm ./ -f all -o ./project --no-chunk

# Terminal 2: Start the watcher
proxym-watch --watch ./project --proxy http://localhost:4000

# Terminal 3: Query with context injection
curl -X POST localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $KEY" \
  -H "X-Inject-Context: true" \
  -d '{"messages": [{"role": "user", "content": "Explain the auth module"}]}'

How It Works

  1. code2llm generates project analysis files in ./project/
  2. proxym-watch watches the directory with watchfiles
  3. On change, it computes a unified diff against the last-sent snapshot
  4. Only changed portions are sent to the proxy as a <context_delta> block
  5. When you add X-Inject-Context: true, the delta is injected into the system prompt

Before (full context every request): ~120K tokens × $3/1M = $0.36/request
After (delta only): ~5K tokens × $3/1M = $0.015/request → 96% savings
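
A minimal sketch of the diff computation in step 3, using Python's standard difflib. The real buffer in src/proxym/cache keeps per-file snapshots and wraps the result in a <context_delta> block; the filenames and contents here are just examples.

import difflib

def compute_delta(previous: str, current: str, filename: str) -> str:
    """Return a unified diff between the last-sent snapshot and the current file contents."""
    diff_lines = difflib.unified_diff(
        previous.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile=f"a/{filename}",
        tofile=f"b/{filename}",
    )
    return "".join(diff_lines)

old = "def add(a, b):\n    return a + b\n"
new = "def add(a: int, b: int) -> int:\n    return a + b\n"
print(compute_delta(old, new, "math_utils.py"))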

IDE Integration

Roo Code

Settings → Provider: OpenAI Compatible

  • API Base: http://localhost:4000
  • API Key: (your master key from .env)
  • Sticky Models per mode:
    • Architect → free
    • Code → balanced
    • Debug → cheap
    • Custom Opus → premium

Cline / Continue.dev / Aider

Same pattern: point API base to http://localhost:4000 with your master key.

Deployment

Docker Compose (Development)

docker compose up -d          # proxy + redis + ollama
docker compose logs -f proxy  # watch logs

Docker Compose + Traefik (Production)

docker compose -f docker-compose.prod.yml up -d
# Access via https://proxym.local

Podman Quadlet (Systemd-native)

# Copy quadlet files
mkdir -p ~/.config/containers/systemd
cp quadlet/*.container quadlet/*.network ~/.config/containers/systemd/

# Build and tag image
podman build -t localhost/proxym:latest .

# Create config dir
mkdir -p ~/.config/proxym
cp .env ~/.config/proxym/.env

# Enable and start
systemctl --user daemon-reload
systemctl --user start proxym
systemctl --user status proxym

Jetson Orin

The Jetson Dockerfile bundles Ollama + Proxym in a single container:

docker build -f Dockerfile.jetson -t proxym:jetson .
docker run --runtime nvidia --gpus all \
  -p 4000:4000 -p 11434:11434 \
  -v ~/ollama:/root/.ollama \
  --env-file .env \
  proxym:jetson

Models available on Jetson Orin 8GB:

  • qwen2.5-coder:1.5b — autocomplete (~1GB, ~30 tok/s)
  • qwen2.5-coder:3b — code generation (~2GB, ~18 tok/s)
  • phi3:3.8b — general tasks (~2.5GB, ~15 tok/s)

API Reference

POST /v1/chat/completions

OpenAI-compatible. Extra features:

Header Description
X-Task-Tier Force tier: trivial|operational|standard|complex|deep
X-Inject-Context true to inject latest code2llm delta

Response includes _proxy metadata:

{
  "choices": [...],
  "_proxy": {
    "model_id": "google/gemini-3-flash-preview",
    "tier": "standard",
    "cost_usd": 0.000045,
    "routing_reason": "tier=standard, cost=$0.0000",
    "elapsed_ms": 1234.5,
    "fallback_index": 0
  }
}
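
Because _proxy is an extra field on an otherwise OpenAI-shaped response, the simplest way to read it is with a raw HTTP client. A sketch using the requests package, with the field names taken from the example above:

import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",
    headers={"Authorization": "Bearer sk-proxy-local-dev"},
    json={"model": "balanced", "messages": [{"role": "user", "content": "Hello!"}]},
    timeout=60,
)
body = resp.json()
meta = body.get("_proxy", {})
print(meta.get("model_id"), meta.get("tier"), meta.get("cost_usd"))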

GET /v1/models

List all available models with pricing and capabilities.

GET /v1/budget

Current spend vs. limits.

POST /v1/context/delta

Receive context delta from the watcher client.

GET /v1/context/stats

Delta buffer statistics.

Dashboard Guides

  • docs/DASHBOARD_VOICE_NOVNC.md

CLI Reference

# Server
proxym serve                    # start the proxy server
proxym serve --port 4001        # custom port

# Status & models
proxym status                   # system overview (costs, budget, VMs)
proxym models                   # list all available models with pricing

# Projects
proxym project scan ~/github/wronai  # auto-detect and register projects
proxym project list                  # list registered projects
proxym project add /path/to/project  # add a single project

# Tickets & task management
proxym ticket add "fix the router bug" -p proxym
proxym ticket add "add tests" -p proxym -t aider --priority high --type test
proxym ticket list              # list all tickets
proxym ticket board             # kanban board view

# One-command task dispatch
proxym do "napraw crash w /health" --on proxym
proxym do "refaktoruj router" --on proxym --with claude-code
proxym do "add tests" --dry-run # preview plan without executing
proxym fix "bug description" --on proxym       # shortcut for --type bug
proxym refactor "split module" --on proxym     # shortcut for --type refactor

# Tool management & job queue
proxym tools list               # registered AI tools
proxym tools dispatch --ticket TKT-001 --tool aider
proxym q                        # job queue status (shortcut)
proxym tools jobs --status done # filter jobs by status

# Accounts
proxym accounts list            # list all accounts
proxym accounts add --name Work --provider anthropic --api-key sk-ant-...
proxym accounts costs           # cost breakdown per account

# VMs (requires pip install proxym[vm])
proxym vm list                  # list VMs
proxym vm create --tool windsurf --account work-anthropic
proxym vm start windsurf-work   # start a VM
proxym vm open windsurf-work    # open SPICE viewer
proxym vm ssh windsurf-work     # SSH into VM
proxym vm switch windsurf-work --project other-project
proxym vm stop windsurf-work
proxym vm snapshot windsurf-work

# Browser profiles
proxym browser list             # detect Firefox/Chrome profiles on host
proxym browser assign default-release --account abc123
proxym browser sync windsurf-work
proxym browser snapshot windsurf-work

# Observability
proxym obs logs                 # recent logs from ring buffer
proxym obs errors               # error summary
proxym obs health               # system health check
proxym obs repair --from-logs   # auto-diagnose & fix via LLM

# Interactive chat (DSL first, LLM fallback)
proxym chat                     # text mode
proxym chat --voice              # microphone input (Whisper STT)
proxym chat --tts                # text-to-speech responses
proxym chat --voice --tts        # full voice loop

# Dashboard
proxym web                      # open dashboard in browser

Chat DSL Examples

The chat command first tries to match your input against a built-in DSL (zero tokens, instant). Unmatched phrases are forwarded to the LLM with proxym MCP tools.

You: status                                     → GET /dashboard/system (DSL)
You: pokaż VM-y ("show VMs")                    → GET /dashboard/vms (DSL)
You: koszty ("costs")                           → GET /dashboard/costs (DSL)
You: start windsurf-work                        → POST /dashboard/vms/windsurf-work/start (DSL)
You: dlaczego proxy jest wolne? ("why is the proxy slow?") → forwarded to LLM with tools

Customize DSL patterns: copy src/proxym/cli/dsl.yaml to ~/.config/proxym/dsl.yaml.
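
A stripped-down sketch of the DSL-first, LLM-fallback loop. The patterns and endpoints below are illustrative only; the shipped rules live in dsl.yaml and the parser in src/proxym/cli/dsl.py.

import re

# Illustrative patterns; the real ones are defined in dsl.yaml.
DSL_PATTERNS = [
    (re.compile(r"^status$", re.IGNORECASE), ("GET", "/dashboard/system")),
    (re.compile(r"^(koszty|costs)$", re.IGNORECASE), ("GET", "/dashboard/costs")),
    (re.compile(r"^start (?P<vm>[\w-]+)$", re.IGNORECASE), ("POST", "/dashboard/vms/{vm}/start")),
]

def dispatch(user_input: str):
    for pattern, (method, path) in DSL_PATTERNS:
        match = pattern.match(user_input.strip())
        if match:
            return method, path.format(**match.groupdict())  # zero tokens, instant
    return None  # no DSL match: forward to the LLM with proxym MCP tools

print(dispatch("start windsurf-work"))     # ('POST', '/dashboard/vms/windsurf-work/start')
print(dispatch("why is the proxy slow?"))  # None → LLM fallback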

Testing

# Unit tests (no external services needed)
pytest tests/ --ignore=tests/test_e2e.py -v

# SQLite persistence tests (21 tests, all managers)
pytest tests/test_sqlite_persistence.py -v

# E2E tests (mock LiteLLM, no real API calls)
pytest tests/test_e2e.py -v -m e2e

# GUI tests (FastAPI TestClient)
pytest tests/gui/ -v

# All tests with coverage
pytest tests/ -v --cov=proxym --cov-report=html

Budget Examples

$25/month (casual, 4h/day)

DAILY_BUDGET_USD=1.5
MONTHLY_BUDGET_USD=25

Autocomplete: Ollama local ($0) → Planning: Gemini free ($0) → Coding: Gemini 3 Flash Preview ($3) → Complex: skip Opus, use DeepSeek R1 ($5)

$60/month (intensive, 8h/day)

DAILY_BUDGET_USD=3.0
MONTHLY_BUDGET_USD=60

Full model spectrum with Opus 4.6 for 2–3 complex tasks/week.
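
As a rough illustration of how these limits gate a request, a hypothetical pre-dispatch check might look like the sketch below. The within_budget helper is invented for this example; the real cost ledger lives in src/proxym/router/strategy.py.

import os

DAILY_BUDGET_USD = float(os.getenv("DAILY_BUDGET_USD", "1.5"))
MONTHLY_BUDGET_USD = float(os.getenv("MONTHLY_BUDGET_USD", "25"))

def within_budget(spent_today: float, spent_this_month: float, estimated_cost: float) -> bool:
    """Reject a request if it would push spend past either limit."""
    return (
        spent_today + estimated_cost <= DAILY_BUDGET_USD
        and spent_this_month + estimated_cost <= MONTHLY_BUDGET_USD
    )

print(within_budget(spent_today=1.40, spent_this_month=20.0, estimated_cost=0.05))  # True
print(within_budget(spent_today=1.48, spent_this_month=20.0, estimated_cost=0.05))  # False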

Project Structure

proxym/
├── src/proxym/
│   ├── main.py              # FastAPI app + OpenAI-compatible endpoint
│   ├── ctl.py               # Unified CLI (proxym command)
│   ├── config.py            # Pydantic settings from .env
│   ├── providers/__init__.py # Model registry (15 models, 10 providers)
│   ├── router/
│   │   ├── __init__.py      # Content analyzer (tier classification)
│   │   └── strategy.py      # Router + cost ledger + fallbacks
│   ├── storage.py           # SQLiteNamespaceStore (persistence backend)
│   ├── cache/__init__.py    # Delta context buffer
│   ├── middleware/__init__.py # Auth + cost tracking
│   ├── watch/__init__.py    # File watcher client
│   ├── accounts/__init__.py # Account manager (multi-provider vault)
│   ├── dashboard/           # REST API for dashboard + CLI
│   ├── tools/adapters/      # Tool adapters (aider, claude-code, LLM fallback)
│   ├── projects/__init__.py # Project registry with SQLite persistence
│   ├── cli/
│   │   ├── dsl.py           # DSL parser (regex pattern matching)
│   │   ├── chat.py          # Interactive chat command
│   │   ├── voice.py         # Whisper STT input
│   │   └── tts.py           # espeak/Piper TTS output
│   ├── mcp/
│   │   ├── self_server.py   # MCP tool endpoints for LLM
│   │   ├── registry.py      # MCP server registry
│   │   └── router.py        # MCP tool routing
│   └── virt/
│       ├── __init__.py      # VMOrchestrator (CloneBox + virsh)
│       ├── clonebox_adapter.py # CloneBox CLI wrapper
│       ├── browser_profiles.py # Firefox/Chrome profile detection
│       └── profiles/        # CloneBox YAML templates per tool
│           ├── windsurf.clonebox.yaml
│           ├── cursor.clonebox.yaml
│           ├── vscode.clonebox.yaml
│           ├── jetbrains.clonebox.yaml
│           └── browser.clonebox.yaml
├── tests/
│   ├── test_analyzer.py         # Tier classification tests
│   ├── test_router.py           # Router strategy + budget tests
│   ├── test_delta_buffer.py     # Delta computation tests
│   ├── test_clonebox_adapter.py # CloneBox adapter tests
│   ├── test_browser_profiles.py # Browser detection tests
│   ├── test_accounts.py         # Account management tests
│   ├── test_dsl.py              # DSL pattern matching tests (35)
│   ├── test_chat.py             # Chat formatter tests (8)
│   ├── test_self_mcp.py         # MCP self-server tests (8)
│   ├── test_sqlite_persistence.py # SQLite persistence tests (21)
│   ├── test_tool_health.py      # Tool health check tests
│   ├── test_dashboard*.py       # Dashboard API tests
│   └── test_e2e.py              # Full HTTP API tests
├── docker-compose.yml       # Development (proxy + redis + ollama)
├── docker-compose.prod.yml  # Production (+ traefik)
├── Dockerfile               # Standard build
├── Dockerfile.jetson        # Jetson Orin (ARM64 + CUDA)
├── quadlet/                 # Podman systemd integration
├── traefik/                 # Reverse proxy config
└── scripts/
    ├── setup.sh             # First-time setup
    └── jetson-entrypoint.sh # Jetson startup script

License

Apache License 2.0 - see LICENSE for details.

Author

Created by Tom Sapletta - tom@sapletta.com


