
FreeRide

Free AI for everyone. A local OpenAI-compatible gateway that orchestrates free-tier inference across multiple providers — OpenRouter, NVIDIA NIM, and more — and routes around outages and rate limits transparently.

[any agent] ──HTTP──> [FreeRide on localhost] ──HTTPS──> OpenRouter
                                                    └──> NVIDIA NIM
                                                    └──> (more providers)

Point any OpenAI-compatible client at http://localhost:11343/v1 with API key any and you get free AI. When one provider rate-limits or fails, FreeRide invisibly fails over to the next. Streaming, tool calls, vision, and structured outputs all pass through.

Why

You can already get free models from OpenRouter, NIM, Groq, and others, but each provider has its own rate limits, its own rules for which models count as free, and its own rate-limit semantics. Hitting one provider's daily cap means your agent stalls until tomorrow. FreeRide unifies them behind one OpenAI-shaped endpoint and rotates across providers and keys so your agent never sees a 429.

Crucially:

  • Local-first. The gateway runs on your machine. Your prompts and completions never touch any FreeRide-operated server.
  • Free-only by religion. No paid fallback paths. No upsells.
  • BYO keys. You bring your own free-tier keys for each provider; FreeRide just routes.
  • Telemetry off by default. Optional, audit-friendly aggregate beacon (token counts, no content) — opt-in only via freeride telemetry on.

Install

pip install freeride-gateway            # latest stable (after 0.3.0 final)
pip install --pre freeride-gateway      # alpha / pre-release (current)

The PyPI distribution is named freeride-gateway; the CLI binary it installs is freeride. Python ≥ 3.10.

For local development, clone and pip install -e . from the repo root.

Quick start

1. Get free API keys

Provider      Sign-up                                      Required env var
OpenRouter    https://openrouter.ai/keys                   OPENROUTER_API_KEY
NVIDIA NIM    https://build.nvidia.com/explore/discover    NVIDIA_API_KEY

You only need one to get started; more = better failover.

2. Start the gateway

export OPENROUTER_API_KEY="sk-or-v1-..."
export NVIDIA_API_KEY="nvapi-..."  # optional

freeride serve
# freeride gateway listening on http://127.0.0.1:11343
#   providers: openrouter, nvidia_nim
#   point any OpenAI-compatible agent at:
#     OPENAI_API_BASE=http://127.0.0.1:11343/v1
#     OPENAI_API_KEY=any

3. Point your agent at it

The fastest way is via a built-in binder:

freeride bind aider       # writes ~/.aider.conf.yml
freeride bind continue    # writes ~/.continue/config.yaml
freeride bind hermes      # writes ~/.hermes/cli-config.yaml
freeride bind openclaw    # writes ~/.openclaw/openclaw.json

Or point any OpenAI-shaped client manually:

export OPENAI_API_BASE=http://localhost:11343/v1
export OPENAI_API_KEY=any

That's it. Your agent now uses free AI with cross-provider failover.
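
If you'd rather test it end to end, here is a minimal sketch using the official openai Python SDK; the model name is illustrative, so substitute any free model your providers actually serve:

# Illustrative client sketch; assumes the openai SDK (pip install openai, >= 1.x).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11343/v1",  # the FreeRide gateway
    api_key="any",                         # FreeRide accepts any key
)

# The model id below is illustrative; pick any free model your providers serve.
resp = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)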

4. (Optional) Multi-key rotation

# JSON-array form to register multiple keys per provider
export OPENROUTER_API_KEY='["sk-or-v1-key1", "sk-or-v1-key2"]'

When one key hits a 429, FreeRide marks it as cooling and uses the next key on the following request. Cooldowns persist across restarts.

How it works

Cross-provider failover

When you call chat/completions, FreeRide tries providers in registration order. For each provider it walks the available (non-cooling) keys; on RATE_LIMIT or AUTH it marks the key cooling and tries the next. On MODEL_NOT_FOUND it advances to the next provider. Once a provider produces a successful response (or a streaming response's first chunk), FreeRide commits and returns it to the client.
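
In sketch form (illustrative names, not FreeRide's actual internals):

# Simplified sketch of the failover order described above; UpstreamError,
# the provider methods, and the error kinds are placeholders, not FreeRide's API.
class UpstreamError(Exception):
    def __init__(self, kind):
        super().__init__(kind)
        self.kind = kind  # "RATE_LIMIT" | "AUTH" | "MODEL_NOT_FOUND" | ...

def complete(request, providers):
    for provider in providers:                 # registration order
        for key in provider.available_keys():  # non-cooling keys only
            try:
                return provider.send(request, key)  # first success commits
            except UpstreamError as e:
                if e.kind in ("RATE_LIMIT", "AUTH"):
                    provider.mark_cooling(key)      # rotate to the next key
                elif e.kind == "MODEL_NOT_FOUND":
                    break                           # advance to the next provider
                else:
                    raise
    raise RuntimeError("every (provider, key) pair failed")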

The client never sees the failures — the response includes a _freeride_provider field (or X-FreeRide-Provider header on streaming responses) so you can audit which provider actually served any given request.
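
For example (the field and header names come from this README; the call itself is an ordinary HTTP POST, and the model id is illustrative):

# Illustrative audit check; assumes the requests library is installed.
import requests

r = requests.post(
    "http://localhost:11343/v1/chat/completions",
    headers={"Authorization": "Bearer any"},
    json={
        "model": "meta-llama/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(r.json()["_freeride_provider"])  # e.g. "openrouter"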

Streaming

Streaming uses buffer-first-chunk failover: FreeRide holds the first SSE event from upstream until it confirms the stream started successfully. If upstream errors before producing the first chunk, FreeRide tries the next (provider, key) tuple. Once the first chunk has shipped to the client, mid-stream errors propagate as a truncated stream (rare in practice; documented limitation).
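
A sketch of the idea (open_stream and the candidates list are illustrative placeholders, not FreeRide's real API):

# Illustrative sketch of buffer-first-chunk failover over SSE streams.
def stream_with_failover(request, candidates):
    for provider, key in candidates:        # (provider, key) tuples in order
        try:
            upstream = provider.open_stream(request, key)
            first = next(upstream)          # hold the first SSE event back
        except Exception:
            continue                        # failed before the first chunk: next tuple
        def relay():
            yield first                     # commit: ship the buffered chunk
            yield from upstream             # later errors surface as truncation
        return relay()
    raise RuntimeError("no (provider, key) tuple produced a first chunk")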

Telemetry

Off by default. When you opt in (freeride telemetry on), FreeRide POSTs an aggregate beacon hourly with:

{
  "installation_id": "uuid-v4",
  "version": "0.3.0",
  "os": "darwin",
  "tokens_served": 412034,
  "request_count": 187,
  "providers_active": ["openrouter", "nvidia_nim"],
  "uptime_hours": 8
}

Never sent: prompts, completions, model IDs, API keys, hostnames. Run freeride telemetry (no args) to inspect the exact payload before deciding.

Commands

Command                        What it does
freeride serve                 Start the gateway on localhost:11343
freeride bind <agent>          Write the gateway URL into the agent's config (atomic; preserves unrelated keys)
freeride telemetry [on|off]    Manage the opt-in beacon (default off)
freeride list                  List available free models, ranked (v2 behavior)
freeride status                Show the current OpenClaw config and cache age (v2 behavior)
freeride auto                  Auto-configure the best free model for OpenClaw (v2 behavior)
freeride rotate                Live-test the current primary; swap it out if it fails (v2 behavior)
freeride-watcher               Background daemon that probes and rotates on failure (v2 behavior)

The v2 commands keep working for existing OpenClaw users; the new commands (serve, bind, telemetry) are the v3 surface.

Supported providers

  • OpenRouter ✅ — chat, streaming, tools, vision, structured outputs
  • NVIDIA NIM ✅ — chat, streaming (curated free-model allowlist; NVIDIA_NIM_FREE_MODELS_OVERRIDE env var to expand)
  • Groq, Cloudflare Workers AI, HuggingFace Inference Providers: not yet implemented, but the Provider Protocol fits all three (see knowledge/providers/SURVEY.md); plugin implementations welcome.

Supported agents

Agent                                freeride bind <agent>        Hot reload
OpenClaw                             ✅                           restart needed
Aider                                ✅ (--scope home/cwd/git)    restart needed
Continue                             ✅                           yes
Hermes (NousResearch/hermes-agent)   ✅                           restart needed
OpenCode                             extended; not yet shipped

Or any other OpenAI-compatible client via OPENAI_API_BASE + OPENAI_API_KEY=any.


Contributing

The Provider Protocol is freeride.core.provider.Provider with api_version = 1. To add a new provider:

  1. Implement the Protocol in freeride/providers/<name>.py
  2. Register your class in tests/conformance/test_provider_conformance.py's CONFORMANT_PROVIDERS list
  3. Add freeride/providers/<name>_model_metadata.py if the catalog endpoint doesn't expose context length / capabilities

The conformance suite covers the load-bearing invariants automatically.
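
A skeleton of step 1, heavily hedged: the real method names and signatures live on freeride.core.provider.Provider, and everything below except api_version is a placeholder rather than the real interface.

# freeride/providers/acme.py (illustrative skeleton only; check the actual
# Protocol in freeride.core.provider before implementing).
class AcmeProvider:
    api_version = 1   # pinned by the Protocol
    name = "acme"     # placeholder attribute

    def chat(self, request, key):
        """Placeholder: forward an OpenAI-shaped request upstream and map
        upstream failures to RATE_LIMIT / AUTH / MODEL_NOT_FOUND so the
        router can fail over."""
        raise NotImplementedError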

License

MIT.
