
Local OpenAI-compatible gateway that routes across free-tier providers (OpenRouter, Groq, NVIDIA NIM, Cloudflare Workers AI, HuggingFace) with automatic failover.

Project description

FreeRide

One free AI endpoint. Five providers behind it. Your agents don't need to know.

$ curl -sSL https://api.free-ride.xyz/install.sh | sh
$ export OPENROUTER_API_KEY=sk-or-v1-...
$ freeride serve

freeride gateway listening on http://127.0.0.1:11343
  providers: openrouter        # add more by exporting their keys
  point any OpenAI-compatible agent at:
    OPENAI_API_BASE=http://127.0.0.1:11343/v1
    OPENAI_API_KEY=any

That's it. Aider, Continue, OpenClaw, Hermes, the OpenAI Python SDK — anything that speaks OpenAI now speaks every free tier you have a key for.

Demo

┌─ your agent ─────────┐         ┌─ freeride (localhost) ─┐         ┌─ providers ─┐
│                      │  POST   │                        │         │             │
│  chat.completions    │────────▶│  pick provider         │────────▶│  OpenRouter │ 429
│   .create(...)       │         │  pick key (not cooling)│  retry  │     ↓       │
│                      │         │  forward request       │────────▶│  Groq       │ ✓
│  ◀───────────────────│   200   │  ◀─────────────────────│         │             │
│                      │         │                        │         │  NIM, CF,   │
│                      │         │  X-FreeRide-Provider:  │         │  HF — only  │
│                      │         │   groq                 │         │  if needed  │
└──────────────────────┘         └────────────────────────┘         └─────────────┘

When OpenRouter rate-limits you, the next request goes to Groq. When Groq's daily token cap hits, the next goes to HuggingFace. Your agent never sees a 429.

Why this exists

You can already get a free tier from OpenRouter. And NVIDIA. And Groq. And Cloudflare Workers AI. And HuggingFace. They all have different limits, different free-detection rules, different ways of saying "you're done for today."

So you sign up for all of them and now you've got five API keys, five SDKs, and an agent that only knows about one. FreeRide is the small thing that sits between them and pretends to be one OpenAI endpoint.

  • Local-first. The gateway runs on your machine. Prompts and completions never touch a FreeRide server.
  • BYO keys. Bring your own free-tier keys. FreeRide doesn't issue any.
  • Free-only. No paid fallback. No upsell. If every provider is exhausted, the request fails — better that than a surprise bill.

Install

macOS / Linux:

curl -sSL https://api.free-ride.xyz/install.sh | sh

Windows (PowerShell):

powershell -ExecutionPolicy Bypass -c "irm https://api.free-ride.xyz/install.ps1 | iex"

The installer bootstraps uv if missing, then uv tool installs freeride-gateway. Binary lands at ~/.local/bin/freeride (Linux/macOS) or %USERPROFILE%\.local\bin\freeride.exe (Windows). Same shape as the bun.sh and astral.sh installers.

Or install manually
# uv (what the installer does)
uv tool install --prerelease=allow freeride-gateway

# pipx
pipx install --pip-args=--pre freeride-gateway

# pip + venv (installs into the venv only — re-activate it in each new shell)
python3 -m venv .venv && source .venv/bin/activate
pip install --pre freeride-gateway

# from source
git clone https://github.com/Shaivpidadi/FreeRideV3 && cd FreeRideV3
pip install -e .

PyPI distribution: freeride-gateway. CLI: freeride. Python ≥ 3.10.

Get keys (any one is enough; more = better failover)

Provider                Where                                            Env var
OpenRouter              https://openrouter.ai/keys                       OPENROUTER_API_KEY
Groq                    https://console.groq.com/keys                    GROQ_API_KEY
NVIDIA NIM              https://build.nvidia.com                         NVIDIA_API_KEY
Cloudflare Workers AI   https://dash.cloudflare.com/profile/api-tokens   CLOUDFLARE_API_TOKEN + CLOUDFLARE_ACCOUNT_ID
HuggingFace             https://huggingface.co/settings/tokens           HF_TOKEN

Set whichever you have, then freeride serve. The gateway picks them up and rotates between them.

Wire your agent

The fastest way is a binder:

freeride bind aider       # writes ~/.aider.conf.yml
freeride bind continue    # writes ~/.continue/config.yaml
freeride bind hermes      # writes ~/.hermes/config.yaml
freeride bind openclaw    # writes ~/.openclaw/openclaw.json

Or set the OpenAI vars yourself:

export OPENAI_API_BASE=http://localhost:11343/v1
export OPENAI_API_KEY=any

Anything OpenAI-shaped works. Tested with the openai-python SDK, Aider, Continue, Hermes, OpenClaw.
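Under the hood, every OpenAI-shaped client sends the same HTTP request. A stdlib-only sketch of that request (the model id is illustrative — run freeride list for real ones — and the snippet degrades gracefully if the gateway isn't running):

```python
import json
import urllib.request

# What any OpenAI-compatible client boils down to when pointed at FreeRide.
payload = {
    "model": "openrouter/free",  # illustrative; `freeride list` shows real ids
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    "http://localhost:11343/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer any"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.loads(resp.read())
        # The gateway stamps which upstream actually served the request.
        print("provider:", resp.headers.get("X-FreeRide-Provider"))
        print(body["choices"][0]["message"]["content"])
        outcome = "ok"
except OSError as exc:
    print("gateway not reachable:", exc)
    outcome = "unreachable"
```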

Multi-key rotation

Got several free keys for the same provider? Pass them as a JSON array:

export OPENROUTER_API_KEY='["sk-or-v1-key1","sk-or-v1-key2","sk-or-v1-key3"]'

When key 1 hits 429 it goes on cooldown for 120s; key 2 takes the next request. Cooldowns persist across restarts (~/.freeride/cooldown.json).
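The rotation logic is roughly this (an illustrative sketch, not FreeRide's actual code — the real gateway also persists cooldowns to ~/.freeride/cooldown.json):

```python
import time

# Sketch of per-key cooldown rotation. A 429 on one key parks it for
# cooldown_s seconds; picks skip cooling keys in order.
class KeyRotor:
    def __init__(self, keys, cooldown_s=120):
        self.keys = list(keys)
        self.cooldown_s = cooldown_s
        self.cooling_until = {}  # key -> time it becomes usable again

    def pick(self, now=None):
        now = time.monotonic() if now is None else now
        for key in self.keys:
            if self.cooling_until.get(key, 0.0) <= now:
                return key
        return None  # every key is cooling: fail, never fall back to paid

    def mark_rate_limited(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.cooling_until[key] = now + self.cooldown_s

rotor = KeyRotor(["sk-or-v1-key1", "sk-or-v1-key2"])
rotor.mark_rate_limited("sk-or-v1-key1", now=0.0)  # key 1 hit a 429
print(rotor.pick(now=10.0))    # key 2 takes the next request
print(rotor.pick(now=130.0))   # key 1 is back after the 120s cooldown
```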

How failover works

Per request, FreeRide walks (provider, key) pairs in order:

  • RATE_LIMIT or AUTH → mark this key cooling, try the next key.
  • MODEL_NOT_FOUND → skip this provider, try the next provider.
  • Anything 5xx-ish → next pair.
  • First successful response → ship it; stamp X-FreeRide-Provider header (or _freeride_provider field on JSON) so you can tell who actually served it.
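The walk above can be sketched in a few lines (a hypothetical simplification — error codes and helper names here are invented for illustration):

```python
# Sketch of the per-request walk over (provider, key) pairs.
RATE_LIMIT, AUTH, MODEL_NOT_FOUND, SERVER_ERROR, OK = range(5)

def route(pairs, send):
    """Try each (provider, key) pair in order; return the first success."""
    skipped_providers = set()
    for provider, key in pairs:
        if provider in skipped_providers:
            continue
        status, resp = send(provider, key)
        if status == OK:
            return provider, resp            # stamp X-FreeRide-Provider here
        if status == MODEL_NOT_FOUND:
            skipped_providers.add(provider)  # skip this provider's other keys
        # RATE_LIMIT / AUTH / 5xx: fall through to the next pair
    raise RuntimeError("all free providers exhausted")

# Simulated upstreams: OpenRouter rate-limits, Groq succeeds.
responses = {("openrouter", "k0"): (RATE_LIMIT, None),
             ("groq", "k0"): (OK, "hi")}
provider, resp = route([("openrouter", "k0"), ("groq", "k0")],
                       lambda p, k: responses[(p, k)])
print(provider)  # groq
```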

Streaming uses buffer-first-chunk failover: the gateway holds the first SSE event back until the upstream confirms the stream is real. If an upstream fails before its first chunk, FreeRide retries with the next pair. Once the first chunk has shipped, mid-stream errors propagate to the client (rare, but possible).
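A minimal sketch of buffer-first-chunk failover as a generator (illustrative, not FreeRide's actual implementation):

```python
def buffered_stream(attempts):
    """Yield SSE chunks, retrying upstreams until one produces a first chunk."""
    last_err = None
    for make_stream in attempts:
        stream = make_stream()
        try:
            first = next(stream)   # hold the first event back
        except StopIteration:
            continue               # empty stream: try the next upstream
        except Exception as err:
            last_err = err         # failed before first chunk: safe to retry
            continue
        yield first                # stream confirmed real: ship it
        yield from stream          # mid-stream errors now propagate
        return
    raise last_err or RuntimeError("no upstream produced a stream")

def rate_limited():
    raise RuntimeError("429 before first chunk")
    yield  # unreachable; makes this a generator function

def healthy():
    yield "data: first-chunk"
    yield "data: [DONE]"

chunks = list(buffered_stream([rate_limited, healthy]))
print(chunks)  # both chunks, from the healthy upstream
```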

Telemetry

On by default. Hourly POST to https://telemetry.free-ride.xyz/v1/beacon:

{
  "installation_id": "random-uuid-v4",
  "version": "0.3.0",
  "os": "darwin",
  "tokens_served": 412034,
  "request_count": 187,
  "providers_active": ["openrouter", "groq"],
  "uptime_hours": 8
}

Prompts, completions, model IDs, API keys, hostnames, IPs — never sent. The Worker doesn't log cf-connecting-ip. The first time you run any freeride command, a banner prints the exact payload.

freeride telemetry off    # turn it off
freeride telemetry        # show what would be sent

Embeddings

Same endpoint shape as OpenAI's /v1/embeddings. Failover works across the four providers that support embeddings (Groq doesn't):

curl http://localhost:11343/v1/embeddings \
  -H 'Content-Type: application/json' \
  -d '{"model": "text-embedding-3-small", "input": "hello world"}'

The same X-FreeRide-Provider header tells you which provider served the embedding. Same multi-key rotation, same per-provider failover.

See what FreeRide is doing

freeride watch

Tails live failover events from a running gateway. Every request, every provider attempt, every rate-limit, every retry. Useful for seeing failover happen in real time, debugging "is my agent actually using FreeRide", or just demoing.

[14:23:01.412] req_a3f8e2c1  ▶ request model=openrouter/free stream
[14:23:01.421] req_a3f8e2c1  → openrouter[k0] openrouter/free
[14:23:01.833] req_a3f8e2c1  ← openrouter[k0] 412ms RATE_LIMIT ✗ (retry-after 47s)
[14:23:01.835] req_a3f8e2c1  → groq[k0] openrouter/free
[14:23:02.153] req_a3f8e2c1  ← groq[k0] 318ms OK ✓ first-chunk
[14:23:02.154] req_a3f8e2c1  ■ complete via groq

Events are written to ~/.freeride/events.jsonl. Opt out with FREERIDE_EVENTS=0 if you don't want them. File caps at 1 MiB with single-backup rotation.
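If you'd rather script against the log than watch it, it's one JSON object per line. A sketch of reading it (field names here are assumptions for illustration — inspect your own ~/.freeride/events.jsonl for the real schema):

```python
import json
import tempfile
from pathlib import Path

def read_events(path):
    """Parse a JSONL event log, skipping blank lines."""
    lines = Path(path).read_text().splitlines()
    return [json.loads(line) for line in lines if line.strip()]

# Fake two events in the shape the `freeride watch` output suggests.
sample = "\n".join([
    json.dumps({"req": "req_a3f8e2c1", "provider": "openrouter",
                "result": "RATE_LIMIT"}),
    json.dumps({"req": "req_a3f8e2c1", "provider": "groq", "result": "OK"}),
])
with tempfile.TemporaryDirectory() as d:
    log = Path(d) / "events.jsonl"
    log.write_text(sample)
    events = read_events(log)
    print(events[-1]["provider"])  # the provider that finally served it
```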

Commands

freeride serve                  start the gateway
freeride bind <agent>           write gateway URL into agent config
freeride watch                  tail live failover events
freeride telemetry [on|off]     manage telemetry
freeride list                   list available free models
freeride status                 show OpenClaw config + cache age (v2)
freeride auto                   auto-configure OpenClaw (v2)
freeride rotate                 swap primary if it fails (v2)
freeride-watcher                background daemon that rotates on failure

The v2 commands keep working for existing OpenClaw users.

Providers

Provider                Status    Notes
OpenRouter              shipped   full surface — chat, streaming, tools, vision, structured outputs
NVIDIA NIM              shipped   curated free-model allowlist; NVIDIA_NIM_FREE_MODELS_OVERRIDE to expand
Groq                    shipped   hardcoded allowlist (Llama 3.x, Gemma 2, Mixtral, DeepSeek-R1-distill); GROQ_FREE_MODELS_OVERRIDE to expand
Cloudflare Workers AI   shipped   curated allowlist of cheap-per-neuron chat models; needs CLOUDFLARE_ACCOUNT_ID
HuggingFace Inference   shipped   full HF router catalog; budget governs access ($0.10/mo Free, $2/mo PRO)

Adding a sixth: implement freeride.core.provider.Provider (api_version=1) in freeride/providers/<name>.py, register it in the conformance suite, done. See CONTRIBUTING.md.
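For a feel of the shape involved, here is a purely hypothetical skeleton — the real interface is freeride.core.provider.Provider and its required methods may differ; CONTRIBUTING.md is authoritative:

```python
# Hypothetical provider skeleton. Class/method names here are guesses for
# illustration only — check freeride.core.provider.Provider for the real ones.
class ExampleProvider:
    api_version = 1
    name = "example"

    def free_models(self):
        # Return the curated free-model allowlist for this provider.
        return ["example/some-free-model"]

    def chat(self, model, messages, **kwargs):
        # Forward the OpenAI-shaped request to the upstream API.
        raise NotImplementedError

p = ExampleProvider()
print(p.name, p.api_version, p.free_models()[0])
```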

Agents

Agent                                freeride bind                Hot reload
OpenClaw                             yes                          needs restart
Aider                                yes (--scope home/cwd/git)   needs restart
Continue                             yes                          yes
Hermes (NousResearch/hermes-agent)   yes                          needs restart

Or anything else: OPENAI_API_BASE=http://localhost:11343/v1 + OPENAI_API_KEY=any.

Claude Code skill

If you use Claude Code, install the FreeRide skill so Claude knows how to detect, wire, and troubleshoot the gateway:

/plugin install https://github.com/Shaivpidadi/FreeRideV3

After install, Claude auto-invokes the skill when you mention FreeRide, have it running on localhost:11343, or ask about routing across free-tier providers. See skills/README.md for manual-install instructions.

Docs

License

MIT.

Project details


Download files

Download the file for your platform.

Source Distribution

freeride_gateway-0.4.0a1.tar.gz (137.4 kB view details)

Uploaded Source

Built Distribution


freeride_gateway-0.4.0a1-py3-none-any.whl (91.6 kB view details)

Uploaded Python 3

File details

Details for the file freeride_gateway-0.4.0a1.tar.gz.

File metadata

  • Download URL: freeride_gateway-0.4.0a1.tar.gz
  • Upload date:
  • Size: 137.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for freeride_gateway-0.4.0a1.tar.gz
Algorithm Hash digest
SHA256 272128061467a8179860a58000116a8cf5c5bd23533f847bd03933198eaf95dc
MD5 403abfc6737a66986b31bdfc18f28ced
BLAKE2b-256 021bd581e0a09f36c9524ce80fdc428dbbdcd559ce0f2cb129b84da83169ec94


Provenance

The following attestation bundles were made for freeride_gateway-0.4.0a1.tar.gz:

Publisher: release.yml on Shaivpidadi/FreeRideV3

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file freeride_gateway-0.4.0a1-py3-none-any.whl.

File metadata

File hashes

Hashes for freeride_gateway-0.4.0a1-py3-none-any.whl
Algorithm Hash digest
SHA256 2575dd5c69f18ec878c953bfeeb6d6bb52eb410faea9472460769e2479be31db
MD5 9c39e50cdef2ef45e45289ba1086fa62
BLAKE2b-256 2dacdaf200d0ad36a246710e314e0f2ad05b56e8cc3e399e9b3c4426ee71fe34


Provenance

The following attestation bundles were made for freeride_gateway-0.4.0a1-py3-none-any.whl:

Publisher: release.yml on Shaivpidadi/FreeRideV3

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
