High-performance rate limiter engine for MCP Gateway

Project description

Rate Limiter Plugin

Author: Mihai Criveti

Enforces rate limits per user, tenant, and tool across tool_pre_invoke and prompt_pre_fetch hooks. Supports pluggable counting algorithms (fixed window, sliding window, token bucket), an in-process memory backend (single-instance), and a Redis backend (shared across all gateway instances).

Hooks

| Hook | When it runs |
| --- | --- |
| tool_pre_invoke | Before every tool call — checks by_user, by_tenant, by_tool |
| prompt_pre_fetch | Before every prompt fetch — checks by_user, by_tenant, by_tool |

If any configured dimension is exceeded, the plugin returns a violation with HTTP 429. All requests include X-RateLimit-* headers. The most restrictive active dimension is surfaced (e.g. if both user and tenant limits are active, the one closest to exhaustion is reported).
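The "closest to exhaustion" selection can be sketched as follows. This is an illustrative model, not the plugin's actual code; the `most_restrictive` helper and its input shape are assumptions, chosen to show how the reported dimension is the one with the lowest remaining/limit ratio.

```python
# Hypothetical sketch: each active dimension exposes (limit, remaining,
# reset_ts); the dimension with the lowest remaining/limit ratio is the
# one surfaced in the X-RateLimit-* headers.

def most_restrictive(dimensions):
    """dimensions: {name: (limit, remaining, reset_ts)} -> (name, headers)."""
    name, (limit, remaining, reset_ts) = min(
        dimensions.items(), key=lambda kv: kv[1][1] / kv[1][0]
    )
    return name, {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(remaining),
        "X-RateLimit-Reset": str(reset_ts),
    }
```

For example, a user at 25/30 remaining and a tenant at 10/300 remaining would surface the tenant limit, since 10/300 is the smaller fraction.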

Configuration

- name: RateLimiterPlugin
  kind: cpex_rate_limiter.rate_limiter.RateLimiterPlugin
  hooks:
    - prompt_pre_fetch
    - tool_pre_invoke
  mode: enforce          # enforce | permissive | disabled
  config:
    by_user: "30/m"      # per-user limit across all tools
    by_tenant: "300/m"   # shared limit across all users in a tenant
    by_tool:             # per-tool overrides (applied on top of by_user)
      search: "10/m"
      summarise: "5/m"

    # Algorithm — choose one (default: fixed_window)
    algorithm: "fixed_window"    # fixed_window | sliding_window | token_bucket

    # Backend — choose one
    backend: "memory"    # default: single-process, resets on restart
    # backend: "redis"   # shared across all gateway instances

    # Redis options (required when backend: redis)
    redis_url: "redis://redis:6379/0"
    redis_key_prefix: "rl"

Configuration reference

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| by_user | string | null | Per-user rate limit, e.g. "60/m" |
| by_tenant | string | null | Per-tenant rate limit, e.g. "600/m" |
| by_tool | dict | {} | Per-tool overrides, e.g. {"search": "10/m"} |
| algorithm | string | "fixed_window" | Counting algorithm: "fixed_window", "sliding_window", or "token_bucket" |
| backend | string | "memory" | "memory" or "redis" |
| redis_url | string | null | Redis connection URL (required when backend: redis) |
| redis_key_prefix | string | "rl" | Prefix for all Redis keys |

Rate string format: "<count>/<unit>" where unit is s/sec/second, m/min/minute, or h/hr/hour. Malformed strings raise ValueError at startup.
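A parser for this format can be sketched in a few lines. This is illustrative only (the function name and structure are assumptions, not the plugin's internals), but it follows the documented unit aliases and the fail-at-startup ValueError behaviour.

```python
import re

# Unit aliases documented above: s/sec/second, m/min/minute, h/hr/hour.
_UNIT_SECONDS = {
    "s": 1, "sec": 1, "second": 1,
    "m": 60, "min": 60, "minute": 60,
    "h": 3600, "hr": 3600, "hour": 3600,
}

def parse_rate(rate: str) -> tuple[int, int]:
    """Return (count, window_seconds), e.g. "30/m" -> (30, 60)."""
    match = re.fullmatch(r"(\d+)/(\w+)", rate.strip())
    if not match or match.group(2) not in _UNIT_SECONDS:
        raise ValueError(f"malformed rate string: {rate!r}")
    return int(match.group(1)), _UNIT_SECONDS[match.group(2)]
```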

Omitting a dimension (e.g. no by_tenant) means that dimension is unlimited — no counter is tracked for it.

Response headers

Every request (allowed or blocked) includes:

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Configured limit for the most restrictive active dimension |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the current window resets |
| Retry-After | Seconds until the window resets (blocked requests only) |

Algorithms

Three counting algorithms are available, selected via the algorithm config field.

| Algorithm | Config value | Best for | Trade-off |
| --- | --- | --- | --- |
| Fixed window | fixed_window | General use, lowest overhead | Up to 2× the limit at window boundaries |
| Sliding window | sliding_window | Smooth enforcement, no boundary burst | Higher memory: stores one timestamp per request per key |
| Token bucket | token_bucket | Bursty workloads — allows short spikes up to capacity | Slightly higher Redis overhead: stores {tokens, last_refill} hash per key |

Fixed window (default)

Counts requests in a fixed time slot (e.g. "minute 14:03"). Resets at the slot boundary. Simple and fast. The 2× burst at a boundary (N requests at the end of slot T, N requests at the start of T+1) is a known trade-off; use by_user with headroom if this matters.
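The mechanism can be modelled in a few lines of Python. This is a behavioural sketch, not the plugin's Rust implementation: requests are counted per (key, slot), and the slot index advances every `window` seconds, which is exactly where the boundary burst comes from.

```python
from collections import defaultdict

class FixedWindow:
    """Minimal fixed-window counter (illustrative model only)."""

    def __init__(self, limit: int, window: int):
        self.limit, self.window = limit, window
        self.counts: dict[tuple[str, int], int] = defaultdict(int)

    def allow(self, key: str, now: float) -> bool:
        slot = int(now // self.window)       # counter resets at each slot boundary
        self.counts[(key, slot)] += 1
        return self.counts[(key, slot)] <= self.limit
```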

Sliding window

Stores a timestamp for every request in the current window. At each check, expired timestamps are discarded and the remaining count is compared against the limit. Prevents boundary bursts entirely. Memory usage grows with request volume — roughly one float per request per active key.
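As a sketch of that mechanism (illustrative only, not the plugin's implementation): one timestamp per request, with expired entries discarded on each check.

```python
from collections import defaultdict, deque

class SlidingWindow:
    """Minimal sliding-window limiter (illustrative model only)."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.events: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str, now: float) -> bool:
        q = self.events[key]
        while q and q[0] <= now - self.window:
            q.popleft()                      # drop timestamps outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```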

Token bucket

Each identity (user, tenant, tool) has a bucket that holds up to count tokens. Tokens refill at a steady rate of count/window. A request consumes one token. Bursts up to the bucket capacity are allowed; sustained rate above count/window is rejected. Useful for APIs where short spikes are acceptable but sustained overload is not.
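The refill-and-consume logic described above can be sketched like this (a model under the stated semantics, not the plugin's actual code): capacity `count` tokens, refilled at count/window tokens per second, one token consumed per request.

```python
class TokenBucket:
    """Minimal token bucket (illustrative model only)."""

    def __init__(self, count: int, window: float):
        self.capacity = count
        self.rate = count / window           # refill rate in tokens per second
        self.tokens = float(count)           # bucket starts full
        self.last_refill = 0.0

    def allow(self, now: float) -> bool:
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0               # each request consumes one token
            return True
        return False
```

With "2/m", a burst of two requests passes immediately, a third is rejected, and one token becomes available again after 30 seconds.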

Redis support: token_bucket with backend: redis is fully supported. The plugin stores {tokens, last_refill} in a Redis hash per key and uses an atomic Lua script to refill and consume tokens in a single round-trip — the same pattern as the other two algorithms. This means token_bucket enforces a true cluster-wide limit in multi-instance deployments.

Backends

Memory backend (default, single-instance only)

  • Counters are stored in a process-local MemoryStore (Rust, per-key RwLock — no single global lock)
  • An amortized sweep evicts expired keys every ~128 calls — for fixed_window, keys are evicted once the window elapses; for sliding_window, keys with empty timestamp deques are evicted; for token_bucket, keys inactive for >1 hour are evicted
  • Limitation: state is not shared across processes or hosts. In a multi-instance deployment (e.g. 3 gateway instances behind nginx), each instance tracks its own counter — the effective limit is N × configured_limit

Redis backend

  • fixed_window: atomic Lua INCR+EXPIRE — one Redis round-trip per check, no race condition
  • sliding_window: atomic Lua ZADD+ZREMRANGEBYSCORE+ZCARD+EXPIRE — one round-trip, no race condition
  • token_bucket: atomic Lua script — reads {tokens, last_refill} hash, refills proportionally, consumes 1 token, writes back — one round-trip, no race condition
  • All gateway instances share the same counter — the configured limit is the true cluster-wide limit
  • Requires redis_url to be set
  • If Redis is unavailable, the plugin fails open — the request is allowed through without rate limiting. This is a deliberate design choice: an infrastructure failure must never block legitimate traffic. Operators should monitor for rate-limiter error logs and treat them as high-priority alerts
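The fixed-window Redis check can be modelled in plain Python. This is a behavioural sketch only (`FakeRedis` and `fixed_window_check` are hypothetical names, not the plugin's API); in the real plugin the two commands run inside a single Lua script so the read-modify-write cannot race across gateway instances.

```python
class FakeRedis:
    """Tiny in-memory stand-in for the two commands the sketch uses."""

    def __init__(self):
        self.data: dict[str, int] = {}
        self.ttl: dict[str, int] = {}

    def incr(self, key: str) -> int:
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def expire(self, key: str, seconds: int) -> None:
        self.ttl[key] = seconds

def fixed_window_check(r, key: str, limit: int, window: int) -> bool:
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)   # first hit in the window starts the TTL
    return count <= limit
```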

Multi-instance deployment (important): The memory backend is local to a single gateway instance — rate limit counters are not shared across replicas. For multi-instance deployments (e.g., behind nginx or on OpenShift with multiple gateway pods), always use backend: redis to ensure rate limits are enforced correctly across all instances.

Examples

Single-instance (default config)

config:
  by_user: "60/m"
  by_tenant: "600/m"

Multi-instance with Redis

config:
  backend: "redis"
  redis_url: "redis://redis:6379/0"
  by_user: "30/m"
  by_tenant: "3000/m"
  by_tool:
    search: "10/m"

Sliding window (no boundary bursts)

config:
  algorithm: "sliding_window"
  by_user: "30/m"
  by_tenant: "300/m"

Token bucket — memory backend (default)

config:
  algorithm: "token_bucket"
  by_user: "30/m"   # bucket holds 30 tokens, refills at 30/min

Token bucket — Redis backend (multi-instance)

config:
  algorithm: "token_bucket"
  backend: "redis"
  redis_url: "redis://redis:6379/0"
  by_user: "30/m"

Permissive mode (observe without blocking)

mode: permissive
config:
  by_user: "60/m"

In permissive mode the plugin records violations and emits X-RateLimit-* headers but does not block requests. Useful for baselining traffic before switching to enforce.

Limitations

| Limitation | Severity | Status / mitigation |
| --- | --- | --- |
| Memory backend not shared across processes | HIGH | Use the Redis backend for multi-instance deployments |
| Fixed window allows up to 2× limit at window boundary | LOW | Use the sliding_window algorithm, or set by_user with headroom |
| by_tool matching is case-sensitive | LOW | Fixed — tool names are normalised with .strip().lower() |
| Whitespace-only user identity bypasses anonymous bucket | LOW | Fixed — _extract_user_identity strips whitespace and falls back to "anonymous" |
| No per-server limits (server_id dimension missing) | LOW | Not implemented |
| No config hot-reload — rate string changes require restart | LOW | Not implemented |

Download files

Source distribution

  • cpex_rate_limiter-0.0.1.tar.gz (65.1 kB)

Built distributions (CPython 3.11+, abi3)

  • cpex_rate_limiter-0.0.1-cp311-abi3-win_amd64.whl (689.2 kB, Windows x86-64)
  • cpex_rate_limiter-0.0.1-cp311-abi3-manylinux_2_34_x86_64.whl (726.9 kB, manylinux glibc 2.34+ x86-64)
  • cpex_rate_limiter-0.0.1-cp311-abi3-manylinux_2_34_s390x.whl (805.4 kB, manylinux glibc 2.34+ s390x)
  • cpex_rate_limiter-0.0.1-cp311-abi3-manylinux_2_34_ppc64le.whl (787.7 kB, manylinux glibc 2.34+ ppc64le)
  • cpex_rate_limiter-0.0.1-cp311-abi3-manylinux_2_34_aarch64.whl (687.8 kB, manylinux glibc 2.34+ ARM64)
  • cpex_rate_limiter-0.0.1-cp311-abi3-macosx_11_0_arm64.whl (664.5 kB, macOS 11.0+ ARM64)

File details

Details for the file cpex_rate_limiter-0.0.1.tar.gz.

File metadata

  • Size: 65.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing: Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f9752c58e2758e2ecf332b2f87098b8a53afefaac56abce62f18f4247c575b69 |
| MD5 | 41ae71dda7f2b8f8898364cbf4887ef3 |
| BLAKE2b-256 | fa40bb031a487defe75f014ac37d6ee430d3fec4341135fa7d2ff575e5370d8a |

Provenance

Attestation bundle for cpex_rate_limiter-0.0.1.tar.gz: published by pypi-rate-limiter.yaml on IBM/cpex-plugins. Values shown reflect the state when the release was signed and may no longer be current.

