
Rate limiting utilities

Project description


limits is a python library for rate limiting via multiple strategies with commonly used storage backends (Redis, Memcached & MongoDB).

The library provides identical APIs for use in sync and async codebases.

Supported Strategies

All strategies support the following methods:

  • hit: consume a request.

  • test: check if a request is allowed.

  • get_window_stats: retrieve remaining quota and reset time.

Fixed Window

This strategy is the most memory‑efficient because it uses a single counter per resource and rate limit. When the first request arrives, a window is started for a fixed duration (e.g., for a rate limit of 10 requests per minute the window expires in 60 seconds from the first request). All requests in that window increment the counter and when the window expires, the counter resets.

Burst traffic that bypasses the rate limit may occur at window boundaries.

For example, with a rate limit of 10 requests per minute:

  • At 00:00:45, the first request arrives, starting a window from 00:00:45 to 00:01:45.

  • All requests between 00:00:45 and 00:01:45 count toward the limit.

  • If 10 requests occur at any time in that window, any further request before 00:01:45 is rejected.

  • At 00:01:45, the counter resets and a new window starts which would allow 10 requests until 00:02:45.
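The counter-and-expiry behaviour above can be sketched in a few lines of plain Python. This is illustrative only, not the library's implementation; the class name and the explicit `now` parameter (used here instead of wall-clock time so the boundary behaviour is easy to see) are invented for the example:

```python
import time


class FixedWindowSketch:
    """One counter per key; the window starts at the first request."""

    def __init__(self, limit, seconds):
        self.limit = limit
        self.seconds = seconds
        self.counters = {}  # key -> (window_start, count)

    def hit(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.seconds:
            # Window expired: reset the counter and start a new window.
            start, count = now, 0
        if count >= self.limit:
            return False
        self.counters[key] = (start, count + 1)
        return True


limiter = FixedWindowSketch(10, 60)
# First request at t=45 starts a window from 45 to 105.
for t in range(45, 55):
    assert limiter.hit("client", now=t)
assert not limiter.hit("client", now=100)  # 11th request inside the window
assert limiter.hit("client", now=105)      # window expired, counter reset
```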

Moving Window

This strategy keeps a log of request timestamps. A new request is allowed only if the nth oldest entry (where n is the limit) is either absent or older than the window duration. For example, with a rate limit of 10 requests per minute, a request is allowed if there are fewer than 10 entries, or if the 10th oldest entry is at least 60 seconds old. When a new entry is added to the log, expired entries are truncated.

For example, with a rate limit of 10 requests per minute:

  • At 00:00:10, the client sends 1 request, which is allowed.

  • At 00:00:20, the client sends 2 requests, which are allowed.

  • At 00:00:30, the client sends 4 requests, which are allowed.

  • At 00:00:50, the client sends 3 requests, which are allowed (total = 10).

  • At 00:01:11, the client sends 1 request. The strategy checks the timestamp of the 10th oldest entry (00:00:10) which is now 61 seconds old and thus expired. The request is allowed.

  • At 00:01:12, the client sends 1 request. The 10th oldest entry’s timestamp is 00:00:20 which is only 52 seconds old. The request is rejected.
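The timeline above can be replayed against a minimal in-memory sketch of the log-based check. Again this is illustrative, not the library's implementation (which stores the log in the configured backend); timestamps are passed in explicitly as seconds:

```python
class MovingWindowSketch:
    """Per-key log of request timestamps, oldest first."""

    def __init__(self, limit, seconds):
        self.limit = limit
        self.seconds = seconds
        self.log = {}  # key -> list of timestamps

    def hit(self, key, now):
        entries = self.log.setdefault(key, [])
        # Truncate entries older than the window duration.
        while entries and now - entries[0] >= self.seconds:
            entries.pop(0)
        if len(entries) >= self.limit:
            return False
        entries.append(now)
        return True


mw = MovingWindowSketch(10, 60)
assert mw.hit("client", 10)                          # 1 request at 00:00:10
assert all(mw.hit("client", 20) for _ in range(2))   # 2 at 00:00:20
assert all(mw.hit("client", 30) for _ in range(4))   # 4 at 00:00:30
assert all(mw.hit("client", 50) for _ in range(3))   # 3 at 00:00:50
assert mw.hit("client", 71)       # 10th oldest (00:00:10) is 61s old: allowed
assert not mw.hit("client", 72)   # 10th oldest (00:00:20) is 52s old: rejected
```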

Sliding Window Counter

This strategy approximates the moving window while using less memory by maintaining two counters:

  • Current bucket: counts requests in the ongoing period.

  • Previous bucket: counts requests in the immediately preceding period.

When a request arrives, the effective request count is calculated as:

weighted_count = floor(current_count + previous_count * weight)

The weight is based on how much time has elapsed in the current bucket:

weight = (bucket_duration - elapsed_time) / bucket_duration

If weighted_count is below the limit, the request is allowed.

For example, with a rate limit of 10 requests per minute:

Assume:

  • The current bucket (spanning 00:01:00 to 00:02:00) has 8 hits.

  • The previous bucket (spanning 00:00:00 to 00:01:00) has 4 hits.

Scenario 1:

  • A new request arrives at 00:01:30, 30 seconds into the current bucket.

  • weight = (60 - 30) / 60 = 0.5.

  • weighted_count = floor(8 + (4 * 0.5)) = floor(8 + 2) = 10.

  • Since the weighted count equals the limit, the request is rejected.

Scenario 2:

  • A new request arrives at 00:01:40, 40 seconds into the current bucket.

  • weight = (60 - 40) / 60 ≈ 0.33.

  • weighted_count = floor(8 + (4 * 0.33)) = floor(8 + 1.32) = 9.

  • Since the weighted count is below the limit, the request is allowed.
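Both scenarios reduce to one small calculation, sketched here as a standalone function (the function name and parameters are invented for illustration; the library performs this calculation inside the storage backend):

```python
import math


def weighted_count(current_count, previous_count, elapsed, bucket_duration=60):
    """Approximate the request count over the sliding window."""
    # The previous bucket's weight shrinks as the current bucket fills up.
    weight = (bucket_duration - elapsed) / bucket_duration
    return math.floor(current_count + previous_count * weight)


# Scenario 1: 30 seconds into the current bucket.
assert weighted_count(8, 4, 30) == 10  # equals the limit: rejected
# Scenario 2: 40 seconds into the current bucket.
assert weighted_count(8, 4, 40) == 9   # below the limit: allowed
```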

Storage backends

Dive right in

Initialize the storage backend

from limits import storage
backend = storage.MemoryStorage()
# or memcached
backend = storage.MemcachedStorage("memcached://localhost:11211")
# or redis
backend = storage.RedisStorage("redis://localhost:6379")
# or mongodb
backend = storage.MongoDbStorage("mongodb://localhost:27017")
# or use the factory
storage_uri = "memcached://localhost:11211"
backend = storage.storage_from_string(storage_uri)

Initialize a rate limiter with a strategy

from limits import strategies
strategy = strategies.MovingWindowRateLimiter(backend)
# or fixed window
strategy = strategies.FixedWindowRateLimiter(backend)
# or sliding window
strategy = strategies.SlidingWindowCounterRateLimiter(backend)

Initialize a rate limit

from limits import parse
one_per_minute = parse("1/minute")

Initialize a rate limit explicitly

from limits import RateLimitItemPerSecond
one_per_second = RateLimitItemPerSecond(1, 1)

Test the limits

import time
assert True == strategy.hit(one_per_minute, "test_namespace", "foo")
assert False == strategy.hit(one_per_minute, "test_namespace", "foo")
assert True == strategy.hit(one_per_minute, "test_namespace", "bar")

assert True == strategy.hit(one_per_second, "test_namespace", "foo")
assert False == strategy.hit(one_per_second, "test_namespace", "foo")
time.sleep(1)
assert True == strategy.hit(one_per_second, "test_namespace", "foo")

Check specific limits without hitting them

assert True == strategy.hit(one_per_second, "test_namespace", "foo")
while not strategy.test(one_per_second, "test_namespace", "foo"):
    time.sleep(0.01)
assert True == strategy.hit(one_per_second, "test_namespace", "foo")

Query available capacity and reset time for a limit

assert True == strategy.hit(one_per_minute, "test_namespace", "foo")
window = strategy.get_window_stats(one_per_minute, "test_namespace", "foo")
assert window.remaining == 0
assert False == strategy.hit(one_per_minute, "test_namespace", "foo")
time.sleep(window.reset_time - time.time())
assert True == strategy.hit(one_per_minute, "test_namespace", "foo")

Project details


Release history

This version: 5.8.0
