
Distributed async rate limiters, using Redis

Project description


Self limiters

A library for regulating traffic with respect to concurrency or time.

It implements a semaphore, to be used when you need to limit the number of concurrent requests to an API (or other resource), for example if you can only send 5 concurrent requests.

It also implements the token bucket algorithm, which can be used to limit the number of requests made in a given time interval, for example if you're restricted to 10 requests per second.

Both limiters are async, FIFO, and distributed using Redis. You should probably only use this if you need distributed queues.

This was written with rate-limiting in mind, but the semaphore and token bucket implementations can be used for anything.

Installation

pip install self-limiters

Usage

Both implementations are written as async context managers.

Semaphore

The Semaphore can be used like this:

from self_limiters import Semaphore


# 5 concurrent requests at a time
async with Semaphore(name="", capacity=5, max_sleep=60, redis_url=""):
    client.get(...)

Under the hood, we use BLPOP to wait for the semaphore to be freed up, which is non-blocking.

If you specify a non-zero max_sleep, a MaxSleepExceededError will be raised if BLPOP waits for longer than the specified value.
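
For example, a minimal sketch of handling that error (assuming MaxSleepExceededError is importable from the package root; the name and redis_url values here are placeholders):

import asyncio

from self_limiters import MaxSleepExceededError, Semaphore


async def fetch():
    try:
        # Wait at most 30 seconds for one of the 5 slots to free up
        async with Semaphore(name="my-api", capacity=5, max_sleep=30, redis_url="redis://localhost:6379"):
            ...  # make the rate-limited request here
    except MaxSleepExceededError:
        ...  # give up, retry later, or surface the error


asyncio.run(fetch())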

Token bucket

The TokenBucket context manager is used the same way, like this:

from self_limiters import TokenBucket


# 1 request per minute
async with TokenBucket(
    name="",
    capacity=1,
    refill_amount=1,
    refill_frequency=60,
    max_sleep=600,
    redis_url="",
):
    client.get(...)

The limiter first estimates when there will be capacity in the bucket (i.e., when it's this instance's turn to go), then sleeps asynchronously until then.

If max_sleep is set and the estimated sleep time exceeds this, a MaxSleepExceededError is raised immediately.
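
To make the parameters concrete, the "10 requests per second" case from the introduction would look roughly like this (the name and redis_url values are placeholders):

from self_limiters import TokenBucket


async def fetch():
    # 10 requests per second: a bucket of 10 tokens, refilled with 10 tokens every second
    async with TokenBucket(
        name="my-api",
        capacity=10,
        refill_amount=10,
        refill_frequency=1,
        max_sleep=30,
        redis_url="redis://localhost:6379",
    ):
        ...  # make the rate-limited request here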

As a decorator

The package doesn't ship any decorators, but if you would like to limit the rate at which a whole function is run, you can create your own, like this:

from uuid import UUID

from self_limiters import Semaphore


# Define a decorator function
def limit(name, capacity):
    def middle(f):
        async def inner(*args, **kwargs):
            async with Semaphore(
                name=name,
                capacity=capacity,
                redis_url="redis://127.0.0.1:6389"
            ):
                return await f(*args, **kwargs)
        return inner
    return middle


# Then pass the relevant limiter arguments like this
@limit(name="foo", capacity=5)
async def fetch_foo(id: UUID) -> Foo:
    ...

Implementation and performance breakdown

The library is written in Rust (for fun) and relies on Lua scripts and pipelining to improve the performance of each implementation.

Redis lets users upload and execute Lua scripts directly on the server, meaning we can write, e.g., the entire token bucket logic in Lua. This presents a couple of nice benefits:

  • Since they are executed on the Redis instance, we can make 1 request to Redis where we would otherwise have to make 3 or 4. The time saved by reducing the number of requests is significant.

  • Redis is single-threaded and guarantees atomic execution of scripts, meaning we don't have to worry about data races. In a prior iteration, when we had to make 4 requests to estimate the wake-up time for a token bucket instance, we needed to use the redlock algorithm to ensure fairness. With Lua scripts, our implementations are FIFO out of the box.

So, in summary, the scripts make our implementation faster, since we save several round-trips to the server and no longer need distributed locks, which are expensive. At the same time, they make the code much, much simpler.

This is how each implementation has ended up looking:

The semaphore implementation

  1. Run a Lua script to create a list data structure in Redis, as the foundation of the semaphore.

    This script is idempotent, and is skipped if the list has already been created.

  2. Run BLPOP to wait (without blocking the event loop) until the semaphore has capacity, and pop from the list when it does.

  3. Then run a pipelined command to release the semaphore by adding the capacity back.

So in total we make 3 calls to Redis, all of them non-blocking, where we would have made 6 without the scripts.
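
As a rough illustration only, steps 1 and 2 correspond to something like the following redis-py sketch (the URL is a placeholder; the real implementation runs step 1 as a Lua script from Rust, and the release step is sketched further down in the implementation reference):

import redis.asyncio as redis


async def acquire(name: str, capacity: int, max_sleep: float = 0) -> None:
    r = redis.Redis.from_url("redis://localhost:6379")
    queue_key = f"__self-limiters:{name}"

    # Step 1: create the queue at full capacity if it doesn't exist yet
    if await r.setnx(f"{queue_key}-exists", 1):
        await r.rpush(queue_key, *([1] * capacity))

    # Step 2: wait until there is capacity; a timeout of 0 means wait forever
    if await r.blpop(queue_key, timeout=max_sleep) is None:
        raise TimeoutError("waited longer than max_sleep for the semaphore")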

The token bucket implementation

The token bucket implementation is even simpler. The steps are:

  1. Run a Lua script to estimate and return a wake-up time.
  2. Sleep until the given timestamp.

We make 1 call instead of 3, then sleep. Both are non-blocking.
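
Expressed as Python, the client-side flow is essentially the following sketch, where run_schedule_script is a stand-in for the single Lua-script call:

import asyncio
import time


async def run_schedule_script() -> float:
    # Stand-in for the one Redis call; returns the wake-up timestamp in seconds
    return time.time()  # pretend there is capacity immediately


async def wait_for_turn() -> None:
    wake_up_at = await run_schedule_script()  # 1. one call to Redis
    await asyncio.sleep(max(0.0, wake_up_at - time.time()))  # 2. non-blocking sleep


asyncio.run(wait_for_turn())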

In other words, the vast majority of the time is spent waiting in a non-blocking way, meaning the limiters' impact on an application's event loop should be close to negligible.

Benchmarks

We run benchmarks in CI with GitHub Actions, on a normal ubuntu-latest runner. When creating 100 instances of each implementation and calling them at the same time, the average runtimes are:

  • Semaphore implementation: ~0.6ms per instance
  • Token bucket implementation: ~0.03ms per instance

Take a look at the benchmarking script if you want to run your own tests!

Implementation reference

The semaphore implementation

The semaphore implementation is useful when you need to limit a process to n concurrent actions. For example, if you have several web servers, and you're interacting with an API that will only tolerate a certain number of concurrent requests before locking you out.

The flow can be broken down as follows:

The initial Lua script first checks whether the Redis list we will build the semaphore on exists. It does this by calling SETNX on the key of the queue plus a postfix (if the name specified in the class instantiation is "my-queue", then the queue name will be __self-limiters:my-queue and SETNX will be called for __self-limiters:my-queue-exists). If the returned value is 1, the queue we will use for our semaphore does not exist yet and needs to be created.

It might strike you as weird to maintain a separate value, just to indicate whether a list exists, when we could just check the list itself. It would be nice if we could use EXISTS on the list directly, but unfortunately a list is considered not to exist when all elements are popped (i.e., when a semaphore is fully acquired), so I don't see another way of doing this. Contributions are very welcome if you do!

Then, if the queue needs to be created, we call RPUSH with a number of arguments equal to the capacity value used when initializing the semaphore instance. For a semaphore with a capacity of 5, we call RPUSH 1 1 1 1 1, where the values are completely arbitrary.

Once the list/queue has been created, we BLPOP to block until it's our turn. BLPOP is FIFO by default. We also make sure to specify max_sleep based on the setting from the initialized semaphore instance; if nothing was passed, we allow sleeping forever.

On __aexit__ we run three commands in a pipelined query. We RPUSH a 1 back into the queue to "release" the semaphore, and set an expiry on the queue and the string value we called SETNX on.

The expiries are a half-measure for dealing with dropped capacity. If a node holding the semaphore dies, the capacity might never be returned. If, however, no one uses the semaphore for the duration of the expiry, all values will be cleared, and the semaphore will be recreated at full capacity the next time it's used. The expiry is 30 seconds at the time of writing, but could be made configurable.
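
For illustration, a matching redis-py sketch of the release step on __aexit__, under the same assumptions as the acquire sketch above (placeholder URL, 30-second expiry):

import redis.asyncio as redis


async def release(name: str) -> None:
    r = redis.Redis.from_url("redis://localhost:6379")
    queue_key = f"__self-limiters:{name}"

    # Three commands, sent as a single pipelined request
    async with r.pipeline() as pipe:
        pipe.rpush(queue_key, 1)                # return the capacity we held
        pipe.expire(queue_key, 30)              # clean up if the semaphore goes unused
        pipe.expire(f"{queue_key}-exists", 30)  # ...including the SETNX marker
        await pipe.execute()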

The token bucket implementation

The token bucket implementation is useful when you need to limit a process by a time interval. For example, to 1 request per minute, or 50 requests every 10 seconds.

The implementation is forward-looking. It works out when there will be capacity in the bucket for a given client and returns that time. From there, we can sleep asynchronously until it's time to perform our rate-limited action.

The flow can be broken down as follows:

Call the schedule Lua script which first GETs the state of the bucket.

The bucket state contains the last scheduled time slot and the number of tokens left for that time slot. With a capacity of 1, having a tokens_left_for_slot variable makes no sense, but with a capacity of 2 or more, we may need to schedule multiple clients to the same time slot.

The script then works out whether to decrement the tokens_left_for_slot value, or to advance the time slot by the refill frequency.

Finally, we store the bucket state again using SETEX. This allows us to store the state and set its expiry in the same command. The default expiry is 30 seconds at the time of writing, but could be made configurable.
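
The real logic lives in the Lua script, but expressed as Python the scheduling decision looks roughly like this sketch (not the actual script; the returned time slot is the wake-up time, and the new state is what gets stored with SETEX):

def schedule(time_slot, tokens_left, refill_amount, refill_frequency):
    """Reuse the current time slot if it has tokens left, otherwise start the next one."""
    if tokens_left > 0:
        # Still capacity in the current slot: just consume a token
        return time_slot, tokens_left - 1
    # Slot exhausted: advance it by the refill frequency and refill, consuming one token
    return time_slot + refill_frequency, refill_amount - 1


# Example: capacity 2, refilling 2 tokens every second. Three clients arriving
# back-to-back are scheduled to slots t, t, and t + 1.
slot, tokens = 100.0, 2
for _ in range(3):
    slot, tokens = schedule(slot, tokens, refill_amount=2, refill_frequency=1.0)
    print(slot, tokens)  # -> 100.0 1, then 100.0 0, then 101.0 1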

One thing to note is that this would not work were it not for the fact that Redis is single-threaded, which makes Lua scripts on Redis FIFO. Without this we would need locks and a lot more logic.

Then we just sleep!

Contributing

Please do! Feedback on the implementation, issues, and PRs are all welcome. See CONTRIBUTING.md for more details.

Please also consider starring the repo to raise visibility.



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • self_limiters-0.2.0-cp38-abi3-win_amd64.whl (594.0 kB): CPython 3.8+, Windows x86-64
  • self_limiters-0.2.0-cp38-abi3-win32.whl (572.0 kB): CPython 3.8+, Windows x86
  • self_limiters-0.2.0-cp38-abi3-musllinux_1_1_x86_64.whl (899.2 kB): CPython 3.8+, musllinux: musl 1.1+ x86-64
  • self_limiters-0.2.0-cp38-abi3-musllinux_1_1_aarch64.whl (852.1 kB): CPython 3.8+, musllinux: musl 1.1+ ARM64
  • self_limiters-0.2.0-cp38-abi3-manylinux_2_24_armv7l.whl (654.1 kB): CPython 3.8+, manylinux: glibc 2.24+ ARMv7l
  • self_limiters-0.2.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (728.6 kB): CPython 3.8+, manylinux: glibc 2.17+ x86-64
  • self_limiters-0.2.0-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (675.9 kB): CPython 3.8+, manylinux: glibc 2.17+ ARM64
  • self_limiters-0.2.0-cp38-abi3-manylinux_2_12_i686.manylinux2010_i686.whl (792.9 kB): CPython 3.8+, manylinux: glibc 2.12+ i686
  • self_limiters-0.2.0-cp38-abi3-macosx_11_0_arm64.whl (609.2 kB): CPython 3.8+, macOS 11.0+ ARM64
  • self_limiters-0.2.0-cp38-abi3-macosx_10_7_x86_64.whl (643.4 kB): CPython 3.8+, macOS 10.7+ x86-64

