
Implement client-side API rate limiting

Project description

API Rate Limiter

A simple Python object that allows you to implement client-side API rate limiting.


Introduction

This project was created to address a specific issue – API rate limiting when scanning an AWS platform.

The problem with AWS applying rate limiting to its services is that it is indiscriminate in nature. So if you are running a multi-threaded scanner such as ScoutSuite while, for example, an Auto Scaling group is trying to scale something, the scaling can fail because AWS does not distinguish between the third-party scanner and its own services.

This is a cause for frustration amongst the platform support team and can also cause an outage, particularly in the early stages of development of a new service, where resilience and high availability are still but a twinkle in the architect's eye...

This is becoming more and more of an issue as consumers realise the necessity of scanning their cloud platforms.

NOTE!

Apologies for the lack of images if you are viewing this on PyPI. For some reason PyPI insists on changing the URLs to https://warehouse-test-camo.cmh1.psfhosted.org/..... I've no idea why. You can read this README correctly on the project's home on GitHub.

History

I started looking into this issue when scanning my own client's platform and hit rate limiting while scanning some 15k snapshots.

Initially I managed to mitigate this using the Network Link Conditioner (NLC) on the Mac I was using to run the scanner.

The NLC allows you to add latency to all outbound packets on a given network, as shown in the screenshot in the GitHub README.

This blanket approach of delaying all outbound packets did resolve the rate-limiting issue, but at the cost of the scanner now taking two hours to complete a scan of all of our development environments.

Not only was this slow, it is also not OS-agnostic, so it was hardly a good solution.

It occurred to me that if a mechanism could be created within boto3 itself to queue outbound API calls at a configurable rate, then this might prove to be a more general solution to the issue.

So I forked botocore here, into a project that combines forks of botocore, boto3 and ScoutSuite.

That project has the queue implemented in it, and boto3 has been refactored to pass the API rate value (in ms) through to botocore. ScoutSuite is included and has simply been hardcoded to apply the queue to EC2 clients.

This enabled me to conduct comparative runs between this solution and the Network Link Conditioner, with the former completing in 30 minutes and the latter in around two hours.

The queue was only applied to EC2 clients because I was only hitting rate limiting when scanning the snapshots, of which there were around 15k.

At the time of writing, the botocore maintainers seem unwilling to accept the PR for this, as they feel it is beyond the scope of their project.

C'est la vie.

The Solution

Rather than cry over a missed opportunity, I have now taken the rate limiter and packaged it as a standalone utility that anyone can consume in their own projects should they need to avoid server-side rate limiting:

In the diagram above, each thread that needs to make an API call with the EC2 client calls a method that first enqueues the call in a FIFO queue and then waits for it to reach the head of the queue, thereby translating the asynchronous calls from the multiple threads into a synchronous stream of calls at a configurable frequency.

This approach allows each thread to continue to leverage parallel processing of tasks while waiting only for the actual API call, so you still see the efficiency gain of the multi-threaded approach.

By instantiating an individual queue for each AWS service, each service can be configured with a rate appropriate to the consumer's platform, or left with no rate limit at all.

For example, in my case I only needed to limit the EC2 client, because of the excessive number of snapshots being scanned.
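
As a rough sketch (assuming ApiRateLimiter is imported as shown under Installation below, and that the constructor argument is the API rate in ms, as in the example further down), per-service limiters might be wired up like this; the variable names and the 100 ms figure are illustrative only:

    # Sketch only: create a limiter for each service that needs throttling
    # and leave the rest alone.
    ec2_rate_limiter = ApiRateLimiter(100)   # EC2: heavy snapshot scanning, so throttle it
    ec2_rate_limiter.start()

    # No limiter is created for services that never hit the server-side limit,
    # so their calls go straight out with no added delay.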

Installation

In the usual Python fashion:

    pip install api-rate-limiter

Then, in your code:

    import ApiRateLimiter

Usage example

See "The Solution" above or look at the tests in the GitHub repo:

Tests on GitHub

Basic Usage

Setup:

  • Instantiate the rate limiter
  • Start it running in a background thread
  • Call the enqueue() method to join the queue
  • Poll the waiter's waiting state until it is False
  • Make your API call

On close:

  • Soft stop the rate limiter, which waits for the background thread to exit or time out

Example

    rate_limiter = ApiRateLimiter(100)  # API rate in ms
    rate_limiter.start()                # start the background thread

    ...

    def some_method():
        # Join the FIFO queue and wait until this call reaches the head
        waiter = rate_limiter.enqueue()
        while waiter.waiting:
            pass

        # Now make the actual API call, e.g. with a boto3 EC2 client
        client.describe_instances()

    ...

    rate_limiter.stop(True)  # soft stop: wait for the background thread to exit
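
For a fuller picture of the multi-threaded scenario described in The Solution above, here is a minimal sketch. It assumes boto3 is available, that ApiRateLimiter is imported as under Installation, and that enqueue(), waiting and stop() behave as in the example above; the wait_for_slot() helper, the thread count and the 100 ms rate are illustrative, not part of the library:

    import threading

    import boto3

    rate_limiter = ApiRateLimiter(100)   # API rate in ms, as above
    rate_limiter.start()

    ec2 = boto3.client('ec2')

    def wait_for_slot(limiter):
        # Illustrative helper: join the FIFO queue and block until this
        # call reaches the head of the queue.
        waiter = limiter.enqueue()
        while waiter.waiting:
            pass

    def scan_snapshots():
        # Each thread waits only here; any other work it does stays parallel.
        wait_for_slot(rate_limiter)
        return ec2.describe_snapshots(OwnerIds=['self'])

    threads = [threading.Thread(target=scan_snapshots) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    rate_limiter.stop(True)   # soft stop: wait for the background thread to finish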


Release History

  • 0.1.0

    • CHANGE: Initial code commit
    • ADD: LICENSE
    • ADD: Detailed README
    • FIX: Error handling in integration test
  • 0.1.1

    • FIX: Remove the requirement on multiprocessing, as it is now in the Python 3 standard library

Meta

Bradley Atkins – bradley.atkinz @ gmail.com

Distributed under the MIT license. See LICENSE for more information.

This Project on GitHub

Contributing

  1. Fork it (https://github.com/museadmin/api-rate-limiter)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Commit your changes (git commit -am 'Add some fooBar')
  4. Push to the branch (git push origin feature/fooBar)
  5. Create a new Pull Request
  6. Email me if I don't notice!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

api-rate-limiter-0.1.1.tar.gz (6.0 kB)

Uploaded Source

Built Distribution

api_rate_limiter-0.1.1-py3-none-any.whl (6.8 kB)

Uploaded Python 3

File details

Details for the file api-rate-limiter-0.1.1.tar.gz.

File metadata

  • Download URL: api-rate-limiter-0.1.1.tar.gz
  • Upload date:
  • Size: 6.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.35.0 CPython/3.7.4

File hashes

Hashes for api-rate-limiter-0.1.1.tar.gz

  • SHA256: 6ba8642031d0d221482b498e409c6f78e0ddd54afacffbd6000e323c354e5d31
  • MD5: b8eb35f3932f6738f9fb107ae2099ea1
  • BLAKE2b-256: de9222a729a125e83e227fbb69fc6a9a012d9facb66ec0084a9e9b8b5a97124f

See more details on using hashes here.

File details

Details for the file api_rate_limiter-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: api_rate_limiter-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.35.0 CPython/3.7.4

File hashes

Hashes for api_rate_limiter-0.1.1-py3-none-any.whl

  • SHA256: f152e455e6e82d2ee81d0f1a5cfcc16fdeb30aba3c057b17e32b708c650ef92f
  • MD5: 86e6ec03992fbec17366082c0da6d4bf
  • BLAKE2b-256: 0a093215a017dd6dc5631a58cc2076fbad0037c8c8ce71a56cab13a45636791e

See more details on using hashes here.
