
TaskIQ SQS/S3 aiobotocore


This library provides you with a fully asynchronous SQS broker and S3 backend for TaskIQ using aiobotocore. Inspired by the taskiq-sqs broker.

Besides the SQS broker, this library also provides an S3 backend for results, which is useful when results are too large for SQS. Additionally, the broker itself can be configured to use S3 + SQS for messages that are too large for SQS, replicating the behaviour of the Amazon Extended Client Library.

Installation

pip install taskiq-aio-sqs

General Usage:

Here is an example of how to use the SQS broker with the S3 backend:

# broker.py
import asyncio
from taskiq_aio_sqs import SQSBroker, S3Backend

s3_result_backend = S3Backend(
    endpoint_url="http://localhost:4566",
    bucket_name="response-bucket",  # bucket must exist
)

broker = SQSBroker(
    endpoint_url="http://localhost:4566",
    result_backend=s3_result_backend,
    sqs_queue_name="my-queue",
)


@broker.task
async def i_love_aws() -> None:
    """I hope my cloud bill doesn't get too high!"""
    await asyncio.sleep(5.5)
    print("Hello there!")


async def main():
    task = await i_love_aws.kiq()
    print(await task.wait_result())


if __name__ == "__main__":
    asyncio.run(main())
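
Note that wait_result() only returns once a worker has processed the task. Assuming the example above is saved as broker.py, a worker can be started with the standard taskiq CLI (the exact invocation depends on your module path):

taskiq worker broker:broker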

Delayed Tasks:

Delayed tasks can be created in 3 ways:

  • by using the delay parameter in the task decorator
  • by using the kicker with the delay label
  • by setting the delay_seconds parameter in the broker, which will apply to all tasks processed by the broker.

Here's an example of how to use delayed tasks:

broker = SQSBroker(
    endpoint_url="http://localhost:4566",
    delay_seconds=3,
    sqs_queue_name="my-queue",
)

@broker.task()
async def general_task() -> int:
    return 1

@broker.task(delay=7)
async def delayed_task() -> int:
    return 1

async def main():
    await broker.startup()
    # This message will be received by workers after 3 seconds
    # delay using the delay_seconds parameter in the broker init.
    await general_task.kiq()

    # This message will be received by workers after 7 seconds delay.
    await delayed_task.kiq()

    # This message will be received after a 4 second delay,
    # since we override the `delay` label using the kicker.
    await delayed_task.kicker().with_labels(delay=4).kiq()

Extended Messages with S3:

You can also use S3 to store messages that are too large for SQS. To do this, you need to set the s3_extended_bucket_name parameter in the broker configuration.

Here's an example of this behaviour:

pub_broker = SQSBroker(
    endpoint_url="http://localhost:4566",
    sqs_queue_name="my-queue",
    s3_extended_bucket_name="response-bucket",
)

sub_broker = SQSBroker(
    endpoint_url="http://localhost:4566",
    sqs_queue_name="my-queue",
    s3_extended_bucket_name="response-bucket",
)

LARGE_MESSAGE = b"x" * (256 * 1024 + 1)  # 256 KB is the limit for SQS

@pub_broker.task()
async def large_task() -> bytes:
    return LARGE_MESSAGE


async def main():
    await pub_broker.startup()
    await sub_broker.startup()
    # This message will store data in S3 and send a reference to SQS
    # This reference will include the S3 bucket and key.
    await large_task.kiq()

    async for msg in sub_broker.listen():
        message = msg
        break  # Stop after receiving one message

    # The message will be automatically retrieved from S3
    # and the full data will be available in the message.
    assert message.data == LARGE_MESSAGE

Configuration:

SQS Broker parameters:

  • endpoint_url - URL used to access SQS; this is particularly useful if running on ECS.
  • sqs_queue_name - name of the SQS queue.
  • region_name - region name, defaults to us-east-1.
  • aws_access_key_id - AWS access key id (Optional).
  • aws_secret_access_key - AWS secret access key (Optional).
  • use_task_id_for_deduplication - use the task_id for deduplication; this is useful when using a FIFO queue without content-based deduplication. Defaults to False.
  • wait_time_seconds - wait time in seconds for long polling, defaults to 0.
  • max_number_of_messages - maximum number of messages to receive per poll, defaults to 1 (max 10).
  • s3_extended_bucket_name - name of the S3 bucket used for extended messages; setting this allows the broker to kick messages that are too large for SQS by storing them in S3. By default, the listen function handles retrieving these messages. Defaults to None.
  • task_id_generator - custom task_id generator (Optional).
  • result_backend - custom result backend (Optional).
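
To tie these together, here is a configuration sketch combining several of the parameters above: a FIFO queue with task-id based deduplication, long polling, batch receives and an extended S3 bucket. The queue name, bucket names and endpoint are placeholders, not values required by the library:

from taskiq_aio_sqs import S3Backend, SQSBroker

broker = SQSBroker(
    endpoint_url="http://localhost:4566",       # e.g. LocalStack
    sqs_queue_name="my-queue.fifo",             # placeholder FIFO queue
    region_name="us-east-1",
    use_task_id_for_deduplication=True,         # FIFO queue without content-based deduplication
    wait_time_seconds=20,                       # long polling
    max_number_of_messages=10,                  # receive up to 10 messages per poll
    s3_extended_bucket_name="extended-bucket",  # offload oversized messages to S3
    result_backend=S3Backend(
        endpoint_url="http://localhost:4566",
        bucket_name="response-bucket",          # bucket must exist
    ),
)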

S3 Result Backend parameters:

  • bucket_name - name of the S3 bucket.
  • base_path - base path (prefix) for the S3 objects, defaults to "".
  • endpoint_url - URL used to access S3; this is particularly useful if running on ECS.
  • region_name - region name, defaults to us-east-1.
  • aws_access_key_id - AWS access key id (Optional).
  • aws_secret_access_key - AWS secret access key (Optional).
  • serializer - custom serializer, defaults to OrjsonSerializer.
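
Similarly, a minimal sketch of a result backend configuration, assuming the bucket already exists (the bucket name, prefix and endpoint below are placeholders):

from taskiq_aio_sqs import S3Backend

result_backend = S3Backend(
    endpoint_url="http://localhost:4566",  # e.g. LocalStack
    bucket_name="response-bucket",         # bucket must already exist
    base_path="results/",                  # results are stored under this prefix
    region_name="us-east-1",
    # serializer is left at its OrjsonSerializer default; pass a custom
    # serializer here to override it.
)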

Local Development:

We use make to handle the project's commands; you can see the available commands by running this in the root directory:

make

Setup

To setup the project, you can run the following commands:

make install

This will install the required dependencies for the project using just pip.

Linting

We use pre-commit to run linting locally; it is included in the dev dependencies. We use ruff for linting and formatting, and pyright for static type checking. To install the pre-commit hooks, you can run the following command:

pre-commit install

If you for some reason hate pre-commit, you can run the following command to lint the code:

make check

Testing

To run tests, you can use the following command:

make test

In the background this will set up LocalStack to replicate the AWS services and run the tests. It will also generate the coverage report and the coverage badge.

