
Kafka Airflow Provider

An Apache Airflow provider, containing deferrable operators and sensors, to:

  • interact with Kafka clusters
  • read from topics
  • write to topics
  • wait for specific messages to arrive in a topic

This package currently contains:

3 hooks:

  • airflow_provider_kafka.hooks.admin_client.KafkaAdminClientHook - a hook wrapping the underlying Kafka admin client
  • airflow_provider_kafka.hooks.consumer.KafkaConsumerHook - a hook that creates a consumer and provides it for interaction
  • airflow_provider_kafka.hooks.producer.KafkaProducerHook - a hook that creates a producer and provides it for interaction (see the sketch after this list)
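For a rough illustration of how one of these hooks might be used inside a task, here is a hypothetical sketch. The constructor argument and the get_producer() accessor are assumptions about this package's API, not confirmed by this README; produce() and flush() are standard confluent-kafka Producer calls.

    # Hypothetical sketch of calling the producer hook from a task. The
    # constructor kwarg and get_producer() are assumptions about this
    # package's API; produce() and flush() are standard confluent-kafka calls.
    from airflow_provider_kafka.hooks.producer import KafkaProducerHook

    def send_greeting():
        hook = KafkaProducerHook(config={"bootstrap.servers": "broker:29092"})  # assumed kwarg
        producer = hook.get_producer()  # assumed accessor for the confluent-kafka Producer
        producer.produce("test_1", key="greeting", value="hello")
        producer.flush()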

3 operators:

  • airflow_provider_kafka.operators.await_message.AwaitKafkaMessageOperator - a deferrable operator (sensor) that waits to encounter a specific message in the topic log before triggering downstream tasks.
  • airflow_provider_kafka.operators.consume_from_topic.ConsumeFromTopicOperator - an operator that reads from a topic and applies a function to each message fetched.
  • airflow_provider_kafka.operators.produce_to_topic.ProduceToTopicOperator - an operator that uses an iterable to produce messages as key/value pairs to a Kafka topic.

1 trigger:

  • airflow_provider_kafka.triggers.await_message.AwaitMessageTrigger

Quick start

pip install airflow-provider-kafka

    # hello_kafka.py

    import json
    import logging

    from airflow_provider_kafka.operators.await_message import AwaitKafkaMessageOperator
    from airflow_provider_kafka.operators.consume_from_topic import ConsumeFromTopicOperator
    from airflow_provider_kafka.operators.produce_to_topic import ProduceToTopicOperator

    def producer_function():
        for i in range(20):
            yield (json.dumps(i), json.dumps(i + 1))


    consumer_logger = logging.getLogger("airflow")
    def consumer_function(message, prefix=None):
        key = json.loads(message.key())
        value = json.loads(message.value())
        consumer_logger.info(f"{prefix} {message.topic()} @ {message.offset()}; {key} : {value}")
        return


    def await_function(message):
        if json.loads(message.value()) % 5 == 0:
            return f" Got the following message: {json.loads(message.value())}"

    t1 = ProduceToTopicOperator(
        task_id="produce_to_topic",
        topic="test_1",
        producer_function="hello_kafka.producer_function",
        kafka_config={"bootstrap.servers": "broker:29092"},
    )

    t2 = ConsumeFromTopicOperator(
        task_id="consume_from_topic",
        topics=["test_1"],
        apply_function="hello_kafka.consumer_function",
        apply_function_kwargs={"prefix": "consumed:::"},
        consumer_config={
            "bootstrap.servers": "broker:29092",
            "group.id": "foo",
            "enable.auto.commit": False,
            "auto.offset.reset": "beginning",
        },
        commit_cadence="end_of_batch",
        max_messages=10,
        max_batch_size=2,
    )

    t3 = AwaitKafkaMessageOperator(
        task_id="awaiting_message",
        topics=["test_1"],
        apply_function="hello_kafka.await_function",
        kafka_config={
            "bootstrap.servers": "broker:29092",
            "group.id": "awaiting_message",
            "enable.auto.commit": False,
            "auto.offset.reset": "beginning",
        },
        xcom_push_key="retrieved_message",
    )
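The snippet above only defines the tasks; in a real deployment they would live inside a DAG. A minimal sketch, where the dag_id, start_date, and schedule are illustrative placeholders:

    # Minimal sketch of wrapping the tasks above in a DAG.
    # dag_id, start_date, and schedule_interval are placeholders.
    from datetime import datetime

    from airflow import DAG

    with DAG(
        dag_id="hello_kafka",
        start_date=datetime(2022, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        # ... define t1, t2, and t3 from above here ...
        t1 >> t2 >> t3  # produce, then consume, then await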

FAQs

Why confluent-kafka and not (other library)? A few reasons: the confluent-kafka library is guaranteed to be 1:1 functional with librdkafka, it is faster, and it is maintained by a company with a commercial stake in ensuring the continued quality and upkeep of it as a product.

Why not release this into Airflow directly? I could probably make the PR and get it through, but the Airflow code base is getting huge and I don't want to burden the maintainers with code that they don't own. There have also been multiple attempts to get a Kafka provider in before, and this is just faster.

Why is most of the configuration handled in a dict? Because that's how confluent-kafka does it. I'd rather maintain interfaces that people already using Kafka are comfortable with as a starting point. I'm happy to add more options/interfaces later, but would prefer to be thoughtful about it, to ensure that the differences between these operators and the actual client interface stay minimal.
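For example, any standard librdkafka setting should pass straight through the dict; a sketch of a consumer config with SASL/SSL authentication, where the credential values are placeholders:

    # Standard librdkafka keys go straight into the config dict.
    # The credential values below are placeholders, not working secrets.
    consumer_config = {
        "bootstrap.servers": "broker:9092",
        "group.id": "my-consumer-group",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",
        "sasl.password": "<api-secret>",
    }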

Development

Unit Tests

Unit tests are located at tests/unit; a Kafka server isn't required to run them. Execute with pytest:
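pytest tests/unit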

Setup on M1 Mac

Installing on an M1 Mac requires a brew install of the librdkafka library before you can pip install confluent-kafka:

brew install librdkafka
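# Adjust the version in the include/lib paths below to match your installed librdkafka.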
export C_INCLUDE_PATH=/opt/homebrew/Cellar/librdkafka/1.8.2/include
export LIBRARY_PATH=/opt/homebrew/Cellar/librdkafka/1.8.2/lib
pip install confluent-kafka
