Confluent's Kafka Python Client combined with Django
django-kafka
This library uses confluent-kafka-python, which is a wrapper around librdkafka (the Apache Kafka C/C++ client library).
It helps integrate Kafka with Django.
Quick start
pip install django-kafka
Configure:
Assuming you have a local Kafka instance set up with no authentication, all you need to do is define the bootstrap servers.
# ./settings.py
INSTALLED_APPS = [
    # ...
    "django_kafka",
]

DJANGO_KAFKA = {
    "GLOBAL_CONFIG": {
        "bootstrap.servers": "kafka1:9092",
    },
}
Define a Topic:
Topics define how to handle incoming messages and how to produce outgoing messages.
from confluent_kafka.serialization import MessageField

from django_kafka.topic import Topic


class Topic1(Topic):
    name = "topic1"

    def consume(self, msg):
        key = self.deserialize(msg.key(), MessageField.KEY, msg.headers())
        value = self.deserialize(msg.value(), MessageField.VALUE, msg.headers())
        # ... process values
Topic inherits from the TopicProducer and TopicConsumer classes. If you only need to consume or produce messages, inherit from one of these classes instead to avoid defining unnecessary abstract methods.
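For example, a consume-only topic could look like the sketch below; it assumes TopicConsumer is importable from django_kafka.topic alongside Topic (check the module if the path differs):

from confluent_kafka.serialization import MessageField

from django_kafka.topic import TopicConsumer  # assumed import path


class Topic1ConsumeOnly(TopicConsumer):
    name = "topic1"

    def consume(self, msg):
        # deserialize and process the value, as in the Topic example above
        value = self.deserialize(msg.value(), MessageField.VALUE, msg.headers())
        # ... process value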
Define a Consumer:
Consumers define which topics they take care of. Usually you want one consumer per project. If two consumers are defined, they will be started in parallel.
Consumers are auto-discovered and are expected to be located in a consumers.py module.
# ./consumers.py
from django_kafka import kafka
from django_kafka.consumer import Consumer, Topics

from my_app.topics import Topic1


# register your consumer using the `DjangoKafka` class API decorator
@kafka.consumers()
class MyAppConsumer(Consumer):
    # tell the consumer which topics to process using the `django_kafka.consumer.Topics` interface
    topics = Topics(
        Topic1(),
    )

    config = {
        "group.id": "my-app-consumer",
        "auto.offset.reset": "latest",
        "enable.auto.offset.store": False,
    }
Start the Consumers:
You can use the Django management command to start the defined consumers.
./manage.py kafka_consume
Or you can use the DjangoKafka class API.
from django_kafka import kafka
kafka.run_consumers()
Check Confluent Python Consumer for API documentation.
Produce:
Messages are produced using a Topic instance.
from my_app.topics import Topic1
# this will send a message to kafka, serializing it using the defined serializer
Topic1().produce("some message")
Check Confluent Python Producer for API documentation.
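If you also need to attach a key or headers, the sketch below assumes produce forwards extra keyword arguments (such as key and headers) to confluent-kafka's Producer.produce; check the TopicProducer source for the exact signature:

from my_app.topics import Topic1

# assumption: extra kwargs are passed through to confluent-kafka's Producer.produce
Topic1().produce("some message", key="some-key", headers=[("origin", b"my-app")])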
Define the schema registry:
The library uses Confluent's SchemaRegistryClient. In order to use it, define a SCHEMA_REGISTRY setting.
Find available configs in the SchemaRegistryClient docs.
DJANGO_KAFKA = {
    "SCHEMA_REGISTRY": {
        "url": "http://schema-registry",
    },
}
Note: take django_kafka.topic.AvroTopic as an example if you want to implement a custom Topic with your schema.
Specialized Topics:
ModelTopicConsumer:
ModelTopicConsumer can be used to sync Django model instances from abstract Kafka events. Simply inherit from the class, set the model and the topic to consume from, and define a few abstract methods.
from django_kafka.topic.model import ModelTopicConsumer

from my_app.models import MyModel


class MyModelConsumer(ModelTopicConsumer):
    name = "topic"
    model = MyModel

    def is_deletion(self, model, key, value) -> bool:
        """Return whether the message represents a deletion."""
        return value.pop('__deleted', False)

    def get_lookup_kwargs(self, model, key, value) -> dict:
        """Return the lookup kwargs used for filtering the model instance."""
        return {"id": key}
Model instances will have their attributes synced from the message value. If you need to alter a message key or value before it is assigned, define a transform_{attr} method:
class MyModelConsumer(ModelTopicConsumer):
    ...

    def transform_name(self, model, key, value):
        return 'first_name', value["name"].upper()
DbzModelTopicConsumer:
DbzModelTopicConsumer helps sync model instances from Debezium source connector topics. It inherits from ModelTopicConsumer and provides default implementations of the is_deletion and get_lookup_kwargs methods.
In Debezium it is possible to reroute records from multiple sources to the same topic. In doing so, Debezium inserts a table identifier into the key to ensure uniqueness. When this key is inserted, you must instead define a reroute_model_map to map the table identifier to the model class to be created.
from django_kafka.topic.debezium import DbzModelTopicConsumer

from my_app.models import MyModel, MyOtherModel


class MyModelConsumer(DbzModelTopicConsumer):
    name = "debezium_topic"
    reroute_model_map = {
        'public.my_model': MyModel,
        'public.my_other_model': MyOtherModel,
    }
A few notes:
- The connector must be using the event flattening SMT to simplify the message structure.
- Deletions are detected automatically based on a null message value or the presence of a __deleted field.
- The message key is assumed to contain the model PK as a field, which is the default behaviour for Debezium source connectors. If you need more complicated lookup behaviour, override get_lookup_kwargs.
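For reference, the event flattening SMT is configured on the Debezium connector itself (the Kafka Connect side), not in Django. A hypothetical excerpt of such a connector configuration, shown here as a Python dict, could look like this; the rewrite mode is what adds the __deleted field mentioned above:

# hypothetical excerpt of a Debezium source connector configuration
debezium_connector_config = {
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    # "rewrite" keeps delete events as records carrying a "__deleted" field
    "transforms.unwrap.delete.handling.mode": "rewrite",
}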
Non-Blocking Retries:
Add non-blocking retry behaviour to a topic by using the retry decorator:
from django_kafka import kafka
from django_kafka.topic import Topic


@kafka.retry(max_retries=3, delay=120, include=[ValueError])
class RetryableTopic(Topic):
    name = "topic"
    ...
When the consumption of a message in a retryable topic fails, the message is re-sent to a topic whose name combines the consumer group id, the original topic name, a .retry suffix, and the retry number. Subsequent failed retries will then be sent to retry topics of incrementing retry number until the maximum number of attempts is reached, after which the message is sent to a dead letter topic suffixed by .dlt. So for a failed message in topic topic received by consumer group group, the expected topic sequence would be:
topic
group.topic.retry.1
group.topic.retry.2
group.topic.retry.3
group.topic.dlt
When consumers are started using the start commands, an additional retry consumer will be started in parallel for any consumer containing a retryable topic. This retry consumer is assigned to a consumer group whose id is a combination of the original group id and a .retry suffix. It is subscribed to the retry topics and manages the message retry and delay behaviour. Please note that messages are retried directly by the retry consumer and are not sent back to the original topic.
Settings:
Defaults:
DJANGO_KAFKA = {
    "CLIENT_ID": f"{socket.gethostname()}-python",
    "GLOBAL_CONFIG": {},
    "PRODUCER_CONFIG": {},
    "CONSUMER_CONFIG": {},
    "POLLING_FREQUENCY": 1,  # seconds
    "SCHEMA_REGISTRY": {},
    "ERROR_HANDLER": "django_kafka.error_handlers.ClientErrorHandler",
}
CLIENT_ID
Default: f"{socket.gethostname()}-python"
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Note: This parameter is included in the config of both the consumer and producer unless client.id is overwritten within PRODUCER_CONFIG or CONSUMER_CONFIG.
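For example, to keep one logical application name while reporting the consumer separately:

DJANGO_KAFKA = {
    "CLIENT_ID": "my-service",  # used by both producer and consumer by default
    "CONSUMER_CONFIG": {
        "client.id": "my-service-consumer",  # overrides CLIENT_ID for the consumer only
    },
}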
GLOBAL_CONFIG
Default: {}
Defines configurations applied to both consumer and producer. See configs marked with *.
PRODUCER_CONFIG
Default: {}
Defines configurations of the producer. See configs marked with P.
CONSUMER_CONFIG
Default: {}
Defines configurations of the consumer. See configs marked with C.
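Putting the three levels together, a typical setup could look like the sketch below; the option values are illustrative librdkafka settings, not recommendations:

DJANGO_KAFKA = {
    "GLOBAL_CONFIG": {
        # applied to both producers and consumers
        "bootstrap.servers": "kafka1:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "user",
        "sasl.password": "password",
    },
    "PRODUCER_CONFIG": {
        # producer-only options
        "acks": "all",
        "enable.idempotence": True,
    },
    "CONSUMER_CONFIG": {
        # consumer-only options; group.id is usually set per Consumer class
        "max.poll.interval.ms": 300000,
    },
}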
POLLING_FREQUENCY
Default: 1 # second
How often the client polls for events.
SCHEMA_REGISTRY
Default: {}
Configuration for confluent_kafka.schema_registry.SchemaRegistryClient.
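For example, a registry behind basic auth could be configured like this (a sketch; url and basic.auth.user.info are standard SchemaRegistryClient options):

DJANGO_KAFKA = {
    "SCHEMA_REGISTRY": {
        "url": "https://schema-registry:8081",
        "basic.auth.user.info": "user:password",
    },
}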
ERROR_HANDLER
Default: django_kafka.error_handlers.ClientErrorHandler
This is an error_cb hook (see the Kafka Client Configuration docs for reference).
It is triggered for global client errors and, in case of a fatal error, it raises DjangoKafkaError.
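A custom handler can be plugged in via its dotted path. The sketch below is hypothetical and assumes the handler is used as a callable error_cb receiving a confluent-kafka KafkaError; check django_kafka.error_handlers.ClientErrorHandler for the exact interface before relying on it:

# my_app/kafka_error_handlers.py (hypothetical module)
import logging

from confluent_kafka import KafkaError

logger = logging.getLogger(__name__)


class LoggingErrorHandler:
    """Assumed interface: called with a KafkaError, like a regular error_cb."""

    def __call__(self, error: KafkaError):
        logger.error("Kafka client error: %s", error)
        if error.fatal():
            raise RuntimeError(f"Fatal Kafka client error: {error}")

It would then be referenced as ERROR_HANDLER = "my_app.kafka_error_handlers.LoggingErrorHandler" in the DJANGO_KAFKA setting.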
Bidirectional data sync with no infinite event loop.
For example, you want to keep a User table in sync in multiple systems.
The idea is to send events from all systems to the same topic, and also consume events from the same topic, marking the record with kafka_skip=True at consumption time.
- The producer should respect kafka_skip=True and not produce new events when it is True.
- Any updates to the User table which happen outside the consumer should set kafka_skip=False, which will allow the producer to create an event again.
This way the chronology is strictly kept and an infinite event loop is avoided.
The disadvantage is that each system will still consume its own message.
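A minimal sketch of the producer side of this pattern, assuming you wire it up yourself with a post_save signal, that User uses the KafkaSkipModel mixin described below, and that a hypothetical UserTopic exists for user events (the value shape depends on your topic's serializer):

# my_app/signals.py (hypothetical wiring, not provided by django_kafka)
from django.db.models.signals import post_save
from django.dispatch import receiver

from my_app.models import User
from my_app.topics import UserTopic  # hypothetical Topic for user events


@receiver(post_save, sender=User)
def produce_user_event(sender, instance, **kwargs):
    if instance.kafka_skip:
        # this save came from the consumer, so do not echo the event back
        return
    UserTopic().produce({"id": instance.pk, "first_name": instance.first_name})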
There are two classes: one for the Django Model and one for the QuerySet:
KafkaSkipModel
Adds the kafka_skip boolean field, defaulting to False. This also automatically resets kafka_skip to False when updating model instances (if not explicitly set).
Usage:
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin

from django_kafka.models import KafkaSkipModel


class User(KafkaSkipModel, PermissionsMixin, AbstractBaseUser):
    # ...
KafkaSkipQueryset
If you have defined a custom manager on your model, it should inherit from KafkaSkipQueryset. It adds kafka_skip=False when using the update method.
Note: kafka_skip=False is only set when it's not provided in the update kwargs. E.g. User.objects.update(first_name="John", kafka_skip=True) will not be changed to kafka_skip=False.
Usage:
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.base_user import BaseUserManager
from django.contrib.auth.models import PermissionsMixin

from django_kafka.models import KafkaSkipModel, KafkaSkipQueryset


class UserManager(BaseUserManager.from_queryset(KafkaSkipQueryset)):
    # ...


class User(KafkaSkipModel, PermissionsMixin, AbstractBaseUser):
    # ...
    objects = UserManager()
Making a new release
- bump-my-version is used to manage releases.
- The Ruff linter is used to validate the code style. Make sure your code complies with the defined rules; you may use ruff check --fix for that. Ruff is executed by GitHub Actions and the workflow will fail if Ruff validation fails.
- Add your changes to the CHANGELOG, then run
  docker compose run --rm app bump-my-version bump <major|minor|patch>
  This will bump the major/minor/patch version respectively and add a tag for the release.
- Once the changes are approved and merged, push the tag to publish the release to PyPI:
  git push origin tag <tag_name>