Full-Featured Pure-Python Kafka Client
PyKafka
PyKafka is a cluster-aware Kafka 0.8.2 protocol client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka, and runs under Python 2.7+, Python 3.4+, and PyPy.
PyKafka's primary goal is to provide a level of abstraction similar to the JVM Kafka client's, using idioms familiar to Python programmers and exposing the most Pythonic API possible.
You can install PyKafka from PyPI with
$ pip install pykafka
Full documentation and usage examples for PyKafka can be found on readthedocs.
You can install PyKafka for local development and testing with
$ python setup.py develop
Getting Started
Assuming you have a Kafka instance running on localhost, you can use PyKafka to connect to it.
>>> from pykafka import KafkaClient
>>> client = KafkaClient(hosts="127.0.0.1:9092")
If the cluster you’ve connected to has any topics defined on it, you can list them with:
>>> client.topics
{'my.test': <pykafka.topic.Topic at 0x19bc8c0 (name=my.test)>}
>>> topic = client.topics['my.test']
Once you’ve got a Topic, you can create a Producer for it and start producing messages.
>>> with topic.get_sync_producer() as producer:
...     for i in range(4):
...         producer.produce('test message ' + str(i ** 2))
The example above produces to Kafka synchronously; that is, the call only returns after we have confirmation that the message made it to the cluster.
To achieve higher throughput, however, we recommend using the Producer in asynchronous mode, so that produce() calls return immediately and the producer can send messages in larger batches. You can still obtain delivery confirmation for messages via a queue interface, which is enabled by setting delivery_reports=True. Here's a rough usage example:
>>> import Queue  # on Python 3, use: import queue as Queue
>>> with topic.get_producer(delivery_reports=True) as producer:
...     count = 0
...     while True:
...         count += 1
...         producer.produce('test msg', partition_key='{}'.format(count))
...         if count % 10 ** 5 == 0:  # adjust this or bring lots of RAM ;)
...             while True:
...                 try:
...                     msg, exc = producer.get_delivery_report(block=False)
...                     if exc is not None:
...                         print('Failed to deliver msg {}: {}'.format(
...                             msg.partition_key, repr(exc)))
...                     else:
...                         print('Successfully delivered msg {}'.format(
...                             msg.partition_key))
...                 except Queue.Empty:
...                     break
Note that the delivery-report queue is thread-local: it will only serve reports for messages which were produced from the current thread.
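To make the thread-locality concrete, here is a minimal sketch assuming the topic object from the earlier examples; the message contents and thread count are illustrative. Each thread drains its own report queue, because reports for its messages are invisible to every other thread.

import threading

def produce_and_confirm(producer, thread_id, n):
    # Reports for the messages produced here land on this thread's
    # queue only; another thread calling get_delivery_report() would
    # never see them.
    for i in range(n):
        producer.produce('msg {}-{}'.format(thread_id, i))
    for _ in range(n):
        # wait for the next report for this thread's messages
        msg, exc = producer.get_delivery_report(block=True)
        if exc is not None:
            print('Delivery failed: {}'.format(repr(exc)))

with topic.get_producer(delivery_reports=True) as producer:
    workers = [threading.Thread(target=produce_and_confirm, args=(producer, i, 5))
               for i in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()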
You can also consume messages from this topic using a Consumer instance.
>>> consumer = topic.get_simple_consumer()
>>> for message in consumer:
...     if message is not None:
...         print(message.offset, message.value)
0 test message 0
1 test message 1
2 test message 4
3 test message 9
This SimpleConsumer doesn't scale: if two SimpleConsumers consume the same topic, each receives every message, duplicating the work rather than sharing it. To get around this, you can use the BalancedConsumer.
>>> balanced_consumer = topic.get_balanced_consumer(
...     consumer_group='testgroup',
...     auto_commit_enable=True,
...     zookeeper_connect='myZkClusterNode1.com:2181,myZkClusterNode2.com:2181/myZkChroot'
... )
You can have as many BalancedConsumer instances consuming a topic as that topic has partitions. If they are all connected to the same ZooKeeper instance, they will communicate with it to automatically balance the partitions between themselves.
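Once a BalancedConsumer holds its share of the partitions, it is consumed just like a SimpleConsumer; a minimal sketch (the output depends on the partition assignment and on what the other group members have already consumed):

>>> for message in balanced_consumer:
...     if message is not None:
...         print(message.offset, message.value)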
Using the librdkafka extension
PyKafka includes a C extension that uses librdkafka to speed up producer and consumer operation. To use the librdkafka extension, you need to make sure the header files and shared library are somewhere Python can find them, both when you build the extension (which is taken care of by setup.py develop) and at run time. Typically, this means that you need to either install librdkafka in a place conventional for your system, or declare C_INCLUDE_PATH, LIBRARY_PATH, and LD_LIBRARY_PATH in your shell environment.
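For example, if librdkafka were installed under /usr/local (these paths are illustrative; substitute your own prefix), the declarations might look like:

$ export C_INCLUDE_PATH=/usr/local/include
$ export LIBRARY_PATH=/usr/local/lib
$ export LD_LIBRARY_PATH=/usr/local/lib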
After that, all that's needed is to pass an extra parameter use_rdkafka=True to topic.get_producer(), topic.get_simple_consumer(), or topic.get_balanced_consumer(). Note that some configuration options may have different optimal values; it may be worthwhile to consult librdkafka's configuration notes.
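For instance, reusing the topic object from the examples above:

>>> producer = topic.get_producer(use_rdkafka=True)
>>> consumer = topic.get_simple_consumer(use_rdkafka=True)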
We currently test against librdkafka 0.8.6 only. Note that using the extension under PyPy is not recommended at this time; the producer in particular is expected to crash.
Operational Tools
PyKafka includes a small collection of CLI tools that can help with common tasks related to the administration of a Kafka cluster, including offset and lag monitoring and topic inspection. The full, up-to-date interface for these tools can be found by running
$ python cli/kafka_tools.py --help
or after installing PyKafka via setuptools or pip:
$ kafka-tools --help
What happened to Samsa?
This project used to be called samsa. It has been renamed PyKafka and fully overhauled to support Kafka 0.8.2. We chose to target 0.8.2 because the offset Commit/Fetch API has stabilized.
The Samsa PyPI package will stay up for the foreseeable future and tags for previous versions will always be available in this repo.
PyKafka or kafka-python?
These are two different projects. See the discussion here.
Support
If you need help using PyKafka or have found a bug, please open a GitHub issue or use the Google Group.
File details
Details for the file pykafka-2.2.0.tar.gz.
File metadata
- Download URL: pykafka-2.2.0.tar.gz
- Upload date:
- Size: 89.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6f33ee406c46deaab3cd481ce850558bec3786161df2a35aa18e12c0c859f784
MD5 | d1056669172e1fce1cbb9586e6af234f
BLAKE2b-256 | c19e023a544a4bcccf57a69a244148b1defa0808648767fa278ba4b92dc5584f